Columns — code: string (lengths 75 to 104k); docstring: string (lengths 1 to 46.9k); text: string (lengths 164 to 112k)
def fit_meanshift(self, data, bandwidth=None, bin_seeding=False, **kwargs):
    """
    Fit MeanShift clustering algorithm to data.

    Parameters
    ----------
    data : array-like
        A dataset formatted by `classifier.fitting_data`.
    bandwidth : float
        The bandwidth value used during clustering.
        If None, determined automatically. Note: the data are scaled
        before clustering, so this is not in the same units as the data.
    bin_seeding : bool
        Whether or not to use 'bin_seeding'. See documentation for
        `sklearn.cluster.MeanShift`.
    **kwargs
        Passed to `sklearn.cluster.MeanShift`.

    Returns
    -------
    Fitted `sklearn.cluster.MeanShift` object.
    """
    if bandwidth is None:
        bandwidth = cl.estimate_bandwidth(data)
    # Forward **kwargs so they actually reach MeanShift, as the docstring states.
    ms = cl.MeanShift(bandwidth=bandwidth, bin_seeding=bin_seeding, **kwargs)
    ms.fit(data)
    return ms
def make_general(basis, use_copy=True):
    """
    Makes one large general contraction for each angular momentum

    If use_copy is True, the input basis set is not modified.

    The output of this function is not pretty. If you want to make it nicer,
    use sort_basis afterwards.
    """
    zero = '0.00000000'

    basis = uncontract_spdf(basis, 0, use_copy)

    for k, el in basis['elements'].items():
        if 'electron_shells' not in el:
            continue

        # See what we have
        all_am = []
        for sh in el['electron_shells']:
            if sh['angular_momentum'] not in all_am:
                all_am.append(sh['angular_momentum'])

        all_am = sorted(all_am)

        newshells = []
        for am in all_am:
            newsh = {
                'angular_momentum': am,
                'exponents': [],
                'coefficients': [],
                'region': '',
                'function_type': None,
            }

            # Do exponents first
            for sh in el['electron_shells']:
                if sh['angular_momentum'] != am:
                    continue
                newsh['exponents'].extend(sh['exponents'])

            # Number of primitives in the new shell
            nprim = len(newsh['exponents'])

            cur_prim = 0
            for sh in el['electron_shells']:
                if sh['angular_momentum'] != am:
                    continue

                if newsh['function_type'] is None:
                    newsh['function_type'] = sh['function_type']

                # Make sure the shells we are merging have the same function types
                ft1 = newsh['function_type']
                ft2 = sh['function_type']

                # Check if one function type is the subset of another
                # (should handle gto/gto_spherical, etc)
                if ft1 not in ft2 and ft2 not in ft1:
                    raise RuntimeError("Cannot make general contraction of different function types")

                ngen = len(sh['coefficients'])

                for g in range(ngen):
                    coef = [zero] * cur_prim
                    coef.extend(sh['coefficients'][g])
                    coef.extend([zero] * (nprim - len(coef)))
                    newsh['coefficients'].append(coef)

                cur_prim += len(sh['exponents'])

            newshells.append(newsh)

        el['electron_shells'] = newshells

    return basis
def vcs_rbridge_config_input_rbridge_id(self, **kwargs):
    """Auto Generated Code
    """
    config = ET.Element("config")
    vcs_rbridge_config = ET.Element("vcs_rbridge_config")
    config = vcs_rbridge_config
    input = ET.SubElement(vcs_rbridge_config, "input")
    rbridge_id = ET.SubElement(input, "rbridge-id")
    rbridge_id.text = kwargs.pop('rbridge_id')

    callback = kwargs.pop('callback', self._callback)
    return callback(config)
def remove_dependency_layer(self):
    """
    Removes the dependency layer (if exists) of the object (in memory)
    """
    if self.dependency_layer is not None:
        this_node = self.dependency_layer.get_node()
        self.root.remove(this_node)
        self.dependency_layer = self.my_dependency_extractor = None
    if self.header is not None:
        self.header.remove_lp('deps')
def csrf_generator(secret):
    """
    Generates CSRF token.

    Inspired by this article:
    http://blog.ptsecurity.com/2012/10/random-number-security-in-python.html

    :returns: :class:`str` Random unguessable string.
    """
    # Create hash from random string plus salt.
    hashed = hashlib.md5(uuid.uuid4().bytes + six.b(secret)).hexdigest()
    # Each time return random portion of the hash.
    span = 5
    shift = random.randint(0, span)
    return hashed[shift:shift - span - 1]
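As a hedged illustration (not part of the original row), the slice arithmetic in `csrf_generator` always selects a fixed-length window of the 32-character MD5 hex digest:

```python
import hashlib
import uuid

# Reproduction of csrf_generator's slicing for span = 5: the slice
# [shift : shift - span - 1] on a 32-character digest is equivalent to
# [shift : 32 + shift - 6], i.e. always a 26-character window.
hashed = hashlib.md5(uuid.uuid4().bytes + b"secret").hexdigest()
span = 5
for shift in range(span + 1):  # random.randint(0, span) in the original
    token = hashed[shift:shift - span - 1]
    assert len(token) == 26
```

So the returned token length is constant; only the starting offset within the digest is randomized.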
def find_match_scp(self, rule):  # pylint: disable-msg=R0911,R0912
    """Handle scp commands."""
    orig_list = []
    orig_list.extend(self.original_command_list)
    binary = orig_list.pop(0)
    allowed_binaries = ['scp', '/usr/bin/scp']
    if binary not in allowed_binaries:
        self.logdebug('skipping scp processing - binary "%s" '
                      'not in approved list.\n' % binary)
        return
    filepath = orig_list.pop()
    arguments = orig_list
    if '-f' in arguments:
        if not rule.get('allow_download'):
            self.logdebug('scp denied - downloading forbidden.\n')
            return
    if '-t' in arguments:
        if not rule.get('allow_upload'):
            self.log('scp denied - uploading forbidden.\n')
            return
    if '-r' in arguments:
        if not rule.get('allow_recursion'):
            self.log('scp denied - recursive transfers forbidden.\n')
            return
    if '-p' in arguments:
        if not rule.get('allow_permissions', 'true'):
            self.log('scp denied - set/getting permissions '
                     'forbidden.\n')
            return
    if rule.get('files'):
        files = rule.get('files')
        if not isinstance(files, list):
            files = [files]
        if filepath not in files:
            self.log('scp denied - file "%s" - not in approved '
                     'list %s\n' % (filepath, files))
            return
    # Allow it!
    return {'command': self.original_command_list}
def xor(s, pad):
    '''XOR a given string ``s`` with the one-time-pad ``pad``'''
    from itertools import cycle
    s = bytearray(force_bytes(s, encoding='latin-1'))
    pad = bytearray(force_bytes(pad, encoding='latin-1'))
    return binary_type(bytearray(x ^ y for x, y in zip(s, cycle(pad))))
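A minimal sketch of the same idea without the `force_bytes`/`binary_type` compatibility shims (those are assumed to come from the surrounding library); this version works directly on `bytes`:

```python
from itertools import cycle

def xor_bytes(s: bytes, pad: bytes) -> bytes:
    # cycle() repeats the pad so it covers the whole message, which is the
    # core of the xor() helper above.
    return bytes(x ^ y for x, y in zip(s, cycle(pad)))

# XOR is its own inverse: applying the same pad twice restores the input.
cipher = xor_bytes(b"attack at dawn", b"key")
assert cipher != b"attack at dawn"
assert xor_bytes(cipher, b"key") == b"attack at dawn"
```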
def factor_coeff(cls, ops, kwargs):
    """Factor out coefficients of all factors."""
    coeffs, nops = zip(*map(_coeff_term, ops))
    coeff = 1
    for c in coeffs:
        coeff *= c
    if coeff == 1:
        return nops, coeffs
    else:
        return coeff * cls.create(*nops, **kwargs)
def _calculate_feature_stats(feature_list, prepared, serialization_file):  # pylint: disable=R0914
    """Calculate min, max and mean for each feature. Store it in object."""
    # Create feature only list
    feats = [x for x, _ in prepared]  # Label is not necessary

    # Calculate all means / mins / maxs
    means = numpy.mean(feats, 0)
    mins = numpy.min(feats, 0)
    maxs = numpy.max(feats, 0)

    # Calculate min, max and mean vector for each feature with
    # normalization
    start = 0
    mode = 'w'
    arguments = {'newline': ''}
    if sys.version_info.major < 3:
        mode += 'b'
        arguments = {}
    with open(serialization_file, mode, **arguments) as csvfile:
        spamwriter = csv.writer(csvfile,
                                delimiter=str(';'),
                                quotechar=str('"'),
                                quoting=csv.QUOTE_MINIMAL)
        for feature in feature_list:
            end = start + feature.get_dimension()
            # append the data to the feature class
            feature.mean = numpy.array(means[start:end])
            feature.min = numpy.array(mins[start:end])
            feature.max = numpy.array(maxs[start:end])
            start = end
            for mean, fmax, fmin in zip(feature.mean, feature.max, feature.min):
                spamwriter.writerow([mean, fmax - fmin])
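A pure-Python sketch (with illustrative data, not from the original) of the per-column statistics that `numpy.mean`/`min`/`max` compute along axis 0 above, and of the `[mean, max - min]` rows the function writes to CSV:

```python
# rows = samples, columns = feature dimensions
feats = [[1.0, 10.0],
         [3.0, 30.0],
         [5.0, 20.0]]

means = [sum(col) / len(col) for col in zip(*feats)]
mins = [min(col) for col in zip(*feats)]
maxs = [max(col) for col in zip(*feats)]

assert means == [3.0, 20.0]
assert mins == [1.0, 10.0]
assert maxs == [5.0, 30.0]

# Each CSV row holds [mean, max - min] for one feature dimension.
rows = [[m, hi - lo] for m, hi, lo in zip(means, maxs, mins)]
assert rows == [[3.0, 4.0], [20.0, 20.0]]
```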
def add_channel_info(self, data, clear=False):
    """
    Add channel info data to the channel_id group.

    :param data: A dictionary of key/value pairs. Keys must be strings.
        Values can be strings or numeric values.
    :param clear: If set, any existing channel info data will be removed.
    """
    self.assert_writeable()
    self._add_attributes(self.global_key + 'channel_id', data, clear)
def sample(self, input, steps):
    """
    Sample outputs from LM.
    """
    inputs = [[onehot(self.input_dim, x) for x in input]]
    for _ in range(steps):
        target = self.compute(inputs)[0, -1].argmax()
        input.append(target)
        inputs[0].append(onehot(self.input_dim, target))
    return input
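The `sample()` method relies on an `onehot()` helper defined elsewhere in its codebase; a minimal sketch of what such a helper presumably does (the exact original may return a numpy array instead of a list):

```python
def onehot(dim, index):
    # Hypothetical stand-in: a length-`dim` vector with a single 1.0 at
    # position `index`, used to encode each token fed to the LM.
    vec = [0.0] * dim
    vec[index] = 1.0
    return vec

assert onehot(4, 2) == [0.0, 0.0, 1.0, 0.0]
assert sum(onehot(10, 7)) == 1.0
```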
def _agl_compliant_name(glyph_name):
    """Return an AGL-compliant name string or None if we can't make one."""
    MAX_GLYPH_NAME_LENGTH = 63
    clean_name = re.sub("[^0-9a-zA-Z_.]", "", glyph_name)
    if len(clean_name) > MAX_GLYPH_NAME_LENGTH:
        return None
    return clean_name
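A self-contained sketch of the same cleaning rule, usable standalone (the function name here is illustrative, not the original's):

```python
import re

def agl_compliant_name(glyph_name, max_len=63):
    # Strip every character outside [0-9a-zA-Z_.], then reject names longer
    # than the 63-character AGL limit, as in the method above.
    clean = re.sub("[^0-9a-zA-Z_.]", "", glyph_name)
    return None if len(clean) > max_len else clean

assert agl_compliant_name("A-ring.alt1") == "Aring.alt1"
assert agl_compliant_name("x" * 64) is None
```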
def watch():
    """Regenerate documentation when it changes."""
    # Start with a clean build
    sphinx_build['-b', 'html', '-E', 'docs', 'docs/_build/html'] & FG

    handler = ShellCommandTrick(
        shell_command='sphinx-build -b html docs docs/_build/html',
        patterns=['*.rst', '*.py'],
        ignore_patterns=['_build/*'],
        ignore_directories=['.tox'],
        drop_during_process=True)

    observer = Observer()
    observe_with(observer, handler, pathnames=['.'], recursive=True)
def grains(tgt=None, tgt_type='glob', **kwargs):
    '''
    .. versionchanged:: 2017.7.0
        The ``expr_form`` argument has been renamed to ``tgt_type``, earlier
        releases must use ``expr_form``.

    Return cached grains of the targeted minions.

    tgt
        Target to match minion ids.

        .. versionchanged:: 2017.7.5,2018.3.0
            The ``tgt`` argument is now required to display cached grains. If
            not used, the function will not return grains. This optional
            argument will become mandatory in the Salt ``Sodium`` release.

    tgt_type
        The type of targeting to use for matching, such as ``glob``, ``list``,
        etc.

    CLI Example:

    .. code-block:: bash

        salt-run cache.grains '*'
    '''
    if tgt is None:
        # Change ``tgt=None`` to ``tgt`` (mandatory kwarg) in Salt Sodium.
        # This behavior was changed in PR #45588 to fix Issue #45489.
        salt.utils.versions.warn_until(
            'Sodium',
            'Detected missing \'tgt\' option. Cached grains will not be returned '
            'without a specified \'tgt\'. This option will be required starting in '
            'Salt Sodium and this warning will be removed.'
        )

    pillar_util = salt.utils.master.MasterPillarUtil(tgt, tgt_type,
                                                     use_cached_grains=True,
                                                     grains_fallback=False,
                                                     opts=__opts__)
    cached_grains = pillar_util.get_minion_grains()
    return cached_grains
def load(tiff_filename):
    """
    Import a TIFF file into a numpy array.

    Arguments:
        tiff_filename:  A string filename of a TIFF datafile

    Returns:
        A numpy array with data from the TIFF file
    """
    # Expand filename to be absolute
    tiff_filename = os.path.expanduser(tiff_filename)

    try:
        img = tiff.imread(tiff_filename)
    except Exception as e:
        raise ValueError("Could not load file {0} for conversion."
                         .format(tiff_filename))

    return numpy.array(img)
def contains(self, expected):
    """
    Checks if the reference contains the value.

    :param expected: (object), the value to check (is allowed to be ``None``).
    :return: (bool), ``true`` if the value is found, ``false`` otherwise.
    """
    return self._encode_invoke(atomic_reference_contains_codec,
                               expected=self._to_data(expected))
def create_vg(self, name, devices):
    """
    Returns a new instance of VolumeGroup with the given name and added
    physical volumes (devices)::

        from lvm2py import *

        lvm = LVM()
        vg = lvm.create_vg("myvg", ["/dev/sdb1", "/dev/sdb2"])

    *Args:*

    *   name (str): A volume group name.
    *   devices (list): A list of device paths.

    *Raises:*

    *   HandleError, CommitError, ValueError
    """
    self.open()
    vgh = lvm_vg_create(self.handle, name)
    if not bool(vgh):
        self.close()
        raise HandleError("Failed to create VG.")
    for device in devices:
        if not os.path.exists(device):
            self._destroy_vg(vgh)
            raise ValueError("%s does not exist." % device)
        ext = lvm_vg_extend(vgh, device)
        if ext != 0:
            self._destroy_vg(vgh)
            raise CommitError("Failed to extend Volume Group.")
        try:
            self._commit_vg(vgh)
        except CommitError:
            self._destroy_vg(vgh)
            raise CommitError("Failed to add %s to VolumeGroup." % device)
    self._close_vg(vgh)
    vg = VolumeGroup(self, name)
    return vg
def kick_chat_member(
    self,
    chat_id: Union[int, str],
    user_id: Union[int, str],
    until_date: int = 0
) -> Union["pyrogram.Message", bool]:
    """Use this method to kick a user from a group, a supergroup or a channel.
    In the case of supergroups and channels, the user will not be able to return
    to the group on their own using invite links, etc., unless unbanned first.
    You must be an administrator in the chat for this to work and must have the
    appropriate admin rights.

    Note:
        In regular groups (non-supergroups), this method will only work if the
        "All Members Are Admins" setting is off in the target group. Otherwise
        members may only be removed by the group's creator or by the member that
        added them.

    Args:
        chat_id (``int`` | ``str``):
            Unique identifier (int) or username (str) of the target chat.

        user_id (``int`` | ``str``):
            Unique identifier (int) or username (str) of the target user.
            For a contact that exists in your Telegram address book you can use
            his phone number (str).

        until_date (``int``, *optional*):
            Date when the user will be unbanned, unix time.
            If user is banned for more than 366 days or less than 30 seconds
            from the current time they are considered to be banned forever.
            Defaults to 0 (ban forever).

    Returns:
        On success, either True or a service :obj:`Message <pyrogram.Message>`
        will be returned (when applicable).

    Raises:
        :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error.
    """
    chat_peer = self.resolve_peer(chat_id)
    user_peer = self.resolve_peer(user_id)

    if isinstance(chat_peer, types.InputPeerChannel):
        r = self.send(
            functions.channels.EditBanned(
                channel=chat_peer,
                user_id=user_peer,
                banned_rights=types.ChatBannedRights(
                    until_date=until_date,
                    view_messages=True,
                    send_messages=True,
                    send_media=True,
                    send_stickers=True,
                    send_gifs=True,
                    send_games=True,
                    send_inline=True,
                    embed_links=True
                )
            )
        )
    else:
        r = self.send(
            functions.messages.DeleteChatUser(
                chat_id=abs(chat_id),
                user_id=user_peer
            )
        )

    for i in r.updates:
        if isinstance(i, (types.UpdateNewMessage, types.UpdateNewChannelMessage)):
            return pyrogram.Message._parse(
                self, i.message,
                {i.id: i for i in r.users},
                {i.id: i for i in r.chats}
            )
    else:
        return True
def sparse_message_pass(node_states,
                        adjacency_matrices,
                        num_edge_types,
                        hidden_size,
                        use_bias=True,
                        average_aggregation=False,
                        name="sparse_ggnn"):
  """One message-passing step for a GNN with a sparse adjacency matrix.

  Implements equation 2 (the message passing step) in
  [Li et al. 2015](https://arxiv.org/abs/1511.05493).

  N = The number of nodes in each batch.
  H = The size of the hidden states.
  T = The number of edge types.

  Args:
    node_states: Initial states of each node in the graph. Shape is [N, H].
    adjacency_matrices: Adjacency matrix of directed edges for each edge type.
      Shape is [N, N, T] (sparse tensor).
    num_edge_types: The number of edge types. T.
    hidden_size: The size of the hidden state. H.
    use_bias: Whether to use bias in the hidden layer.
    average_aggregation: How to aggregate the incoming node messages. If
      average_aggregation is true, the messages are averaged. If it is false,
      they are summed.
    name: (optional) The scope within which tf variables should be created.

  Returns:
    The result of one step of Gated Graph Neural Network (GGNN) message
    passing. Shape: [N, H]
  """
  n = tf.shape(node_states)[0]
  t = num_edge_types
  incoming_edges_per_type = tf.sparse_reduce_sum(adjacency_matrices, axis=1)

  # Convert the adjacency matrix into shape [T, N, N] - one [N, N] adjacency
  # matrix for each edge type. Since sparse tensor multiplication only supports
  # two-dimensional tensors, we actually convert the adjacency matrix into a
  # [T * N, N] tensor.
  adjacency_matrices = tf.sparse_transpose(adjacency_matrices, [2, 0, 1])
  adjacency_matrices = tf.sparse_reshape(adjacency_matrices, [t * n, n])

  # Multiply the adjacency matrix by the node states, producing a [T * N, H]
  # tensor. For each (edge type, node) pair, this tensor stores the sum of
  # the hidden states of the node's neighbors over incoming edges of that type.
  messages = tf.sparse_tensor_dense_matmul(adjacency_matrices, node_states)

  # Rearrange this tensor to have shape [N, T * H]. The incoming states of each
  # node's neighbors are summed by edge type and then concatenated together into
  # a single T * H vector.
  messages = tf.reshape(messages, [t, n, hidden_size])
  messages = tf.transpose(messages, [1, 0, 2])
  messages = tf.reshape(messages, [n, t * hidden_size])

  # Run each of those T * H vectors through a linear layer that produces
  # a vector of size H. This process is equivalent to running each H-sized
  # vector through a separate linear layer for each edge type and then adding
  # the results together.
  #
  # Note that, earlier on, we added together all of the states of neighbors
  # that were connected by edges of the same edge type. Since addition and
  # multiplying by a linear layer are commutative, this process was equivalent
  # to running each incoming edge through a linear layer separately and then
  # adding everything at the end.
  with tf.variable_scope(name, default_name="sparse_ggnn"):
    final_node_states = common_layers.dense(
        messages, hidden_size, use_bias=False)

    # Multiply the bias for each edge type by the number of incoming nodes
    # of that edge type.
    if use_bias:
      bias = tf.get_variable("bias", initializer=tf.zeros([t, hidden_size]))
      final_node_states += tf.matmul(incoming_edges_per_type, bias)

    if average_aggregation:
      incoming_edges = tf.reduce_sum(incoming_edges_per_type, -1, keepdims=True)
      incoming_edges = tf.tile(incoming_edges, [1, hidden_size])
      final_node_states /= incoming_edges + 1e-7

  return tf.reshape(final_node_states, [n, hidden_size])

One message-passing step for a GNN with a sparse adjacency matrix.

Implements equation 2 (the message passing step) in
[Li et al. 2015](https://arxiv.org/abs/1511.05493).

N = The number of nodes in each batch.
H = The size of the hidden states.
T = The number of edge types.

Args:
  node_states: Initial states of each node in the graph. Shape is [N, H].
  adjacency_matrices: Adjacency matrix of directed edges for each edge type.
    Shape is [N, N, T] (sparse tensor).
  num_edge_types: The number of edge types. T.
  hidden_size: The size of the hidden state. H.
  use_bias: Whether to use bias in the hidden layer.
  average_aggregation: How to aggregate the incoming node messages. If
    average_aggregation is true, the messages are averaged. If it is false,
    they are summed.
  name: (optional) The scope within which tf variables should be created.

Returns:
  The result of one step of Gated Graph Neural Network (GGNN) message
  passing. Shape: [N, H]

Below is the instruction that describes the task:
### Input:
One message-passing step for a GNN with a sparse adjacency matrix.

Implements equation 2 (the message passing step) in
[Li et al. 2015](https://arxiv.org/abs/1511.05493).

N = The number of nodes in each batch.
H = The size of the hidden states.
T = The number of edge types.

Args:
  node_states: Initial states of each node in the graph. Shape is [N, H].
  adjacency_matrices: Adjacency matrix of directed edges for each edge type.
    Shape is [N, N, T] (sparse tensor).
  num_edge_types: The number of edge types. T.
  hidden_size: The size of the hidden state. H.
  use_bias: Whether to use bias in the hidden layer.
  average_aggregation: How to aggregate the incoming node messages. If
    average_aggregation is true, the messages are averaged. If it is false,
    they are summed.
  name: (optional) The scope within which tf variables should be created.

Returns:
  The result of one step of Gated Graph Neural Network (GGNN) message
  passing. Shape: [N, H]
### Response:
def sparse_message_pass(node_states,
                        adjacency_matrices,
                        num_edge_types,
                        hidden_size,
                        use_bias=True,
                        average_aggregation=False,
                        name="sparse_ggnn"):
  """One message-passing step for a GNN with a sparse adjacency matrix.

  Implements equation 2 (the message passing step) in
  [Li et al. 2015](https://arxiv.org/abs/1511.05493).

  N = The number of nodes in each batch.
  H = The size of the hidden states.
  T = The number of edge types.

  Args:
    node_states: Initial states of each node in the graph. Shape is [N, H].
    adjacency_matrices: Adjacency matrix of directed edges for each edge type.
      Shape is [N, N, T] (sparse tensor).
    num_edge_types: The number of edge types. T.
    hidden_size: The size of the hidden state. H.
    use_bias: Whether to use bias in the hidden layer.
    average_aggregation: How to aggregate the incoming node messages. If
      average_aggregation is true, the messages are averaged. If it is false,
      they are summed.
    name: (optional) The scope within which tf variables should be created.

  Returns:
    The result of one step of Gated Graph Neural Network (GGNN) message
    passing. Shape: [N, H]
  """
  n = tf.shape(node_states)[0]
  t = num_edge_types
  incoming_edges_per_type = tf.sparse_reduce_sum(adjacency_matrices, axis=1)

  # Convert the adjacency matrix into shape [T, N, N] - one [N, N] adjacency
  # matrix for each edge type. Since sparse tensor multiplication only supports
  # two-dimensional tensors, we actually convert the adjacency matrix into a
  # [T * N, N] tensor.
  adjacency_matrices = tf.sparse_transpose(adjacency_matrices, [2, 0, 1])
  adjacency_matrices = tf.sparse_reshape(adjacency_matrices, [t * n, n])

  # Multiply the adjacency matrix by the node states, producing a [T * N, H]
  # tensor. For each (edge type, node) pair, this tensor stores the sum of
  # the hidden states of the node's neighbors over incoming edges of that type.
  messages = tf.sparse_tensor_dense_matmul(adjacency_matrices, node_states)

  # Rearrange this tensor to have shape [N, T * H]. The incoming states of each
  # node's neighbors are summed by edge type and then concatenated together into
  # a single T * H vector.
  messages = tf.reshape(messages, [t, n, hidden_size])
  messages = tf.transpose(messages, [1, 0, 2])
  messages = tf.reshape(messages, [n, t * hidden_size])

  # Run each of those T * H vectors through a linear layer that produces
  # a vector of size H. This process is equivalent to running each H-sized
  # vector through a separate linear layer for each edge type and then adding
  # the results together.
  #
  # Note that, earlier on, we added together all of the states of neighbors
  # that were connected by edges of the same edge type. Since addition and
  # multiplying by a linear layer are commutative, this process was equivalent
  # to running each incoming edge through a linear layer separately and then
  # adding everything at the end.
  with tf.variable_scope(name, default_name="sparse_ggnn"):
    final_node_states = common_layers.dense(
        messages, hidden_size, use_bias=False)

    # Multiply the bias for each edge type by the number of incoming nodes
    # of that edge type.
    if use_bias:
      bias = tf.get_variable("bias", initializer=tf.zeros([t, hidden_size]))
      final_node_states += tf.matmul(incoming_edges_per_type, bias)

    if average_aggregation:
      incoming_edges = tf.reduce_sum(incoming_edges_per_type, -1, keepdims=True)
      incoming_edges = tf.tile(incoming_edges, [1, hidden_size])
      final_node_states /= incoming_edges + 1e-7

  return tf.reshape(final_node_states, [n, hidden_size])
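The reshape trick above (sum neighbor states per edge type, concatenate into one T*H vector, then apply a single linear layer) can be checked with a tiny dense sketch. This is not the tensor2tensor API — it is a hypothetical pure-Python stand-in using nested lists, with hand-rolled shapes `[T, N, N]` adjacency and `[T, H, H]` weights:

```python
def dense_message_pass(node_states, adjacency, weights):
    """Dense sketch of multi-edge-type GGNN message passing.

    node_states: N x H nested lists; adjacency: T x N x N; weights: T x H x H.
    For each edge type e, sums the states of node i's neighbors weighted by
    adjacency[e][i][j], applies that type's H x H weight, and accumulates --
    equivalent to the concatenate-then-one-linear-layer formulation above.
    """
    T, N = len(adjacency), len(node_states)
    H = len(node_states[0])
    out = [[0.0] * H for _ in range(N)]
    for e in range(T):
        for i in range(N):
            # Sum of neighbor states over incoming edges of type e.
            msg = [sum(adjacency[e][i][j] * node_states[j][h] for j in range(N))
                   for h in range(H)]
            # Apply this edge type's linear layer and accumulate into node i.
            for k in range(H):
                out[i][k] += sum(msg[h] * weights[e][h][k] for h in range(H))
    return out
```

With one edge type, one hidden unit, and a single edge 0→1 read as "node 0 receives from node 1", node 0 aggregates node 1's state and node 1 receives nothing.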
def DumpAsCSV (self, separator=",", file=sys.stdout):
    """dump as a comma separated value file"""
    for row in range(1, self.maxRow + 1):
        sep = ""
        for column in range(1, self.maxColumn + 1):
            file.write("%s\"%s\"" % (sep, self.GetCellValue(column, row, "")))
            sep = separator
        file.write("\n")

dump as a comma separated value file

Below is the instruction that describes the task:
### Input:
dump as a comma separated value file
### Response:
def DumpAsCSV (self, separator=",", file=sys.stdout):
    """dump as a comma separated value file"""
    for row in range(1, self.maxRow + 1):
        sep = ""
        for column in range(1, self.maxColumn + 1):
            file.write("%s\"%s\"" % (sep, self.GetCellValue(column, row, "")))
            sep = separator
        file.write("\n")
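Note that `DumpAsCSV` quotes fields with plain string formatting, so a cell containing a `"` would produce malformed output. A hedged standalone sketch (the `get_cell` callable and grid bounds are hypothetical stand-ins for `GetCellValue`/`maxRow`/`maxColumn`) using the stdlib `csv` module, which escapes embedded quotes and separators correctly:

```python
import csv
import io

def dump_grid_as_csv(get_cell, max_row, max_column, file, separator=","):
    """Write a 1-indexed cell grid as delimited text, quoting every field.

    csv.writer handles embedded quotes/separators, which naive formatting
    like '%s"%s"' would not escape.
    """
    writer = csv.writer(file, delimiter=separator,
                        quoting=csv.QUOTE_ALL, lineterminator="\n")
    for row in range(1, max_row + 1):
        writer.writerow([get_cell(col, row)
                         for col in range(1, max_column + 1)])
```

Usage: pass any file-like object, e.g. an `io.StringIO` buffer or `sys.stdout`.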
def convert_date(value, parameter):
    '''
    Converts to datetime.date: '', '-', None convert to parameter default

    The first matching format in settings.DATE_INPUT_FORMATS converts to datetime
    '''
    value = _check_default(value, parameter, ( '', '-', None ))
    if value is None or isinstance(value, datetime.date):
        return value
    for fmt in settings.DATE_INPUT_FORMATS:
        try:
            return datetime.datetime.strptime(value, fmt).date()
        except (ValueError, TypeError):
            continue
    raise ValueError("`{}` does not match a format in settings.DATE_INPUT_FORMATS".format(value))

Converts to datetime.date: '', '-', None convert to parameter default
The first matching format in settings.DATE_INPUT_FORMATS converts to datetime

Below is the instruction that describes the task:
### Input:
Converts to datetime.date: '', '-', None convert to parameter default
The first matching format in settings.DATE_INPUT_FORMATS converts to datetime
### Response:
def convert_date(value, parameter):
    '''
    Converts to datetime.date: '', '-', None convert to parameter default

    The first matching format in settings.DATE_INPUT_FORMATS converts to datetime
    '''
    value = _check_default(value, parameter, ( '', '-', None ))
    if value is None or isinstance(value, datetime.date):
        return value
    for fmt in settings.DATE_INPUT_FORMATS:
        try:
            return datetime.datetime.strptime(value, fmt).date()
        except (ValueError, TypeError):
            continue
    raise ValueError("`{}` does not match a format in settings.DATE_INPUT_FORMATS".format(value))
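The try-each-format loop at the heart of `convert_date` can be exercised standalone. This sketch replaces the Django `settings.DATE_INPUT_FORMATS` dependency with a hypothetical local tuple of formats; the strptime-until-one-matches pattern is the same:

```python
import datetime

# Hypothetical stand-in for settings.DATE_INPUT_FORMATS.
DATE_INPUT_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y")

def parse_date(value, formats=DATE_INPUT_FORMATS):
    """Return the date parsed by the first matching format, else raise."""
    for fmt in formats:
        try:
            return datetime.datetime.strptime(value, fmt).date()
        except (ValueError, TypeError):
            # This format didn't match; try the next one.
            continue
    raise ValueError("`{}` does not match any known format".format(value))
```

Formats are tried in order, so an ambiguous string is resolved by whichever format appears first in the tuple.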
def parse_callback_args(self, raw_args):
    """This is the method that is called from Script.run(), this is the
    insertion point for parsing all the arguments though on init this will
    find all args it can, so this method pulls already found args from
    class variables"""
    args = []
    arg_info = self.arg_info
    kwargs = dict(arg_info['optional'])

    parsed_args = []
    unknown_args = getattr(self, "unknown_args", False)
    if unknown_args:
        parsed_args, parsed_unknown_args = self.parse_known_args(raw_args)

        # TODO -- can this be moved to UnknownParser?
        # **kwargs have to be in --key=val form
        # http://stackoverflow.com/a/12807809/5006
        d = defaultdict(list)
        for k, v in ((k.lstrip('-'), v) for k, v in (a.split('=') for a in parsed_unknown_args)):
            d[k].append(v)

        for k in (k for k in d if len(d[k]) == 1):
            d[k] = d[k][0]

        kwargs.update(d)

    else:
        parsed_args = self.parse_args(raw_args)

    # http://parezcoydigo.wordpress.com/2012/08/04/from-argparse-to-dictionary-in-python-2-7/
    kwargs.update(vars(parsed_args))

    # because of how args works, we need to make sure the kwargs are put in correct
    # order to be passed to the function, otherwise our real *args won't make it
    # to the *args variable
    for k in arg_info['order']:
        args.append(kwargs.pop(k))

    # now that we have the correct order, tack the real *args on the end so they
    # get correctly placed into the function's *args variable
    if arg_info['args']:
        args.extend(kwargs.pop(arg_info['args']))

    return args, kwargs

This is the method that is called from Script.run(), this is the insertion point for parsing all the arguments though on init this will find all args it can, so this method pulls already found args from class variables

Below is the instruction that describes the task:
### Input:
This is the method that is called from Script.run(), this is the insertion point for parsing all the arguments though on init this will find all args it can, so this method pulls already found args from class variables
### Response:
def parse_callback_args(self, raw_args):
    """This is the method that is called from Script.run(), this is the
    insertion point for parsing all the arguments though on init this will
    find all args it can, so this method pulls already found args from
    class variables"""
    args = []
    arg_info = self.arg_info
    kwargs = dict(arg_info['optional'])

    parsed_args = []
    unknown_args = getattr(self, "unknown_args", False)
    if unknown_args:
        parsed_args, parsed_unknown_args = self.parse_known_args(raw_args)

        # TODO -- can this be moved to UnknownParser?
        # **kwargs have to be in --key=val form
        # http://stackoverflow.com/a/12807809/5006
        d = defaultdict(list)
        for k, v in ((k.lstrip('-'), v) for k, v in (a.split('=') for a in parsed_unknown_args)):
            d[k].append(v)

        for k in (k for k in d if len(d[k]) == 1):
            d[k] = d[k][0]

        kwargs.update(d)

    else:
        parsed_args = self.parse_args(raw_args)

    # http://parezcoydigo.wordpress.com/2012/08/04/from-argparse-to-dictionary-in-python-2-7/
    kwargs.update(vars(parsed_args))

    # because of how args works, we need to make sure the kwargs are put in correct
    # order to be passed to the function, otherwise our real *args won't make it
    # to the *args variable
    for k in arg_info['order']:
        args.append(kwargs.pop(k))

    # now that we have the correct order, tack the real *args on the end so they
    # get correctly placed into the function's *args variable
    if arg_info['args']:
        args.extend(kwargs.pop(arg_info['args']))

    return args, kwargs
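The `--key=val` folding step above (unknown args collected into a defaultdict of lists, then singletons unwrapped) can be isolated into a small helper. A sketch with a hypothetical name, following the same logic:

```python
from collections import defaultdict

def fold_unknown_args(unknown_args):
    """Collapse '--key=val' strings into a dict.

    Repeated keys accumulate into lists; keys seen once map to a bare value,
    mirroring the singleton-unwrapping step in parse_callback_args.
    """
    d = defaultdict(list)
    for arg in unknown_args:
        k, v = arg.split("=", 1)
        d[k.lstrip("-")].append(v)
    return {k: v[0] if len(v) == 1 else v for k, v in d.items()}
```

Note `split("=", 1)` keeps any `=` inside the value intact, which a bare `split("=")` (as in the original generator expression) would not.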
def load(self, fpath):
    """Load a microscopy file.

    :param fpath: path to microscopy file
    """
    def is_microscopy_item(fpath):
        """Return True if the fpath is likely to be microscopy data.

        :param fpath: file path to image
        :returns: :class:`bool`
        """
        l = fpath.split('.')
        ext = l[-1]
        pre_ext = l[-2]
        if ( (ext == 'tif' or ext == 'tiff') and pre_ext != 'ome' ):
            return False
        return True

    if not self.convert.already_converted(fpath):
        path_to_manifest = self.convert(fpath)
    else:
        path_to_manifest = os.path.join(self.backend.directory,
                                        os.path.basename(fpath),
                                        'manifest.json')

    collection = None
    if is_microscopy_item(fpath):
        collection = MicroscopyCollection()
    else:
        collection = ImageCollection()
    collection.parse_manifest(path_to_manifest)
    self.append(collection)
    return collection

Load a microscopy file.

:param fpath: path to microscopy file

Below is the instruction that describes the task:
### Input:
Load a microscopy file.

:param fpath: path to microscopy file
### Response:
def load(self, fpath):
    """Load a microscopy file.

    :param fpath: path to microscopy file
    """
    def is_microscopy_item(fpath):
        """Return True if the fpath is likely to be microscopy data.

        :param fpath: file path to image
        :returns: :class:`bool`
        """
        l = fpath.split('.')
        ext = l[-1]
        pre_ext = l[-2]
        if ( (ext == 'tif' or ext == 'tiff') and pre_ext != 'ome' ):
            return False
        return True

    if not self.convert.already_converted(fpath):
        path_to_manifest = self.convert(fpath)
    else:
        path_to_manifest = os.path.join(self.backend.directory,
                                        os.path.basename(fpath),
                                        'manifest.json')

    collection = None
    if is_microscopy_item(fpath):
        collection = MicroscopyCollection()
    else:
        collection = ImageCollection()
    collection.parse_manifest(path_to_manifest)
    self.append(collection)
    return collection
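The nested `is_microscopy_item` check is pure string logic and can be lifted out as-is: anything is treated as microscopy data except a plain (non-OME) `tif`/`tiff`. A standalone sketch, condensed to a single boolean expression:

```python
def is_microscopy_item(fpath):
    """True unless fpath is a plain (non-OME) tif/tiff file.

    Splits on '.' and inspects the last two components, so 'a.ome.tif'
    counts as microscopy data while 'a.plain.tif' does not.
    """
    parts = fpath.split('.')
    ext, pre_ext = parts[-1], parts[-2]
    return not (ext in ('tif', 'tiff') and pre_ext != 'ome')
```

Note this assumes (like the original) at least two dot-separated components; a bare filename like `"tif"` would raise `IndexError`.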
def column_preview(table_name, col_name):
    """
    Return the first ten elements of a column as JSON in Pandas'
    "split" format.
    """
    col = orca.get_table(table_name).get_column(col_name).head(10)
    return (
        col.to_json(orient='split', date_format='iso'),
        200,
        {'Content-Type': 'application/json'})

Return the first ten elements of a column as JSON in Pandas' "split" format.

Below is the instruction that describes the task:
### Input:
Return the first ten elements of a column as JSON in Pandas' "split" format.
### Response:
def column_preview(table_name, col_name):
    """
    Return the first ten elements of a column as JSON in Pandas'
    "split" format.
    """
    col = orca.get_table(table_name).get_column(col_name).head(10)
    return (
        col.to_json(orient='split', date_format='iso'),
        200,
        {'Content-Type': 'application/json'})
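For a Series, pandas' `orient='split'` JSON carries `name`, `index`, and `data` keys. A stdlib-only sketch that mimics that shape for the first n entries (the function name and parameters are hypothetical; this is not the orca or pandas API):

```python
import json

def column_head_as_split_json(name, index, values, n=10):
    """Mimic pandas' Series.to_json(orient='split') for the first n entries.

    Returns a JSON object with 'name', 'index', and 'data' keys, which is
    the shape column_preview serves to its HTTP clients.
    """
    payload = {"name": name, "index": index[:n], "data": values[:n]}
    return json.dumps(payload)
```

This makes the wire format easy to see without standing up orca or pandas.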
def handleThumbDblClick( self, item ):
    """
    Handles when a thumbnail item is double clicked on.

    :param item | <QListWidgetItem>
    """
    if ( isinstance(item, RecordListWidgetItem) ):
        self.emitRecordDoubleClicked(item.record())

Handles when a thumbnail item is double clicked on.

:param item | <QListWidgetItem>

Below is the instruction that describes the task:
### Input:
Handles when a thumbnail item is double clicked on.

:param item | <QListWidgetItem>
### Response:
def handleThumbDblClick( self, item ):
    """
    Handles when a thumbnail item is double clicked on.

    :param item | <QListWidgetItem>
    """
    if ( isinstance(item, RecordListWidgetItem) ):
        self.emitRecordDoubleClicked(item.record())
def plot_intrusion_curve(self, fig=None):
    r"""
    Plot the percolation curve as the invader volume or number fraction
    vs the applied capillary pressure.
    """
    # Begin creating nicely formatted plot
    x, y = self.get_intrusion_data()
    if fig is None:
        fig = plt.figure()
    plt.semilogx(x, y, 'ko-')
    plt.ylabel('Invading Phase Saturation')
    plt.xlabel('Capillary Pressure')
    plt.grid(True)
    return fig

Plot the percolation curve as the invader volume or number fraction
vs the applied capillary pressure.

Below is the instruction that describes the task:
### Input:
Plot the percolation curve as the invader volume or number fraction
vs the applied capillary pressure.
### Response:
def plot_intrusion_curve(self, fig=None):
    r"""
    Plot the percolation curve as the invader volume or number fraction
    vs the applied capillary pressure.
    """
    # Begin creating nicely formatted plot
    x, y = self.get_intrusion_data()
    if fig is None:
        fig = plt.figure()
    plt.semilogx(x, y, 'ko-')
    plt.ylabel('Invading Phase Saturation')
    plt.xlabel('Capillary Pressure')
    plt.grid(True)
    return fig
def _execute(self, queue, tasks, log, locks, queue_lock, all_task_ids):
    """
    Executes the given tasks. Returns a boolean indicating whether
    the tasks were executed successfully.
    """
    # The tasks must use the same function.
    assert len(tasks)
    task_func = tasks[0].serialized_func
    assert all([task_func == task.serialized_func for task in tasks[1:]])

    # Before executing periodic tasks, queue them for the next period.
    if task_func in self.tiger.periodic_task_funcs:
        tasks[0]._queue_for_next_period()

    with g_fork_lock:
        child_pid = os.fork()

    if child_pid == 0:
        # Child process
        log = log.bind(child_pid=os.getpid())

        # Disconnect the Redis connection inherited from the main process.
        # Note that this doesn't disconnect the socket in the main process.
        self.connection.connection_pool.disconnect()

        random.seed()

        # Ignore Ctrl+C in the child so we don't abort the job -- the main
        # process already takes care of a graceful shutdown.
        signal.signal(signal.SIGINT, signal.SIG_IGN)

        with WorkerContextManagerStack(self.config['CHILD_CONTEXT_MANAGERS']):
            success = self._execute_forked(tasks, log)

        # Wait for any threads that might be running in the child, just
        # like sys.exit() would. Note we don't call sys.exit() directly
        # because it would perform additional cleanup (e.g. calling atexit
        # handlers twice). See also: https://bugs.python.org/issue18966
        threading._shutdown()

        os._exit(int(not success))
    else:
        # Main process
        log = log.bind(child_pid=child_pid)
        for task in tasks:
            log.info('processing', func=task_func, task_id=task.id,
                     params={'args': task.args, 'kwargs': task.kwargs})

        # Attach a signal handler to SIGCHLD (sent when the child process
        # exits) so we can capture it.
        signal.signal(signal.SIGCHLD, sigchld_handler)

        # Since newer Python versions retry interrupted system calls we can't
        # rely on the fact that select() is interrupted with EINTR. Instead,
        # we'll set up a wake-up file descriptor below.

        # Create a new pipe and apply the non-blocking flag (required for
        # set_wakeup_fd).
        pipe_r, pipe_w = os.pipe()
        flags = fcntl.fcntl(pipe_w, fcntl.F_GETFL, 0)
        flags = flags | os.O_NONBLOCK
        fcntl.fcntl(pipe_w, fcntl.F_SETFL, flags)

        # A byte will be written to pipe_w if a signal occurs (and can be
        # read from pipe_r).
        old_wakeup_fd = signal.set_wakeup_fd(pipe_w)

        def check_child_exit():
            """
            Do a non-blocking check to see if the child process exited.
            Returns None if the process is still running, or the exit code
            value of the child process.
            """
            try:
                pid, return_code = os.waitpid(child_pid, os.WNOHANG)
                if pid != 0:
                    # The child process is done.
                    return return_code
            except OSError as e:
                # Of course EINTR can happen if the child process exits
                # while we're checking whether it exited. In this case it
                # should be safe to retry.
                if e.errno == errno.EINTR:
                    return check_child_exit()
                else:
                    raise

        # Wait for the child to exit and perform a periodic heartbeat.
        # We check for the child twice in this loop so that we avoid
        # unnecessary waiting if the child exited just before entering
        # the while loop or while renewing heartbeat/locks.
        while True:
            return_code = check_child_exit()
            if return_code is not None:
                break

            # Wait until the timeout or a signal / child exit occurs.
            try:
                select.select([pipe_r], [], [],
                              self.config['ACTIVE_TASK_UPDATE_TIMER'])
            except select.error as e:
                if e.args[0] != errno.EINTR:
                    raise

            return_code = check_child_exit()
            if return_code is not None:
                break

            try:
                self._heartbeat(queue, all_task_ids)
                for lock in locks:
                    lock.renew(self.config['ACTIVE_TASK_UPDATE_TIMEOUT'])
                if queue_lock:
                    acquired, current_locks = queue_lock.renew()
                    if not acquired:
                        log.debug('queue lock renew failure')
            except OSError as e:
                # EINTR happens if the task completed. Since we're just
                # renewing locks/heartbeat it's okay if we get interrupted.
                if e.errno != errno.EINTR:
                    raise

        # Restore signals / clean up
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)
        signal.set_wakeup_fd(old_wakeup_fd)
        os.close(pipe_r)
        os.close(pipe_w)

        success = (return_code == 0)

    return success

Executes the given tasks. Returns a boolean indicating whether the tasks were executed successfully.

Below is the instruction that describes the task:
### Input:
Executes the given tasks. Returns a boolean indicating whether the tasks were executed successfully.
### Response:
def _execute(self, queue, tasks, log, locks, queue_lock, all_task_ids):
    """
    Executes the given tasks. Returns a boolean indicating whether
    the tasks were executed successfully.
    """
    # The tasks must use the same function.
    assert len(tasks)
    task_func = tasks[0].serialized_func
    assert all([task_func == task.serialized_func for task in tasks[1:]])

    # Before executing periodic tasks, queue them for the next period.
    if task_func in self.tiger.periodic_task_funcs:
        tasks[0]._queue_for_next_period()

    with g_fork_lock:
        child_pid = os.fork()

    if child_pid == 0:
        # Child process
        log = log.bind(child_pid=os.getpid())

        # Disconnect the Redis connection inherited from the main process.
        # Note that this doesn't disconnect the socket in the main process.
        self.connection.connection_pool.disconnect()

        random.seed()

        # Ignore Ctrl+C in the child so we don't abort the job -- the main
        # process already takes care of a graceful shutdown.
        signal.signal(signal.SIGINT, signal.SIG_IGN)

        with WorkerContextManagerStack(self.config['CHILD_CONTEXT_MANAGERS']):
            success = self._execute_forked(tasks, log)

        # Wait for any threads that might be running in the child, just
        # like sys.exit() would. Note we don't call sys.exit() directly
        # because it would perform additional cleanup (e.g. calling atexit
        # handlers twice). See also: https://bugs.python.org/issue18966
        threading._shutdown()

        os._exit(int(not success))
    else:
        # Main process
        log = log.bind(child_pid=child_pid)
        for task in tasks:
            log.info('processing', func=task_func, task_id=task.id,
                     params={'args': task.args, 'kwargs': task.kwargs})

        # Attach a signal handler to SIGCHLD (sent when the child process
        # exits) so we can capture it.
        signal.signal(signal.SIGCHLD, sigchld_handler)

        # Since newer Python versions retry interrupted system calls we can't
        # rely on the fact that select() is interrupted with EINTR. Instead,
        # we'll set up a wake-up file descriptor below.

        # Create a new pipe and apply the non-blocking flag (required for
        # set_wakeup_fd).
        pipe_r, pipe_w = os.pipe()
        flags = fcntl.fcntl(pipe_w, fcntl.F_GETFL, 0)
        flags = flags | os.O_NONBLOCK
        fcntl.fcntl(pipe_w, fcntl.F_SETFL, flags)

        # A byte will be written to pipe_w if a signal occurs (and can be
        # read from pipe_r).
        old_wakeup_fd = signal.set_wakeup_fd(pipe_w)

        def check_child_exit():
            """
            Do a non-blocking check to see if the child process exited.
            Returns None if the process is still running, or the exit code
            value of the child process.
            """
            try:
                pid, return_code = os.waitpid(child_pid, os.WNOHANG)
                if pid != 0:
                    # The child process is done.
                    return return_code
            except OSError as e:
                # Of course EINTR can happen if the child process exits
                # while we're checking whether it exited. In this case it
                # should be safe to retry.
                if e.errno == errno.EINTR:
                    return check_child_exit()
                else:
                    raise

        # Wait for the child to exit and perform a periodic heartbeat.
        # We check for the child twice in this loop so that we avoid
        # unnecessary waiting if the child exited just before entering
        # the while loop or while renewing heartbeat/locks.
        while True:
            return_code = check_child_exit()
            if return_code is not None:
                break

            # Wait until the timeout or a signal / child exit occurs.
            try:
                select.select([pipe_r], [], [],
                              self.config['ACTIVE_TASK_UPDATE_TIMER'])
            except select.error as e:
                if e.args[0] != errno.EINTR:
                    raise

            return_code = check_child_exit()
            if return_code is not None:
                break

            try:
                self._heartbeat(queue, all_task_ids)
                for lock in locks:
                    lock.renew(self.config['ACTIVE_TASK_UPDATE_TIMEOUT'])
                if queue_lock:
                    acquired, current_locks = queue_lock.renew()
                    if not acquired:
                        log.debug('queue lock renew failure')
            except OSError as e:
                # EINTR happens if the task completed. Since we're just
                # renewing locks/heartbeat it's okay if we get interrupted.
                if e.errno != errno.EINTR:
                    raise

        # Restore signals / clean up
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)
        signal.set_wakeup_fd(old_wakeup_fd)
        os.close(pipe_r)
        os.close(pipe_w)

        success = (return_code == 0)

    return success
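The core of the parent-side wait is `os.waitpid` with `os.WNOHANG`, retried on `EINTR`. A minimal POSIX-only sketch of that piece (hypothetical helper name; it polls with a sleep instead of the pipe/`set_wakeup_fd` machinery, and uses `os.waitstatus_to_exitcode`, available since Python 3.9):

```python
import errno
import os
import time

def wait_nonblocking(child_pid, poll=0.01, timeout=5.0):
    """Poll for a child's exit status without blocking, retrying on EINTR.

    Returns the child's exit code, or None if it is still running when the
    timeout elapses -- the same contract as check_child_exit above, plus a
    simple polling loop in place of select() on a wake-up pipe.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            pid, status = os.waitpid(child_pid, os.WNOHANG)
        except OSError as e:
            if e.errno == errno.EINTR:
                continue  # interrupted by a signal; just retry
            raise
        if pid != 0:
            # The child process is done.
            return os.waitstatus_to_exitcode(status)
        time.sleep(poll)
    return None
```

The real worker avoids the sleep-poll by `select()`ing on a wake-up file descriptor so that SIGCHLD ends the wait immediately.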
def HEADING(txt=None, c="#"):
    """
    Prints a message to stdout with #### surrounding it. This is useful
    for nosetests to better distinguish them.

    :param c: uses the given char to wrap the header
    :param txt: a text message to be printed
    :type txt: string
    """
    frame = inspect.getouterframes(inspect.currentframe())
    filename = frame[1][1].replace(os.getcwd(), "")
    line = frame[1][2] - 1
    method = frame[1][3]
    msg = "{}\n# {} {} {}".format(txt, method, filename, line)
    print()
    banner(msg, c=c)

Prints a message to stdout with #### surrounding it. This is useful for nosetests to better distinguish them.

:param c: uses the given char to wrap the header
:param txt: a text message to be printed
:type txt: string

Below is the instruction that describes the task:
### Input:
Prints a message to stdout with #### surrounding it. This is useful for nosetests to better distinguish them.

:param c: uses the given char to wrap the header
:param txt: a text message to be printed
:type txt: string
### Response:
def HEADING(txt=None, c="#"):
    """
    Prints a message to stdout with #### surrounding it. This is useful
    for nosetests to better distinguish them.

    :param c: uses the given char to wrap the header
    :param txt: a text message to be printed
    :type txt: string
    """
    frame = inspect.getouterframes(inspect.currentframe())
    filename = frame[1][1].replace(os.getcwd(), "")
    line = frame[1][2] - 1
    method = frame[1][3]
    msg = "{}\n# {} {} {}".format(txt, method, filename, line)
    print()
    banner(msg, c=c)
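`HEADING` uses `inspect.getouterframes` to report its caller's function, file, and line. That frame-walking step can be demonstrated on its own (hypothetical helper name; frame index 1 is the immediate caller, and each `FrameInfo` entry unpacks as frame, filename, lineno, function, ...):

```python
import inspect
import os

def caller_info():
    """Return (function name, file basename, line number) of our caller."""
    # Index 0 is this frame; index 1 is whoever called caller_info().
    outer = inspect.getouterframes(inspect.currentframe())[1]
    return outer[3], os.path.basename(outer[1]), outer[2]
```

This is why `HEADING` can print where it was invoked from without taking any location arguments.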
def append_skipped_rules(pyyaml_data, file_text, file_type):
    """
    Uses ruamel.yaml to parse comments then adds a skipped_rules
    list to the task (or meta yaml block)
    """
    yaml = ruamel.yaml.YAML()
    ruamel_data = yaml.load(file_text)

    if file_type in ('tasks', 'handlers'):
        ruamel_tasks = ruamel_data
        pyyaml_tasks = pyyaml_data
    elif file_type == 'playbook':
        try:
            ruamel_tasks = []
            pyyaml_tasks = []
            for ruamel_play, pyyaml_play in zip(ruamel_data, pyyaml_data):
                ruamel_tasks.extend(ruamel_play.get('tasks'))
                pyyaml_tasks.extend(pyyaml_play.get('tasks'))
        except (AttributeError, TypeError):
            return pyyaml_data
    elif file_type == 'meta':
        if not isinstance(pyyaml_data, list):
            return pyyaml_data
        ruamel_tasks = [ruamel_data]
        pyyaml_tasks = pyyaml_data
    else:
        return pyyaml_data

    if len(ruamel_tasks) != len(pyyaml_tasks):
        return pyyaml_data

    for ruamel_task, pyyaml_task in zip(ruamel_tasks, pyyaml_tasks):
        skipped_rules = _get_rule_skips_from_task(ruamel_task)
        if skipped_rules:
            pyyaml_task['skipped_rules'] = skipped_rules

    return pyyaml_data

Uses ruamel.yaml to parse comments then adds a skipped_rules list to the task (or meta yaml block)

Below is the instruction that describes the task:
### Input:
Uses ruamel.yaml to parse comments then adds a skipped_rules list to the task (or meta yaml block)
### Response:
def append_skipped_rules(pyyaml_data, file_text, file_type):
    """
    Uses ruamel.yaml to parse comments then adds a skipped_rules
    list to the task (or meta yaml block)
    """
    yaml = ruamel.yaml.YAML()
    ruamel_data = yaml.load(file_text)

    if file_type in ('tasks', 'handlers'):
        ruamel_tasks = ruamel_data
        pyyaml_tasks = pyyaml_data
    elif file_type == 'playbook':
        try:
            ruamel_tasks = []
            pyyaml_tasks = []
            for ruamel_play, pyyaml_play in zip(ruamel_data, pyyaml_data):
                ruamel_tasks.extend(ruamel_play.get('tasks'))
                pyyaml_tasks.extend(pyyaml_play.get('tasks'))
        except (AttributeError, TypeError):
            return pyyaml_data
    elif file_type == 'meta':
        if not isinstance(pyyaml_data, list):
            return pyyaml_data
        ruamel_tasks = [ruamel_data]
        pyyaml_tasks = pyyaml_data
    else:
        return pyyaml_data

    if len(ruamel_tasks) != len(pyyaml_tasks):
        return pyyaml_data

    for ruamel_task, pyyaml_task in zip(ruamel_tasks, pyyaml_tasks):
        skipped_rules = _get_rule_skips_from_task(ruamel_task)
        if skipped_rules:
            pyyaml_task['skipped_rules'] = skipped_rules

    return pyyaml_data
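`_get_rule_skips_from_task` is not shown here; in ansible-lint the skip markers come from `# noqa` trailer comments walked via ruamel.yaml's comment tokens. A deliberately simplified text-level sketch of that idea (hypothetical function; it scans a raw line instead of ruamel comment tokens):

```python
def get_rule_skips_from_line(line):
    """Extract rule ids from a '# noqa <id> <id>' trailer, if present.

    A simplified stand-in for ansible-lint's comment inspection, which
    actually walks ruamel.yaml's parsed comment tokens rather than text.
    """
    _, sep, trailer = line.partition("# noqa")
    if not sep:
        return []
    return trailer.split()
```

Records with a non-empty result would get a `skipped_rules` entry, as in the loop above.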
def work_get(self, wallet, account): """ Retrieves work for **account** in **wallet** .. enable_control required .. version 8.0 required :param wallet: Wallet to get account work for :type wallet: str :param account: Account to get work for :type account: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.work_get( ... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F", ... account="xrb_1111111111111111111111111111111111111111111111111111hifc8npp" ... ) "432e5cf728c90f4f" """ wallet = self._process_value(wallet, 'wallet') account = self._process_value(account, 'account') payload = {"wallet": wallet, "account": account} resp = self.call('work_get', payload) return resp['work']
Retrieves work for **account** in **wallet** .. enable_control required .. version 8.0 required :param wallet: Wallet to get account work for :type wallet: str :param account: Account to get work for :type account: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.work_get( ... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F", ... account="xrb_1111111111111111111111111111111111111111111111111111hifc8npp" ... ) "432e5cf728c90f4f"
def validate(self, meta, val): """Validate an account_id""" val = string_or_int_as_string_spec().normalise(meta, val) if not regexes['amazon_account_id'].match(val): raise BadOption("Account id must match a particular regex", got=val, should_match=regexes['amazon_account_id'].pattern) return val
Validate an account_id
def may_be_null_is_nullable(): """If may_be_null returns nullable or if NULL can be passed in. This can still be wrong if the specific typelib is older than the linked libgirepository. https://bugzilla.gnome.org/show_bug.cgi?id=660879#c47 """ repo = GIRepository() repo.require("GLib", "2.0", 0) info = repo.find_by_name("GLib", "spawn_sync") # this argument is (allow-none) and can never be (nullable) return not info.get_arg(8).may_be_null
If may_be_null returns nullable or if NULL can be passed in. This can still be wrong if the specific typelib is older than the linked libgirepository. https://bugzilla.gnome.org/show_bug.cgi?id=660879#c47
def safedata(self, data, cdata=True):
    r"""Convert xml special chars to entities.

    :param data: the data to be made safe.
    :param cdata: whether to use cdata. Default:``True``.
        If not, use :func:`cgi.escape` to convert data.
    :type cdata: bool
    :rtype: str
    """
    safe = ('<![CDATA[%s]]>' % data) if cdata else cgi.escape(str(data), True)
    return safe
r"""Convert xml special chars to entities.

:param data: the data to be made safe.
:param cdata: whether to use cdata. Default:``True``.
    If not, use :func:`cgi.escape` to convert data.
:type cdata: bool
:rtype: str
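A standalone sketch of the two `safedata` paths. Note that `cgi.escape` was deprecated and removed in Python 3.8; `html.escape` is used here as its modern stand-in (that substitution is an assumption of this sketch, not part of the original code):

```python
# Standalone sketch of the two safedata() code paths.
# Assumption: html.escape stands in for cgi.escape (removed in Python 3.8).
from html import escape


def safedata(data, cdata=True):
    """Wrap data in a CDATA section, or entity-escape it."""
    return ('<![CDATA[%s]]>' % data) if cdata else escape(str(data), quote=True)


print(safedata('a < "b"'))               # <![CDATA[a < "b"]]>
print(safedata('a < "b"', cdata=False))  # a &lt; &quot;b&quot;
```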
def render3d(self,view=None): """ Renders the world in 3d-mode. If you want to render custom terrain, you may override this method. Be careful that you still call the original method or else actors may not be rendered. """ for actor in self.actors.values(): actor.render(view)
Renders the world in 3d-mode. If you want to render custom terrain, you may override this method. Be careful that you still call the original method or else actors may not be rendered.
def authenticate(self, username: str, password: str) -> bool:
    """Do an Authenticate request and save the cookie returned to be
    used on the following requests.
    Return True if the request was successful
    """
    self.username = username
    self.password = password
    auth_payload = """<authenticate1 xmlns=\"utcs\"
                   xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\">
                   <password>{password}</password>
                   <username>{username}</username>
                   <application>treeview</application>
                   </authenticate1>"""
    payload = auth_payload.format(password=self.password,
                                  username=self.username)
    xdoc = self.connection.soap_action(
        '/ws/AuthenticationService', 'authenticate', payload)
    if xdoc:
        isok = xdoc.find(
            './SOAP-ENV:Body/ns1:authenticate2/ns1:loginWasSuccessful',
            IHCSoapClient.ihcns)
        return isok.text == 'true'
    return False
Do an Authenticate request and save the cookie returned to be used on the following requests. Return True if the request was successful
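The namespace-qualified `find` at the end of `authenticate` can be exercised offline with `xml.etree.ElementTree` against a hand-written SOAP-style response instead of a live IHC controller (the `SOAP-ENV` URI and the exact response shape below are assumptions for illustration):

```python
# Offline sketch of the namespace-qualified lookup used in authenticate().
# The SOAP-ENV URI and response layout are fabricated for the example.
import xml.etree.ElementTree as ET

ihcns = {'SOAP-ENV': 'http://schemas.xmlsoap.org/soap/envelope/',
         'ns1': 'utcs'}
xdoc = ET.fromstring(
    '<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">'
    '<SOAP-ENV:Body><ns1:authenticate2 xmlns:ns1="utcs">'
    '<ns1:loginWasSuccessful>true</ns1:loginWasSuccessful>'
    '</ns1:authenticate2></SOAP-ENV:Body></SOAP-ENV:Envelope>'
)
isok = xdoc.find(
    './SOAP-ENV:Body/ns1:authenticate2/ns1:loginWasSuccessful', ihcns)
print(isok.text == 'true')  # True
```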
def zero_cluster(name): ''' Reset performance statistics to zero across the cluster. .. code-block:: yaml zero_ats_cluster: trafficserver.zero_cluster ''' ret = {'name': name, 'changes': {}, 'result': None, 'comment': ''} if __opts__['test']: ret['comment'] = 'Zeroing cluster statistics' return ret __salt__['trafficserver.zero_cluster']() ret['result'] = True ret['comment'] = 'Zeroed cluster statistics' return ret
Reset performance statistics to zero across the cluster. .. code-block:: yaml zero_ats_cluster: trafficserver.zero_cluster
def find_library_full_path(name): """ Similar to `from ctypes.util import find_library`, but try to return full path if possible. """ from ctypes.util import find_library if os.name == "posix" and sys.platform == "darwin": # on Mac, ctypes already returns full path return find_library(name) def _use_proc_maps(name): """ Find so from /proc/pid/maps Only works with libraries that has already been loaded. But this is the most accurate method -- it finds the exact library that's being used. """ procmap = os.path.join('/proc', str(os.getpid()), 'maps') if not os.path.isfile(procmap): return None with open(procmap, 'r') as f: for line in f: line = line.strip().split(' ') sofile = line[-1] basename = os.path.basename(sofile) if 'lib' + name + '.so' in basename: if os.path.isfile(sofile): return os.path.realpath(sofile) # The following two methods come from https://github.com/python/cpython/blob/master/Lib/ctypes/util.py def _use_ld(name): """ Find so with `ld -lname -Lpath`. It will search for files in LD_LIBRARY_PATH, but not in ldconfig. """ cmd = "ld -t -l{} -o {}".format(name, os.devnull) ld_lib_path = os.environ.get('LD_LIBRARY_PATH', '') for d in ld_lib_path.split(':'): cmd = cmd + " -L " + d result, ret = subproc_call(cmd + '|| true') expr = r'[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name) res = re.search(expr, result.decode('utf-8')) if res: res = res.group(0) if not os.path.isfile(res): return None return os.path.realpath(res) def _use_ldconfig(name): """ Find so in `ldconfig -p`. It does not handle LD_LIBRARY_PATH. 
""" with change_env('LC_ALL', 'C'), change_env('LANG', 'C'): ldconfig, ret = subproc_call("ldconfig -p") ldconfig = ldconfig.decode('utf-8') if ret != 0: return None expr = r'\s+(lib%s\.[^\s]+)\s+\(.*=>\s+(.*)' % (re.escape(name)) res = re.search(expr, ldconfig) if not res: return None else: ret = res.group(2) return os.path.realpath(ret) if sys.platform.startswith('linux'): return _use_proc_maps(name) or _use_ld(name) or _use_ldconfig(name) or find_library(name) return find_library(name)
Similar to `from ctypes.util import find_library`, but tries to return the full path if possible.
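The regex that `_use_ld` applies to the `ld -t` output can be checked in isolation; the sample output line below is fabricated for the example:

```python
# Offline illustration of the regex _use_ld() applies to `ld -t` output.
# The sample `result` string is fabricated for the example.
import re

name = 'z'
result = "ld: mode elf_x86_64\n/usr/lib/x86_64-linux-gnu/libz.so.1\n"
expr = r'[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name)
match = re.search(expr, result)
print(match.group(0))  # /usr/lib/x86_64-linux-gnu/libz.so.1
```

The character class excludes whitespace and parentheses, so the match extends over the whole whitespace-delimited path containing `lib<name>.`.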
def rename(self, new): """ .. seealso:: :func:`os.rename` """ os.rename(self, new) return self._next_class(new)
.. seealso:: :func:`os.rename`
def get_network_by_device(vm, device, pyvmomi_service, logger):
    """
    Get a Network connected to a particular Device (vNIC)
    @see https://github.com/vmware/pyvmomi/blob/master/docs/vim/dvs/PortConnection.rst
    :param vm:
    :param device: <vim.vm.device.VirtualVmxnet3> instance of adapter
    :param pyvmomi_service:
    :param logger:
    :return: <vim Network Obj or None>
    """
    try:
        backing = device.backing
        if hasattr(backing, 'network'):
            return backing.network
        elif hasattr(backing, 'port') and hasattr(backing.port, 'portgroupKey'):
            return VNicService._network_get_network_by_connection(vm, backing.port, pyvmomi_service)
    except Exception:
        logger.debug(u"Cannot determine which Network is connected to device {}".format(device))
    return None
Get a Network connected to a particular Device (vNIC) @see https://github.com/vmware/pyvmomi/blob/master/docs/vim/dvs/PortConnection.rst :param vm: :param device: <vim.vm.device.VirtualVmxnet3> instance of adapter :param pyvmomi_service: :param logger: :return: <vim Network Obj or None>
def progress(self):
    """
    Returns a string representation of the progress of the search
    such as "1234/5000", which refers to the number of results retrieved
    / the total number of results found
    """
    if self.response is None:
        return "Query has not been executed"
    return "{}/{}".format(len(self.articles), self.response.numFound)
Returns a string representation of the progress of the search such as "1234/5000", which refers to the number of results retrieved / the total number of results found
def show_buff(self, pos): """ Return the display of the instruction :rtype: string """ buff = self.get_name() + " " buff += "%x:" % self.first_key for i in self.targets: buff += " %x" % i return buff
Return the display of the instruction :rtype: string
def remove(self, user, status=None, symmetrical=False): """ Remove a relationship from one user to another, with the same caveats and behavior as adding a relationship. """ if not status: status = RelationshipStatus.objects.following() res = Relationship.objects.filter( from_user=self.instance, to_user=user, status=status, site__pk=settings.SITE_ID ).delete() if symmetrical: return (res, user.relationships.remove(self.instance, status, False)) else: return res
Remove a relationship from one user to another, with the same caveats and behavior as adding a relationship.
def _pload(offset, size): """ Generic parameter loading. Emmits output code for setting IX at the right location. size = Number of bytes to load: 1 => 8 bit value 2 => 16 bit value / string 4 => 32 bit value / f16 value 5 => 40 bit value """ output = [] indirect = offset[0] == '*' if indirect: offset = offset[1:] I = int(offset) if I >= 0: # If it is a parameter, round up to even bytes I += 4 + (size % 2 if not indirect else 0) # Return Address + "push IX" ix_changed = (indirect or size < 5) and (abs(I) + size) > 127 # Offset > 127 bytes. Need to change IX if ix_changed: # more than 1 byte output.append('push ix') output.append('ld de, %i' % I) output.append('add ix, de') I = 0 elif size == 5: # For floating point numbers we always use DE as IX offset output.append('push ix') output.append('pop hl') output.append('ld de, %i' % I) output.append('add hl, de') I = 0 if indirect: output.append('ld h, (ix%+i)' % (I + 1)) output.append('ld l, (ix%+i)' % I) if size == 1: output.append('ld a, (hl)') elif size == 2: output.append('ld c, (hl)') output.append('inc hl') output.append('ld h, (hl)') output.append('ld l, c') elif size == 4: output.append('call __ILOAD32') REQUIRES.add('iload32.asm') else: # Floating point output.append('call __ILOADF') REQUIRES.add('iloadf.asm') else: if size == 1: output.append('ld a, (ix%+i)' % I) else: if size <= 4: # 16/32bit integer, low part output.append('ld l, (ix%+i)' % I) output.append('ld h, (ix%+i)' % (I + 1)) if size > 2: # 32 bit integer, high part output.append('ld e, (ix%+i)' % (I + 2)) output.append('ld d, (ix%+i)' % (I + 3)) else: # Floating point output.append('call __PLOADF') REQUIRES.add('ploadf.asm') if ix_changed: output.append('pop ix') return output
Generic parameter loading. Emits output code for setting IX at the right location. size = Number of bytes to load: 1 => 8 bit value 2 => 16 bit value / string 4 => 32 bit value / f16 value 5 => 40 bit value
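The IX-relative operands emitted above rely on Python's `%+i` format specifier, which always writes an explicit sign, so positive and negative offsets both produce valid Z80 syntax:

```python
# The %+i conversion always emits a sign, so both offsets parse as Z80
# IX-relative operands without any extra sign handling.
print('ld a, (ix%+i)' % 5)   # ld a, (ix+5)
print('ld a, (ix%+i)' % -3)  # ld a, (ix-3)
```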
def find_branches(self): """Find information about the branches in the repository.""" for prefix, name, revision_id in self.find_branches_raw(): yield Revision( branch=name, repository=self, revision_id=revision_id, )
Find information about the branches in the repository.
async def controller(self): """Return a Connection to the controller at self.endpoint """ return await Connection.connect( self.endpoint, username=self.username, password=self.password, cacert=self.cacert, bakery_client=self.bakery_client, loop=self.loop, max_frame_size=self.max_frame_size, )
Return a Connection to the controller at self.endpoint
def _scobit_transform_deriv_v(systematic_utilities, alt_IDs, rows_to_alts, shape_params, output_array=None, *args, **kwargs): """ Parameters ---------- systematic_utilities : 1D ndarray. All elements should be ints, floats, or longs. Should contain the systematic utilities of each observation per available alternative. Note that this vector is formed by the dot product of the design matrix with the vector of utility coefficients. alt_IDs : 1D ndarray. All elements should be ints. There should be one row per obervation per available alternative for the given observation. Elements denote the alternative corresponding to the given row of the design matrix. rows_to_alts : 2D scipy sparse matrix. There should be one row per observation per available alternative and one column per possible alternative. This matrix maps the rows of the design matrix to the possible alternatives for this dataset. All elements should be zeros or ones. shape_params : None or 1D ndarray. If an array, each element should be an int, float, or long. There should be one value per shape parameter of the model being used. output_array : 2D scipy sparse array. The array should be square and it should have `systematic_utilities.shape[0]` rows. It's data is to be replaced with the correct derivatives of the transformation vector with respect to the vector of systematic utilities. This argument is NOT optional. Returns ------- output_array : 2D scipy sparse array. The shape of the returned array is `(systematic_utilities.shape[0], systematic_utilities.shape[0])`. The returned array specifies the derivative of the transformed utilities with respect to the systematic utilities. All elements are ints, floats, or longs. """ # Note the np.exp is needed because the raw curvature params are the log # of the 'natural' curvature params. 
This is to ensure the natural shape # params are always positive curve_shapes = np.exp(shape_params) curve_shapes[np.isposinf(curve_shapes)] = max_comp_value long_curve_shapes = rows_to_alts.dot(curve_shapes) # Generate the needed terms for the derivative of the transformation with # respect to the systematic utility and guard against underflow or overflow exp_neg_v = np.exp(-1 * systematic_utilities) powered_term = np.power(1 + exp_neg_v, long_curve_shapes) small_powered_term = np.power(1 + exp_neg_v, long_curve_shapes - 1) derivs = (long_curve_shapes * exp_neg_v * small_powered_term / (powered_term - 1)) # Use L'Hopitals rule to deal with overflow from v --> -inf # From plots, the assignment below may also correctly handle cases where we # have overflow from moderate v (say |v| <= 10) and large shape parameters. too_big_idx = (np.isposinf(derivs) + np.isposinf(exp_neg_v) + np.isposinf(powered_term) + np.isposinf(small_powered_term)).astype(bool) derivs[too_big_idx] = long_curve_shapes[too_big_idx] # Use L'Hopitals rule to deal with underflow from v --> inf too_small_idx = np.where((exp_neg_v == 0) | (powered_term - 1 == 0)) derivs[too_small_idx] = 1.0 # Assign the calculated derivatives to the output array output_array.data = derivs assert output_array.shape == (systematic_utilities.shape[0], systematic_utilities.shape[0]) # Return the matrix of dh_dv. Note the off-diagonal entries are zero # because each transformation only depends on its value of v and no others return output_array
Parameters
----------
systematic_utilities : 1D ndarray.
    All elements should be ints, floats, or longs. Should contain the
    systematic utilities of each observation per available alternative.
    Note that this vector is formed by the dot product of the design
    matrix with the vector of utility coefficients.
alt_IDs : 1D ndarray.
    All elements should be ints. There should be one row per observation
    per available alternative for the given observation. Elements denote
    the alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
    There should be one row per observation per available alternative
    and one column per possible alternative. This matrix maps the rows
    of the design matrix to the possible alternatives for this dataset.
    All elements should be zeros or ones.
shape_params : None or 1D ndarray.
    If an array, each element should be an int, float, or long. There
    should be one value per shape parameter of the model being used.
output_array : 2D scipy sparse array.
    The array should be square and it should have
    `systematic_utilities.shape[0]` rows. It's data is to be replaced
    with the correct derivatives of the transformation vector with
    respect to the vector of systematic utilities. This argument is NOT
    optional.

Returns
-------
output_array : 2D scipy sparse array.
    The shape of the returned array is `(systematic_utilities.shape[0],
    systematic_utilities.shape[0])`. The returned array specifies the
    derivative of the transformed utilities with respect to the
    systematic utilities. All elements are ints, floats, or longs.
Below is the the instruction that describes the task: ### Input: Parameters ---------- systematic_utilities : 1D ndarray. All elements should be ints, floats, or longs. Should contain the systematic utilities of each observation per available alternative. Note that this vector is formed by the dot product of the design matrix with the vector of utility coefficients. alt_IDs : 1D ndarray. All elements should be ints. There should be one row per obervation per available alternative for the given observation. Elements denote the alternative corresponding to the given row of the design matrix. rows_to_alts : 2D scipy sparse matrix. There should be one row per observation per available alternative and one column per possible alternative. This matrix maps the rows of the design matrix to the possible alternatives for this dataset. All elements should be zeros or ones. shape_params : None or 1D ndarray. If an array, each element should be an int, float, or long. There should be one value per shape parameter of the model being used. output_array : 2D scipy sparse array. The array should be square and it should have `systematic_utilities.shape[0]` rows. It's data is to be replaced with the correct derivatives of the transformation vector with respect to the vector of systematic utilities. This argument is NOT optional. Returns ------- output_array : 2D scipy sparse array. The shape of the returned array is `(systematic_utilities.shape[0], systematic_utilities.shape[0])`. The returned array specifies the derivative of the transformed utilities with respect to the systematic utilities. All elements are ints, floats, or longs. ### Response: def _scobit_transform_deriv_v(systematic_utilities, alt_IDs, rows_to_alts, shape_params, output_array=None, *args, **kwargs): """ Parameters ---------- systematic_utilities : 1D ndarray. All elements should be ints, floats, or longs. Should contain the systematic utilities of each observation per available alternative. 
Note that this vector is formed by the dot product of the design matrix with the vector of utility coefficients. alt_IDs : 1D ndarray. All elements should be ints. There should be one row per obervation per available alternative for the given observation. Elements denote the alternative corresponding to the given row of the design matrix. rows_to_alts : 2D scipy sparse matrix. There should be one row per observation per available alternative and one column per possible alternative. This matrix maps the rows of the design matrix to the possible alternatives for this dataset. All elements should be zeros or ones. shape_params : None or 1D ndarray. If an array, each element should be an int, float, or long. There should be one value per shape parameter of the model being used. output_array : 2D scipy sparse array. The array should be square and it should have `systematic_utilities.shape[0]` rows. It's data is to be replaced with the correct derivatives of the transformation vector with respect to the vector of systematic utilities. This argument is NOT optional. Returns ------- output_array : 2D scipy sparse array. The shape of the returned array is `(systematic_utilities.shape[0], systematic_utilities.shape[0])`. The returned array specifies the derivative of the transformed utilities with respect to the systematic utilities. All elements are ints, floats, or longs. """ # Note the np.exp is needed because the raw curvature params are the log # of the 'natural' curvature params. 
This is to ensure the natural shape # params are always positive curve_shapes = np.exp(shape_params) curve_shapes[np.isposinf(curve_shapes)] = max_comp_value long_curve_shapes = rows_to_alts.dot(curve_shapes) # Generate the needed terms for the derivative of the transformation with # respect to the systematic utility and guard against underflow or overflow exp_neg_v = np.exp(-1 * systematic_utilities) powered_term = np.power(1 + exp_neg_v, long_curve_shapes) small_powered_term = np.power(1 + exp_neg_v, long_curve_shapes - 1) derivs = (long_curve_shapes * exp_neg_v * small_powered_term / (powered_term - 1)) # Use L'Hopitals rule to deal with overflow from v --> -inf # From plots, the assignment below may also correctly handle cases where we # have overflow from moderate v (say |v| <= 10) and large shape parameters. too_big_idx = (np.isposinf(derivs) + np.isposinf(exp_neg_v) + np.isposinf(powered_term) + np.isposinf(small_powered_term)).astype(bool) derivs[too_big_idx] = long_curve_shapes[too_big_idx] # Use L'Hopitals rule to deal with underflow from v --> inf too_small_idx = np.where((exp_neg_v == 0) | (powered_term - 1 == 0)) derivs[too_small_idx] = 1.0 # Assign the calculated derivatives to the output array output_array.data = derivs assert output_array.shape == (systematic_utilities.shape[0], systematic_utilities.shape[0]) # Return the matrix of dh_dv. Note the off-diagonal entries are zero # because each transformation only depends on its value of v and no others return output_array
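The overflow and underflow guards above can be illustrated with a scalar sketch. This is a hypothetical helper (`scobit_dh_dv` is not part of the original code) showing the same L'Hopital limits: as v -> -inf the derivative tends to the shape parameter, as v -> +inf it tends to 1:

```python
import math

def scobit_dh_dv(v, shape):
    """Derivative of the Scobit transform w.r.t. v, with limit guards.

    deriv = shape * e^{-v} * (1 + e^{-v})^{shape - 1} / ((1 + e^{-v})^shape - 1)
    """
    try:
        exp_neg_v = math.exp(-v)
        powered = (1.0 + exp_neg_v) ** shape
        small_powered = (1.0 + exp_neg_v) ** (shape - 1.0)
        denom = powered - 1.0
        if exp_neg_v == 0.0 or denom == 0.0:
            return 1.0           # underflow guard: v -> +inf limit
        return shape * exp_neg_v * small_powered / denom
    except OverflowError:
        return shape             # overflow guard: v -> -inf limit
```

The guards play the role of the `too_big_idx` / `too_small_idx` assignments in the vectorised code.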
def determine_triad(triad, shorthand=False, no_inversions=False, placeholder=None): """Name the triad; return answers in a list. The third argument should not be given. If shorthand is True the answers will be in abbreviated form. This function can determine major, minor, diminished and suspended triads. Also knows about invertions. Examples: >>> determine_triad(['A', 'C', 'E']) 'A minor triad' >>> determine_triad(['C', 'E', 'A']) 'A minor triad, first inversion' >>> determine_triad(['A', 'C', 'E'], True) 'Am' """ if len(triad) != 3: # warning: raise exception: not a triad return False def inversion_exhauster(triad, shorthand, tries, result): """Run tries every inversion and save the result.""" intval1 = intervals.determine(triad[0], triad[1], True) intval2 = intervals.determine(triad[0], triad[2], True) def add_result(short): result.append((short, tries, triad[0])) intval = intval1 + intval2 if intval == '25': add_result('sus2') elif intval == '3b7': add_result('dom7') # changed from just '7' elif intval == '3b5': add_result('7b5') # why not b5? elif intval == '35': add_result('M') elif intval == '3#5': add_result('aug') elif intval == '36': add_result('M6') elif intval == '37': add_result('M7') elif intval == 'b3b5': add_result('dim') elif intval == 'b35': add_result('m') elif intval == 'b36': add_result('m6') elif intval == 'b3b7': add_result('m7') elif intval == 'b37': add_result('m/M7') elif intval == '45': add_result('sus4') elif intval == '5b7': add_result('m7') elif intval == '57': add_result('M7') if tries != 3 and not no_inversions: return inversion_exhauster([triad[-1]] + triad[:-1], shorthand, tries + 1, result) else: res = [] for r in result: if shorthand: res.append(r[2] + r[0]) else: res.append(r[2] + chord_shorthand_meaning[r[0]] + int_desc(r[1])) return res return inversion_exhauster(triad, shorthand, 1, [])
Name the triad; return answers in a list. The third argument should not be given. If shorthand is True the answers will be in abbreviated form. This function can determine major, minor, diminished and suspended triads. Also knows about inversions. Examples: >>> determine_triad(['A', 'C', 'E']) 'A minor triad' >>> determine_triad(['C', 'E', 'A']) 'A minor triad, first inversion' >>> determine_triad(['A', 'C', 'E'], True) 'Am'
Below is the the instruction that describes the task: ### Input: Name the triad; return answers in a list. The third argument should not be given. If shorthand is True the answers will be in abbreviated form. This function can determine major, minor, diminished and suspended triads. Also knows about invertions. Examples: >>> determine_triad(['A', 'C', 'E']) 'A minor triad' >>> determine_triad(['C', 'E', 'A']) 'A minor triad, first inversion' >>> determine_triad(['A', 'C', 'E'], True) 'Am' ### Response: def determine_triad(triad, shorthand=False, no_inversions=False, placeholder=None): """Name the triad; return answers in a list. The third argument should not be given. If shorthand is True the answers will be in abbreviated form. This function can determine major, minor, diminished and suspended triads. Also knows about invertions. Examples: >>> determine_triad(['A', 'C', 'E']) 'A minor triad' >>> determine_triad(['C', 'E', 'A']) 'A minor triad, first inversion' >>> determine_triad(['A', 'C', 'E'], True) 'Am' """ if len(triad) != 3: # warning: raise exception: not a triad return False def inversion_exhauster(triad, shorthand, tries, result): """Run tries every inversion and save the result.""" intval1 = intervals.determine(triad[0], triad[1], True) intval2 = intervals.determine(triad[0], triad[2], True) def add_result(short): result.append((short, tries, triad[0])) intval = intval1 + intval2 if intval == '25': add_result('sus2') elif intval == '3b7': add_result('dom7') # changed from just '7' elif intval == '3b5': add_result('7b5') # why not b5? 
elif intval == '35': add_result('M') elif intval == '3#5': add_result('aug') elif intval == '36': add_result('M6') elif intval == '37': add_result('M7') elif intval == 'b3b5': add_result('dim') elif intval == 'b35': add_result('m') elif intval == 'b36': add_result('m6') elif intval == 'b3b7': add_result('m7') elif intval == 'b37': add_result('m/M7') elif intval == '45': add_result('sus4') elif intval == '5b7': add_result('m7') elif intval == '57': add_result('M7') if tries != 3 and not no_inversions: return inversion_exhauster([triad[-1]] + triad[:-1], shorthand, tries + 1, result) else: res = [] for r in result: if shorthand: res.append(r[2] + r[0]) else: res.append(r[2] + chord_shorthand_meaning[r[0]] + int_desc(r[1])) return res return inversion_exhauster(triad, shorthand, 1, [])
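The inversion loop above rotates the last note to the front and retries up to three times. A minimal, self-contained sketch of that rotation pattern — the names `exhaust_inversions` and `classify` are illustrative, not part of mingus:

```python
def exhaust_inversions(notes, classify):
    """Try `classify` on each of the three inversions of a triad.

    Rotates the last note to the front between tries, mirroring
    `inversion_exhauster` above; collects (name, inversion_no) pairs.
    """
    results = []
    for inversion in range(1, 4):          # up to three rotations
        name = classify(notes)
        if name is not None:
            results.append((name, inversion))
        notes = [notes[-1]] + notes[:-1]   # rotate for the next try
    return results
```

A classifier that only recognises the root-position pattern still names the chord once the rotation reaches it, and the try counter records which inversion matched.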
def napalm_cli(task: Task, commands: List[str]) -> Result: """ Run commands on remote devices using napalm Arguments: commands: commands to execute Returns: Result object with the following attributes set: * result (``dict``): result of the commands execution """ device = task.host.get_connection("napalm", task.nornir.config) result = device.cli(commands) return Result(host=task.host, result=result)
Run commands on remote devices using napalm Arguments: commands: commands to execute Returns: Result object with the following attributes set: * result (``dict``): result of the commands execution
Below is the instruction that describes the task: ### Input: Run commands on remote devices using napalm Arguments: commands: commands to execute Returns: Result object with the following attributes set: * result (``dict``): result of the commands execution ### Response: def napalm_cli(task: Task, commands: List[str]) -> Result: """ Run commands on remote devices using napalm Arguments: commands: commands to execute Returns: Result object with the following attributes set: * result (``dict``): result of the commands execution """ device = task.host.get_connection("napalm", task.nornir.config) result = device.cli(commands) return Result(host=task.host, result=result)
def path_glob(pattern, current_dir=None): """Use pathlib for ant-like patterns, like: "**/*.py" :param pattern: File/directory pattern to use (as string). :param current_dir: Current working directory (as Path, pathlib.Path, str) :return: Resolved Path (as path.Path). """ if not current_dir: current_dir = pathlib.Path.cwd() elif not isinstance(current_dir, pathlib.Path): # -- CASE: string, path.Path (string-like) current_dir = pathlib.Path(str(current_dir)) for p in current_dir.glob(pattern): yield Path(str(p))
Use pathlib for ant-like patterns, like: "**/*.py" :param pattern: File/directory pattern to use (as string). :param current_dir: Current working directory (as Path, pathlib.Path, str) :return: Resolved Path (as path.Path).
Below is the instruction that describes the task: ### Input: Use pathlib for ant-like patterns, like: "**/*.py" :param pattern: File/directory pattern to use (as string). :param current_dir: Current working directory (as Path, pathlib.Path, str) :return: Resolved Path (as path.Path). ### Response: def path_glob(pattern, current_dir=None): """Use pathlib for ant-like patterns, like: "**/*.py" :param pattern: File/directory pattern to use (as string). :param current_dir: Current working directory (as Path, pathlib.Path, str) :return: Resolved Path (as path.Path). """ if not current_dir: current_dir = pathlib.Path.cwd() elif not isinstance(current_dir, pathlib.Path): # -- CASE: string, path.Path (string-like) current_dir = pathlib.Path(str(current_dir)) for p in current_dir.glob(pattern): yield Path(str(p))
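A quick check of the pathlib-based globbing that `path_glob` relies on, exercised against a throwaway directory tree (file names here are invented for the demonstration; `glob_relative` is a stand-in that returns `pathlib.Path` rather than `path.Path`):

```python
import pathlib
import tempfile

def glob_relative(pattern, current_dir=None):
    """Yield paths under `current_dir` matching an ant-like pattern."""
    if current_dir is None:
        current_dir = pathlib.Path.cwd()
    elif not isinstance(current_dir, pathlib.Path):
        current_dir = pathlib.Path(str(current_dir))
    for p in current_dir.glob(pattern):
        yield p

# Demonstrate with a temporary tree: pkg/a.py and pkg/b.txt.
with tempfile.TemporaryDirectory() as root:
    pkg = pathlib.Path(root) / "pkg"
    pkg.mkdir()
    (pkg / "a.py").write_text("")
    (pkg / "b.txt").write_text("")
    hits = sorted(p.name for p in glob_relative("**/*.py", root))
```

The `**/*.py` pattern recurses into subdirectories, so only the Python file is matched.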
def get_selected_elements_of_core_class(self, core_element_type): """Returns all selected elements having the specified `core_element_type` as state element class :return: Subset of the selection, only containing elements having `core_element_type` as state element class :rtype: set """ if core_element_type is Outcome: return self.outcomes elif core_element_type is InputDataPort: return self.input_data_ports elif core_element_type is OutputDataPort: return self.output_data_ports elif core_element_type is ScopedVariable: return self.scoped_variables elif core_element_type is Transition: return self.transitions elif core_element_type is DataFlow: return self.data_flows elif core_element_type is State: return self.states raise RuntimeError("Invalid core element type: " + core_element_type)
Returns all selected elements having the specified `core_element_type` as state element class :return: Subset of the selection, only containing elements having `core_element_type` as state element class :rtype: set
Below is the instruction that describes the task: ### Input: Returns all selected elements having the specified `core_element_type` as state element class :return: Subset of the selection, only containing elements having `core_element_type` as state element class :rtype: set ### Response: def get_selected_elements_of_core_class(self, core_element_type): """Returns all selected elements having the specified `core_element_type` as state element class :return: Subset of the selection, only containing elements having `core_element_type` as state element class :rtype: set """ if core_element_type is Outcome: return self.outcomes elif core_element_type is InputDataPort: return self.input_data_ports elif core_element_type is OutputDataPort: return self.output_data_ports elif core_element_type is ScopedVariable: return self.scoped_variables elif core_element_type is Transition: return self.transitions elif core_element_type is DataFlow: return self.data_flows elif core_element_type is State: return self.states raise RuntimeError("Invalid core element type: " + core_element_type)
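The if/elif chain above is a type-keyed dispatch; the same lookup can be written as a dict. A sketch with stand-in classes (the real `Selection` holds many more element sets):

```python
class Outcome: pass
class Transition: pass
class State: pass

class Selection:
    def __init__(self):
        self.outcomes = {Outcome()}
        self.transitions = set()
        self.states = set()

    def selected_of_class(self, core_element_type):
        """Dict-based equivalent of the if/elif dispatch above."""
        lookup = {
            Outcome: self.outcomes,
            Transition: self.transitions,
            State: self.states,
        }
        try:
            return lookup[core_element_type]
        except KeyError:
            raise RuntimeError(
                "Invalid core element type: %r" % core_element_type)
```

The dict keeps the valid element classes in one place, and an unknown class still raises the same `RuntimeError`.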
def removeTags(dom): """ Remove all tags from `dom` and obtain plaintext representation. Args: dom (str, obj, array): str, HTMLElement instance or array of elements. Returns: str: Plain string without tags. """ # python 2 / 3 shim try: string_type = basestring except NameError: string_type = str # initialize stack with proper value (based on dom parameter) element_stack = None if type(dom) in [list, tuple]: element_stack = dom elif isinstance(dom, HTMLElement): element_stack = dom.childs if dom.isTag() else [dom] elif isinstance(dom, string_type): element_stack = parseString(dom).childs else: element_stack = dom # remove all tags output = "" while element_stack: el = element_stack.pop(0) if not (el.isTag() or el.isComment() or not el.getTagName()): output += el.__str__() if el.childs: element_stack = el.childs + element_stack return output
Remove all tags from `dom` and obtain plaintext representation. Args: dom (str, obj, array): str, HTMLElement instance or array of elements. Returns: str: Plain string without tags.
Below is the the instruction that describes the task: ### Input: Remove all tags from `dom` and obtain plaintext representation. Args: dom (str, obj, array): str, HTMLElement instance or array of elements. Returns: str: Plain string without tags. ### Response: def removeTags(dom): """ Remove all tags from `dom` and obtain plaintext representation. Args: dom (str, obj, array): str, HTMLElement instance or array of elements. Returns: str: Plain string without tags. """ # python 2 / 3 shill try: string_type = basestring except NameError: string_type = str # initialize stack with proper value (based on dom parameter) element_stack = None if type(dom) in [list, tuple]: element_stack = dom elif isinstance(dom, HTMLElement): element_stack = dom.childs if dom.isTag() else [dom] elif isinstance(dom, string_type): element_stack = parseString(dom).childs else: element_stack = dom # remove all tags output = "" while element_stack: el = element_stack.pop(0) if not (el.isTag() or el.isComment() or not el.getTagName()): output += el.__str__() if el.childs: element_stack = el.childs + element_stack return output
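The stack walk above strips tags from an already-parsed DOM. With only the standard library, the same plaintext extraction can be sketched via `html.parser` (this is an alternative technique, not the original `HTMLElement`-based implementation):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect character data, discarding tags and comments."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_tags(html):
    """Return the plaintext of `html` with all tags removed."""
    extractor = TextExtractor()
    extractor.feed(html)
    extractor.close()
    return "".join(extractor.parts)
```

Only `handle_data` is overridden, so tags and comments fall through the parser's default no-op handlers.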
def add_attribute(self, attribute_type, attribute_value): """ Adds an attribute to a Group/Indicator or Victim Args: attribute_type: attribute_value: Returns: attribute json """ if not self.can_update(): self._tcex.handle_error(910, [self.type]) return self.tc_requests.add_attribute( self.api_type, self.api_sub_type, self.unique_id, attribute_type, attribute_value, owner=self.owner, )
Adds an attribute to a Group/Indicator or Victim Args: attribute_type: attribute_value: Returns: attribute json
Below is the instruction that describes the task: ### Input: Adds an attribute to a Group/Indicator or Victim Args: attribute_type: attribute_value: Returns: attribute json ### Response: def add_attribute(self, attribute_type, attribute_value): """ Adds an attribute to a Group/Indicator or Victim Args: attribute_type: attribute_value: Returns: attribute json """ if not self.can_update(): self._tcex.handle_error(910, [self.type]) return self.tc_requests.add_attribute( self.api_type, self.api_sub_type, self.unique_id, attribute_type, attribute_value, owner=self.owner, )
def add_section(self, section): """A block section of code to be used as substitutions :param section: A block section of code to be used as substitutions :type section: Section """ self._sections = self._ensure_append(section, self._sections)
A block section of code to be used as substitutions :param section: A block section of code to be used as substitutions :type section: Section
Below is the instruction that describes the task: ### Input: A block section of code to be used as substitutions :param section: A block section of code to be used as substitutions :type section: Section ### Response: def add_section(self, section): """A block section of code to be used as substitutions :param section: A block section of code to be used as substitutions :type section: Section """ self._sections = self._ensure_append(section, self._sections)
def get_all_items_of_offer(self, offer_id): """ Get all items of offer This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param offer_id: the offer id :return: list """ return self._iterate_through_pages( get_function=self.get_items_of_offer_per_page, resource=OFFER_ITEMS, **{'offer_id': offer_id} )
Get all items of offer This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param offer_id: the offer id :return: list
Below is the instruction that describes the task: ### Input: Get all items of offer This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param offer_id: the offer id :return: list ### Response: def get_all_items_of_offer(self, offer_id): """ Get all items of offer This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param offer_id: the offer id :return: list """ return self._iterate_through_pages( get_function=self.get_items_of_offer_per_page, resource=OFFER_ITEMS, **{'offer_id': offer_id} )
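`_iterate_through_pages` itself is not shown here; a generic sketch of that accumulate-until-empty pagination pattern, with a hypothetical `fetch_page` callable standing in for the API call:

```python
def iterate_through_pages(fetch_page):
    """Collect all items by calling `fetch_page(page)` until a page
    comes back empty. Any exception from `fetch_page` (e.g. a rate
    limit) propagates, so a failure mid-way yields nothing.
    """
    items = []
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch:
            return items
        items.extend(batch)
        page += 1
```

This is why the docstring warns that exceeding the rate limit loses everything: the partial accumulator is discarded when the exception unwinds.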
def get_statements_by_hash(hash_list, ev_limit=100, best_first=True, tries=2): """Get fully formed statements from a list of hashes. Parameters ---------- hash_list : list[int or str] A list of statement hashes. ev_limit : int or None Limit the amount of evidence returned per Statement. Default is 100. best_first : bool If True, the preassembled statements will be sorted by the amount of evidence they have, and those with the most evidence will be prioritized. When using `max_stmts`, this means you will get the "best" statements. If False, statements will be queried in arbitrary order. tries : int > 0 Set the number of times to try the query. The database often caches results, so if a query times out the first time, trying again after a timeout will often succeed fast enough to avoid a timeout. This can also help gracefully handle an unreliable connection, if you're willing to wait. Default is 2. """ if not isinstance(hash_list, list): raise ValueError("The `hash_list` input is a list, not %s." % type(hash_list)) if not hash_list: return [] if isinstance(hash_list[0], str): hash_list = [int(h) for h in hash_list] if not all([isinstance(h, int) for h in hash_list]): raise ValueError("Hashes must be ints or strings that can be " "converted into ints.") resp = submit_statement_request('post', 'from_hashes', ev_limit=ev_limit, data={'hashes': hash_list}, best_first=best_first, tries=tries) return stmts_from_json(resp.json()['statements'].values())
Get fully formed statements from a list of hashes. Parameters ---------- hash_list : list[int or str] A list of statement hashes. ev_limit : int or None Limit the amount of evidence returned per Statement. Default is 100. best_first : bool If True, the preassembled statements will be sorted by the amount of evidence they have, and those with the most evidence will be prioritized. When using `max_stmts`, this means you will get the "best" statements. If False, statements will be queried in arbitrary order. tries : int > 0 Set the number of times to try the query. The database often caches results, so if a query times out the first time, trying again after a timeout will often succeed fast enough to avoid a timeout. This can also help gracefully handle an unreliable connection, if you're willing to wait. Default is 2.
Below is the the instruction that describes the task: ### Input: Get fully formed statements from a list of hashes. Parameters ---------- hash_list : list[int or str] A list of statement hashes. ev_limit : int or None Limit the amount of evidence returned per Statement. Default is 100. best_first : bool If True, the preassembled statements will be sorted by the amount of evidence they have, and those with the most evidence will be prioritized. When using `max_stmts`, this means you will get the "best" statements. If False, statements will be queried in arbitrary order. tries : int > 0 Set the number of times to try the query. The database often caches results, so if a query times out the first time, trying again after a timeout will often succeed fast enough to avoid a timeout. This can also help gracefully handle an unreliable connection, if you're willing to wait. Default is 2. ### Response: def get_statements_by_hash(hash_list, ev_limit=100, best_first=True, tries=2): """Get fully formed statements from a list of hashes. Parameters ---------- hash_list : list[int or str] A list of statement hashes. ev_limit : int or None Limit the amount of evidence returned per Statement. Default is 100. best_first : bool If True, the preassembled statements will be sorted by the amount of evidence they have, and those with the most evidence will be prioritized. When using `max_stmts`, this means you will get the "best" statements. If False, statements will be queried in arbitrary order. tries : int > 0 Set the number of times to try the query. The database often caches results, so if a query times out the first time, trying again after a timeout will often succeed fast enough to avoid a timeout. This can also help gracefully handle an unreliable connection, if you're willing to wait. Default is 2. """ if not isinstance(hash_list, list): raise ValueError("The `hash_list` input is a list, not %s." 
% type(hash_list)) if not hash_list: return [] if isinstance(hash_list[0], str): hash_list = [int(h) for h in hash_list] if not all([isinstance(h, int) for h in hash_list]): raise ValueError("Hashes must be ints or strings that can be " "converted into ints.") resp = submit_statement_request('post', 'from_hashes', ev_limit=ev_limit, data={'hashes': hash_list}, best_first=best_first, tries=tries) return stmts_from_json(resp.json()['statements'].values())
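The input normalisation at the top of `get_statements_by_hash` can be isolated and exercised on its own — `normalise_hashes` below is a hypothetical helper mirroring those checks, not part of the INDRA API:

```python
def normalise_hashes(hash_list):
    """Validate and coerce a list of statement hashes to ints."""
    if not isinstance(hash_list, list):
        raise ValueError("The `hash_list` input is a list, not %s."
                         % type(hash_list))
    if not hash_list:
        return []
    if isinstance(hash_list[0], str):
        # Strings are converted; a non-numeric string raises ValueError.
        hash_list = [int(h) for h in hash_list]
    if not all(isinstance(h, int) for h in hash_list):
        raise ValueError("Hashes must be ints or strings that can be "
                         "converted into ints.")
    return hash_list
```

Note that the string branch only inspects the first element, matching the original: a mixed list starting with a string will try to convert everything.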
def amax(data, axis=None, mapper=None, blen=None, storage=None, create='array', **kwargs): """Compute the maximum value.""" return reduce_axis(data, axis=axis, reducer=np.amax, block_reducer=np.maximum, mapper=mapper, blen=blen, storage=storage, create=create, **kwargs)
Compute the maximum value.
Below is the instruction that describes the task: ### Input: Compute the maximum value. ### Response: def amax(data, axis=None, mapper=None, blen=None, storage=None, create='array', **kwargs): """Compute the maximum value.""" return reduce_axis(data, axis=axis, reducer=np.amax, block_reducer=np.maximum, mapper=mapper, blen=blen, storage=storage, create=create, **kwargs)
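`reduce_axis` pairs a per-block reducer (`np.amax`) with a block combiner (`np.maximum`). The idea in plain Python, without NumPy (names here are illustrative):

```python
def blockwise_reduce(data, blen, reducer=max, block_reducer=max):
    """Reduce `data` in blocks of length `blen`, then fold the
    per-block results together -- the amax pattern in miniature.
    """
    partials = [reducer(data[i:i + blen])
                for i in range(0, len(data), blen)]
    result = partials[0]
    for part in partials[1:]:
        result = block_reducer(result, part)
    return result
```

Because max is associative, reducing blocks and then folding the partials gives the same answer as a single pass, which is what lets the library process data larger than memory.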
def attribute_path(self, attribute, missing=None, visitor=None): """ Generates a list of values of the `attribute` of all ancestors of this node (as well as the node itself). If a value is ``None``, then the optional value of `missing` is used (by default ``None``). By default, the ``getattr(node, attribute, None) or missing`` mechanism is used to obtain the value of the attribute for each node. This can be overridden by supplying a custom `visitor` function, which expects as arguments the node and the attribute, and should return an appropriate value for the required attribute. :param attribute: the name of the attribute. :param missing: optional value to use when attribute value is None. :param visitor: optional function responsible for obtaining the attribute value from a node. :return: a list of values of the required `attribute` of the ancestor path of this node. """ _parameters = {"node": self, "attribute": attribute} if missing is not None: _parameters["missing"] = missing if visitor is not None: _parameters["visitor"] = visitor return self.__class__.objects.attribute_path(**_parameters)
Generates a list of values of the `attribute` of all ancestors of this node (as well as the node itself). If a value is ``None``, then the optional value of `missing` is used (by default ``None``). By default, the ``getattr(node, attribute, None) or missing`` mechanism is used to obtain the value of the attribute for each node. This can be overridden by supplying a custom `visitor` function, which expects as arguments the node and the attribute, and should return an appropriate value for the required attribute. :param attribute: the name of the attribute. :param missing: optional value to use when attribute value is None. :param visitor: optional function responsible for obtaining the attribute value from a node. :return: a list of values of the required `attribute` of the ancestor path of this node.
Below is the the instruction that describes the task: ### Input: Generates a list of values of the `attribute` of all ancestors of this node (as well as the node itself). If a value is ``None``, then the optional value of `missing` is used (by default ``None``). By default, the ``getattr(node, attribute, None) or missing`` mechanism is used to obtain the value of the attribute for each node. This can be overridden by supplying a custom `visitor` function, which expects as arguments the node and the attribute, and should return an appropriate value for the required attribute. :param attribute: the name of the attribute. :param missing: optional value to use when attribute value is None. :param visitor: optional function responsible for obtaining the attribute value from a node. :return: a list of values of the required `attribute` of the ancestor path of this node. ### Response: def attribute_path(self, attribute, missing=None, visitor=None): """ Generates a list of values of the `attribute` of all ancestors of this node (as well as the node itself). If a value is ``None``, then the optional value of `missing` is used (by default ``None``). By default, the ``getattr(node, attribute, None) or missing`` mechanism is used to obtain the value of the attribute for each node. This can be overridden by supplying a custom `visitor` function, which expects as arguments the node and the attribute, and should return an appropriate value for the required attribute. :param attribute: the name of the attribute. :param missing: optional value to use when attribute value is None. :param visitor: optional function responsible for obtaining the attribute value from a node. :return: a list of values of the required `attribute` of the ancestor path of this node. 
""" _parameters = {"node": self, "attribute": attribute} if missing is not None: _parameters["missing"] = missing if visitor is not None: _parameters["visitor"] = visitor return self.__class__.objects.attribute_path(**_parameters)
def get_stops_in_polygon( feed: "Feed", polygon: Polygon, geo_stops=None ) -> DataFrame: """ Return the slice of ``feed.stops`` that contains all stops that lie within the given Shapely Polygon object that is specified in WGS84 coordinates. Parameters ---------- feed : Feed polygon : Shapely Polygon Specified in WGS84 coordinates geo_stops : Geopandas GeoDataFrame A geographic version of ``feed.stops`` which will be computed if not given. Specify this parameter in batch jobs to avoid unnecessary computation. Returns ------- DataFrame Subset of ``feed.stops`` Notes ----- - Requires GeoPandas - Assume the following feed attributes are not ``None``: * ``feed.stops``, if ``geo_stops`` is not given """ if geo_stops is not None: f = geo_stops.copy() else: f = geometrize_stops(feed.stops) cols = f.columns f["hit"] = f["geometry"].within(polygon) f = f[f["hit"]][cols] return ungeometrize_stops(f)
Return the slice of ``feed.stops`` that contains all stops that lie within the given Shapely Polygon object that is specified in WGS84 coordinates. Parameters ---------- feed : Feed polygon : Shapely Polygon Specified in WGS84 coordinates geo_stops : Geopandas GeoDataFrame A geographic version of ``feed.stops`` which will be computed if not given. Specify this parameter in batch jobs to avoid unnecessary computation. Returns ------- DataFrame Subset of ``feed.stops`` Notes ----- - Requires GeoPandas - Assume the following feed attributes are not ``None``: * ``feed.stops``, if ``geo_stops`` is not given
Below is the the instruction that describes the task: ### Input: Return the slice of ``feed.stops`` that contains all stops that lie within the given Shapely Polygon object that is specified in WGS84 coordinates. Parameters ---------- feed : Feed polygon : Shapely Polygon Specified in WGS84 coordinates geo_stops : Geopandas GeoDataFrame A geographic version of ``feed.stops`` which will be computed if not given. Specify this parameter in batch jobs to avoid unnecessary computation. Returns ------- DataFrame Subset of ``feed.stops`` Notes ----- - Requires GeoPandas - Assume the following feed attributes are not ``None``: * ``feed.stops``, if ``geo_stops`` is not given ### Response: def get_stops_in_polygon( feed: "Feed", polygon: Polygon, geo_stops=None ) -> DataFrame: """ Return the slice of ``feed.stops`` that contains all stops that lie within the given Shapely Polygon object that is specified in WGS84 coordinates. Parameters ---------- feed : Feed polygon : Shapely Polygon Specified in WGS84 coordinates geo_stops : Geopandas GeoDataFrame A geographic version of ``feed.stops`` which will be computed if not given. Specify this parameter in batch jobs to avoid unnecessary computation. Returns ------- DataFrame Subset of ``feed.stops`` Notes ----- - Requires GeoPandas - Assume the following feed attributes are not ``None``: * ``feed.stops``, if ``geo_stops`` is not given """ if geo_stops is not None: f = geo_stops.copy() else: f = geometrize_stops(feed.stops) cols = f.columns f["hit"] = f["geometry"].within(polygon) f = f[f["hit"]][cols] return ungeometrize_stops(f)
def compute(self, text, # text for which to find the most similar event lang = "eng"): # language in which the text is written """ compute the list of most similar events for the given text """ params = { "lang": lang, "text": text, "topClustersCount": self._nrOfEventsToReturn } res = self._er.jsonRequest("/json/getEventForText/enqueueRequest", params) requestId = res["requestId"] for i in range(10): time.sleep(1) # sleep for 1 second to wait for the clustering to perform computation res = self._er.jsonRequest("/json/getEventForText/testRequest", { "requestId": requestId }) if isinstance(res, list) and len(res) > 0: return res return None
compute the list of most similar events for the given text
Below is the instruction that describes the task: ### Input: compute the list of most similar events for the given text ### Response: def compute(self, text, # text for which to find the most similar event lang = "eng"): # language in which the text is written """ compute the list of most similar events for the given text """ params = { "lang": lang, "text": text, "topClustersCount": self._nrOfEventsToReturn } res = self._er.jsonRequest("/json/getEventForText/enqueueRequest", params) requestId = res["requestId"] for i in range(10): time.sleep(1) # sleep for 1 second to wait for the clustering to perform computation res = self._er.jsonRequest("/json/getEventForText/testRequest", { "requestId": requestId }) if isinstance(res, list) and len(res) > 0: return res return None
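The enqueue-then-poll pattern used by `compute` can be isolated with the request call stubbed out — `poll_for_result` and `fake_test_request` are hypothetical names for the sketch:

```python
import time

def poll_for_result(test_request, attempts=10, delay=0.0):
    """Poll `test_request()` up to `attempts` times, returning the
    first non-empty list, or None if the work never finishes."""
    for _ in range(attempts):
        time.sleep(delay)
        res = test_request()
        if isinstance(res, list) and len(res) > 0:
            return res
    return None

# A fake backend that is ready on the third poll.
calls = {"n": 0}
def fake_test_request():
    calls["n"] += 1
    return ["event"] if calls["n"] >= 3 else None
```

The original hard-codes ten attempts with a one-second sleep; parameterising both makes the loop testable without waiting.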
def main(cli_arguments: List[str]): """ Entrypoint. :param cli_arguments: arguments passed in via the CLI :raises SystemExit: always raised """ cli_configuration: CliConfiguration try: cli_configuration = parse_cli_configuration(cli_arguments) except InvalidCliArgumentError as e: logger.error(e) exit(INVALID_CLI_ARGUMENT_EXIT_CODE) except SystemExit as e: exit(e.code) if cli_configuration.log_verbosity: logging.getLogger(PACKAGE_NAME).setLevel(cli_configuration.log_verbosity) consul_configuration: ConsulConfiguration try: consul_configuration = get_consul_configuration_from_environment() except KeyError as e: logger.error(f"Cannot connect to Consul - the environment variable {e.args[0]} must be set") exit(MISSING_REQUIRED_ENVIRONMENT_VARIABLE_EXIT_CODE) except InvalidEnvironmentVariableError as e: logger.error(e) exit(INVALID_ENVIRONMENT_VARIABLE_EXIT_CODE) lock_manager: ConsulLockManager try: lock_manager = ConsulLockManager( consul_configuration=consul_configuration, session_ttl_in_seconds=cli_configuration.session_ttl) except InvalidSessionTtlValueError as e: logger.error(e) exit(INVALID_SESSION_TTL_EXIT_CODE) try: { CliLockConfiguration: _acquire_lock_and_exit, CliLockAndExecuteConfiguration: _acquire_lock_and_execute, CliUnlockConfiguration: _release_lock }[type(cli_configuration)](lock_manager, cli_configuration) except PermissionDeniedConsulError as e: error_message = f"Invalid credentials - are you sure you have set {CONSUL_TOKEN_ENVIRONMENT_VARIABLE} " \ f"correctly?" 
logger.debug(e) logger.error(error_message) exit(PERMISSION_DENIED_EXIT_CODE) except DoubleSlashKeyError as e: logger.debug(e) logger.error(f"Double slashes \"//\" in keys get converted into single slashes \"/\" - please use a " f"single slash if this is intended: {cli_configuration.key}") exit(INVALID_KEY_EXIT_CODE) except NonNormalisedKeyError as e: logger.debug(e) logger.error(f"Key paths must be normalised - use \"{normpath(e.key)}\" if this key was intended: " f"{cli_configuration.key}") exit(INVALID_KEY_EXIT_CODE)
Entrypoint. :param cli_arguments: arguments passed in via the CLI :raises SystemExit: always raised
Below is the instruction that describes the task: ### Input: Entrypoint. :param cli_arguments: arguments passed in via the CLI :raises SystemExit: always raised ### Response: def main(cli_arguments: List[str]): """ Entrypoint. :param cli_arguments: arguments passed in via the CLI :raises SystemExit: always raised """ cli_configuration: CliConfiguration try: cli_configuration = parse_cli_configuration(cli_arguments) except InvalidCliArgumentError as e: logger.error(e) exit(INVALID_CLI_ARGUMENT_EXIT_CODE) except SystemExit as e: exit(e.code) if cli_configuration.log_verbosity: logging.getLogger(PACKAGE_NAME).setLevel(cli_configuration.log_verbosity) consul_configuration: ConsulConfiguration try: consul_configuration = get_consul_configuration_from_environment() except KeyError as e: logger.error(f"Cannot connect to Consul - the environment variable {e.args[0]} must be set") exit(MISSING_REQUIRED_ENVIRONMENT_VARIABLE_EXIT_CODE) except InvalidEnvironmentVariableError as e: logger.error(e) exit(INVALID_ENVIRONMENT_VARIABLE_EXIT_CODE) lock_manager: ConsulLockManager try: lock_manager = ConsulLockManager( consul_configuration=consul_configuration, session_ttl_in_seconds=cli_configuration.session_ttl) except InvalidSessionTtlValueError as e: logger.error(e) exit(INVALID_SESSION_TTL_EXIT_CODE) try: { CliLockConfiguration: _acquire_lock_and_exit, CliLockAndExecuteConfiguration: _acquire_lock_and_execute, CliUnlockConfiguration: _release_lock }[type(cli_configuration)](lock_manager, cli_configuration) except PermissionDeniedConsulError as e: error_message = f"Invalid credentials - are you sure you have set {CONSUL_TOKEN_ENVIRONMENT_VARIABLE} " \ f"correctly?" 
logger.debug(e) logger.error(error_message) exit(PERMISSION_DENIED_EXIT_CODE) except DoubleSlashKeyError as e: logger.debug(e) logger.error(f"Double slashes \"//\" in keys get converted into single slashes \"/\" - please use a " f"single slash if this is intended: {cli_configuration.key}") exit(INVALID_KEY_EXIT_CODE) except NonNormalisedKeyError as e: logger.debug(e) logger.error(f"Key paths must be normalised - use \"{normpath(e.key)}\" if this key was intended: " f"{cli_configuration.key}") exit(INVALID_KEY_EXIT_CODE)
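The `main` entrypoint above dispatches on the exact type of the parsed configuration through a dict of handlers. A minimal, self-contained sketch of that pattern (the classes and handler results here are hypothetical, not from the project):

```python
# Dispatch on the exact type of a value via a dict of handlers, as
# main() does with {CliLockConfiguration: _acquire_lock_and_exit, ...}.
# Note: a type(x) key lookup does not match subclasses, unlike isinstance().
class LockConfig: pass
class UnlockConfig: pass

handlers = {
    LockConfig: lambda cfg: "acquire",
    UnlockConfig: lambda cfg: "release",
}

config = UnlockConfig()
print(handlers[type(config)](config))  # release
```

A missing handler raises `KeyError`, which is why `main` only reaches this block after the configuration parser has produced one of the known types.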
def status(self): """ The current status of the event (started, finished or pending). """ myNow = timezone.localtime(timezone=self.tz) if getAwareDatetime(self.date, self.time_to, self.tz) < myNow: return "finished" elif getAwareDatetime(self.date, self.time_from, self.tz) < myNow: return "started"
The current status of the event (started, finished or pending).
Below is the instruction that describes the task: ### Input: The current status of the event (started, finished or pending). ### Response: def status(self): """ The current status of the event (started, finished or pending). """ myNow = timezone.localtime(timezone=self.tz) if getAwareDatetime(self.date, self.time_to, self.tz) < myNow: return "finished" elif getAwareDatetime(self.date, self.time_from, self.tz) < myNow: return "started"
def resolve_nested_schema(self, schema): """Return the Open API representation of a marshmallow Schema. Adds the schema to the spec if it isn't already present. Typically will return a dictionary with the reference to the schema's path in the spec unless the `schema_name_resolver` returns `None`, in which case the returned dictionary will contain a JSON Schema Object representation of the schema. :param schema: schema to add to the spec """ schema_instance = resolve_schema_instance(schema) schema_key = make_schema_key(schema_instance) if schema_key not in self.refs: schema_cls = self.resolve_schema_class(schema) name = self.schema_name_resolver(schema_cls) if not name: try: json_schema = self.schema2jsonschema(schema) except RuntimeError: raise APISpecError( "Name resolver returned None for schema {schema} which is " "part of a chain of circular referencing schemas. Please" " ensure that the schema_name_resolver passed to" " MarshmallowPlugin returns a string for all circular" " referencing schemas.".format(schema=schema) ) if getattr(schema, "many", False): return {"type": "array", "items": json_schema} return json_schema name = get_unique_schema_name(self.spec.components, name) self.spec.components.schema(name, schema=schema) return self.get_ref_dict(schema_instance)
Return the Open API representation of a marshmallow Schema. Adds the schema to the spec if it isn't already present. Typically will return a dictionary with the reference to the schema's path in the spec unless the `schema_name_resolver` returns `None`, in which case the returned dictionary will contain a JSON Schema Object representation of the schema. :param schema: schema to add to the spec
Below is the instruction that describes the task: ### Input: Return the Open API representation of a marshmallow Schema. Adds the schema to the spec if it isn't already present. Typically will return a dictionary with the reference to the schema's path in the spec unless the `schema_name_resolver` returns `None`, in which case the returned dictionary will contain a JSON Schema Object representation of the schema. :param schema: schema to add to the spec ### Response: def resolve_nested_schema(self, schema): """Return the Open API representation of a marshmallow Schema. Adds the schema to the spec if it isn't already present. Typically will return a dictionary with the reference to the schema's path in the spec unless the `schema_name_resolver` returns `None`, in which case the returned dictionary will contain a JSON Schema Object representation of the schema. :param schema: schema to add to the spec """ schema_instance = resolve_schema_instance(schema) schema_key = make_schema_key(schema_instance) if schema_key not in self.refs: schema_cls = self.resolve_schema_class(schema) name = self.schema_name_resolver(schema_cls) if not name: try: json_schema = self.schema2jsonschema(schema) except RuntimeError: raise APISpecError( "Name resolver returned None for schema {schema} which is " "part of a chain of circular referencing schemas. Please" " ensure that the schema_name_resolver passed to" " MarshmallowPlugin returns a string for all circular" " referencing schemas.".format(schema=schema) ) if getattr(schema, "many", False): return {"type": "array", "items": json_schema} return json_schema name = get_unique_schema_name(self.spec.components, name) self.spec.components.schema(name, schema=schema) return self.get_ref_dict(schema_instance)
def _conv(self, name, x, filter_size, in_filters, out_filters, strides): """Convolution.""" if self.init_layers: conv = Conv2DnGPU(out_filters, (filter_size, filter_size), strides[1:3], 'SAME', w_name='DW') conv.name = name self.layers += [conv] else: conv = self.layers[self.layer_idx] self.layer_idx += 1 conv.device_name = self.device_name conv.set_training(self.training) return conv.fprop(x)
Convolution.
Below is the instruction that describes the task: ### Input: Convolution. ### Response: def _conv(self, name, x, filter_size, in_filters, out_filters, strides): """Convolution.""" if self.init_layers: conv = Conv2DnGPU(out_filters, (filter_size, filter_size), strides[1:3], 'SAME', w_name='DW') conv.name = name self.layers += [conv] else: conv = self.layers[self.layer_idx] self.layer_idx += 1 conv.device_name = self.device_name conv.set_training(self.training) return conv.fprop(x)
def get(map_name): """Get an instance of a map by name. Errors if the map doesn't exist.""" if isinstance(map_name, Map): return map_name # Get the list of maps. This isn't at module scope to avoid problems of maps # being defined after this module is imported. maps = get_maps() map_class = maps.get(map_name) if map_class: return map_class() raise NoMapException("Map doesn't exist: %s" % map_name)
Get an instance of a map by name. Errors if the map doesn't exist.
Below is the instruction that describes the task: ### Input: Get an instance of a map by name. Errors if the map doesn't exist. ### Response: def get(map_name): """Get an instance of a map by name. Errors if the map doesn't exist.""" if isinstance(map_name, Map): return map_name # Get the list of maps. This isn't at module scope to avoid problems of maps # being defined after this module is imported. maps = get_maps() map_class = maps.get(map_name) if map_class: return map_class() raise NoMapException("Map doesn't exist: %s" % map_name)
def save(self): """ Saves an object to the database. .. code-block:: python #create a person instance person = Person(first_name='Kimberly', last_name='Eggleston') #saves it to Cassandra person.save() """ # handle polymorphic models if self._is_polymorphic: if self._is_polymorphic_base: raise PolymorphicModelException('cannot save polymorphic base model') else: setattr(self, self._discriminator_column_name, self.__discriminator_value__) self.validate() self.__dmlquery__(self.__class__, self, batch=self._batch, ttl=self._ttl, timestamp=self._timestamp, consistency=self.__consistency__, if_not_exists=self._if_not_exists, conditional=self._conditional, timeout=self._timeout, if_exists=self._if_exists).save() self._set_persisted() self._timestamp = None return self
Saves an object to the database. .. code-block:: python #create a person instance person = Person(first_name='Kimberly', last_name='Eggleston') #saves it to Cassandra person.save()
Below is the instruction that describes the task: ### Input: Saves an object to the database. .. code-block:: python #create a person instance person = Person(first_name='Kimberly', last_name='Eggleston') #saves it to Cassandra person.save() ### Response: def save(self): """ Saves an object to the database. .. code-block:: python #create a person instance person = Person(first_name='Kimberly', last_name='Eggleston') #saves it to Cassandra person.save() """ # handle polymorphic models if self._is_polymorphic: if self._is_polymorphic_base: raise PolymorphicModelException('cannot save polymorphic base model') else: setattr(self, self._discriminator_column_name, self.__discriminator_value__) self.validate() self.__dmlquery__(self.__class__, self, batch=self._batch, ttl=self._ttl, timestamp=self._timestamp, consistency=self.__consistency__, if_not_exists=self._if_not_exists, conditional=self._conditional, timeout=self._timeout, if_exists=self._if_exists).save() self._set_persisted() self._timestamp = None return self
def coerce_to_pendulum(x: PotentialDatetimeType, assume_local: bool = False) -> Optional[DateTime]: """ Converts something to a :class:`pendulum.DateTime`. Args: x: something that may be coercible to a datetime assume_local: if ``True``, assume local timezone; if ``False``, assume UTC Returns: a :class:`pendulum.DateTime`, or ``None``. Raises: pendulum.parsing.exceptions.ParserError: if a string fails to parse ValueError: if no conversion possible """ if not x: # None and blank string return None if isinstance(x, DateTime): return x tz = get_tz_local() if assume_local else get_tz_utc() if isinstance(x, datetime.datetime): return pendulum.instance(x, tz=tz) # (*) elif isinstance(x, datetime.date): # BEWARE: datetime subclasses date. The order is crucial here. # Can also use: type(x) is datetime.date # noinspection PyUnresolvedReferences midnight = DateTime.min.time() dt = DateTime.combine(x, midnight) return pendulum.instance(dt, tz=tz) # (*) elif isinstance(x, str): return pendulum.parse(x, tz=tz) # (*) # may raise else: raise ValueError("Don't know how to convert to DateTime: " "{!r}".format(x))
Converts something to a :class:`pendulum.DateTime`. Args: x: something that may be coercible to a datetime assume_local: if ``True``, assume local timezone; if ``False``, assume UTC Returns: a :class:`pendulum.DateTime`, or ``None``. Raises: pendulum.parsing.exceptions.ParserError: if a string fails to parse ValueError: if no conversion possible
Below is the instruction that describes the task: ### Input: Converts something to a :class:`pendulum.DateTime`. Args: x: something that may be coercible to a datetime assume_local: if ``True``, assume local timezone; if ``False``, assume UTC Returns: a :class:`pendulum.DateTime`, or ``None``. Raises: pendulum.parsing.exceptions.ParserError: if a string fails to parse ValueError: if no conversion possible ### Response: def coerce_to_pendulum(x: PotentialDatetimeType, assume_local: bool = False) -> Optional[DateTime]: """ Converts something to a :class:`pendulum.DateTime`. Args: x: something that may be coercible to a datetime assume_local: if ``True``, assume local timezone; if ``False``, assume UTC Returns: a :class:`pendulum.DateTime`, or ``None``. Raises: pendulum.parsing.exceptions.ParserError: if a string fails to parse ValueError: if no conversion possible """ if not x: # None and blank string return None if isinstance(x, DateTime): return x tz = get_tz_local() if assume_local else get_tz_utc() if isinstance(x, datetime.datetime): return pendulum.instance(x, tz=tz) # (*) elif isinstance(x, datetime.date): # BEWARE: datetime subclasses date. The order is crucial here. # Can also use: type(x) is datetime.date # noinspection PyUnresolvedReferences midnight = DateTime.min.time() dt = DateTime.combine(x, midnight) return pendulum.instance(dt, tz=tz) # (*) elif isinstance(x, str): return pendulum.parse(x, tz=tz) # (*) # may raise else: raise ValueError("Don't know how to convert to DateTime: " "{!r}".format(x))
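The `BEWARE` comment in `coerce_to_pendulum` is worth spelling out: `datetime.datetime` is a subclass of `datetime.date`, so testing the `date` branch first would swallow every `datetime` value. A standard-library-only sketch of the pitfall (no pendulum required):

```python
from datetime import date, datetime

now = datetime(2021, 1, 2, 3, 4, 5)

# A datetime passes an isinstance check against date...
print(isinstance(now, date))  # True
# ...so coerce_to_pendulum must test datetime *before* date, or use
# the stricter exact-type check mentioned in its comment:
print(type(now) is date)      # False
```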
def all_simple_bb_paths(self, start_address, end_address): """Return a list of paths between start and end address. """ bb_start = self._find_basic_block(start_address) bb_end = self._find_basic_block(end_address) paths = networkx.all_simple_paths(self._graph, source=bb_start.address, target=bb_end.address) return ([self._bb_by_addr[addr] for addr in path] for path in paths)
Return a list of paths between start and end address.
Below is the instruction that describes the task: ### Input: Return a list of paths between start and end address. ### Response: def all_simple_bb_paths(self, start_address, end_address): """Return a list of paths between start and end address. """ bb_start = self._find_basic_block(start_address) bb_end = self._find_basic_block(end_address) paths = networkx.all_simple_paths(self._graph, source=bb_start.address, target=bb_end.address) return ([self._bb_by_addr[addr] for addr in path] for path in paths)
def do_get(self, from_path, to_path): """ Copy file from Ndrive to local file and print out the metadata. Examples: Ndrive> get file.txt ~/ndrive-file.txt """ to_file = open(os.path.expanduser(to_path), "wb") self.n.downloadFile(self.current_path + "/" + from_path, to_path)
Copy file from Ndrive to local file and print out the metadata. Examples: Ndrive> get file.txt ~/ndrive-file.txt
Below is the instruction that describes the task: ### Input: Copy file from Ndrive to local file and print out the metadata. Examples: Ndrive> get file.txt ~/ndrive-file.txt ### Response: def do_get(self, from_path, to_path): """ Copy file from Ndrive to local file and print out the metadata. Examples: Ndrive> get file.txt ~/ndrive-file.txt """ to_file = open(os.path.expanduser(to_path), "wb") self.n.downloadFile(self.current_path + "/" + from_path, to_path)
async def paginate(self): """Actually paginate the entries and run the interactive loop if necessary.""" await self.show_page(1, first=True) while self.paginating: react = await self.bot.wait_for_reaction(message=self.message, check=self.react_check, timeout=120.0) if react is None: self.paginating = False try: await self.bot.clear_reactions(self.message) except: pass finally: break try: await self.bot.remove_reaction(self.message, react.reaction.emoji, react.user) except: pass # can't remove it so don't bother doing so await self.match()
Actually paginate the entries and run the interactive loop if necessary.
Below is the instruction that describes the task: ### Input: Actually paginate the entries and run the interactive loop if necessary. ### Response: async def paginate(self): """Actually paginate the entries and run the interactive loop if necessary.""" await self.show_page(1, first=True) while self.paginating: react = await self.bot.wait_for_reaction(message=self.message, check=self.react_check, timeout=120.0) if react is None: self.paginating = False try: await self.bot.clear_reactions(self.message) except: pass finally: break try: await self.bot.remove_reaction(self.message, react.reaction.emoji, react.user) except: pass # can't remove it so don't bother doing so await self.match()
def assert_subclass_of(typ, allowed_types # type: Union[Type, Tuple[Type]] ): """ An inlined version of subclass_of(var_types)(value) without 'return True': it does not return anything in case of success, and raises an IsWrongType exception in case of failure. Used in validate and validation/validator :param typ: the type to check :param allowed_types: the type(s) to enforce. If a tuple of types is provided it is considered alternate types: one match is enough to succeed. If None, type will not be enforced :return: """ if not issubclass(typ, allowed_types): try: # more than 1 ? allowed_types[1] raise IsWrongType(wrong_value=typ, ref_type=allowed_types, help_msg='Value should be a subclass of any of {ref_type}') except IndexError: # 1 allowed_types = allowed_types[0] except TypeError: # 1 pass raise IsWrongType(wrong_value=typ, ref_type=allowed_types)
An inlined version of subclass_of(var_types)(value) without 'return True': it does not return anything in case of success, and raises an IsWrongType exception in case of failure. Used in validate and validation/validator :param typ: the type to check :param allowed_types: the type(s) to enforce. If a tuple of types is provided it is considered alternate types: one match is enough to succeed. If None, type will not be enforced :return:
Below is the instruction that describes the task: ### Input: An inlined version of subclass_of(var_types)(value) without 'return True': it does not return anything in case of success, and raises an IsWrongType exception in case of failure. Used in validate and validation/validator :param typ: the type to check :param allowed_types: the type(s) to enforce. If a tuple of types is provided it is considered alternate types: one match is enough to succeed. If None, type will not be enforced :return: ### Response: def assert_subclass_of(typ, allowed_types # type: Union[Type, Tuple[Type]] ): """ An inlined version of subclass_of(var_types)(value) without 'return True': it does not return anything in case of success, and raises an IsWrongType exception in case of failure. Used in validate and validation/validator :param typ: the type to check :param allowed_types: the type(s) to enforce. If a tuple of types is provided it is considered alternate types: one match is enough to succeed. If None, type will not be enforced :return: """ if not issubclass(typ, allowed_types): try: # more than 1 ? allowed_types[1] raise IsWrongType(wrong_value=typ, ref_type=allowed_types, help_msg='Value should be a subclass of any of {ref_type}') except IndexError: # 1 allowed_types = allowed_types[0] except TypeError: # 1 pass raise IsWrongType(wrong_value=typ, ref_type=allowed_types)
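`assert_subclass_of` leans on the fact that `issubclass` accepts a tuple of classes and succeeds on any single match, which is what the "alternate types: one match is enough" wording in its docstring refers to. A quick illustration:

```python
# One match in the tuple is enough; bool subclasses int.
print(issubclass(bool, (int, str)))   # True
# No match: float subclasses neither int nor str.
print(issubclass(float, (int, str)))  # False
```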
def prune_rares(self, cutoff=2): """ returns a **new** `Vocab` object that is similar to this one but with rare words removed. Note that the indices in the new `Vocab` will be remapped (because rare words will have been removed). :param cutoff: words occurring less than this number of times are removed from the vocabulary. :return: A new, pruned, vocabulary. NOTE: UNK is never pruned. """ keep = lambda w: self.count(w) >= cutoff or w == self._unk return self.subset([w for w in self if keep(w)])
returns a **new** `Vocab` object that is similar to this one but with rare words removed. Note that the indices in the new `Vocab` will be remapped (because rare words will have been removed). :param cutoff: words occurring less than this number of times are removed from the vocabulary. :return: A new, pruned, vocabulary. NOTE: UNK is never pruned.
Below is the instruction that describes the task: ### Input: returns a **new** `Vocab` object that is similar to this one but with rare words removed. Note that the indices in the new `Vocab` will be remapped (because rare words will have been removed). :param cutoff: words occurring less than this number of times are removed from the vocabulary. :return: A new, pruned, vocabulary. NOTE: UNK is never pruned. ### Response: def prune_rares(self, cutoff=2): """ returns a **new** `Vocab` object that is similar to this one but with rare words removed. Note that the indices in the new `Vocab` will be remapped (because rare words will have been removed). :param cutoff: words occurring less than this number of times are removed from the vocabulary. :return: A new, pruned, vocabulary. NOTE: UNK is never pruned. """ keep = lambda w: self.count(w) >= cutoff or w == self._unk return self.subset([w for w in self if keep(w)])
def join(self, url): """Join URLs Construct a full (“absolute”) URL by combining a “base URL” (self) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. """ # See docs for urllib.parse.urljoin if not isinstance(url, URL): raise TypeError("url should be URL") return URL(urljoin(str(self), str(url)), encoded=True)
Join URLs Construct a full (“absolute”) URL by combining a “base URL” (self) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL.
Below is the instruction that describes the task: ### Input: Join URLs Construct a full (“absolute”) URL by combining a “base URL” (self) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. ### Response: def join(self, url): """Join URLs Construct a full (“absolute”) URL by combining a “base URL” (self) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. """ # See docs for urllib.parse.urljoin if not isinstance(url, URL): raise TypeError("url should be URL") return URL(urljoin(str(self), str(url)), encoded=True)
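`URL.join` delegates to `urllib.parse.urljoin`, so its resolution rules are the standard ones for relative references; the examples below use only the standard library:

```python
from urllib.parse import urljoin

base = "https://example.com/a/b/c"
print(urljoin(base, "d"))     # https://example.com/a/b/d     (relative path)
print(urljoin(base, "/d"))    # https://example.com/d         (absolute path)
print(urljoin(base, "?q=1"))  # https://example.com/a/b/c?q=1 (query only)
```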
def total_branches(self): """How many total branches are there?""" exit_counts = self.parser.exit_counts() return sum([count for count in exit_counts.values() if count > 1])
How many total branches are there?
Below is the instruction that describes the task: ### Input: How many total branches are there? ### Response: def total_branches(self): """How many total branches are there?""" exit_counts = self.parser.exit_counts() return sum([count for count in exit_counts.values() if count > 1])
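The reduction in `total_branches` is easy to check by hand: given a mapping from line number to exit count, only lines with more than one exit contribute. A minimal sketch with made-up data:

```python
# Hypothetical exit counts: line number -> number of distinct exits.
exit_counts = {10: 1, 14: 2, 20: 3, 31: 1}

# Lines 14 and 20 are branch points, contributing 2 + 3 = 5.
total = sum(count for count in exit_counts.values() if count > 1)
print(total)  # 5
```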
def get_index_node(self, idx): '''get node with iterindex `idx`''' idx = self.node_index.index(idx) return self.nodes[idx]
get node with iterindex `idx`
Below is the instruction that describes the task: ### Input: get node with iterindex `idx` ### Response: def get_index_node(self, idx): '''get node with iterindex `idx`''' idx = self.node_index.index(idx) return self.nodes[idx]
def azureContainers(self, *args, **kwargs): """ List containers in an Account Managed by Auth Retrieve a list of all containers in an account. This method gives output: ``v1/azure-container-list-response.json#`` This method is ``stable`` """ return self._makeApiCall(self.funcinfo["azureContainers"], *args, **kwargs)
List containers in an Account Managed by Auth Retrieve a list of all containers in an account. This method gives output: ``v1/azure-container-list-response.json#`` This method is ``stable``
Below is the instruction that describes the task: ### Input: List containers in an Account Managed by Auth Retrieve a list of all containers in an account. This method gives output: ``v1/azure-container-list-response.json#`` This method is ``stable`` ### Response: def azureContainers(self, *args, **kwargs): """ List containers in an Account Managed by Auth Retrieve a list of all containers in an account. This method gives output: ``v1/azure-container-list-response.json#`` This method is ``stable`` """ return self._makeApiCall(self.funcinfo["azureContainers"], *args, **kwargs)
def create_event(self, state, server, agentConfig): """Create an event with a message describing the replication state of a mongo node""" def get_state_description(state): if state == 0: return 'Starting Up' elif state == 1: return 'Primary' elif state == 2: return 'Secondary' elif state == 3: return 'Recovering' elif state == 4: return 'Fatal' elif state == 5: return 'Starting up (initial sync)' elif state == 6: return 'Unknown' elif state == 7: return 'Arbiter' elif state == 8: return 'Down' elif state == 9: return 'Rollback' status = get_state_description(state) msg_title = "%s is %s" % (server, status) msg = "TokuMX %s just reported as %s" % (server, status) self.event( { 'timestamp': int(time.time()), 'event_type': 'tokumx', 'msg_title': msg_title, 'msg_text': msg, 'host': self.hostname, } )
Create an event with a message describing the replication state of a mongo node
Below is the instruction that describes the task: ### Input: Create an event with a message describing the replication state of a mongo node ### Response: def create_event(self, state, server, agentConfig): """Create an event with a message describing the replication state of a mongo node""" def get_state_description(state): if state == 0: return 'Starting Up' elif state == 1: return 'Primary' elif state == 2: return 'Secondary' elif state == 3: return 'Recovering' elif state == 4: return 'Fatal' elif state == 5: return 'Starting up (initial sync)' elif state == 6: return 'Unknown' elif state == 7: return 'Arbiter' elif state == 8: return 'Down' elif state == 9: return 'Rollback' status = get_state_description(state) msg_title = "%s is %s" % (server, status) msg = "TokuMX %s just reported as %s" % (server, status) self.event( { 'timestamp': int(time.time()), 'event_type': 'tokumx', 'msg_title': msg_title, 'msg_text': msg, 'host': self.hostname, } )
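The `if`/`elif` ladder in `get_state_description` maps numeric replica-set states to labels; a table-driven version (a sketch, not the project's code) keeps the mapping in one dict and gives unknown codes an explicit fallback instead of implicitly returning `None`:

```python
# State code -> human-readable label, mirroring the ladder above.
REPL_STATES = {
    0: 'Starting Up', 1: 'Primary', 2: 'Secondary', 3: 'Recovering',
    4: 'Fatal', 5: 'Starting up (initial sync)', 6: 'Unknown',
    7: 'Arbiter', 8: 'Down', 9: 'Rollback',
}

def get_state_description(state):
    # dict.get supplies the fallback the original ladder lacks
    return REPL_STATES.get(state, 'Unknown')

print(get_state_description(1))   # Primary
print(get_state_description(42))  # Unknown
```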
def process_input(self): """Called when socket is read-ready""" try: pyngus.read_socket_input(self.connection, self.socket) except Exception as e: LOG.error("Exception on socket read: %s", str(e)) self.connection.close_input() self.connection.close() self.connection.process(time.time())
Called when socket is read-ready
Below is the instruction that describes the task: ### Input: Called when socket is read-ready ### Response: def process_input(self): """Called when socket is read-ready""" try: pyngus.read_socket_input(self.connection, self.socket) except Exception as e: LOG.error("Exception on socket read: %s", str(e)) self.connection.close_input() self.connection.close() self.connection.process(time.time())
def home_lib(home): """Return the lib dir under the 'home' installation scheme""" if hasattr(sys, 'pypy_version_info'): lib = 'site-packages' else: lib = os.path.join('lib', 'python') return os.path.join(home, lib)
Return the lib dir under the 'home' installation scheme
Below is the instruction that describes the task: ### Input: Return the lib dir under the 'home' installation scheme ### Response: def home_lib(home): """Return the lib dir under the 'home' installation scheme""" if hasattr(sys, 'pypy_version_info'): lib = 'site-packages' else: lib = os.path.join('lib', 'python') return os.path.join(home, lib)
def is_regular(self): """Determine whether this `Index` contains linearly increasing samples. This also works for linear decrease. """ if self.size <= 1: return False return numpy.isclose(numpy.diff(self.value, n=2), 0).all()
Determine whether this `Index` contains linearly increasing samples. This also works for linear decrease.
Below is the instruction that describes the task: ### Input: Determine whether this `Index` contains linearly increasing samples. This also works for linear decrease. ### Response: def is_regular(self): """Determine whether this `Index` contains linearly increasing samples. This also works for linear decrease. """ if self.size <= 1: return False return numpy.isclose(numpy.diff(self.value, n=2), 0).all()
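`is_regular` exploits the fact that evenly spaced values have constant first differences, i.e. second differences that are (numerically) zero, which is what `numpy.diff(..., n=2)` computes. A numpy-free sketch of the same test:

```python
values = [0.0, 2.0, 4.0, 6.0, 8.0]  # evenly spaced, like a regular Index

first = [b - a for a, b in zip(values, values[1:])]  # [2.0, 2.0, 2.0, 2.0]
second = [b - a for a, b in zip(first, first[1:])]   # [0.0, 0.0, 0.0]

# All second differences ~0 -> the spacing is regular.
print(all(abs(d) < 1e-9 for d in second))  # True
```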
def reduce(self, func, dim=None, keep_attrs=None, **kwargs): """Reduce the items in this group by applying `func` along some dimension(s). Parameters ---------- func : function Function which can be called in the form `func(x, axis=axis, **kwargs)` to return the result of collapsing an np.ndarray over an integer valued axis. dim : str or sequence of str, optional Dimension(s) over which to apply `func`. axis : int or sequence of int, optional Axis(es) over which to apply `func`. Only one of the 'dimension' and 'axis' arguments can be supplied. If neither is supplied, then `func` is calculated over all dimensions for each group item. keep_attrs : bool, optional If True, the dataset's attributes (`attrs`) will be copied from the original object to the new one. If False (default), the new object will be returned without attributes. **kwargs : dict Additional keyword arguments passed on to `func`. Returns ------- reduced : Array Array with summarized data and the indicated dimension(s) removed. """ if dim == DEFAULT_DIMS: dim = ALL_DIMS # TODO change this to dim = self._group_dim after # the deprecation process. Do not forget to remove _reduce_method warnings.warn( "Default reduction dimension will be changed to the " "grouped dimension in a future version of xarray. To " "silence this warning, pass dim=xarray.ALL_DIMS " "explicitly.", FutureWarning, stacklevel=2) elif dim is None: dim = self._group_dim if keep_attrs is None: keep_attrs = _get_keep_attrs(default=False) def reduce_dataset(ds): return ds.reduce(func, dim, keep_attrs, **kwargs) return self.apply(reduce_dataset)
Reduce the items in this group by applying `func` along some dimension(s). Parameters ---------- func : function Function which can be called in the form `func(x, axis=axis, **kwargs)` to return the result of collapsing an np.ndarray over an integer valued axis. dim : str or sequence of str, optional Dimension(s) over which to apply `func`. axis : int or sequence of int, optional Axis(es) over which to apply `func`. Only one of the 'dimension' and 'axis' arguments can be supplied. If neither is supplied, then `func` is calculated over all dimensions for each group item. keep_attrs : bool, optional If True, the dataset's attributes (`attrs`) will be copied from the original object to the new one. If False (default), the new object will be returned without attributes. **kwargs : dict Additional keyword arguments passed on to `func`. Returns ------- reduced : Array Array with summarized data and the indicated dimension(s) removed.
Below is the instruction that describes the task: ### Input: Reduce the items in this group by applying `func` along some dimension(s). Parameters ---------- func : function Function which can be called in the form `func(x, axis=axis, **kwargs)` to return the result of collapsing an np.ndarray over an integer valued axis. dim : str or sequence of str, optional Dimension(s) over which to apply `func`. axis : int or sequence of int, optional Axis(es) over which to apply `func`. Only one of the 'dimension' and 'axis' arguments can be supplied. If neither is supplied, then `func` is calculated over all dimensions for each group item. keep_attrs : bool, optional If True, the dataset's attributes (`attrs`) will be copied from the original object to the new one. If False (default), the new object will be returned without attributes. **kwargs : dict Additional keyword arguments passed on to `func`. Returns ------- reduced : Array Array with summarized data and the indicated dimension(s) removed. ### Response: def reduce(self, func, dim=None, keep_attrs=None, **kwargs): """Reduce the items in this group by applying `func` along some dimension(s). Parameters ---------- func : function Function which can be called in the form `func(x, axis=axis, **kwargs)` to return the result of collapsing an np.ndarray over an integer valued axis. dim : str or sequence of str, optional Dimension(s) over which to apply `func`. axis : int or sequence of int, optional Axis(es) over which to apply `func`. Only one of the 'dimension' and 'axis' arguments can be supplied. If neither is supplied, then `func` is calculated over all dimensions for each group item. keep_attrs : bool, optional If True, the dataset's attributes (`attrs`) will be copied from the original object to the new one. If False (default), the new object will be returned without attributes. **kwargs : dict Additional keyword arguments passed on to `func`. 
Returns ------- reduced : Array Array with summarized data and the indicated dimension(s) removed. """ if dim == DEFAULT_DIMS: dim = ALL_DIMS # TODO change this to dim = self._group_dim after # the deprecation process. Do not forget to remove _reduce_method warnings.warn( "Default reduction dimension will be changed to the " "grouped dimension in a future version of xarray. To " "silence this warning, pass dim=xarray.ALL_DIMS " "explicitly.", FutureWarning, stacklevel=2) elif dim is None: dim = self._group_dim if keep_attrs is None: keep_attrs = _get_keep_attrs(default=False) def reduce_dataset(ds): return ds.reduce(func, dim, keep_attrs, **kwargs) return self.apply(reduce_dataset)
def reset(self): """ Resetting wrapped function """ super(SinonSpy, self).unwrap() super(SinonSpy, self).wrap2spy()
Resetting wrapped function
Below is the instruction that describes the task: ### Input: Resetting wrapped function ### Response: def reset(self): """ Resetting wrapped function """ super(SinonSpy, self).unwrap() super(SinonSpy, self).wrap2spy()
def issubset(self, other): """Report whether another set contains this RangeSet.""" self._binary_sanity_check(other) return set.issubset(self, other)
Report whether another set contains this RangeSet.
Below is the instruction that describes the task: ### Input: Report whether another set contains this RangeSet. ### Response: def issubset(self, other): """Report whether another set contains this RangeSet.""" self._binary_sanity_check(other) return set.issubset(self, other)
def updateItem(self, itemParameters, clearEmptyFields=False, data=None, metadata=None, text=None, serviceUrl=None, multipart=False): """ updates an item's properties using the ItemParameter class. Inputs: itemParameters - property class to update clearEmptyFields - boolean, cleans up empty values data - updates the file property of the service like a .sd file metadata - this is an xml file that contains metadata information text - The text content for the item to be updated. serviceUrl - this is a service url endpoint. multipart - this is a boolean value that means the file will be broken up into smaller pieces and uploaded. """ thumbnail = None largeThumbnail = None files = {} params = { "f": "json", } if clearEmptyFields: params["clearEmptyFields"] = clearEmptyFields if serviceUrl is not None: params['url'] = serviceUrl if text is not None: params['text'] = text if isinstance(itemParameters, ItemParameter) == False: raise AttributeError("itemParameters must be of type parameter.ItemParameter") keys_to_delete = ['id', 'owner', 'size', 'numComments', 'numRatings', 'avgRating', 'numViews' , 'overwrite'] dictItem = itemParameters.value for key in keys_to_delete: if key in dictItem: del dictItem[key] for key in dictItem: if key == "thumbnail": files['thumbnail'] = dictItem['thumbnail'] elif key == "largeThumbnail": files['largeThumbnail'] = dictItem['largeThumbnail'] elif key == "metadata": metadata = dictItem['metadata'] if os.path.basename(metadata) != 'metadata.xml': tempxmlfile = os.path.join(tempfile.gettempdir(), "metadata.xml") if os.path.isfile(tempxmlfile) == True: os.remove(tempxmlfile) import shutil shutil.copy(metadata, tempxmlfile) metadata = tempxmlfile files['metadata'] = dictItem['metadata'] else: params[key] = dictItem[key] if data is not None: files['file'] = data if metadata and os.path.isfile(metadata): files['metadata'] = metadata url = "%s/update" % self.root if multipart: itemID = self.id params['multipart'] = True params['fileName'] = os.path.basename(data) res = self._post(url=url, param_dict=params, securityHandler=self._securityHandler, proxy_url=self._proxy_url, proxy_port=self._proxy_port) itemPartJSON = self.addByPart(filePath=data) res = self.commit(wait=True, additionalParams=\ {'type' : self.type }) else: res = self._post(url=url, param_dict=params, files=files, securityHandler=self._securityHandler, proxy_url=self._proxy_url, proxy_port=self._proxy_port, force_form_post=True) self.__init() return self
updates an item's properties using the ItemParameter class. Inputs: itemParameters - property class to update clearEmptyFields - boolean, cleans up empty values data - updates the file property of the service like a .sd file metadata - this is an xml file that contains metadata information text - The text content for the item to be updated. serviceUrl - this is a service url endpoint. multipart - this is a boolean value that means the file will be broken up into smaller pieces and uploaded.
Below is the instruction that describes the task: ### Input: updates an item's properties using the ItemParameter class. Inputs: itemParameters - property class to update clearEmptyFields - boolean, cleans up empty values data - updates the file property of the service like a .sd file metadata - this is an xml file that contains metadata information text - The text content for the item to be updated. serviceUrl - this is a service url endpoint. multipart - this is a boolean value that means the file will be broken up into smaller pieces and uploaded. ### Response: def updateItem(self, itemParameters, clearEmptyFields=False, data=None, metadata=None, text=None, serviceUrl=None, multipart=False): """ updates an item's properties using the ItemParameter class. Inputs: itemParameters - property class to update clearEmptyFields - boolean, cleans up empty values data - updates the file property of the service like a .sd file metadata - this is an xml file that contains metadata information text - The text content for the item to be updated. serviceUrl - this is a service url endpoint. multipart - this is a boolean value that means the file will be broken up into smaller pieces and uploaded.
""" thumbnail = None largeThumbnail = None files = {} params = { "f": "json", } if clearEmptyFields: params["clearEmptyFields"] = clearEmptyFields if serviceUrl is not None: params['url'] = serviceUrl if text is not None: params['text'] = text if isinstance(itemParameters, ItemParameter) == False: raise AttributeError("itemParameters must be of type parameter.ItemParameter") keys_to_delete = ['id', 'owner', 'size', 'numComments', 'numRatings', 'avgRating', 'numViews' , 'overwrite'] dictItem = itemParameters.value for key in keys_to_delete: if key in dictItem: del dictItem[key] for key in dictItem: if key == "thumbnail": files['thumbnail'] = dictItem['thumbnail'] elif key == "largeThumbnail": files['largeThumbnail'] = dictItem['largeThumbnail'] elif key == "metadata": metadata = dictItem['metadata'] if os.path.basename(metadata) != 'metadata.xml': tempxmlfile = os.path.join(tempfile.gettempdir(), "metadata.xml") if os.path.isfile(tempxmlfile) == True: os.remove(tempxmlfile) import shutil shutil.copy(metadata, tempxmlfile) metadata = tempxmlfile files['metadata'] = dictItem['metadata'] else: params[key] = dictItem[key] if data is not None: files['file'] = data if metadata and os.path.isfile(metadata): files['metadata'] = metadata url = "%s/update" % self.root if multipart: itemID = self.id params['multipart'] = True params['fileName'] = os.path.basename(data) res = self._post(url=url, param_dict=params, securityHandler=self._securityHandler, proxy_url=self._proxy_url, proxy_port=self._proxy_port) itemPartJSON = self.addByPart(filePath=data) res = self.commit(wait=True, additionalParams=\ {'type' : self.type }) else: res = self._post(url=url, param_dict=params, files=files, securityHandler=self._securityHandler, proxy_url=self._proxy_url, proxy_port=self._proxy_port, force_form_post=True) self.__init() return self
async def mount(self, device): """ Mount the device if not already mounted. :param device: device object, block device path or mount path :returns: whether the device is mounted. """ device = self._find_device(device) if not self.is_handleable(device) or not device.is_filesystem: self._log.warn(_('not mounting {0}: unhandled device', device)) return False if device.is_mounted: self._log.info(_('not mounting {0}: already mounted', device)) return True options = match_config(self._config, device, 'options', None) kwargs = dict(options=options) self._log.debug(_('mounting {0} with {1}', device, kwargs)) self._check_device_before_mount(device) mount_path = await device.mount(**kwargs) self._log.info(_('mounted {0} on {1}', device, mount_path)) return True
Mount the device if not already mounted. :param device: device object, block device path or mount path :returns: whether the device is mounted.
Below is the instruction that describes the task: ### Input: Mount the device if not already mounted. :param device: device object, block device path or mount path :returns: whether the device is mounted. ### Response: async def mount(self, device): """ Mount the device if not already mounted. :param device: device object, block device path or mount path :returns: whether the device is mounted. """ device = self._find_device(device) if not self.is_handleable(device) or not device.is_filesystem: self._log.warn(_('not mounting {0}: unhandled device', device)) return False if device.is_mounted: self._log.info(_('not mounting {0}: already mounted', device)) return True options = match_config(self._config, device, 'options', None) kwargs = dict(options=options) self._log.debug(_('mounting {0} with {1}', device, kwargs)) self._check_device_before_mount(device) mount_path = await device.mount(**kwargs) self._log.info(_('mounted {0} on {1}', device, mount_path)) return True
def generate_time(signal, sample_rate=1000): """ ----- Brief ----- Function intended to generate a time axis of the input signal. ----------- Description ----------- The time axis generated by the acquisition process originates a set of consecutive values that represents the advancement of time, but does not have specific units. Once the acquisitions are made with specific sampling frequencies, it is possible to calculate the time instant of each sample by multiplying that value by the sampling frequency. The current function maps the values in the file produced by Opensignals to their real temporal values. ---------- Parameters ---------- signal : list List with the signal samples. sample_rate : int Sampling frequency of acquisition. Returns ------- out : list Time axis with each list entry in seconds. """ # Download of signal if the input is a url. if _is_a_url(signal): # Check if it is a Google Drive sharable link. if "drive.google" in signal: signal = _generate_download_google_link(signal) data = load(signal, remote=True) key_level_1 = list(data.keys())[0] if "00:" in key_level_1: mac = key_level_1 chn = list(data[mac].keys())[0] signal = data[mac][chn] else: chn = key_level_1 signal = data[chn] nbr_of_samples = len(signal) end_of_time = nbr_of_samples / sample_rate # ================================= Generation of the Time Axis =============================== time_axis = numpy.linspace(0, end_of_time, nbr_of_samples) return list(time_axis)
----- Brief ----- Function intended to generate a time axis of the input signal. ----------- Description ----------- The time axis generated by the acquisition process originates a set of consecutive values that represents the advancement of time, but does not have specific units. Once the acquisitions are made with specific sampling frequencies, it is possible to calculate the time instant of each sample by multiplying that value by the sampling frequency. The current function maps the values in the file produced by Opensignals to their real temporal values. ---------- Parameters ---------- signal : list List with the signal samples. sample_rate : int Sampling frequency of acquisition. Returns ------- out : list Time axis with each list entry in seconds.
Below is the instruction that describes the task: ### Input: ----- Brief ----- Function intended to generate a time axis of the input signal. ----------- Description ----------- The time axis generated by the acquisition process originates a set of consecutive values that represents the advancement of time, but does not have specific units. Once the acquisitions are made with specific sampling frequencies, it is possible to calculate the time instant of each sample by multiplying that value by the sampling frequency. The current function maps the values in the file produced by Opensignals to their real temporal values. ---------- Parameters ---------- signal : list List with the signal samples. sample_rate : int Sampling frequency of acquisition. Returns ------- out : list Time axis with each list entry in seconds. ### Response: def generate_time(signal, sample_rate=1000): """ ----- Brief ----- Function intended to generate a time axis of the input signal. ----------- Description ----------- The time axis generated by the acquisition process originates a set of consecutive values that represents the advancement of time, but does not have specific units. Once the acquisitions are made with specific sampling frequencies, it is possible to calculate the time instant of each sample by multiplying that value by the sampling frequency. The current function maps the values in the file produced by Opensignals to their real temporal values. ---------- Parameters ---------- signal : list List with the signal samples. sample_rate : int Sampling frequency of acquisition. Returns ------- out : list Time axis with each list entry in seconds. """ # Download of signal if the input is a url. if _is_a_url(signal): # Check if it is a Google Drive sharable link.
if "drive.google" in signal: signal = _generate_download_google_link(signal) data = load(signal, remote=True) key_level_1 = list(data.keys())[0] if "00:" in key_level_1: mac = key_level_1 chn = list(data[mac].keys())[0] signal = data[mac][chn] else: chn = key_level_1 signal = data[chn] nbr_of_samples = len(signal) end_of_time = nbr_of_samples / sample_rate # ================================= Generation of the Time Axis =============================== time_axis = numpy.linspace(0, end_of_time, nbr_of_samples) return list(time_axis)
def non_silent_ratio_permutation(context_counts, context_to_mut, seq_context, gene_seq, num_permutations=10000): """Performs null-permutations for non-silent ratio across all genes. Parameters ---------- context_counts : pd.Series number of mutations for each context context_to_mut : dict dictionary mapping nucleotide context to a list of observed somatic base changes. seq_context : SequenceContext Sequence context for the entire gene sequence (regardless of where mutations occur). The nucleotide contexts are identified at positions along the gene. gene_seq : GeneSequence Sequence of gene of interest num_permutations : int, default: 10000 number of permutations to create for null Returns ------- non_silent_count_list : list of tuples list of non-silent and silent mutation counts under the null """ mycontexts = context_counts.index.tolist() somatic_base = [base for one_context in mycontexts for base in context_to_mut[one_context]] # get random positions determined by sequence context tmp_contxt_pos = seq_context.random_pos(context_counts.iteritems(), num_permutations) tmp_mut_pos = np.hstack(pos_array for base, pos_array in tmp_contxt_pos) # determine result of random positions non_silent_count_list = [] for row in tmp_mut_pos: # get info about mutations tmp_mut_info = mc.get_aa_mut_info(row, somatic_base, gene_seq) # calc deleterious mutation info tmp_non_silent = cutils.calc_non_silent_info(tmp_mut_info['Reference AA'], tmp_mut_info['Somatic AA'], tmp_mut_info['Codon Pos']) non_silent_count_list.append(tmp_non_silent) return non_silent_count_list
Performs null-permutations for non-silent ratio across all genes. Parameters ---------- context_counts : pd.Series number of mutations for each context context_to_mut : dict dictionary mapping nucleotide context to a list of observed somatic base changes. seq_context : SequenceContext Sequence context for the entire gene sequence (regardless of where mutations occur). The nucleotide contexts are identified at positions along the gene. gene_seq : GeneSequence Sequence of gene of interest num_permutations : int, default: 10000 number of permutations to create for null Returns ------- non_silent_count_list : list of tuples list of non-silent and silent mutation counts under the null
Below is the instruction that describes the task: ### Input: Performs null-permutations for non-silent ratio across all genes. Parameters ---------- context_counts : pd.Series number of mutations for each context context_to_mut : dict dictionary mapping nucleotide context to a list of observed somatic base changes. seq_context : SequenceContext Sequence context for the entire gene sequence (regardless of where mutations occur). The nucleotide contexts are identified at positions along the gene. gene_seq : GeneSequence Sequence of gene of interest num_permutations : int, default: 10000 number of permutations to create for null Returns ------- non_silent_count_list : list of tuples list of non-silent and silent mutation counts under the null ### Response: def non_silent_ratio_permutation(context_counts, context_to_mut, seq_context, gene_seq, num_permutations=10000): """Performs null-permutations for non-silent ratio across all genes. Parameters ---------- context_counts : pd.Series number of mutations for each context context_to_mut : dict dictionary mapping nucleotide context to a list of observed somatic base changes. seq_context : SequenceContext Sequence context for the entire gene sequence (regardless of where mutations occur). The nucleotide contexts are identified at positions along the gene.
gene_seq : GeneSequence Sequence of gene of interest num_permutations : int, default: 10000 number of permutations to create for null Returns ------- non_silent_count_list : list of tuples list of non-silent and silent mutation counts under the null """ mycontexts = context_counts.index.tolist() somatic_base = [base for one_context in mycontexts for base in context_to_mut[one_context]] # get random positions determined by sequence context tmp_contxt_pos = seq_context.random_pos(context_counts.iteritems(), num_permutations) tmp_mut_pos = np.hstack(pos_array for base, pos_array in tmp_contxt_pos) # determine result of random positions non_silent_count_list = [] for row in tmp_mut_pos: # get info about mutations tmp_mut_info = mc.get_aa_mut_info(row, somatic_base, gene_seq) # calc deleterious mutation info tmp_non_silent = cutils.calc_non_silent_info(tmp_mut_info['Reference AA'], tmp_mut_info['Somatic AA'], tmp_mut_info['Codon Pos']) non_silent_count_list.append(tmp_non_silent) return non_silent_count_list
def validate_IRkernel(venv_dir): """Validates that this env contains an IRkernel kernel and returns info to start it Returns: tuple (ARGV, language, resource_dir) """ r_exe_name = find_exe(venv_dir, "R") if r_exe_name is None: return [], None, None # check if this is really an IRkernel **kernel** import subprocess resources_dir = None try: print_resources = 'cat(as.character(system.file("kernelspec", package = "IRkernel")))' resources_dir_bytes = subprocess.check_output([r_exe_name, '--slave', '-e', print_resources]) resources_dir = resources_dir_bytes.decode(errors='ignore') except: # not installed? -> not usable in any case... return [], None, None argv = [r_exe_name, "--slave", "-e", "IRkernel::main()", "--args", "{connection_file}"] if not os.path.exists(resources_dir.strip()): # Fallback to our own log, but don't get the nice js goodies... resources_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "logos", "r") return argv, "r", resources_dir
Validates that this env contains an IRkernel kernel and returns info to start it Returns: tuple (ARGV, language, resource_dir)
Below is the instruction that describes the task: ### Input: Validates that this env contains an IRkernel kernel and returns info to start it Returns: tuple (ARGV, language, resource_dir) ### Response: def validate_IRkernel(venv_dir): """Validates that this env contains an IRkernel kernel and returns info to start it Returns: tuple (ARGV, language, resource_dir) """ r_exe_name = find_exe(venv_dir, "R") if r_exe_name is None: return [], None, None # check if this is really an IRkernel **kernel** import subprocess resources_dir = None try: print_resources = 'cat(as.character(system.file("kernelspec", package = "IRkernel")))' resources_dir_bytes = subprocess.check_output([r_exe_name, '--slave', '-e', print_resources]) resources_dir = resources_dir_bytes.decode(errors='ignore') except: # not installed? -> not usable in any case... return [], None, None argv = [r_exe_name, "--slave", "-e", "IRkernel::main()", "--args", "{connection_file}"] if not os.path.exists(resources_dir.strip()): # Fallback to our own log, but don't get the nice js goodies... resources_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "logos", "r") return argv, "r", resources_dir
def __search_dict(self, obj, item, parent, parents_ids=frozenset({}), print_as_attribute=False): """Search dictionaries""" if print_as_attribute: parent_text = "%s.%s" else: parent_text = "%s[%s]" obj_keys = set(obj.keys()) for item_key in obj_keys: if not print_as_attribute and isinstance(item_key, strings): item_key_str = "'%s'" % item_key else: item_key_str = item_key obj_child = obj[item_key] item_id = id(obj_child) if parents_ids and item_id in parents_ids: continue parents_ids_added = add_to_frozen_set(parents_ids, item_id) new_parent = parent_text % (parent, item_key_str) new_parent_cased = new_parent if self.case_sensitive else new_parent.lower() str_item = str(item) if (self.match_string and str_item == new_parent_cased) or\ (not self.match_string and str_item in new_parent_cased): self.__report( report_key='matched_paths', key=new_parent, value=obj_child) self.__search( obj_child, item, parent=new_parent, parents_ids=parents_ids_added)
Search dictionaries
Below is the instruction that describes the task: ### Input: Search dictionaries ### Response: def __search_dict(self, obj, item, parent, parents_ids=frozenset({}), print_as_attribute=False): """Search dictionaries""" if print_as_attribute: parent_text = "%s.%s" else: parent_text = "%s[%s]" obj_keys = set(obj.keys()) for item_key in obj_keys: if not print_as_attribute and isinstance(item_key, strings): item_key_str = "'%s'" % item_key else: item_key_str = item_key obj_child = obj[item_key] item_id = id(obj_child) if parents_ids and item_id in parents_ids: continue parents_ids_added = add_to_frozen_set(parents_ids, item_id) new_parent = parent_text % (parent, item_key_str) new_parent_cased = new_parent if self.case_sensitive else new_parent.lower() str_item = str(item) if (self.match_string and str_item == new_parent_cased) or\ (not self.match_string and str_item in new_parent_cased): self.__report( report_key='matched_paths', key=new_parent, value=obj_child) self.__search( obj_child, item, parent=new_parent, parents_ids=parents_ids_added)
def pop_job(self, returning=True): """ Pop a job from the pending jobs list. When returning == True, we prioritize the jobs whose functions are known to be returning (function.returning is True). As an optimization, we are sorting the pending jobs list according to job.function.returning. :param bool returning: Only pop a pending job if the corresponding function returns. :return: A pending job if we can find one, or None if we cannot find any that satisfies the requirement. :rtype: angr.analyses.cfg.cfg_fast.CFGJob """ if not self: return None if not returning: return self._pop_job(next(reversed(self._jobs.keys()))) # Prioritize returning functions for func_addr in reversed(self._jobs.keys()): if func_addr not in self._returning_functions: continue return self._pop_job(func_addr) return None
Pop a job from the pending jobs list. When returning == True, we prioritize the jobs whose functions are known to be returning (function.returning is True). As an optimization, we are sorting the pending jobs list according to job.function.returning. :param bool returning: Only pop a pending job if the corresponding function returns. :return: A pending job if we can find one, or None if we cannot find any that satisfies the requirement. :rtype: angr.analyses.cfg.cfg_fast.CFGJob
Below is the instruction that describes the task: ### Input: Pop a job from the pending jobs list. When returning == True, we prioritize the jobs whose functions are known to be returning (function.returning is True). As an optimization, we are sorting the pending jobs list according to job.function.returning. :param bool returning: Only pop a pending job if the corresponding function returns. :return: A pending job if we can find one, or None if we cannot find any that satisfies the requirement. :rtype: angr.analyses.cfg.cfg_fast.CFGJob ### Response: def pop_job(self, returning=True): """ Pop a job from the pending jobs list. When returning == True, we prioritize the jobs whose functions are known to be returning (function.returning is True). As an optimization, we are sorting the pending jobs list according to job.function.returning. :param bool returning: Only pop a pending job if the corresponding function returns. :return: A pending job if we can find one, or None if we cannot find any that satisfies the requirement. :rtype: angr.analyses.cfg.cfg_fast.CFGJob """ if not self: return None if not returning: return self._pop_job(next(reversed(self._jobs.keys()))) # Prioritize returning functions for func_addr in reversed(self._jobs.keys()): if func_addr not in self._returning_functions: continue return self._pop_job(func_addr) return None
def msg_callback(self, callback): """Set the message callback.""" if callable(callback): self._msg_callback = callback else: self._msg_callback = None
Set the message callback.
Below is the instruction that describes the task: ### Input: Set the message callback. ### Response: def msg_callback(self, callback): """Set the message callback.""" if callable(callback): self._msg_callback = callback else: self._msg_callback = None
def from_jsondict(cls, dict_, decode_string=base64.b64decode, **additional_args): r"""Create an instance from a JSON style dict. Instantiate this class with parameters specified by the dict. This method takes the following arguments. .. tabularcolumns:: |l|L| =============== ===================================================== Argument Description =============== ===================================================== dict\_ A dictionary which describes the parameters. For example, {"Param1": 100, "Param2": 200} decode_string (Optional) specify how to decode strings. The default is base64. This argument is used only for attributes which don't have explicit type annotations in _TYPE class attribute. additional_args (Optional) Additional kwargs for constructor. =============== ===================================================== """ decode = lambda k, x: cls._decode_value(k, x, decode_string, **additional_args) kwargs = cls._restore_args(_mapdict_kv(decode, dict_)) try: return cls(**dict(kwargs, **additional_args)) except TypeError: # debug print("CLS %s" % cls) print("ARG %s" % dict_) print("KWARG %s" % kwargs) raise
r"""Create an instance from a JSON style dict. Instantiate this class with parameters specified by the dict. This method takes the following arguments. .. tabularcolumns:: |l|L| =============== ===================================================== Argument Description =============== ===================================================== dict\_ A dictionary which describes the parameters. For example, {"Param1": 100, "Param2": 200} decode_string (Optional) specify how to decode strings. The default is base64. This argument is used only for attributes which don't have explicit type annotations in _TYPE class attribute. additional_args (Optional) Additional kwargs for constructor. =============== =====================================================
Below is the instruction that describes the task: ### Input: r"""Create an instance from a JSON style dict. Instantiate this class with parameters specified by the dict. This method takes the following arguments. .. tabularcolumns:: |l|L| =============== ===================================================== Argument Description =============== ===================================================== dict\_ A dictionary which describes the parameters. For example, {"Param1": 100, "Param2": 200} decode_string (Optional) specify how to decode strings. The default is base64. This argument is used only for attributes which don't have explicit type annotations in _TYPE class attribute. additional_args (Optional) Additional kwargs for constructor. =============== ===================================================== ### Response: def from_jsondict(cls, dict_, decode_string=base64.b64decode, **additional_args): r"""Create an instance from a JSON style dict. Instantiate this class with parameters specified by the dict. This method takes the following arguments. .. tabularcolumns:: |l|L| =============== ===================================================== Argument Description =============== ===================================================== dict\_ A dictionary which describes the parameters. For example, {"Param1": 100, "Param2": 200} decode_string (Optional) specify how to decode strings. The default is base64. This argument is used only for attributes which don't have explicit type annotations in _TYPE class attribute. additional_args (Optional) Additional kwargs for constructor. =============== ===================================================== """ decode = lambda k, x: cls._decode_value(k, x, decode_string, **additional_args) kwargs = cls._restore_args(_mapdict_kv(decode, dict_)) try: return cls(**dict(kwargs, **additional_args)) except TypeError: # debug print("CLS %s" % cls) print("ARG %s" % dict_) print("KWARG %s" % kwargs) raise
def save_config( self, cmd="copy running-config startup-config", confirm=True, confirm_response="y", ): """Save Config for Extreme SLX.""" return super(ExtremeSlxSSH, self).save_config( cmd=cmd, confirm=confirm, confirm_response=confirm_response )
Save Config for Extreme SLX.
Below is the instruction that describes the task: ### Input: Save Config for Extreme SLX. ### Response: def save_config( self, cmd="copy running-config startup-config", confirm=True, confirm_response="y", ): """Save Config for Extreme SLX.""" return super(ExtremeSlxSSH, self).save_config( cmd=cmd, confirm=confirm, confirm_response=confirm_response )
def get_aliases(self, lang='en'): """ Retrieve the aliases in a certain language :param lang: The Wikidata language the aliases should be retrieved for :return: Returns a list of aliases, an empty list if none exist for the specified language """ if self.fast_run: return list(self.fast_run_container.get_language_data(self.wd_item_id, lang, 'aliases')) alias_list = [] if 'aliases' in self.wd_json_representation and lang in self.wd_json_representation['aliases']: for alias in self.wd_json_representation['aliases'][lang]: alias_list.append(alias['value']) return alias_list
Retrieve the aliases in a certain language :param lang: The Wikidata language the aliases should be retrieved for :return: Returns a list of aliases, an empty list if none exist for the specified language
Below is the instruction that describes the task: ### Input: Retrieve the aliases in a certain language :param lang: The Wikidata language the aliases should be retrieved for :return: Returns a list of aliases, an empty list if none exist for the specified language ### Response: def get_aliases(self, lang='en'): """ Retrieve the aliases in a certain language :param lang: The Wikidata language the aliases should be retrieved for :return: Returns a list of aliases, an empty list if none exist for the specified language """ if self.fast_run: return list(self.fast_run_container.get_language_data(self.wd_item_id, lang, 'aliases')) alias_list = [] if 'aliases' in self.wd_json_representation and lang in self.wd_json_representation['aliases']: for alias in self.wd_json_representation['aliases'][lang]: alias_list.append(alias['value']) return alias_list
def check_limit(self, limit): """ Checks if the given limit is valid. A limit must be > 0 to be considered valid. Raises ValueError when the *limit* is not > 0. """ if limit > 0: self.limit = limit else: raise ValueError("Rule limit must be strictly > 0 ({0} given)" .format(limit)) return self
Checks if the given limit is valid. A limit must be > 0 to be considered valid. Raises ValueError when the *limit* is not > 0.
Below is the instruction that describes the task: ### Input: Checks if the given limit is valid. A limit must be > 0 to be considered valid. Raises ValueError when the *limit* is not > 0. ### Response: def check_limit(self, limit): """ Checks if the given limit is valid. A limit must be > 0 to be considered valid. Raises ValueError when the *limit* is not > 0. """ if limit > 0: self.limit = limit else: raise ValueError("Rule limit must be strictly > 0 ({0} given)" .format(limit)) return self
def migrate_font(font): "Convert PythonCard font description to gui2py style" if 'faceName' in font: font['face'] = font.pop('faceName') if 'family' in font and font['family'] == 'sansSerif': font['family'] = 'sans serif' return font
Convert PythonCard font description to gui2py style
Below is the instruction that describes the task: ### Input: Convert PythonCard font description to gui2py style ### Response: def migrate_font(font): "Convert PythonCard font description to gui2py style" if 'faceName' in font: font['face'] = font.pop('faceName') if 'family' in font and font['family'] == 'sansSerif': font['family'] = 'sans serif' return font
def set_background(self, color, loc='all'): """ Sets background color Parameters ---------- color : string or 3 item list, optional, defaults to white Either a string, rgb list, or hex color string. For example: color='white' color='w' color=[1, 1, 1] color='#FFFFFF' loc : int, tuple, list, or str, optional Index of the renderer to add the actor to. For example, ``loc=2`` or ``loc=(1, 1)``. If ``loc='all'`` then all render windows will have their background set. """ if color is None: color = rcParams['background'] if isinstance(color, str): if color.lower() in 'paraview' or color.lower() in 'pv': # Use the default ParaView background color color = PV_BACKGROUND else: color = vtki.string_to_rgb(color) if loc =='all': for renderer in self.renderers: renderer.SetBackground(color) else: renderer = self.renderers[self.loc_to_index(loc)] renderer.SetBackground(color)
Sets background color Parameters ---------- color : string or 3 item list, optional, defaults to white Either a string, rgb list, or hex color string. For example: color='white' color='w' color=[1, 1, 1] color='#FFFFFF' loc : int, tuple, list, or str, optional Index of the renderer to add the actor to. For example, ``loc=2`` or ``loc=(1, 1)``. If ``loc='all'`` then all render windows will have their background set.
Below is the instruction that describes the task: ### Input: Sets background color Parameters ---------- color : string or 3 item list, optional, defaults to white Either a string, rgb list, or hex color string. For example: color='white' color='w' color=[1, 1, 1] color='#FFFFFF' loc : int, tuple, list, or str, optional Index of the renderer to add the actor to. For example, ``loc=2`` or ``loc=(1, 1)``. If ``loc='all'`` then all render windows will have their background set. ### Response: def set_background(self, color, loc='all'): """ Sets background color Parameters ---------- color : string or 3 item list, optional, defaults to white Either a string, rgb list, or hex color string. For example: color='white' color='w' color=[1, 1, 1] color='#FFFFFF' loc : int, tuple, list, or str, optional Index of the renderer to add the actor to. For example, ``loc=2`` or ``loc=(1, 1)``. If ``loc='all'`` then all render windows will have their background set. """ if color is None: color = rcParams['background'] if isinstance(color, str): if color.lower() in 'paraview' or color.lower() in 'pv': # Use the default ParaView background color color = PV_BACKGROUND else: color = vtki.string_to_rgb(color) if loc =='all': for renderer in self.renderers: renderer.SetBackground(color) else: renderer = self.renderers[self.loc_to_index(loc)] renderer.SetBackground(color)
def from_dict(data, ctx): """ Instantiate a new PositionFinancing from a dict (generally from loading a JSON response). The data used to instantiate the PositionFinancing is a shallow copy of the dict passed in, with any complex child types instantiated appropriately. """ data = data.copy() if data.get('financing') is not None: data['financing'] = ctx.convert_decimal_number( data.get('financing') ) if data.get('openTradeFinancings') is not None: data['openTradeFinancings'] = [ ctx.transaction.OpenTradeFinancing.from_dict(d, ctx) for d in data.get('openTradeFinancings') ] return PositionFinancing(**data)
Instantiate a new PositionFinancing from a dict (generally from loading a JSON response). The data used to instantiate the PositionFinancing is a shallow copy of the dict passed in, with any complex child types instantiated appropriately.
Below is the instruction that describes the task: ### Input: Instantiate a new PositionFinancing from a dict (generally from loading a JSON response). The data used to instantiate the PositionFinancing is a shallow copy of the dict passed in, with any complex child types instantiated appropriately. ### Response: def from_dict(data, ctx): """ Instantiate a new PositionFinancing from a dict (generally from loading a JSON response). The data used to instantiate the PositionFinancing is a shallow copy of the dict passed in, with any complex child types instantiated appropriately. """ data = data.copy() if data.get('financing') is not None: data['financing'] = ctx.convert_decimal_number( data.get('financing') ) if data.get('openTradeFinancings') is not None: data['openTradeFinancings'] = [ ctx.transaction.OpenTradeFinancing.from_dict(d, ctx) for d in data.get('openTradeFinancings') ] return PositionFinancing(**data)
def _after(self, response): """Calculates the request duration, and adds a transaction ID to the header. """ # Ignore excluded routes. if getattr(request, '_tracy_exclude', False): return response duration = None if getattr(request, '_tracy_start_time', None): duration = monotonic() - request._tracy_start_time # Add Trace_ID header. trace_id = None if getattr(request, '_tracy_id', None): trace_id = request._tracy_id response.headers[trace_header_id] = trace_id # Get the invoking client. trace_client = None if getattr(request, '_tracy_client', None): trace_client = request._tracy_client # Extra log kwargs. d = {'status_code': response.status_code, 'url': request.base_url, 'client_ip': request.remote_addr, 'trace_name': trace_client, 'trace_id': trace_id, 'trace_duration': duration} logger.info(None, extra=d) return response
Calculates the request duration, and adds a transaction ID to the header.
Below is the instruction that describes the task: ### Input: Calculates the request duration, and adds a transaction ID to the header. ### Response: def _after(self, response): """Calculates the request duration, and adds a transaction ID to the header. """ # Ignore excluded routes. if getattr(request, '_tracy_exclude', False): return response duration = None if getattr(request, '_tracy_start_time', None): duration = monotonic() - request._tracy_start_time # Add Trace_ID header. trace_id = None if getattr(request, '_tracy_id', None): trace_id = request._tracy_id response.headers[trace_header_id] = trace_id # Get the invoking client. trace_client = None if getattr(request, '_tracy_client', None): trace_client = request._tracy_client # Extra log kwargs. d = {'status_code': response.status_code, 'url': request.base_url, 'client_ip': request.remote_addr, 'trace_name': trace_client, 'trace_id': trace_id, 'trace_duration': duration} logger.info(None, extra=d) return response
def xyz(self): """Return all particle coordinates in this compound. Returns ------- pos : np.ndarray, shape=(n, 3), dtype=float Array with the positions of all particles. """ if not self.children: pos = np.expand_dims(self._pos, axis=0) else: arr = np.fromiter(itertools.chain.from_iterable( particle.pos for particle in self.particles()), dtype=float) pos = arr.reshape((-1, 3)) return pos
Return all particle coordinates in this compound. Returns ------- pos : np.ndarray, shape=(n, 3), dtype=float Array with the positions of all particles.
Below is the instruction that describes the task: ### Input: Return all particle coordinates in this compound. Returns ------- pos : np.ndarray, shape=(n, 3), dtype=float Array with the positions of all particles. ### Response: def xyz(self): """Return all particle coordinates in this compound. Returns ------- pos : np.ndarray, shape=(n, 3), dtype=float Array with the positions of all particles. """ if not self.children: pos = np.expand_dims(self._pos, axis=0) else: arr = np.fromiter(itertools.chain.from_iterable( particle.pos for particle in self.particles()), dtype=float) pos = arr.reshape((-1, 3)) return pos
def units(cls, scale=1): ''' :scale: optional integer scaling factor :return: list of three Point subclass Returns three points whose coordinates are the head of a unit vector from the origin ( conventionally i, j and k). ''' return [cls(x=scale), cls(y=scale), cls(z=scale)]
:scale: optional integer scaling factor :return: list of three Point subclass Returns three points whose coordinates are the head of a unit vector from the origin ( conventionally i, j and k).
Below is the instruction that describes the task: ### Input: :scale: optional integer scaling factor :return: list of three Point subclass Returns three points whose coordinates are the head of a unit vector from the origin ( conventionally i, j and k). ### Response: def units(cls, scale=1): ''' :scale: optional integer scaling factor :return: list of three Point subclass Returns three points whose coordinates are the head of a unit vector from the origin ( conventionally i, j and k). ''' return [cls(x=scale), cls(y=scale), cls(z=scale)]