Columns: code (string, lengths 75 to 104k), docstring (string, lengths 1 to 46.9k), text (string, lengths 164 to 112k)
def vbd_list(name=None, call=None):
    '''
    Get a list of VBDs on a VM

    **requires**: the name of the vm with the vbd definition

    .. code-block:: bash

        salt-cloud -a vbd_list xenvm01
    '''
    if call == 'function':
        raise SaltCloudSystemExit(
            'This function must be called with -a, --action argument.'
        )
    if name is None:
        return 'A name kwarg is required'
    ret = {}
    data = {}
    session = _get_session()
    vms = session.xenapi.VM.get_by_name_label(name)
    if len(vms) == 1:
        vm = vms[0]
        vbds = session.xenapi.VM.get_VBDs(vm)
        if vbds is not None:
            x = 0
            for vbd in vbds:
                vbd_record = session.xenapi.VBD.get_record(vbd)
                data['vbd-{}'.format(x)] = vbd_record
                x += 1
            ret = data
    return ret
Get a list of VBDs on a VM **requires**: the name of the vm with the vbd definition .. code-block:: bash salt-cloud -a vbd_list xenvm01
Below is the instruction that describes the task: ### Input: Get a list of VBDs on a VM **requires**: the name of the vm with the vbd definition .. code-block:: bash salt-cloud -a vbd_list xenvm01 ### Response: def vbd_list(name=None, call=None): ''' Get a list of VBDs on a VM **requires**: the name of the vm with the vbd definition .. code-block:: bash salt-cloud -a vbd_list xenvm01 ''' if call == 'function': raise SaltCloudSystemExit( 'This function must be called with -a, --action argument.' ) if name is None: return 'A name kwarg is required' ret = {} data = {} session = _get_session() vms = session.xenapi.VM.get_by_name_label(name) if len(vms) == 1: vm = vms[0] vbds = session.xenapi.VM.get_VBDs(vm) if vbds is not None: x = 0 for vbd in vbds: vbd_record = session.xenapi.VBD.get_record(vbd) data['vbd-{}'.format(x)] = vbd_record x += 1 ret = data return ret
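The manual `x` counter in `vbd_list` keys each VBD record as `vbd-N`. As a quick dependency-free illustration (the dict records here are hypothetical stand-ins for what `session.xenapi.VBD.get_record` would return), the same keying scheme reads more idiomatically with `enumerate`:

```python
# Hypothetical stand-in records; the real code fetches these from the XenAPI session.
vbd_records = [{'device': 'xvda'}, {'device': 'xvdb'}]

# Build the same 'vbd-N' keyed dict the function returns, using enumerate
# instead of a manually incremented counter.
data = {'vbd-{}'.format(i): rec for i, rec in enumerate(vbd_records)}
print(sorted(data))  # → ['vbd-0', 'vbd-1']
```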
def residual_map_from_data_mask_and_model_data(data, mask, model_data):
    """Compute the residual map between masked observed data and model data, where:

    Residuals = (Data - Model_Data).

    Parameters
    ----------
    data : np.ndarray
        The observed data that is fitted.
    mask : np.ndarray
        The mask applied to the data, where *False* entries are included in the calculation.
    model_data : np.ndarray
        The model data used to fit the observed data.
    """
    return np.subtract(
        data, model_data, out=np.zeros_like(data), where=np.asarray(mask) == 0
    )
Compute the residual map between masked observed data and model data, where: Residuals = (Data - Model_Data). Parameters ---------- data : np.ndarray The observed data that is fitted. mask : np.ndarray The mask applied to the data, where *False* entries are included in the calculation. model_data : np.ndarray The model data used to fit the observed data.
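The `where=np.asarray(mask) == 0` argument means the subtraction runs only where the mask is `False` (0); masked entries keep the `0.0` supplied by the `out` array. A dependency-free sketch of that behaviour, with plain lists standing in for arrays:

```python
def residual_map(data, mask, model_data):
    # Mirror np.subtract(..., out=np.zeros_like(data), where=mask == 0):
    # unmasked (False) entries get data - model, masked entries stay 0.0.
    return [d - m if not msk else 0.0
            for d, m, msk in zip(data, model_data, mask)]

print(residual_map([2.0, 3.0, 5.0], [False, True, False], [1.0, 1.0, 1.0]))
# → [1.0, 0.0, 4.0]
```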
def bed12(self, feature, block_featuretype=['exon'],
          thick_featuretype=['CDS'], thin_featuretype=None,
          name_field='ID', color=None):
    """
    Converts `feature` into a BED12 format.

    GFF and GTF files do not necessarily define genes consistently, so this
    method provides flexibility in specifying what to call a "transcript".

    Parameters
    ----------
    feature : str or Feature instance
        In most cases, this feature should be a transcript rather than a gene.

    block_featuretype : str or list
        Which featuretype to use as the exons. These are represented as
        blocks in the BED12 format. Typically 'exon'.

        Use the `thick_featuretype` and `thin_featuretype` arguments to
        control the display of CDS as thicker blocks and UTRs as thinner
        blocks.

        Note that the features for `thick` or `thin` are *not* automatically
        included in the blocks; if you do want them included, then those
        featuretypes should be added to this `block_features` list.

        If no child features of type `block_featuretype` are found, then the
        full `feature` is returned in BED12 format as if it had a single exon.

    thick_featuretype : str or list
        Child featuretype(s) to use in order to determine the boundaries of
        the "thick" blocks. In BED12 format, these represent coding
        sequences; typically this would be set to "CDS". This argument is
        mutually exclusive with `thin_featuretype`.

        Specifically, the BED12 thickStart will be the start coord of the
        first `thick` item and the thickEnd will be the stop coord of the
        last `thick` item.

    thin_featuretype : str or list
        Child featuretype(s) to use in order to determine the boundaries of
        the "thin" blocks. In BED12 format, these represent untranslated
        regions. Typically "utr" or ['three_prime_UTR', 'five_prime_UTR'].
        Mutually exclusive with `thick_featuretype`.

        Specifically, the BED12 thickStart field will be the stop coord of
        the first `thin` item and the thickEnd field will be the start coord
        of the last `thin` item.

    name_field : str
        Which attribute of `feature` to use as the feature's name. If this
        field is not present, a "." placeholder will be used instead.

    color : None or str
        If None, then use black (0,0,0) as the RGB color; otherwise this
        should be a comma-separated string of R,G,B values each of which
        are integers in the range 0-255.
    """
    if thick_featuretype and thin_featuretype:
        raise ValueError("Can only specify one of `thick_featuretype` or "
                         "`thin_featuretype`")
    exons = list(self.children(feature, featuretype=block_featuretype,
                               order_by='start'))
    if len(exons) == 0:
        exons = [feature]
    feature = self[feature]
    first = exons[0].start
    last = exons[-1].stop

    if first != feature.start:
        raise ValueError(
            "Start of first exon (%s) does not match start of feature (%s)"
            % (first, feature.start))
    if last != feature.stop:
        raise ValueError(
            "End of last exon (%s) does not match end of feature (%s)"
            % (last, feature.stop))

    if color is None:
        color = '0,0,0'
    color = color.replace(' ', '').strip()

    # Use field names as defined at
    # http://genome.ucsc.edu/FAQ/FAQformat.html#format1
    chrom = feature.chrom
    chromStart = feature.start - 1
    chromEnd = feature.stop

    orig = constants.always_return_list
    constants.always_return_list = True
    try:
        name = feature[name_field][0]
    except KeyError:
        name = "."
    constants.always_return_list = orig

    score = feature.score
    if score == '.':
        score = '0'
    strand = feature.strand
    itemRgb = color
    blockCount = len(exons)
    blockSizes = [len(i) for i in exons]
    blockStarts = [i.start - 1 - chromStart for i in exons]

    if thick_featuretype:
        thick = list(self.children(feature, featuretype=thick_featuretype,
                                   order_by='start'))
        if len(thick) == 0:
            thickStart = feature.start
            thickEnd = feature.stop
        else:
            thickStart = thick[0].start - 1  # BED 0-based coords
            thickEnd = thick[-1].stop

    if thin_featuretype:
        thin = list(self.children(feature, featuretype=thin_featuretype,
                                  order_by='start'))
        if len(thin) == 0:
            thickStart = feature.start
            thickEnd = feature.stop
        else:
            thickStart = thin[0].stop
            thickEnd = thin[-1].start - 1  # BED 0-based coords

    tst = chromStart + blockStarts[-1] + blockSizes[-1]
    assert tst == chromEnd, "tst=%s; chromEnd=%s" % (tst, chromEnd)

    fields = [
        chrom,
        chromStart,
        chromEnd,
        name,
        score,
        strand,
        thickStart,
        thickEnd,
        itemRgb,
        blockCount,
        ','.join(map(str, blockSizes)),
        ','.join(map(str, blockStarts))]
    return '\t'.join(map(str, fields))
Converts `feature` into a BED12 format. GFF and GTF files do not necessarily define genes consistently, so this method provides flexibility in specifying what to call a "transcript". Parameters ---------- feature : str or Feature instance In most cases, this feature should be a transcript rather than a gene. block_featuretype : str or list Which featuretype to use as the exons. These are represented as blocks in the BED12 format. Typically 'exon'. Use the `thick_featuretype` and `thin_featuretype` arguments to control the display of CDS as thicker blocks and UTRs as thinner blocks. Note that the features for `thick` or `thin` are *not* automatically included in the blocks; if you do want them included, then those featuretypes should be added to this `block_features` list. If no child features of type `block_featuretype` are found, then the full `feature` is returned in BED12 format as if it had a single exon. thick_featuretype : str or list Child featuretype(s) to use in order to determine the boundaries of the "thick" blocks. In BED12 format, these represent coding sequences; typically this would be set to "CDS". This argument is mutually exclusive with `thin_featuretype`. Specifically, the BED12 thickStart will be the start coord of the first `thick` item and the thickEnd will be the stop coord of the last `thick` item. thin_featuretype : str or list Child featuretype(s) to use in order to determine the boundaries of the "thin" blocks. In BED12 format, these represent untranslated regions. Typically "utr" or ['three_prime_UTR', 'five_prime_UTR']. Mutually exclusive with `thick_featuretype`. Specifically, the BED12 thickStart field will be the stop coord of the first `thin` item and the thickEnd field will be the start coord of the last `thin` item. name_field : str Which attribute of `feature` to use as the feature's name. If this field is not present, a "." placeholder will be used instead. color : None or str If None, then use black (0,0,0) as the RGB color; otherwise this should be a comma-separated string of R,G,B values each of which are integers in the range 0-255.
def _EccZmaxRperiRap(self, *args, **kwargs):
    """
    NAME:

       EccZmaxRperiRap (_EccZmaxRperiRap)

    PURPOSE:

       evaluate the eccentricity, maximum height above the plane, peri- and
       apocenter for a spherical potential

    INPUT:

       Either:

          a) R,vR,vT,z,vz[,phi]:

             1) floats: phase-space value for single object (phi is optional) (each can be a Quantity)

             2) numpy.ndarray: [N] phase-space values for N objects (each can be a Quantity)

          b) Orbit instance: initial condition used if that's it, orbit(t) if there is a time given as well as the second argument

    OUTPUT:

       (e,zmax,rperi,rap)

    HISTORY:

       2017-12-22 - Written - Bovy (UofT)

    """
    if len(args) == 5:  # R, vR, vT, z, vz
        R, vR, vT, z, vz = args
    elif len(args) == 6:  # R, vR, vT, z, vz, phi
        R, vR, vT, z, vz, phi = args
    else:
        self._parse_eval_args(*args)
        R = self._eval_R
        vR = self._eval_vR
        vT = self._eval_vT
        z = self._eval_z
        vz = self._eval_vz
    if isinstance(R, float):
        R = nu.array([R])
        vR = nu.array([vR])
        vT = nu.array([vT])
        z = nu.array([z])
        vz = nu.array([vz])
    if self._c:  # pragma: no cover
        pass
    else:
        Lz = R * vT
        Lx = -z * vT
        Ly = z * vR - R * vz
        L2 = Lx * Lx + Ly * Ly + Lz * Lz
        L = nu.sqrt(L2)
        # Set up an actionAngleAxi object for EL and rap/rperi calculations
        axiR = nu.sqrt(R**2. + z**2.)
        axivT = L / axiR
        axivR = (R * vR + z * vz) / axiR
        rperi, rap = [], []
        for ii in range(len(axiR)):
            axiaA = actionAngleAxi(axiR[ii], axivR[ii], axivT[ii],
                                   pot=self._2dpot)
            trperi, trap = axiaA.calcRapRperi()
            rperi.append(trperi)
            rap.append(trap)
        rperi = nu.array(rperi)
        rap = nu.array(rap)
        return ((rap - rperi) / (rap + rperi),
                rap * nu.sqrt(1. - Lz**2. / L2),
                rperi, rap)
NAME: EccZmaxRperiRap (_EccZmaxRperiRap) PURPOSE: evaluate the eccentricity, maximum height above the plane, peri- and apocenter for a spherical potential INPUT: Either: a) R,vR,vT,z,vz[,phi]: 1) floats: phase-space value for single object (phi is optional) (each can be a Quantity) 2) numpy.ndarray: [N] phase-space values for N objects (each can be a Quantity) b) Orbit instance: initial condition used if that's it, orbit(t) if there is a time given as well as the second argument OUTPUT: (e,zmax,rperi,rap) HISTORY: 2017-12-22 - Written - Bovy (UofT)
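The return statement of `_EccZmaxRperiRap` packs four quantities; eccentricity and maximum height follow directly from the peri-/apocenter radii and the angular momentum components. A small numerical sketch of those two formulas (the orbit values below are made up for illustration, and `math` stands in for numpy):

```python
import math

def ecc_and_zmax(rperi, rap, Lz, L2):
    # e = (rap - rperi) / (rap + rperi); zmax = rap * sqrt(1 - Lz^2 / L^2),
    # matching the first two entries of the function's return tuple.
    e = (rap - rperi) / (rap + rperi)
    zmax = rap * math.sqrt(1. - Lz**2. / L2)
    return e, zmax

e, zmax = ecc_and_zmax(rperi=0.5, rap=1.5, Lz=0.8, L2=1.0)
print(round(e, 3), round(zmax, 3))  # → 0.5 0.9
```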
def _versions_from_changelog(changelog):
    """
    Return all released versions from given ``changelog``, sorted.

    :param dict changelog:
        A changelog dict as returned by ``releases.util.parse_changelog``.

    :returns: A sorted list of `semantic_version.Version` objects.
    """
    versions = [Version(x) for x in changelog if BUGFIX_RELEASE_RE.match(x)]
    return sorted(versions)
Return all released versions from given ``changelog``, sorted. :param dict changelog: A changelog dict as returned by ``releases.util.parse_changelog``. :returns: A sorted list of `semantic_version.Version` objects.
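The helper leans on `semantic_version.Version` for numeric-aware ordering; sorting the raw strings would put '0.10.0' before '0.2.0'. A dependency-free sketch of the same idea using integer tuples as the sort key (the regex and changelog keys here are illustrative, not the library's actual `BUGFIX_RELEASE_RE`):

```python
import re

# Illustrative stand-in for BUGFIX_RELEASE_RE: matches bare x.y.z version keys.
RELEASE_RE = re.compile(r'^\d+\.\d+\.\d+$')

def versions_from_changelog(changelog):
    # Keep keys that look like releases and sort them numerically,
    # not lexically, by converting each to a tuple of ints.
    versions = [k for k in changelog if RELEASE_RE.match(k)]
    return sorted(versions, key=lambda v: tuple(int(p) for p in v.split('.')))

changelog = {'0.10.0': [], '0.2.0': [], 'unreleased': [], '1.0.0': []}
print(versions_from_changelog(changelog))  # → ['0.2.0', '0.10.0', '1.0.0']
```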
def render_item(self, contentitem):
    """
    Render the individual item.
    May raise :class:`SkipItem` to ignore an item.
    """
    render_language = get_render_language(contentitem)
    with smart_override(render_language):
        # Plugin output is likely HTML, but it should be placed in mark_safe()
        # to raise awareness about escaping.
        # This is just like Django's Input.render() and unlike Node.render().
        return contentitem.plugin._render_contentitem(self.request, contentitem)
Render the individual item. May raise :class:`SkipItem` to ignore an item.
def _safe_call(obj, methname, *args, **kwargs):
    """
    Safely calls the method with the given methname on the given object.
    Remaining positional and keyword arguments are passed to the method.
    The return value is None, if the method is not available, or the
    return value of the method.
    """
    meth = getattr(obj, methname, None)
    if meth is None or not callable(meth):
        return
    return meth(*args, **kwargs)
Safely calls the method with the given methname on the given object. Remaining positional and keyword arguments are passed to the method. The return value is None, if the method is not available, or the return value of the method.
def assumed_session(role_arn, session_name, session=None, region=None, external_id=None):
    """STS Role assume a boto3.Session

    With automatic credential renewal.

    Args:
      role_arn: iam role arn to assume
      session_name: client session identifier
      session: an optional extant session, note session is captured in a
        function closure for renewing the sts assumed role.

    :return: a boto3 session using the sts assumed role credentials

    Notes: We have to poke at botocore internals a few times
    """
    if session is None:
        session = Session()

    retry = get_retry(('Throttling',))

    def refresh():
        parameters = {"RoleArn": role_arn, "RoleSessionName": session_name}
        if external_id is not None:
            parameters['ExternalId'] = external_id
        credentials = retry(
            session.client('sts').assume_role, **parameters)['Credentials']
        return dict(
            access_key=credentials['AccessKeyId'],
            secret_key=credentials['SecretAccessKey'],
            token=credentials['SessionToken'],
            # Silly that we basically stringify so it can be parsed again
            expiry_time=credentials['Expiration'].isoformat())

    session_credentials = RefreshableCredentials.create_from_metadata(
        metadata=refresh(),
        refresh_using=refresh,
        method='sts-assume-role')

    # so dirty.. it hurts, no clean way to set this outside of the
    # internals poke. There's some work upstream on making this nicer
    # but it's pretty baroque as well with upstream support.
    # https://github.com/boto/boto3/issues/443
    # https://github.com/boto/botocore/issues/761
    s = get_session()
    s._credentials = session_credentials
    if region is None:
        region = s.get_config_variable('region') or 'us-east-1'
    s.set_config_variable('region', region)
    return Session(botocore_session=s)
STS Role assume a boto3.Session With automatic credential renewal. Args: role_arn: iam role arn to assume session_name: client session identifier session: an optional extant session, note session is captured in a function closure for renewing the sts assumed role. :return: a boto3 session using the sts assumed role credentials Notes: We have to poke at botocore internals a few times
def plot(
    self,
    best=False,
    ax=None,
    title=None,
    figsize=None,
    temp_range=None,
    alpha=None,
    **kwargs
):
    """ Plot """
    return plot_caltrack_candidate(
        self,
        best=best,
        ax=ax,
        title=title,
        figsize=figsize,
        temp_range=temp_range,
        alpha=alpha,
        **kwargs
    )
Plot
def _handle_key(self, character, event):
    """Handles either a key press or release, depending on ``event``.

    :param character: The key to handle. See :meth:`press_key` and
        :meth:`release_key` for information about this parameter.

    :param event: The *Xlib* event. This should be either
        :attr:`Xlib.X.KeyPress` or :attr:`Xlib.X.KeyRelease`
    """
    try:
        # Detect uppercase or shifted character
        shifted = self.is_char_shifted(character)
    except AttributeError:
        # Handle the case of integer keycode argument
        with display_manager(self.display) as d:
            fake_input(d, event, character)
    else:
        with display_manager(self.display) as d:
            if shifted:
                fake_input(d, event, self.shift_key)
            keycode = self.lookup_character_keycode(character)
            fake_input(d, event, keycode)
Handles either a key press or release, depending on ``event``. :param character: The key to handle. See :meth:`press_key` and :meth:`release_key` for information about this parameter. :param event: The *Xlib* event. This should be either :attr:`Xlib.X.KeyPress` or :attr:`Xlib.X.KeyRelease`
def _make_it_3d(img):
    """Enforce that img is a 3D img-like object, if it is not, raise a TypeError.
    i.e., remove dimensions of size 1.

    Parameters
    ----------
    img: img-like object

    Returns
    -------
    3D img-like object
    """
    shape = get_shape(img)
    if len(shape) == 3:
        return img
    elif len(shape) == 4 and shape[3] == 1:
        # "squeeze" the image.
        try:
            data = get_data(img)
            affine = img.get_affine()
            img = nib.Nifti1Image(data[:, :, :, 0], affine)
        except Exception as exc:
            raise Exception("Error making image '{}' a 3D volume file.".format(img)) from exc
        else:
            return img
    else:
        raise TypeError("A 3D image is expected, but an image with a shape of {} was given.".format(shape))
Enforce that img is a 3D img-like object, if it is not, raise a TypeError. i.e., remove dimensions of size 1. Parameters ---------- img: img-like object Returns ------- 3D img-like object
Below is the the instruction that describes the task: ### Input: Enforce that img is a 3D img-like object, if it is not, raise a TypeError. i.e., remove dimensions of size 1. Parameters ---------- img: img-like object Returns ------- 3D img-like object ### Response: def _make_it_3d(img): """Enforce that img is a 3D img-like object, if it is not, raise a TypeError. i.e., remove dimensions of size 1. Parameters ---------- img: img-like object Returns ------- 3D img-like object """ shape = get_shape(img) if len(shape) == 3: return img elif (len(shape) == 4 and shape[3] == 1): # "squeeze" the image. try: data = get_data(img) affine = img.get_affine() img = nib.Nifti1Image(data[:, :, :, 0], affine) except Exception as exc: raise Exception("Error making image '{}' a 3D volume file.".format(img)) from exc else: return img else: raise TypeError("A 3D image is expected, but an image with a shape of {} was given.".format(shape))
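The shape handling in `_make_it_3d` can be checked without nibabel; `to_3d_shape` below is a hypothetical helper that applies the same three-way decision to a bare shape tuple (pass 3D through, squeeze a trailing singleton, reject anything else).

```python
def to_3d_shape(shape):
    # Mirror _make_it_3d's logic on a plain shape tuple (illustrative only).
    if len(shape) == 3:
        return shape                      # already 3D, pass through
    if len(shape) == 4 and shape[3] == 1:
        return shape[:3]                  # "squeeze" the trailing singleton axis
    raise TypeError(
        "A 3D image is expected, but an image with a shape of {} was given.".format(shape)
    )

print(to_3d_shape((91, 109, 91)))      # (91, 109, 91)
print(to_3d_shape((91, 109, 91, 1)))   # (91, 109, 91)
```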
def unsubscribe_from_order_book(self, pair, **kwargs): """Unsubscribe from the passed pair's order book channel. :param pair: str, Symbol pair to request data for :param kwargs: :return: """ identifier = ('book', pair) self._unsubscribe('book', identifier, symbol=pair, **kwargs)
Unsubscribe from the passed pair's order book channel. :param pair: str, Symbol pair to request data for :param kwargs: :return:
Below is the instruction that describes the task: ### Input: Unsubscribe from the passed pair's order book channel. :param pair: str, Symbol pair to request data for :param kwargs: :return: ### Response: def unsubscribe_from_order_book(self, pair, **kwargs): """Unsubscribe from the passed pair's order book channel. :param pair: str, Symbol pair to request data for :param kwargs: :return: """ identifier = ('book', pair) self._unsubscribe('book', identifier, symbol=pair, **kwargs)
def from_line(cls, line): """:return: New RefLogEntry instance from the given revlog line. :param line: line bytes without trailing newline :raise ValueError: If line could not be parsed""" line = line.decode(defenc) fields = line.split('\t', 1) if len(fields) == 1: info, msg = fields[0], None elif len(fields) == 2: info, msg = fields else: raise ValueError("Line must have up to two TAB-separated fields." " Got %s" % repr(line)) # END handle first split oldhexsha = info[:40] newhexsha = info[41:81] for hexsha in (oldhexsha, newhexsha): if not cls._re_hexsha_only.match(hexsha): raise ValueError("Invalid hexsha: %r" % (hexsha,)) # END if hexsha re doesn't match # END for each hexsha email_end = info.find('>', 82) if email_end == -1: raise ValueError("Missing token: >") # END handle missing end brace actor = Actor._from_string(info[82:email_end + 1]) time, tz_offset = parse_date(info[email_end + 2:]) return RefLogEntry((oldhexsha, newhexsha, actor, (time, tz_offset), msg))
:return: New RefLogEntry instance from the given revlog line. :param line: line bytes without trailing newline :raise ValueError: If line could not be parsed
Below is the the instruction that describes the task: ### Input: :return: New RefLogEntry instance from the given revlog line. :param line: line bytes without trailing newline :raise ValueError: If line could not be parsed ### Response: def from_line(cls, line): """:return: New RefLogEntry instance from the given revlog line. :param line: line bytes without trailing newline :raise ValueError: If line could not be parsed""" line = line.decode(defenc) fields = line.split('\t', 1) if len(fields) == 1: info, msg = fields[0], None elif len(fields) == 2: info, msg = fields else: raise ValueError("Line must have up to two TAB-separated fields." " Got %s" % repr(line)) # END handle first split oldhexsha = info[:40] newhexsha = info[41:81] for hexsha in (oldhexsha, newhexsha): if not cls._re_hexsha_only.match(hexsha): raise ValueError("Invalid hexsha: %r" % (hexsha,)) # END if hexsha re doesn't match # END for each hexsha email_end = info.find('>', 82) if email_end == -1: raise ValueError("Missing token: >") # END handle missing end brace actor = Actor._from_string(info[82:email_end + 1]) time, tz_offset = parse_date(info[email_end + 2:]) return RefLogEntry((oldhexsha, newhexsha, actor, (time, tz_offset), msg))
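The fixed-offset slicing in `from_line` is easy to verify by hand; the sample line below is made up (hypothetical SHAs and author), but it follows the reflog layout the parser expects: two 40-character SHAs, the actor, a timestamp with offset, then a TAB and the message.

```python
line = ("0000000000000000000000000000000000000000 "
        "d0dd1f61b33d64e29d8bc137236533dbb8d90105 "
        "A U Thor <[email protected]> 1406203288 +0200\tcommit: initial")

info, msg = line.split('\t', 1)     # message is everything after the TAB
oldhexsha = info[:40]               # chars 0..39: previous SHA
newhexsha = info[41:81]             # chars 41..80: new SHA
email_end = info.find('>', 82)      # actor spans from index 82 to the '>'
actor = info[82:email_end + 1]
print(oldhexsha, newhexsha, actor, msg)
```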
def generate_import_from(module_, names): """Generate an import line. :sig: (str, Set[str]) -> str :param module_: Name of module to import the names from. :param names: Names to import. :return: Import line in stub code. """ regular_names = [n for n in names if "::" not in n] as_names = [n for n in names if "::" in n] line = "" if len(regular_names) > 0: slots = {"m": module_, "n": ", ".join(sorted(regular_names))} line = "from %(m)s import %(n)s" % slots if len(line) > LINE_LENGTH_LIMIT: slots["n"] = INDENT + (",\n" + INDENT).join(sorted(regular_names)) + "," line = "from %(m)s import (\n%(n)s\n)" % slots if len(as_names) > 0: line += "\n" for as_name in as_names: a, n = as_name.split("::") line += "from %(m)s import %(n)s as %(a)s" % {"m": module_, "n": n, "a": a} return line
Generate an import line. :sig: (str, Set[str]) -> str :param module_: Name of module to import the names from. :param names: Names to import. :return: Import line in stub code.
Below is the the instruction that describes the task: ### Input: Generate an import line. :sig: (str, Set[str]) -> str :param module_: Name of module to import the names from. :param names: Names to import. :return: Import line in stub code. ### Response: def generate_import_from(module_, names): """Generate an import line. :sig: (str, Set[str]) -> str :param module_: Name of module to import the names from. :param names: Names to import. :return: Import line in stub code. """ regular_names = [n for n in names if "::" not in n] as_names = [n for n in names if "::" in n] line = "" if len(regular_names) > 0: slots = {"m": module_, "n": ", ".join(sorted(regular_names))} line = "from %(m)s import %(n)s" % slots if len(line) > LINE_LENGTH_LIMIT: slots["n"] = INDENT + (",\n" + INDENT).join(sorted(regular_names)) + "," line = "from %(m)s import (\n%(n)s\n)" % slots if len(as_names) > 0: line += "\n" for as_name in as_names: a, n = as_name.split("::") line += "from %(m)s import %(n)s as %(a)s" % {"m": module_, "n": n, "a": a} return line
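The long-line branch of `generate_import_from` is the interesting part; this sketch reproduces just that wrapping step, with assumed values for the `INDENT` and `LINE_LENGTH_LIMIT` constants (they are referenced but not defined in the snippet above), and a hypothetical module and name list.

```python
INDENT = "    "            # assumed value
LINE_LENGTH_LIMIT = 79     # assumed value

names = sorted({"alpha_handler", "bravo_handler", "charlie_handler", "delta_handler"})
line = "from mypackage.handlers import " + ", ".join(names)
if len(line) > LINE_LENGTH_LIMIT:
    # Too long for one line: switch to the parenthesized multi-line form.
    body = INDENT + (",\n" + INDENT).join(names) + ","
    line = "from mypackage.handlers import (\n%s\n)" % body
print(line)
```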
def sample_normalize(self, k_samples=1000, overwrite=False): """ Estimate the mean and std of the features from the training set Params: k_samples (int): Use this number of samples for estimation """ log = logUtil.getlogger() log.info("Calculating mean and std from samples") # if k_samples is negative then it goes through total dataset if k_samples < 0: audio_paths = self.audio_paths # using sample else: k_samples = min(k_samples, len(self.train_audio_paths)) samples = self.rng.sample(self.train_audio_paths, k_samples) audio_paths = samples manager = Manager() return_dict = manager.dict() jobs = [] for threadIndex in range(cpu_count()): proc = Process(target=self.preprocess_sample_normalize, args=(threadIndex, audio_paths, overwrite, return_dict)) jobs.append(proc) proc.start() for proc in jobs: proc.join() feat = np.sum(np.vstack([item['feat'] for item in return_dict.values()]), axis=0) count = sum([item['count'] for item in return_dict.values()]) feat_squared = np.sum(np.vstack([item['feat_squared'] for item in return_dict.values()]), axis=0) self.feats_mean = feat / float(count) self.feats_std = np.sqrt(feat_squared / float(count) - np.square(self.feats_mean)) np.savetxt( generate_file_path(self.save_dir, self.model_name, 'feats_mean'), self.feats_mean) np.savetxt( generate_file_path(self.save_dir, self.model_name, 'feats_std'), self.feats_std) log.info("End calculating mean and std from samples")
Estimate the mean and std of the features from the training set Params: k_samples (int): Use this number of samples for estimation
Below is the the instruction that describes the task: ### Input: Estimate the mean and std of the features from the training set Params: k_samples (int): Use this number of samples for estimation ### Response: def sample_normalize(self, k_samples=1000, overwrite=False): """ Estimate the mean and std of the features from the training set Params: k_samples (int): Use this number of samples for estimation """ log = logUtil.getlogger() log.info("Calculating mean and std from samples") # if k_samples is negative then it goes through total dataset if k_samples < 0: audio_paths = self.audio_paths # using sample else: k_samples = min(k_samples, len(self.train_audio_paths)) samples = self.rng.sample(self.train_audio_paths, k_samples) audio_paths = samples manager = Manager() return_dict = manager.dict() jobs = [] for threadIndex in range(cpu_count()): proc = Process(target=self.preprocess_sample_normalize, args=(threadIndex, audio_paths, overwrite, return_dict)) jobs.append(proc) proc.start() for proc in jobs: proc.join() feat = np.sum(np.vstack([item['feat'] for item in return_dict.values()]), axis=0) count = sum([item['count'] for item in return_dict.values()]) feat_squared = np.sum(np.vstack([item['feat_squared'] for item in return_dict.values()]), axis=0) self.feats_mean = feat / float(count) self.feats_std = np.sqrt(feat_squared / float(count) - np.square(self.feats_mean)) np.savetxt( generate_file_path(self.save_dir, self.model_name, 'feats_mean'), self.feats_mean) np.savetxt( generate_file_path(self.save_dir, self.model_name, 'feats_std'), self.feats_std) log.info("End calculating mean and std from samples")
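The normalization statistics in `sample_normalize` come from running sums: `mean = feat / count` and `std = sqrt(feat_squared / count - mean**2)`, i.e. the usual E[x²] − E[x]² identity. A NumPy-free sketch of the same accumulation over toy feature frames:

```python
import math

vectors = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # toy feature frames
feat = [0.0, 0.0]
feat_squared = [0.0, 0.0]
count = 0
for vec in vectors:
    count += 1
    for i, v in enumerate(vec):
        feat[i] += v            # running sum
        feat_squared[i] += v * v  # running sum of squares

feats_mean = [f / count for f in feat]
feats_std = [math.sqrt(fs / count - m * m) for fs, m in zip(feat_squared, feats_mean)]
print(feats_mean)   # [3.0, 4.0]
print(feats_std)
```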
def _xml_to_json_places(tree, is_reverse=False): """ Transform the xml ElementTree due to XML webservice return to json """ select_multi = ( 'GeocodedAddress' if not is_reverse else 'ReverseGeocodedLocation' ) adresses = tree.findall('.//' + select_multi) places = [] sel_pl = './/Address/Place[@type="{}"]' for adr in adresses: el = {} el['pos'] = adr.find('./Point/pos') el['street'] = adr.find('.//Address/StreetAddress/Street') el['freeformaddress'] = adr.find('.//Address/freeFormAddress') el['municipality'] = adr.find(sel_pl.format('Municipality')) el['numero'] = adr.find(sel_pl.format('Numero')) el['feuille'] = adr.find(sel_pl.format('Feuille')) el['section'] = adr.find(sel_pl.format('Section')) el['departement'] = adr.find(sel_pl.format('Departement')) el['commune_absorbee'] = adr.find(sel_pl.format('CommuneAbsorbee')) el['commune'] = adr.find(sel_pl.format('Commune')) el['insee'] = adr.find(sel_pl.format('INSEE')) el['qualite'] = adr.find(sel_pl.format('Qualite')) el['territoire'] = adr.find(sel_pl.format('Territoire')) el['id'] = adr.find(sel_pl.format('ID')) el['id_tr'] = adr.find(sel_pl.format('ID_TR')) el['bbox'] = adr.find(sel_pl.format('Bbox')) el['nature'] = adr.find(sel_pl.format('Nature')) el['postal_code'] = adr.find('.//Address/PostalCode') el['extended_geocode_match_code'] = adr.find( './/ExtendedGeocodeMatchCode' ) place = {} def testContentAttrib(selector, key): """ Helper to select by attribute and if not attribute, value set to empty string """ return selector.attrib.get( key, None ) if selector is not None else None place['accuracy'] = testContentAttrib( adr.find('.//GeocodeMatchCode'), 'accuracy') place['match_type'] = testContentAttrib( adr.find('.//GeocodeMatchCode'), 'matchType') place['building'] = testContentAttrib( adr.find('.//Address/StreetAddress/Building'), 'number') place['search_centre_distance'] = testContentAttrib( adr.find('.//SearchCentreDistance'), 'value') for key, value in iteritems(el): if value is not None: place[key] = value.text if value.text is None: place[key] = None else: place[key] = None # We check if lat lng is not empty and unpack accordingly if place['pos']: lat, lng = place['pos'].split(' ') place['lat'] = lat.strip() place['lng'] = lng.strip() else: place['lat'] = place['lng'] = None # We removed the unused key place.pop("pos", None) places.append(place) return places
Transform the xml ElementTree due to XML webservice return to json
Below is the instruction that describes the task: ### Input: Transform the xml ElementTree due to XML webservice return to json ### Response: def _xml_to_json_places(tree, is_reverse=False): """ Transform the xml ElementTree due to XML webservice return to json """ select_multi = ( 'GeocodedAddress' if not is_reverse else 'ReverseGeocodedLocation' ) adresses = tree.findall('.//' + select_multi) places = [] sel_pl = './/Address/Place[@type="{}"]' for adr in adresses: el = {} el['pos'] = adr.find('./Point/pos') el['street'] = adr.find('.//Address/StreetAddress/Street') el['freeformaddress'] = adr.find('.//Address/freeFormAddress') el['municipality'] = adr.find(sel_pl.format('Municipality')) el['numero'] = adr.find(sel_pl.format('Numero')) el['feuille'] = adr.find(sel_pl.format('Feuille')) el['section'] = adr.find(sel_pl.format('Section')) el['departement'] = adr.find(sel_pl.format('Departement')) el['commune_absorbee'] = adr.find(sel_pl.format('CommuneAbsorbee')) el['commune'] = adr.find(sel_pl.format('Commune')) el['insee'] = adr.find(sel_pl.format('INSEE')) el['qualite'] = adr.find(sel_pl.format('Qualite')) el['territoire'] = adr.find(sel_pl.format('Territoire')) el['id'] = adr.find(sel_pl.format('ID')) el['id_tr'] = adr.find(sel_pl.format('ID_TR')) el['bbox'] = adr.find(sel_pl.format('Bbox')) el['nature'] = adr.find(sel_pl.format('Nature')) el['postal_code'] = adr.find('.//Address/PostalCode') el['extended_geocode_match_code'] = adr.find( './/ExtendedGeocodeMatchCode' ) place = {} def testContentAttrib(selector, key): """ Helper to select by attribute and if not attribute, value set to empty string """ return selector.attrib.get( key, None ) if selector is not None else None place['accuracy'] = testContentAttrib( adr.find('.//GeocodeMatchCode'), 'accuracy') place['match_type'] = testContentAttrib( adr.find('.//GeocodeMatchCode'), 'matchType') place['building'] = testContentAttrib( adr.find('.//Address/StreetAddress/Building'), 'number') place['search_centre_distance'] = testContentAttrib( adr.find('.//SearchCentreDistance'), 'value') for key, value in iteritems(el): if value is not None: place[key] = value.text if value.text is None: place[key] = None else: place[key] = None # We check if lat lng is not empty and unpack accordingly if place['pos']: lat, lng = place['pos'].split(' ') place['lat'] = lat.strip() place['lng'] = lng.strip() else: place['lat'] = place['lng'] = None # We removed the unused key place.pop("pos", None) places.append(place) return places
def to_python(self, value): """Convert the constant to the real choice value.""" # ``is_required`` is already checked in ``validate``. if value is None: return None # Validate the type. if not isinstance(value, six.string_types): raise forms.ValidationError( "Invalid value type (should be a string).", code='invalid-choice-type', ) # Get the constant from the choices object, raising if it doesn't exist. try: final = getattr(self.choices, value) except AttributeError: available = '[%s]' % ', '.join(self.choices.constants) raise forms.ValidationError( "Invalid value (not in available choices. Available ones are: %s" % available, code='non-existing-choice', ) return final
Convert the constant to the real choice value.
Below is the instruction that describes the task: ### Input: Convert the constant to the real choice value. ### Response: def to_python(self, value): """Convert the constant to the real choice value.""" # ``is_required`` is already checked in ``validate``. if value is None: return None # Validate the type. if not isinstance(value, six.string_types): raise forms.ValidationError( "Invalid value type (should be a string).", code='invalid-choice-type', ) # Get the constant from the choices object, raising if it doesn't exist. try: final = getattr(self.choices, value) except AttributeError: available = '[%s]' % ', '.join(self.choices.constants) raise forms.ValidationError( "Invalid value (not in available choices. Available ones are: %s" % available, code='non-existing-choice', ) return final
def setdefault(self, name, value): ''' If the ``name`` is set, return its value. Otherwise set ``name`` to ``value`` and return ``value``''' if name in self: return self[name] self[name] = value return self[name]
If the ``name`` is set, return its value. Otherwise set ``name`` to ``value`` and return ``value``
Below is the instruction that describes the task: ### Input: If the ``name`` is set, return its value. Otherwise set ``name`` to ``value`` and return ``value`` ### Response: def setdefault(self, name, value): ''' If the ``name`` is set, return its value. Otherwise set ``name`` to ``value`` and return ``value``''' if name in self: return self[name] self[name] = value return self[name]
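This mirrors the built-in `dict.setdefault`. A quick check against a dict subclass (the `Bag` class below is hypothetical; the original only assumes its container supports `in` and item access):

```python
class Bag(dict):
    def setdefault(self, name, value):
        # Existing keys win; missing keys are inserted and then returned.
        if name in self:
            return self[name]
        self[name] = value
        return self[name]

b = Bag(a=1)
print(b.setdefault('a', 99))   # 1  (already set, value kept)
print(b.setdefault('b', 2))    # 2  (inserted, then returned)
print(dict(b))                 # {'a': 1, 'b': 2}
```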
def _hydrate_pivot_relation(self, models): """ Hydrate the pivot table relationship on the models. :type models: list """ for model in models: pivot = self.new_existing_pivot(self._clean_pivot_attributes(model)) model.set_relation("pivot", pivot)
Hydrate the pivot table relationship on the models. :type models: list
Below is the instruction that describes the task: ### Input: Hydrate the pivot table relationship on the models. :type models: list ### Response: def _hydrate_pivot_relation(self, models): """ Hydrate the pivot table relationship on the models. :type models: list """ for model in models: pivot = self.new_existing_pivot(self._clean_pivot_attributes(model)) model.set_relation("pivot", pivot)
def message(self): """ Render the body of the message to a string. """ template_name = self.template_name() if \ callable(self.template_name) \ else self.template_name return loader.render_to_string( template_name, self.get_context(), request=self.request )
Render the body of the message to a string.
Below is the instruction that describes the task: ### Input: Render the body of the message to a string. ### Response: def message(self): """ Render the body of the message to a string. """ template_name = self.template_name() if \ callable(self.template_name) \ else self.template_name return loader.render_to_string( template_name, self.get_context(), request=self.request )
def validate_login(ctx, param, value): """Ensure that login is not blank.""" # pylint: disable=unused-argument value = value.strip() if not value: raise click.BadParameter("The value cannot be blank.", param=param) return value
Ensure that login is not blank.
Below is the instruction that describes the task: ### Input: Ensure that login is not blank. ### Response: def validate_login(ctx, param, value): """Ensure that login is not blank.""" # pylint: disable=unused-argument value = value.strip() if not value: raise click.BadParameter("The value cannot be blank.", param=param) return value
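Stripped-then-falsy is the whole test; the click-free sketch below swaps `click.BadParameter` for `ValueError` so it runs standalone (the function name is hypothetical).

```python
def check_not_blank(value):
    # Same validation as validate_login, minus the click machinery.
    value = value.strip()
    if not value:
        raise ValueError("The value cannot be blank.")
    return value

print(check_not_blank("  alice  "))   # alice
try:
    check_not_blank("   ")
except ValueError as e:
    print(e)                          # The value cannot be blank.
```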
def _apply_cn_keys_patch(): """ apply this patch due to an issue in http.client.parse_headers when there're multi-bytes in headers. it will truncate some headers. https://github.com/aliyun/aliyun-log-python-sdk/issues/79 """ import sys if sys.version_info[:2] == (3, 5): import http.client as hc old_parse = hc.parse_headers def parse_header(*args, **kwargs): fp = args[0] old_readline = fp.readline def new_readline(*args, **kwargs): ret = old_readline(*args, **kwargs) if ret.lower().startswith(b'x-log-query-info'): return b'x-log-query-info: \r\n' return ret fp.readline = new_readline ret = old_parse(*args, **kwargs) return ret hc.parse_headers = parse_header
apply this patch due to an issue in http.client.parse_headers when there're multi-bytes in headers. it will truncate some headers. https://github.com/aliyun/aliyun-log-python-sdk/issues/79
Below is the the instruction that describes the task: ### Input: apply this patch due to an issue in http.client.parse_headers when there're multi-bytes in headers. it will truncate some headers. https://github.com/aliyun/aliyun-log-python-sdk/issues/79 ### Response: def _apply_cn_keys_patch(): """ apply this patch due to an issue in http.client.parse_headers when there're multi-bytes in headers. it will truncate some headers. https://github.com/aliyun/aliyun-log-python-sdk/issues/79 """ import sys if sys.version_info[:2] == (3, 5): import http.client as hc old_parse = hc.parse_headers def parse_header(*args, **kwargs): fp = args[0] old_readline = fp.readline def new_readline(*args, **kwargs): ret = old_readline(*args, **kwargs) if ret.lower().startswith(b'x-log-query-info'): return b'x-log-query-info: \r\n' return ret fp.readline = new_readline ret = old_parse(*args, **kwargs) return ret hc.parse_headers = parse_header
def infer_devices(devices=None): """ Returns the list of devices that multi-replica code should use. :param devices: list of string device names, e.g. ["/GPU:0"] If the user specifies this, `infer_devices` checks that it is valid, and then uses this user-specified list. If the user does not specify this, infer_devices uses: - All available GPUs, if there are any - CPU otherwise """ if devices is None: devices = get_available_gpus() if len(devices) == 0: warnings.warn("No GPUS, running on CPU") # Set device to empty string, tf will figure out whether to use # XLA or not, etc., automatically devices = [""] else: assert len(devices) > 0 for device in devices: assert isinstance(device, six.string_types), type(device) return devices
Returns the list of devices that multi-replica code should use. :param devices: list of string device names, e.g. ["/GPU:0"] If the user specifies this, `infer_devices` checks that it is valid, and then uses this user-specified list. If the user does not specify this, infer_devices uses: - All available GPUs, if there are any - CPU otherwise
Below is the instruction that describes the task: ### Input: Returns the list of devices that multi-replica code should use. :param devices: list of string device names, e.g. ["/GPU:0"] If the user specifies this, `infer_devices` checks that it is valid, and then uses this user-specified list. If the user does not specify this, infer_devices uses: - All available GPUs, if there are any - CPU otherwise ### Response: def infer_devices(devices=None): """ Returns the list of devices that multi-replica code should use. :param devices: list of string device names, e.g. ["/GPU:0"] If the user specifies this, `infer_devices` checks that it is valid, and then uses this user-specified list. If the user does not specify this, infer_devices uses: - All available GPUs, if there are any - CPU otherwise """ if devices is None: devices = get_available_gpus() if len(devices) == 0: warnings.warn("No GPUS, running on CPU") # Set device to empty string, tf will figure out whether to use # XLA or not, etc., automatically devices = [""] else: assert len(devices) > 0 for device in devices: assert isinstance(device, six.string_types), type(device) return devices
def valid_api_plugin(self, plugin): """ Validate an API plugin, ensuring it is an API plugin and has the necessary fields present. `plugin` is a subclass of scruffy's Plugin class. """ if (issubclass(plugin, APIPlugin) and hasattr(plugin, 'plugin_type') and plugin.plugin_type == 'api' and hasattr(plugin, 'request') and plugin.request != None and hasattr(plugin, 'request_class') and plugin.request_class != None and hasattr(plugin, 'response_class') and plugin.response_class != None): return True return False
Validate an API plugin, ensuring it is an API plugin and has the necessary fields present. `plugin` is a subclass of scruffy's Plugin class.
Below is the instruction that describes the task: ### Input: Validate an API plugin, ensuring it is an API plugin and has the necessary fields present. `plugin` is a subclass of scruffy's Plugin class. ### Response: def valid_api_plugin(self, plugin): """ Validate an API plugin, ensuring it is an API plugin and has the necessary fields present. `plugin` is a subclass of scruffy's Plugin class. """ if (issubclass(plugin, APIPlugin) and hasattr(plugin, 'plugin_type') and plugin.plugin_type == 'api' and hasattr(plugin, 'request') and plugin.request != None and hasattr(plugin, 'request_class') and plugin.request_class != None and hasattr(plugin, 'response_class') and plugin.response_class != None): return True return False
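The chain of `hasattr` checks plus `!= None` comparisons can be condensed with `getattr` defaults; a self-contained sketch with stand-in classes (all names below are hypothetical, not scruffy's real API):

```python
class APIPlugin:
    pass

class GoodPlugin(APIPlugin):
    plugin_type = 'api'
    request = object()
    request_class = dict
    response_class = dict

class BadPlugin(APIPlugin):
    plugin_type = 'api'   # missing request/request_class/response_class

def valid_api_plugin(plugin):
    # getattr(..., None) collapses "hasattr and attr is not None" into one check.
    return (issubclass(plugin, APIPlugin)
            and getattr(plugin, 'plugin_type', None) == 'api'
            and getattr(plugin, 'request', None) is not None
            and getattr(plugin, 'request_class', None) is not None
            and getattr(plugin, 'response_class', None) is not None)

print(valid_api_plugin(GoodPlugin))   # True
print(valid_api_plugin(BadPlugin))    # False
```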
def load_history(self) -> List["IterationRecord"]: """ Load messaging history from disk to self. :returns: List of iteration records comprising history. """ if path.isfile(self.history_filename): with open(self.history_filename, "r") as f: try: dicts = json.load(f) except json.decoder.JSONDecodeError as e: self.log.error(f"Got error \n{e}\n decoding JSON history, overwriting it.\n" f"Former history available in {self.history_filename}.bak") copyfile(self.history_filename, f"{self.history_filename}.bak") return [] history: List[IterationRecord] = [] for hdict_pre in dicts: if "_type" in hdict_pre and hdict_pre["_type"] == IterationRecord.__name__: # repair any corrupted entries hdict = _repair(hdict_pre) record = IterationRecord.from_dict(hdict) history.append(record) # Be sure to handle legacy tweetrecord-only histories. # Assume anything without our new _type (which should have been there from the # start, whoops) is a legacy history. else: item = IterationRecord() # Lift extra keys up to upper record (if they exist). extra_keys = hdict_pre.pop("extra_keys", {}) item.extra_keys = extra_keys hdict_obj = TweetRecord.from_dict(hdict_pre) # Lift timestamp up to upper record. item.timestamp = hdict_obj.timestamp item.output_records["birdsite"] = hdict_obj history.append(item) self.log.debug(f"Loaded history:\n {history}") return history else: return []
Load messaging history from disk to self. :returns: List of iteration records comprising history.
Below is the the instruction that describes the task: ### Input: Load messaging history from disk to self. :returns: List of iteration records comprising history. ### Response: def load_history(self) -> List["IterationRecord"]: """ Load messaging history from disk to self. :returns: List of iteration records comprising history. """ if path.isfile(self.history_filename): with open(self.history_filename, "r") as f: try: dicts = json.load(f) except json.decoder.JSONDecodeError as e: self.log.error(f"Got error \n{e}\n decoding JSON history, overwriting it.\n" f"Former history available in {self.history_filename}.bak") copyfile(self.history_filename, f"{self.history_filename}.bak") return [] history: List[IterationRecord] = [] for hdict_pre in dicts: if "_type" in hdict_pre and hdict_pre["_type"] == IterationRecord.__name__: # repair any corrupted entries hdict = _repair(hdict_pre) record = IterationRecord.from_dict(hdict) history.append(record) # Be sure to handle legacy tweetrecord-only histories. # Assume anything without our new _type (which should have been there from the # start, whoops) is a legacy history. else: item = IterationRecord() # Lift extra keys up to upper record (if they exist). extra_keys = hdict_pre.pop("extra_keys", {}) item.extra_keys = extra_keys hdict_obj = TweetRecord.from_dict(hdict_pre) # Lift timestamp up to upper record. item.timestamp = hdict_obj.timestamp item.output_records["birdsite"] = hdict_obj history.append(item) self.log.debug(f"Loaded history:\n {history}") return history else: return []
def _grow_alive_seq(self, state): """Grow alive sequences by one token, and collect top 2*beam_size sequences. 2*beam_size sequences are collected because some sequences may have reached the EOS token. 2*beam_size ensures that at least beam_size sequences are still alive. Args: state: A dictionary with the current loop state. Returns: Tuple of (Top 2*beam_size sequences [batch_size, 2 * beam_size, cur_index + 1], Scores of returned sequences [batch_size, 2 * beam_size], New alive cache, for each of the 2 * beam_size sequences) """ i = state[_StateKeys.CUR_INDEX] alive_seq = state[_StateKeys.ALIVE_SEQ] alive_log_probs = state[_StateKeys.ALIVE_LOG_PROBS] alive_cache = state[_StateKeys.ALIVE_CACHE] beams_to_keep = 2 * self.beam_size # Get logits for the next candidate IDs for the alive sequences. Get the new # cache values at the same time. flat_ids = _flatten_beam_dim(alive_seq) # [batch_size * beam_size] flat_cache = nest.map_structure(_flatten_beam_dim, alive_cache) flat_logits, flat_cache = self.symbols_to_logits_fn(flat_ids, i, flat_cache) # Unflatten logits to shape [batch_size, beam_size, vocab_size] logits = _unflatten_beam_dim(flat_logits, self.batch_size, self.beam_size) new_cache = nest.map_structure( lambda t: _unflatten_beam_dim(t, self.batch_size, self.beam_size), flat_cache) # Convert logits to normalized log probs candidate_log_probs = _log_prob_from_logits(logits) # Calculate new log probabilities if each of the alive sequences were # extended # by the candidate IDs. # Shape [batch_size, beam_size, vocab_size] log_probs = candidate_log_probs + tf.expand_dims(alive_log_probs, axis=2) # Each batch item has beam_size * vocab_size candidate sequences. For each # batch item, get the k candidates with the highest log probabilities. flat_log_probs = tf.reshape(log_probs, [-1, self.beam_size * self.vocab_size]) topk_log_probs, topk_indices = tf.nn.top_k(flat_log_probs, k=beams_to_keep) # Extract the alive sequences that generate the highest log probabilities # after being extended. topk_beam_indices = topk_indices // self.vocab_size topk_seq, new_cache = _gather_beams( [alive_seq, new_cache], topk_beam_indices, self.batch_size, beams_to_keep) # Append the most probable IDs to the topk sequences topk_ids = topk_indices % self.vocab_size topk_ids = tf.expand_dims(topk_ids, axis=2) topk_seq = tf.concat([topk_seq, topk_ids], axis=2) return topk_seq, topk_log_probs, new_cache
Grow alive sequences by one token, and collect top 2*beam_size sequences. 2*beam_size sequences are collected because some sequences may have reached the EOS token. 2*beam_size ensures that at least beam_size sequences are still alive. Args: state: A dictionary with the current loop state. Returns: Tuple of (Top 2*beam_size sequences [batch_size, 2 * beam_size, cur_index + 1], Scores of returned sequences [batch_size, 2 * beam_size], New alive cache, for each of the 2 * beam_size sequences)
Below is the instruction that describes the task: ### Input: Grow alive sequences by one token, and collect top 2*beam_size sequences. 2*beam_size sequences are collected because some sequences may have reached the EOS token. 2*beam_size ensures that at least beam_size sequences are still alive. Args: state: A dictionary with the current loop state. Returns: Tuple of (Top 2*beam_size sequences [batch_size, 2 * beam_size, cur_index + 1], Scores of returned sequences [batch_size, 2 * beam_size], New alive cache, for each of the 2 * beam_size sequences) ### Response: def _grow_alive_seq(self, state): """Grow alive sequences by one token, and collect top 2*beam_size sequences. 2*beam_size sequences are collected because some sequences may have reached the EOS token. 2*beam_size ensures that at least beam_size sequences are still alive. Args: state: A dictionary with the current loop state. Returns: Tuple of (Top 2*beam_size sequences [batch_size, 2 * beam_size, cur_index + 1], Scores of returned sequences [batch_size, 2 * beam_size], New alive cache, for each of the 2 * beam_size sequences) """ i = state[_StateKeys.CUR_INDEX] alive_seq = state[_StateKeys.ALIVE_SEQ] alive_log_probs = state[_StateKeys.ALIVE_LOG_PROBS] alive_cache = state[_StateKeys.ALIVE_CACHE] beams_to_keep = 2 * self.beam_size # Get logits for the next candidate IDs for the alive sequences. Get the new # cache values at the same time. flat_ids = _flatten_beam_dim(alive_seq) # [batch_size * beam_size] flat_cache = nest.map_structure(_flatten_beam_dim, alive_cache) flat_logits, flat_cache = self.symbols_to_logits_fn(flat_ids, i, flat_cache) # Unflatten logits to shape [batch_size, beam_size, vocab_size] logits = _unflatten_beam_dim(flat_logits, self.batch_size, self.beam_size) new_cache = nest.map_structure( lambda t: _unflatten_beam_dim(t, self.batch_size, self.beam_size), flat_cache) # Convert logits to normalized log probs candidate_log_probs = _log_prob_from_logits(logits) # Calculate new log probabilities if each of the alive sequences were # extended # by the candidate IDs. # Shape [batch_size, beam_size, vocab_size] log_probs = candidate_log_probs + tf.expand_dims(alive_log_probs, axis=2) # Each batch item has beam_size * vocab_size candidate sequences. For each # batch item, get the k candidates with the highest log probabilities. flat_log_probs = tf.reshape(log_probs, [-1, self.beam_size * self.vocab_size]) topk_log_probs, topk_indices = tf.nn.top_k(flat_log_probs, k=beams_to_keep) # Extract the alive sequences that generate the highest log probabilities # after being extended. topk_beam_indices = topk_indices // self.vocab_size topk_seq, new_cache = _gather_beams( [alive_seq, new_cache], topk_beam_indices, self.batch_size, beams_to_keep) # Append the most probable IDs to the topk sequences topk_ids = topk_indices % self.vocab_size topk_ids = tf.expand_dims(topk_ids, axis=2) topk_seq = tf.concat([topk_seq, topk_ids], axis=2) return topk_seq, topk_log_probs, new_cache
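The `// vocab_size` and `% vocab_size` decomposition is the heart of the top-k step: a flat index over the `beam_size * vocab_size` candidates encodes both which alive beam is extended and which token extends it. A pure-Python miniature (no TensorFlow, toy scores):

```python
beam_size, vocab_size = 2, 4
# log_probs[b][v]: score of extending alive beam b with token v
log_probs = [[-1.0, -3.0, -0.5, -2.0],
             [-0.2, -4.0, -1.5, -0.7]]

flat = [p for row in log_probs for p in row]   # shape [beam_size * vocab_size]
beams_to_keep = 2 * beam_size
topk = sorted(range(len(flat)), key=lambda i: flat[i], reverse=True)[:beams_to_keep]

# Decompose each flat index into (beam extended, token appended, score).
candidates = [(i // vocab_size, i % vocab_size, flat[i]) for i in topk]
print(candidates)
# [(1, 0, -0.2), (0, 2, -0.5), (1, 3, -0.7), (0, 0, -1.0)]
```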
def getOverlayKey(self, ulOverlayHandle, pchValue, unBufferSize): """ Fills the provided buffer with the string key of the overlay. Returns the size of buffer required to store the key, including the terminating null character. k_unVROverlayMaxKeyLength will be enough bytes to fit the string. """ fn = self.function_table.getOverlayKey pError = EVROverlayError() result = fn(ulOverlayHandle, pchValue, unBufferSize, byref(pError)) return result, pError
Fills the provided buffer with the string key of the overlay. Returns the size of buffer required to store the key, including the terminating null character. k_unVROverlayMaxKeyLength will be enough bytes to fit the string.
Below is the instruction that describes the task: ### Input: Fills the provided buffer with the string key of the overlay. Returns the size of buffer required to store the key, including the terminating null character. k_unVROverlayMaxKeyLength will be enough bytes to fit the string. ### Response: def getOverlayKey(self, ulOverlayHandle, pchValue, unBufferSize): """ Fills the provided buffer with the string key of the overlay. Returns the size of buffer required to store the key, including the terminating null character. k_unVROverlayMaxKeyLength will be enough bytes to fit the string. """ fn = self.function_table.getOverlayKey pError = EVROverlayError() result = fn(ulOverlayHandle, pchValue, unBufferSize, byref(pError)) return result, pError
def _get_ssl_validation(self, value): """Get the TLS Validation option. :param str value: :return: TLS Certificate Options """ return self._get_ssl_attribute(value, compatibility.SSL_CERT_MAP, ssl.CERT_NONE, 'ssl_options: cert_reqs \'%s\' not ' 'found falling back to CERT_NONE.')
Get the TLS Validation option. :param str value: :return: TLS Certificate Options
Below is the instruction that describes the task: ### Input: Get the TLS Validation option. :param str value: :return: TLS Certificate Options ### Response: def _get_ssl_validation(self, value): """Get the TLS Validation option. :param str value: :return: TLS Certificate Options """ return self._get_ssl_attribute(value, compatibility.SSL_CERT_MAP, ssl.CERT_NONE, 'ssl_options: cert_reqs \'%s\' not ' 'found falling back to CERT_NONE.')
def authenticate(self, username, password, attribute=None, base_dn=None, search_filter=None, search_scope=SUBTREE): '''Attempts to bind a user to the LDAP server. Args: username (str): DN or the username to attempt to bind with. password (str): The password of the username. attribute (str): The LDAP attribute for the username. base_dn (str): The LDAP basedn to search on. search_filter (str): LDAP searchfilter to attempt the user search with. Returns: bool: ``True`` if successful or ``False`` if the credentials are invalid. ''' # If the username is not a valid DN we can bind with, we need to find # the user first. valid_dn = False try: parse_dn(username) valid_dn = True except LDAPInvalidDnError: pass if valid_dn is False: user_filter = '({0}={1})'.format(attribute, username) if search_filter is not None: user_filter = '(&{0}{1})'.format(user_filter, search_filter) try: self.connection.search(base_dn, user_filter, search_scope, attributes=[attribute]) response = self.connection.response username = response[0]['dn'] except (LDAPInvalidDnError, LDAPInvalidFilterError, IndexError): return False try: conn = self.connect(username, password) conn.unbind() return True except LDAPBindError: return False
Attempts to bind a user to the LDAP server. Args: username (str): DN or the username to attempt to bind with. password (str): The password of the username. attribute (str): The LDAP attribute for the username. base_dn (str): The LDAP basedn to search on. search_filter (str): LDAP searchfilter to attempt the user search with. Returns: bool: ``True`` if successful or ``False`` if the credentials are invalid.
Below is the instruction that describes the task: ### Input: Attempts to bind a user to the LDAP server. Args: username (str): DN or the username to attempt to bind with. password (str): The password of the username. attribute (str): The LDAP attribute for the username. base_dn (str): The LDAP basedn to search on. search_filter (str): LDAP searchfilter to attempt the user search with. Returns: bool: ``True`` if successful or ``False`` if the credentials are invalid. ### Response: def authenticate(self, username, password, attribute=None, base_dn=None, search_filter=None, search_scope=SUBTREE): '''Attempts to bind a user to the LDAP server. Args: username (str): DN or the username to attempt to bind with. password (str): The password of the username. attribute (str): The LDAP attribute for the username. base_dn (str): The LDAP basedn to search on. search_filter (str): LDAP searchfilter to attempt the user search with. Returns: bool: ``True`` if successful or ``False`` if the credentials are invalid. ''' # If the username is not a valid DN we can bind with, we need to find # the user first. valid_dn = False try: parse_dn(username) valid_dn = True except LDAPInvalidDnError: pass if valid_dn is False: user_filter = '({0}={1})'.format(attribute, username) if search_filter is not None: user_filter = '(&{0}{1})'.format(user_filter, search_filter) try: self.connection.search(base_dn, user_filter, search_scope, attributes=[attribute]) response = self.connection.response username = response[0]['dn'] except (LDAPInvalidDnError, LDAPInvalidFilterError, IndexError): return False try: conn = self.connect(username, password) conn.unbind() return True except LDAPBindError: return False
def auth(username, password): ''' File based authentication ^filename The path to the file to use for authentication. ^filetype The type of file: ``text``, ``htpasswd``, ``htdigest``. Default: ``text`` ^realm The realm required by htdigest authentication. .. note:: The following parameters are only used with the ``text`` filetype. ^hashtype The digest format of the password. Can be ``plaintext`` or any digest available via :py:func:`hashutil.digest <salt.modules.hashutil.digest>`. Default: ``plaintext`` ^field_separator The character to use as a delimiter between fields in a text file. Default: ``:`` ^username_field The numbered field in the text file that contains the username, with numbering beginning at 1 (one). Default: ``1`` ^password_field The numbered field in the text file that contains the password, with numbering beginning at 1 (one). Default: ``2`` ''' config = _get_file_auth_config() if not config: return False auth_function = FILETYPE_FUNCTION_MAP.get(config['filetype'], 'text') return auth_function(username, password, **config)
File based authentication ^filename The path to the file to use for authentication. ^filetype The type of file: ``text``, ``htpasswd``, ``htdigest``. Default: ``text`` ^realm The realm required by htdigest authentication. .. note:: The following parameters are only used with the ``text`` filetype. ^hashtype The digest format of the password. Can be ``plaintext`` or any digest available via :py:func:`hashutil.digest <salt.modules.hashutil.digest>`. Default: ``plaintext`` ^field_separator The character to use as a delimiter between fields in a text file. Default: ``:`` ^username_field The numbered field in the text file that contains the username, with numbering beginning at 1 (one). Default: ``1`` ^password_field The numbered field in the text file that contains the password, with numbering beginning at 1 (one). Default: ``2``
Below is the instruction that describes the task: ### Input: File based authentication ^filename The path to the file to use for authentication. ^filetype The type of file: ``text``, ``htpasswd``, ``htdigest``. Default: ``text`` ^realm The realm required by htdigest authentication. .. note:: The following parameters are only used with the ``text`` filetype. ^hashtype The digest format of the password. Can be ``plaintext`` or any digest available via :py:func:`hashutil.digest <salt.modules.hashutil.digest>`. Default: ``plaintext`` ^field_separator The character to use as a delimiter between fields in a text file. Default: ``:`` ^username_field The numbered field in the text file that contains the username, with numbering beginning at 1 (one). Default: ``1`` ^password_field The numbered field in the text file that contains the password, with numbering beginning at 1 (one). Default: ``2`` ### Response: def auth(username, password): ''' File based authentication ^filename The path to the file to use for authentication. ^filetype The type of file: ``text``, ``htpasswd``, ``htdigest``. Default: ``text`` ^realm The realm required by htdigest authentication. .. note:: The following parameters are only used with the ``text`` filetype. ^hashtype The digest format of the password. Can be ``plaintext`` or any digest available via :py:func:`hashutil.digest <salt.modules.hashutil.digest>`. Default: ``plaintext`` ^field_separator The character to use as a delimiter between fields in a text file. Default: ``:`` ^username_field The numbered field in the text file that contains the username, with numbering beginning at 1 (one). Default: ``1`` ^password_field The numbered field in the text file that contains the password, with numbering beginning at 1 (one). Default: ``2`` ''' config = _get_file_auth_config() if not config: return False auth_function = FILETYPE_FUNCTION_MAP.get(config['filetype'], 'text') return auth_function(username, password, **config)
def do_set_hub_connection(self, args): """Set Hub connection parameters. Usage: set_hub_connection username password host [port] Arguments: username: Hub username password: Hub password host: host name or IP address port: IP port [default 25105] """ params = args.split() username = None password = None host = None port = None try: username = params[0] password = params[1] host = params[2] port = params[3] except IndexError: pass if username and password and host: if not port: port = 25105 self.tools.username = username self.tools.password = password self.tools.host = host self.tools.port = port else: _LOGGING.error('username password host are required') self.do_help('set_hub_connection')
Set Hub connection parameters. Usage: set_hub_connection username password host [port] Arguments: username: Hub username password: Hub password host: host name or IP address port: IP port [default 25105]
Below is the instruction that describes the task: ### Input: Set Hub connection parameters. Usage: set_hub_connection username password host [port] Arguments: username: Hub username password: Hub password host: host name or IP address port: IP port [default 25105] ### Response: def do_set_hub_connection(self, args): """Set Hub connection parameters. Usage: set_hub_connection username password host [port] Arguments: username: Hub username password: Hub password host: host name or IP address port: IP port [default 25105] """ params = args.split() username = None password = None host = None port = None try: username = params[0] password = params[1] host = params[2] port = params[3] except IndexError: pass if username and password and host: if not port: port = 25105 self.tools.username = username self.tools.password = password self.tools.host = host self.tools.port = port else: _LOGGING.error('username password host are required') self.do_help('set_hub_connection')
def delete_user_role(self, user, role): """ Remove role from given user. Args: user (string): User name. role (string): Role to remove. Raises: requests.HTTPError on failure. """ self.project_service.set_auth(self._token_project) self.project_service.delete_user_role(user, role)
Remove role from given user. Args: user (string): User name. role (string): Role to remove. Raises: requests.HTTPError on failure.
Below is the instruction that describes the task: ### Input: Remove role from given user. Args: user (string): User name. role (string): Role to remove. Raises: requests.HTTPError on failure. ### Response: def delete_user_role(self, user, role): """ Remove role from given user. Args: user (string): User name. role (string): Role to remove. Raises: requests.HTTPError on failure. """ self.project_service.set_auth(self._token_project) self.project_service.delete_user_role(user, role)
def getInitialSample(self, wmg): """ Generate an initial sample for the Markov chain. This function will return a two-dimensional array of integers, such that for each pair of candidates, cand1 and cand2, the array contains 1 if more votes rank cand1 above cand2 and 0 otherwise. ivar: dict<int,<dict,<int,int>>> wmg: A two-dimensional dictionary that associates integer representations of each pair of candidates, cand1 and cand2, with the number of times cand1 is ranked above cand2 minus the number of times cand2 is ranked above cand1. The dictionary represents a weighted majority graph for an election. """ cands = range(len(wmg)) allPairs = itertools.combinations(cands, 2) V = self.createBinaryRelation(len(cands)) for pair in allPairs: if wmg[pair[0]+1][pair[1]+1] > 0: V[pair[0]][pair[1]] = 1 V[pair[1]][pair[0]] = 0 else: V[pair[0]][pair[1]] = 0 V[pair[1]][pair[0]] = 1 return V
Generate an initial sample for the Markov chain. This function will return a two-dimensional array of integers, such that for each pair of candidates, cand1 and cand2, the array contains 1 if more votes rank cand1 above cand2 and 0 otherwise. ivar: dict<int,<dict,<int,int>>> wmg: A two-dimensional dictionary that associates integer representations of each pair of candidates, cand1 and cand2, with the number of times cand1 is ranked above cand2 minus the number of times cand2 is ranked above cand1. The dictionary represents a weighted majority graph for an election.
Below is the instruction that describes the task: ### Input: Generate an initial sample for the Markov chain. This function will return a two-dimensional array of integers, such that for each pair of candidates, cand1 and cand2, the array contains 1 if more votes rank cand1 above cand2 and 0 otherwise. ivar: dict<int,<dict,<int,int>>> wmg: A two-dimensional dictionary that associates integer representations of each pair of candidates, cand1 and cand2, with the number of times cand1 is ranked above cand2 minus the number of times cand2 is ranked above cand1. The dictionary represents a weighted majority graph for an election. ### Response: def getInitialSample(self, wmg): """ Generate an initial sample for the Markov chain. This function will return a two-dimensional array of integers, such that for each pair of candidates, cand1 and cand2, the array contains 1 if more votes rank cand1 above cand2 and 0 otherwise. ivar: dict<int,<dict,<int,int>>> wmg: A two-dimensional dictionary that associates integer representations of each pair of candidates, cand1 and cand2, with the number of times cand1 is ranked above cand2 minus the number of times cand2 is ranked above cand1. The dictionary represents a weighted majority graph for an election. """ cands = range(len(wmg)) allPairs = itertools.combinations(cands, 2) V = self.createBinaryRelation(len(cands)) for pair in allPairs: if wmg[pair[0]+1][pair[1]+1] > 0: V[pair[0]][pair[1]] = 1 V[pair[1]][pair[0]] = 0 else: V[pair[0]][pair[1]] = 0 V[pair[1]][pair[0]] = 1 return V
def ReadSignedBinaryReferences(self, binary_id): """See db.Database.""" binary_key = _SignedBinaryKeyFromID(binary_id) try: references, timestamp = self.signed_binary_references[binary_key] except KeyError: raise db.UnknownSignedBinaryError(binary_id) return references.Copy(), timestamp.Copy()
See db.Database.
Below is the instruction that describes the task: ### Input: See db.Database. ### Response: def ReadSignedBinaryReferences(self, binary_id): """See db.Database.""" binary_key = _SignedBinaryKeyFromID(binary_id) try: references, timestamp = self.signed_binary_references[binary_key] except KeyError: raise db.UnknownSignedBinaryError(binary_id) return references.Copy(), timestamp.Copy()
def determine_bytes_per_chunk(self, size): """ Calculate the size of chunk a worker should download. The last worker may download less than this depending on file size. :return: int: byte size for a worker """ workers = self.settings.config.download_workers if not workers or workers == 'None': workers = 1 bytes_per_chunk = int(math.ceil(size / float(workers))) if bytes_per_chunk < self.bytes_per_chunk: bytes_per_chunk = self.bytes_per_chunk return bytes_per_chunk
Calculate the size of chunk a worker should download. The last worker may download less than this depending on file size. :return: int: byte size for a worker
Below is the instruction that describes the task: ### Input: Calculate the size of chunk a worker should download. The last worker may download less than this depending on file size. :return: int: byte size for a worker ### Response: def determine_bytes_per_chunk(self, size): """ Calculate the size of chunk a worker should download. The last worker may download less than this depending on file size. :return: int: byte size for a worker """ workers = self.settings.config.download_workers if not workers or workers == 'None': workers = 1 bytes_per_chunk = int(math.ceil(size / float(workers))) if bytes_per_chunk < self.bytes_per_chunk: bytes_per_chunk = self.bytes_per_chunk return bytes_per_chunk
def power_on(self, context, ports): """ Powers on the remote vm :param models.QualiDriverModels.ResourceRemoteCommandContext context: the context the command runs on :param list[string] ports: the ports of the connection between the remote resource and the local resource, NOT IN USE!!! """ return self._power_command(context, ports, self.vm_power_management_command.power_on)
Powers on the remote vm :param models.QualiDriverModels.ResourceRemoteCommandContext context: the context the command runs on :param list[string] ports: the ports of the connection between the remote resource and the local resource, NOT IN USE!!!
Below is the instruction that describes the task: ### Input: Powers on the remote vm :param models.QualiDriverModels.ResourceRemoteCommandContext context: the context the command runs on :param list[string] ports: the ports of the connection between the remote resource and the local resource, NOT IN USE!!! ### Response: def power_on(self, context, ports): """ Powers on the remote vm :param models.QualiDriverModels.ResourceRemoteCommandContext context: the context the command runs on :param list[string] ports: the ports of the connection between the remote resource and the local resource, NOT IN USE!!! """ return self._power_command(context, ports, self.vm_power_management_command.power_on)
def email_message( self, user, subject_template, body_template, sender=None, message_class=EmailMessage, **kwargs ): """ Returns an email message for a new user. This can be easily overridden. For instance, to send an HTML message, use the EmailMultiAlternatives message_class and attach the additional content. """ if sender: try: display_name = sender.get_full_name() except (AttributeError, TypeError): display_name = sender.get_username() from_email = "%s <%s>" % ( display_name, email.utils.parseaddr(settings.DEFAULT_FROM_EMAIL)[1] ) reply_to = "%s <%s>" % (display_name, sender.email) else: from_email = settings.DEFAULT_FROM_EMAIL reply_to = from_email headers = {"Reply-To": reply_to} kwargs.update({"sender": sender, "user": user}) subject_template = loader.get_template(subject_template) body_template = loader.get_template(body_template) subject = subject_template.render( kwargs ).strip() # Remove stray newline characters body = body_template.render(kwargs) return message_class(subject, body, from_email, [user.email], headers=headers)
Returns an email message for a new user. This can be easily overridden. For instance, to send an HTML message, use the EmailMultiAlternatives message_class and attach the additional content.
Below is the instruction that describes the task: ### Input: Returns an email message for a new user. This can be easily overridden. For instance, to send an HTML message, use the EmailMultiAlternatives message_class and attach the additional content. ### Response: def email_message( self, user, subject_template, body_template, sender=None, message_class=EmailMessage, **kwargs ): """ Returns an email message for a new user. This can be easily overridden. For instance, to send an HTML message, use the EmailMultiAlternatives message_class and attach the additional content. """ if sender: try: display_name = sender.get_full_name() except (AttributeError, TypeError): display_name = sender.get_username() from_email = "%s <%s>" % ( display_name, email.utils.parseaddr(settings.DEFAULT_FROM_EMAIL)[1] ) reply_to = "%s <%s>" % (display_name, sender.email) else: from_email = settings.DEFAULT_FROM_EMAIL reply_to = from_email headers = {"Reply-To": reply_to} kwargs.update({"sender": sender, "user": user}) subject_template = loader.get_template(subject_template) body_template = loader.get_template(body_template) subject = subject_template.render( kwargs ).strip() # Remove stray newline characters body = body_template.render(kwargs) return message_class(subject, body, from_email, [user.email], headers=headers)
def advance(self, height, ignore_overflow=False): """Advance the cursor by `height`. If this would cause the cursor to point beyond the bottom of the container, an :class:`EndOfContainer` exception is raised.""" if height <= self.remaining_height: self._self_cursor.grow(height) elif ignore_overflow: self._self_cursor.grow(float(self.remaining_height)) else: raise ContainerOverflow(self.page.number)
Advance the cursor by `height`. If this would cause the cursor to point beyond the bottom of the container, an :class:`EndOfContainer` exception is raised.
Below is the instruction that describes the task: ### Input: Advance the cursor by `height`. If this would cause the cursor to point beyond the bottom of the container, an :class:`EndOfContainer` exception is raised. ### Response: def advance(self, height, ignore_overflow=False): """Advance the cursor by `height`. If this would cause the cursor to point beyond the bottom of the container, an :class:`EndOfContainer` exception is raised.""" if height <= self.remaining_height: self._self_cursor.grow(height) elif ignore_overflow: self._self_cursor.grow(float(self.remaining_height)) else: raise ContainerOverflow(self.page.number)
def start_router(router_class, router_name): """Wrapper for starting a router and register it. Args: router_class: The router class to instantiate. router_name: The name to give to the router. Returns: A handle to newly started router actor. """ handle = router_class.remote(router_name) ray.experimental.register_actor(router_name, handle) handle.start.remote() return handle
Wrapper for starting a router and register it. Args: router_class: The router class to instantiate. router_name: The name to give to the router. Returns: A handle to newly started router actor.
Below is the instruction that describes the task: ### Input: Wrapper for starting a router and register it. Args: router_class: The router class to instantiate. router_name: The name to give to the router. Returns: A handle to newly started router actor. ### Response: def start_router(router_class, router_name): """Wrapper for starting a router and register it. Args: router_class: The router class to instantiate. router_name: The name to give to the router. Returns: A handle to newly started router actor. """ handle = router_class.remote(router_name) ray.experimental.register_actor(router_name, handle) handle.start.remote() return handle
def chat_delete(self, *, channel: str, ts: str, **kwargs) -> SlackResponse: """Deletes a message. Args: channel (str): Channel containing the message to be deleted. e.g. 'C1234567890' ts (str): Timestamp of the message to be deleted. e.g. '1234567890.123456' """ kwargs.update({"channel": channel, "ts": ts}) return self.api_call("chat.delete", json=kwargs)
Deletes a message. Args: channel (str): Channel containing the message to be deleted. e.g. 'C1234567890' ts (str): Timestamp of the message to be deleted. e.g. '1234567890.123456'
Below is the instruction that describes the task: ### Input: Deletes a message. Args: channel (str): Channel containing the message to be deleted. e.g. 'C1234567890' ts (str): Timestamp of the message to be deleted. e.g. '1234567890.123456' ### Response: def chat_delete(self, *, channel: str, ts: str, **kwargs) -> SlackResponse: """Deletes a message. Args: channel (str): Channel containing the message to be deleted. e.g. 'C1234567890' ts (str): Timestamp of the message to be deleted. e.g. '1234567890.123456' """ kwargs.update({"channel": channel, "ts": ts}) return self.api_call("chat.delete", json=kwargs)
def parse_bewit(bewit): """ Returns a `bewittuple` representing the parts of an encoded bewit string. This has the following named attributes: (id, expiration, mac, ext) :param bewit: A base64 encoded bewit string :type bewit: str """ decoded_bewit = b64decode(bewit).decode('ascii') bewit_parts = decoded_bewit.split("\\") if len(bewit_parts) != 4: raise InvalidBewit('Expected 4 parts to bewit: %s' % decoded_bewit) return bewittuple(*bewit_parts)
Returns a `bewittuple` representing the parts of an encoded bewit string. This has the following named attributes: (id, expiration, mac, ext) :param bewit: A base64 encoded bewit string :type bewit: str
Below is the instruction that describes the task: ### Input: Returns a `bewittuple` representing the parts of an encoded bewit string. This has the following named attributes: (id, expiration, mac, ext) :param bewit: A base64 encoded bewit string :type bewit: str ### Response: def parse_bewit(bewit): """ Returns a `bewittuple` representing the parts of an encoded bewit string. This has the following named attributes: (id, expiration, mac, ext) :param bewit: A base64 encoded bewit string :type bewit: str """ decoded_bewit = b64decode(bewit).decode('ascii') bewit_parts = decoded_bewit.split("\\") if len(bewit_parts) != 4: raise InvalidBewit('Expected 4 parts to bewit: %s' % decoded_bewit) return bewittuple(*bewit_parts)
def incremental_fit(self, train_x, train_y): """ Incrementally fit the regressor. """ if not self._first_fitted: raise ValueError("The first_fit function needs to be called first.") train_x, train_y = np.array(train_x), np.array(train_y) # Incrementally compute K up_right_k = edit_distance_matrix(self._x, train_x) down_left_k = np.transpose(up_right_k) down_right_k = edit_distance_matrix(train_x) up_k = np.concatenate((self._distance_matrix, up_right_k), axis=1) down_k = np.concatenate((down_left_k, down_right_k), axis=1) temp_distance_matrix = np.concatenate((up_k, down_k), axis=0) k_matrix = bourgain_embedding_matrix(temp_distance_matrix) diagonal = np.diag_indices_from(k_matrix) diagonal = (diagonal[0][-len(train_x) :], diagonal[1][-len(train_x) :]) k_matrix[diagonal] += self.alpha try: self._l_matrix = cholesky(k_matrix, lower=True) # Line 2 except LinAlgError: return self self._x = np.concatenate((self._x, train_x), axis=0) self._y = np.concatenate((self._y, train_y), axis=0) self._distance_matrix = temp_distance_matrix self._alpha_vector = cho_solve((self._l_matrix, True), self._y) # Line 3 return self
Incrementally fit the regressor.
Below is the instruction that describes the task: ### Input: Incrementally fit the regressor. ### Response: def incremental_fit(self, train_x, train_y): """ Incrementally fit the regressor. """ if not self._first_fitted: raise ValueError("The first_fit function needs to be called first.") train_x, train_y = np.array(train_x), np.array(train_y) # Incrementally compute K up_right_k = edit_distance_matrix(self._x, train_x) down_left_k = np.transpose(up_right_k) down_right_k = edit_distance_matrix(train_x) up_k = np.concatenate((self._distance_matrix, up_right_k), axis=1) down_k = np.concatenate((down_left_k, down_right_k), axis=1) temp_distance_matrix = np.concatenate((up_k, down_k), axis=0) k_matrix = bourgain_embedding_matrix(temp_distance_matrix) diagonal = np.diag_indices_from(k_matrix) diagonal = (diagonal[0][-len(train_x) :], diagonal[1][-len(train_x) :]) k_matrix[diagonal] += self.alpha try: self._l_matrix = cholesky(k_matrix, lower=True) # Line 2 except LinAlgError: return self self._x = np.concatenate((self._x, train_x), axis=0) self._y = np.concatenate((self._y, train_y), axis=0) self._distance_matrix = temp_distance_matrix self._alpha_vector = cho_solve((self._l_matrix, True), self._y) # Line 3 return self
def find(init, start=None, one=False, is_exec=False, content=None, parallelize=True, workers=None): """ Finds a given 'target' (filename string) in the file system """ base_start, target, suffix = _find_init(init, start) def _condition(file_path, dirpath, filenames): if target in filenames or is_exec and os.access(file_path, os.X_OK): if not suffix or has_suffix(dirpath, suffix): if not content or search_file(content, file_path): return True return False starting_points, watch_dirs, excludes = _get_starting_points(base_start) disintegrated_excludes = [disintegrate(e) for e in excludes] def _filter(dirnames, dirpath): if disintegrate(dirpath) in watch_dirs: for e in disintegrated_excludes: if e[-1] in dirnames: if disintegrate(dirpath) == e[:-1]: dirnames.remove(e[-1]) def _fetch(top): results = [] for dirpath, dirnames, filenames in os.walk(top, topdown=True): # This if-statement is designed to save time _filter(dirnames, dirpath) file_path = os.path.normpath(os.path.join(dirpath, target)) if _condition(file_path, dirpath, filenames): results.append(file_path) return results st = time() if parallelize: unzipped_results = distribute(_fetch, starting_points, workers=workers) else: unzipped_results = [_fetch(point) for point in base_start] et = time() #print(et - st) zipped_results = [i for item in unzipped_results for i in item] processed_results = process_output(zipped_results, one=one) return processed_results
Finds a given 'target' (filename string) in the file system
Below is the instruction that describes the task: ### Input: Finds a given 'target' (filename string) in the file system ### Response: def find(init, start=None, one=False, is_exec=False, content=None, parallelize=True, workers=None): """ Finds a given 'target' (filename string) in the file system """ base_start, target, suffix = _find_init(init, start) def _condition(file_path, dirpath, filenames): if target in filenames or is_exec and os.access(file_path, os.X_OK): if not suffix or has_suffix(dirpath, suffix): if not content or search_file(content, file_path): return True return False starting_points, watch_dirs, excludes = _get_starting_points(base_start) disintegrated_excludes = [disintegrate(e) for e in excludes] def _filter(dirnames, dirpath): if disintegrate(dirpath) in watch_dirs: for e in disintegrated_excludes: if e[-1] in dirnames: if disintegrate(dirpath) == e[:-1]: dirnames.remove(e[-1]) def _fetch(top): results = [] for dirpath, dirnames, filenames in os.walk(top, topdown=True): # This if-statement is designed to save time _filter(dirnames, dirpath) file_path = os.path.normpath(os.path.join(dirpath, target)) if _condition(file_path, dirpath, filenames): results.append(file_path) return results st = time() if parallelize: unzipped_results = distribute(_fetch, starting_points, workers=workers) else: unzipped_results = [_fetch(point) for point in base_start] et = time() #print(et - st) zipped_results = [i for item in unzipped_results for i in item] processed_results = process_output(zipped_results, one=one) return processed_results
def show_letter( self, s, text_colour=[255, 255, 255], back_colour=[0, 0, 0] ): """ Displays a single text character on the LED matrix using the specified colours """ if len(s) > 1: raise ValueError('Only one character may be passed into this method') # We must rotate the pixel map left through 90 degrees when drawing # text, see _load_text_assets previous_rotation = self._rotation self._rotation -= 90 if self._rotation < 0: self._rotation = 270 dummy_colour = [None, None, None] pixel_list = [dummy_colour] * 8 pixel_list.extend(self._get_char_pixels(s)) pixel_list.extend([dummy_colour] * 16) coloured_pixels = [ text_colour if pixel == [255, 255, 255] else back_colour for pixel in pixel_list ] self.set_pixels(coloured_pixels) self._rotation = previous_rotation
Displays a single text character on the LED matrix using the specified colours
def get_outfile(self, outfile, goids=None):
    """Return output file for GO Term plot."""
    # 1. Use the user-specified output filename for the GO Term plot
    if outfile != self.dflt_outfile:
        return outfile
    # 2. If only plotting 1 GO term, use the GO id in the plot name
    if goids is not None and len(goids) == 1:
        goid = next(iter(goids))
        goobj = self.gosubdag.go2obj[goid]
        fout = "GO_{NN}_{NM}".format(NN=goid.replace("GO:", ""), NM=goobj.name)
        return ".".join([re.sub(r"[\s#'()+,-./:<=>\[\]_}]", '_', fout), 'png'])
    # 3. Return default name
    return self.dflt_outfile
Return output file for GO Term plot.
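The filename sanitization above is a single `re.sub` call, so it can be exercised standalone; a minimal sketch, with a made-up GO id and term name for illustration:

```python
import re

def sanitize_plot_name(goid, name):
    """Build a filesystem-safe PNG name for a GO term plot,
    replacing spaces and punctuation with underscores."""
    fout = "GO_{NN}_{NM}".format(NN=goid.replace("GO:", ""), NM=name)
    return ".".join([re.sub(r"[\s#'()+,-./:<=>\[\]_}]", '_', fout), 'png'])

print(sanitize_plot_name("GO:0016301", "kinase activity"))
# → GO_0016301_kinase_activity.png
```

Note the character class includes `,-.`, i.e. the range comma through period, so `-` and `.` inside a term name are also flattened to underscores.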
def process_response(self, response_json, resource, elapsed_time, lightweight): """ :param dict/list response_json: Response in dict format :param BaseResource resource: Resource data structure :param float elapsed_time: Elapsed time of request :param bool lightweight: If True will return dict not a resource (22x faster) """ if isinstance(response_json, list): result = response_json else: result = response_json.get('result', response_json) if lightweight: return result elif self.client.lightweight and lightweight is not False: return result elif isinstance(result, list): try: return [resource(elapsed_time=elapsed_time, **x) for x in result] except TypeError: raise InvalidResponse(response=result) else: try: return resource(elapsed_time=elapsed_time, **result) except TypeError: raise InvalidResponse(response=result)
:param dict/list response_json: Response in dict format :param BaseResource resource: Resource data structure :param float elapsed_time: Elapsed time of request :param bool lightweight: If True will return dict not a resource (22x faster)
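The lightweight-versus-resource dispatch can be exercised with a stub resource class; the names below are illustrative stand-ins, not the real client's resource types:

```python
class DummyResource:
    """Stub standing in for a BaseResource subclass."""
    def __init__(self, elapsed_time=None, **kwargs):
        self.elapsed_time = elapsed_time
        self.__dict__.update(kwargs)

def process(response_json, resource, elapsed_time, lightweight):
    # Unwrap the 'result' envelope when present
    if isinstance(response_json, list):
        result = response_json
    else:
        result = response_json.get('result', response_json)
    if lightweight:
        return result  # raw dicts: the fast path
    if isinstance(result, list):
        return [resource(elapsed_time=elapsed_time, **x) for x in result]
    return resource(elapsed_time=elapsed_time, **result)

raw = process({'result': [{'id': 1}]}, DummyResource, 0.1, lightweight=True)
wrapped = process({'result': [{'id': 1}]}, DummyResource, 0.1, lightweight=False)
print(raw, wrapped[0].id)
```

The speedup claimed in the docstring comes entirely from the fast path: skipping per-item object construction and returning the parsed JSON as-is.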
def init_all_receivers(): """ Initialize all discovered Denon AVR receivers in LAN zone. Returns a list of created Denon AVR instances. By default SSDP broadcasts are sent up to 3 times with a 2 seconds timeout. """ receivers = discover() init_receivers = [] for receiver in receivers: init_receiver = DenonAVR(receiver["host"]) init_receivers.append(init_receiver) return init_receivers
Initialize all discovered Denon AVR receivers in LAN zone. Returns a list of created Denon AVR instances. By default SSDP broadcasts are sent up to 3 times with a 2 seconds timeout.
def _set_edge_loop_detection(self, v, load=False):
    """
    Setter method for edge_loop_detection, mapped from YANG variable /protocol/edge_loop_detection (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_edge_loop_detection is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_edge_loop_detection() directly.
    """
    if hasattr(v, "_utype"):
        v = v._utype(v)
    try:
        t = YANGDynClass(v,base=edge_loop_detection.edge_loop_detection, is_container='container', presence=True, yang_name="edge-loop-detection", rest_name="loop-detection", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'eld_system', u'info': u'Configure ELD parameters', u'cli-full-no': None, u'sort-priority': u'69', u'cli-full-command': None, u'cli-add-mode': None, u'alt-name': u'loop-detection', u'cli-mode-name': u'config-loop-detect'}}, namespace='urn:brocade.com:mgmt:brocade-eld', defining_module='brocade-eld', yang_type='container', is_config=True)
    except (TypeError, ValueError):
        raise ValueError({
            'error-string': """edge_loop_detection must be of a type compatible with container""",
            'defined-type': "container",
            'generated-type': """YANGDynClass(base=edge_loop_detection.edge_loop_detection, is_container='container', presence=True, yang_name="edge-loop-detection", rest_name="loop-detection", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'eld_system', u'info': u'Configure ELD parameters', u'cli-full-no': None, u'sort-priority': u'69', u'cli-full-command': None, u'cli-add-mode': None, u'alt-name': u'loop-detection', u'cli-mode-name': u'config-loop-detect'}}, namespace='urn:brocade.com:mgmt:brocade-eld', defining_module='brocade-eld', yang_type='container', is_config=True)""",
        })
    self.__edge_loop_detection = t
    if hasattr(self, '_set'):
        self._set()
Setter method for edge_loop_detection, mapped from YANG variable /protocol/edge_loop_detection (container) If this variable is read-only (config: false) in the source YANG file, then _set_edge_loop_detection is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_edge_loop_detection() directly.
def get_scheduled_analyses(self): """ Retrieve all scheduled analyses for this instance. :return: A list of :class:`.ScheduledAnalysis` objects. """ url = '{}scheduled_analyses/'.format(self.url) return ScheduledAnalysis._get_list_from_url(url, append_base_url=False)
Retrieve all scheduled analyses for this instance. :return: A list of :class:`.ScheduledAnalysis` objects.
def run_zone(self, minutes, zone=None): """ Run or stop a zone or all zones for an amount of time. :param minutes: The number of minutes to run. :type minutes: int :param zone: The zone number to run. If no zone is specified then run all zones. :type zone: int or None :returns: The response from set_zones() or None if there was an error. :rtype: None or string """ if zone is None: zone_cmd = 'runall' relay_id = None else: if zone < 0 or zone > (len(self.relays) - 1): return None else: zone_cmd = 'run' relay_id = self.relays[zone]['relay_id'] if minutes <= 0: time_cmd = 0 if zone is None: zone_cmd = 'stopall' else: zone_cmd = 'stop' else: time_cmd = minutes * 60 return set_zones(self._user_token, zone_cmd, relay_id, time_cmd)
Run or stop a zone or all zones for an amount of time. :param minutes: The number of minutes to run. :type minutes: int :param zone: The zone number to run. If no zone is specified then run all zones. :type zone: int or None :returns: The response from set_zones() or None if there was an error. :rtype: None or string
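The command-selection logic is separable from the HTTP call; a sketch of just that branch table as a pure function (the relay ids below are invented):

```python
def zone_command(minutes, zone, relays):
    """Return (zone_cmd, relay_id, seconds) mirroring run_zone's branching,
    or None for an out-of-range zone."""
    if zone is None:
        zone_cmd, relay_id = 'runall', None
    else:
        if zone < 0 or zone > len(relays) - 1:
            return None
        zone_cmd, relay_id = 'run', relays[zone]['relay_id']
    if minutes <= 0:
        seconds = 0
        zone_cmd = 'stopall' if zone is None else 'stop'
    else:
        seconds = minutes * 60
    return zone_cmd, relay_id, seconds

relays = [{'relay_id': 'r1'}, {'relay_id': 'r2'}]
print(zone_command(10, 1, relays))    # → ('run', 'r2', 600)
print(zone_command(0, None, relays))  # → ('stopall', None, 0)
```

Factoring the branching out this way makes the stop/run/all-zones matrix unit-testable without touching `set_zones`.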
def put_rpc(self, address, rpc_id, arg_payload, response):
    """Place an RPC onto the RPC queue.

    The rpc will be dispatched asynchronously by the background dispatch task.

    This method must be called from the event loop. This method does not block.

    Args:
        address (int): The address of the tile with the RPC
        rpc_id (int): The id of the rpc you want to call
        arg_payload (bytes): The RPC payload
        response (GenericResponse): The object to use to signal the result.
    """

    self._rpc_queue.put_nowait((address, rpc_id, arg_payload, response))
Place an RPC onto the RPC queue. The rpc will be dispatched asynchronously by the background dispatch task. This method must be called from the event loop. This method does not block. Args: address (int): The address of the tile with the RPC rpc_id (int): The id of the rpc you want to call arg_payload (bytes): The RPC payload response (GenericResponse): The object to use to signal the result.
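The non-blocking enqueue rests on `asyncio.Queue.put_nowait`; a minimal producer/consumer sketch (the address, rpc id, and payload bytes are arbitrary):

```python
import asyncio

async def main():
    rpc_queue = asyncio.Queue()
    # Enqueue without blocking, as put_rpc does on the event loop
    rpc_queue.put_nowait((8, 0x8000, b'\x01', 'response-placeholder'))
    # A background dispatcher task would pop entries like this
    address, rpc_id, payload, response = await rpc_queue.get()
    return address, rpc_id

print(asyncio.run(main()))
```

Because the queue is unbounded, `put_nowait` never raises `QueueFull` here; a bounded queue would make the producer handle that case.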
def associate_azure_publisher(self, publisher_name, azure_publisher_id): """AssociateAzurePublisher. [Preview API] :param str publisher_name: :param str azure_publisher_id: :rtype: :class:`<AzurePublisher> <azure.devops.v5_0.gallery.models.AzurePublisher>` """ route_values = {} if publisher_name is not None: route_values['publisherName'] = self._serialize.url('publisher_name', publisher_name, 'str') query_parameters = {} if azure_publisher_id is not None: query_parameters['azurePublisherId'] = self._serialize.query('azure_publisher_id', azure_publisher_id, 'str') response = self._send(http_method='PUT', location_id='efd202a6-9d87-4ebc-9229-d2b8ae2fdb6d', version='5.0-preview.1', route_values=route_values, query_parameters=query_parameters) return self._deserialize('AzurePublisher', response)
AssociateAzurePublisher. [Preview API] :param str publisher_name: :param str azure_publisher_id: :rtype: :class:`<AzurePublisher> <azure.devops.v5_0.gallery.models.AzurePublisher>`
def rpush(self, key, *args): """Emulate rpush.""" redis_list = self._get_list(key, 'RPUSH', create=True) # Creates the list at this key if it doesn't exist, and appends args to it redis_list.extend(map(self._encode, args)) # Return the length of the list after the push operation return len(redis_list)
Emulate rpush.
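A self-contained sketch of the same emulation over a plain dict backing store (a toy illustration, not the real mock-redis internals):

```python
class MiniRedis:
    def __init__(self):
        self._store = {}

    def _encode(self, value):
        return str(value).encode('utf-8')

    def rpush(self, key, *args):
        # Create the list on first push, then append encoded values
        redis_list = self._store.setdefault(key, [])
        redis_list.extend(map(self._encode, args))
        # Like real RPUSH, return the list length after the push
        return len(redis_list)

r = MiniRedis()
print(r.rpush('jobs', 'a', 'b'))  # → 2
print(r.rpush('jobs', 'c'))       # → 3
```

Returning the post-push length matches the Redis `RPUSH` reply, which is what callers of the emulated command expect.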
def get_connection(connection='', engine_name=None, connection_type='long', **args):
    """
    Create a NamedEngine or return an existing engine instance.

    If '://' is included in the connection parameter, a new engine object
    is created; otherwise an existing engine instance is returned.
    """
    engine_name = engine_name or __default_engine__
    if '://' in connection:
        d = {
            'connection_string':connection,
            'connection_args':args,
            'connection_type':connection_type,
            }
        return engine_manager.add(engine_name, d).engine
    else:
        connection = connection or __default_engine__
        if connection in engine_manager:
            return engine_manager[connection].engine
        else:
            raise Error("Can't find engine %s" % connection)
Create a NamedEngine or return an existing engine instance. If '://' is included in the connection parameter, a new engine object is created; otherwise an existing engine instance is returned.
def post(method, hmc, uri, uri_parms, body, logon_required, wait_for_completion): """Operation: Create Storage Group.""" assert wait_for_completion is True # async not supported yet check_required_fields(method, uri, body, ['name', 'cpc-uri', 'type']) cpc_uri = body['cpc-uri'] try: cpc = hmc.lookup_by_uri(cpc_uri) except KeyError: raise InvalidResourceError(method, uri) if not cpc.dpm_enabled: raise CpcNotInDpmError(method, uri, cpc) check_valid_cpc_status(method, uri, cpc) # Reflect the result of creating the storage group body2 = body.copy() sv_requests = body2.pop('storage-volumes', None) new_storage_group = hmc.consoles.console.storage_groups.add(body2) sv_uris = [] if sv_requests: for sv_req in sv_requests: check_required_fields(method, uri, sv_req, ['operation']) operation = sv_req['operation'] if operation == 'create': sv_props = sv_req.copy() del sv_props['operation'] if 'element-uri' in sv_props: raise BadRequestError( method, uri, 7, "The 'element-uri' field in storage-volumes is " "invalid for the create operation") sv_uri = new_storage_group.storage_volumes.add(sv_props) sv_uris.append(sv_uri) else: raise BadRequestError( method, uri, 5, "Invalid value for storage-volumes 'operation' " "field: %s" % operation) return { 'object-uri': new_storage_group.uri, 'element-uris': sv_uris, }
Operation: Create Storage Group.
def get_arguments(self): """ Extracts the specific arguments of this CLI """ ApiCli.get_arguments(self) if self.args.metric_name is not None: self._metric_name = self.args.metric_name self.path = "v1/metrics/{0}".format(self._metric_name)
Extracts the specific arguments of this CLI
def _notebook_model_from_path(self, path, content=False, format=None): """ Build a notebook model from database record. """ model = base_model(path) model["type"] = "notebook" if self.fs.isfile(path): model["last_modified"] = model["created"] = self.fs.lstat(path)["ST_MTIME"] else: model["last_modified"] = model["created"] = DUMMY_CREATED_DATE if content: if not self.fs.isfile(path): self.no_such_entity(path) file_content = self.fs.read(path) nb_content = reads(file_content, as_version=NBFORMAT_VERSION) self.mark_trusted_cells(nb_content, path) model["format"] = "json" model["content"] = nb_content self.validate_notebook_model(model) return model
Build a notebook model from database record.
def print_cm(cm, labels, hide_zeroes=False, hide_diagonal=False, hide_threshold=None):
    """pretty print for confusion matrices"""
    columnwidth = max([len(x) for x in labels] + [5])  # 5 is value length
    empty_cell = " " * columnwidth
    # Print header
    print(" " + empty_cell, end=" ")
    for label in labels:
        print("%{0}s".format(columnwidth) % label, end=" ")
    print()
    # Print rows
    for i, label1 in enumerate(labels):
        print(" %{0}s".format(columnwidth) % label1, end=" ")
        for j in range(len(labels)):
            cell = "%{0}.1f".format(columnwidth) % cm[i, j]
            if hide_zeroes:
                cell = cell if float(cm[i, j]) != 0 else empty_cell
            if hide_diagonal:
                cell = cell if i != j else empty_cell
            if hide_threshold:
                cell = cell if cm[i, j] > hide_threshold else empty_cell
            print(cell, end=" ")
        print()
pretty print for confusion matrices
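The `cm[i, j]` indexing assumes a NumPy array; a quick way to try the printer without NumPy is to adapt that one expression to nested lists (everything else is unchanged, the labels are made up, and the `hide_*` options are dropped for brevity):

```python
def print_cm_lists(cm, labels):
    """Nested-list variant of print_cm, without the hide_* options."""
    columnwidth = max([len(x) for x in labels] + [5])
    empty_cell = " " * columnwidth
    # Header row of labels
    print(" " + empty_cell, end=" ")
    for label in labels:
        print("%{0}s".format(columnwidth) % label, end=" ")
    print()
    # One row per true label, one column per predicted label
    for i, label1 in enumerate(labels):
        print(" %{0}s".format(columnwidth) % label1, end=" ")
        for j in range(len(labels)):
            print("%{0}.1f".format(columnwidth) % cm[i][j], end=" ")
        print()

print_cm_lists([[5, 1], [2, 7]], ["cat", "dog"])
```

The `%{0}.1f` format string pads every cell to the label column width, which is what keeps the grid aligned.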
def get_paths_goobjs(go_objs, go_top=None, go2obj=None):
    """Given a list of GO objects, return their paths to go_top and the set of all GO ids on those paths."""
    go_paths = [] # Collect all paths for go_objs
    go_all = set() # Collect all GO terms in all paths
    pathobj = GoPaths()
    for go_obj in go_objs:
        #print "?FIND PATHS FOR {}?".format(go_obj.id)
        if go_obj.id not in go_all: # GO not yet seen in paths already found
            #print "!FIND PATHS FOR {}!".format(go_obj.id)
            paths_curr = pathobj.get_paths_from_to(go_obj, go_top, True)
            if paths_curr:
                for path_goobjs in paths_curr:
                    for path_goobj in path_goobjs:
                        goid = path_goobj.id
                        if goid not in go_all:
                            go_all.add(goid)
                            go2obj[goid] = path_goobj
                # go_all.update(GO.id for path in paths_curr for GO in path)
                go_paths.extend(path for path in paths_curr)
    return go_paths, go_all
Given a list of GO objects, return their paths to go_top and the set of all GO ids on those paths.
def _get_final_set(self, sets, pk, sort_options):
    """
    Add intersects to sets and call parent's _get_final_set.
    If we have to sort by sorted-set score, and we have a slice, we have
    to convert the whole sorted set to keys now.
    """
    if self._lazy_collection['intersects']:
        # if the intersect method was called, we have new sets to intersect
        # with the global set of sets.
        # And if there are no real filters, we add the set of the whole
        # collection because we cannot be sure that entries in "intersects"
        # are all real primary keys
        sets = sets[::]
        sets.extend(self._lazy_collection['intersects'])
        if not self._lazy_collection['sets'] and not self.stored_key:
            sets.append(self.cls.get_field('pk').collection_key)

    final_set, keys_to_delete_later = super(ExtendedCollectionManager,
                                            self)._get_final_set(sets, pk, sort_options)

    # if we have a slice and we want to sort by the score of a sorted set,
    # as redis sort command doesn't handle this, we have to create keys for
    # each value of the sorted set and sort on them
    # @antirez, y u don't allow this !!??!!
    if final_set and self._sort_by_sortedset_before:
        # TODO: if we have filters, maybe apply _zet_to_keys to only
        # intersected values
        base_tmp_key, tmp_keys = self._prepare_sort_by_score(None, sort_options)

        # new keys have to be deleted once the final sort is done
        if not keys_to_delete_later:
            keys_to_delete_later = []
        keys_to_delete_later.append(base_tmp_key)
        keys_to_delete_later += tmp_keys

    return final_set, keys_to_delete_later
Add intersects to sets and call parent's _get_final_set. If we have to sort by sorted-set score, and we have a slice, we have to convert the whole sorted set to keys now.
def get_api_keys_of_account_group(self, account_id, group_id, **kwargs):  # noqa: E501
    """Get API keys of a group.  # noqa: E501

    An endpoint for listing the API keys of the group with details.
    **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups/{groupID}/api-keys -H 'Authorization: Bearer API_KEY'`  # noqa: E501
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass asynchronous=True
    >>> thread = api.get_api_keys_of_account_group(account_id, group_id, asynchronous=True)
    >>> result = thread.get()

    :param asynchronous bool
    :param str account_id: Account ID. (required)
    :param str group_id: The ID of the group whose API keys are retrieved. (required)
    :param int limit: The number of results to return (2-1000), default is 50.
    :param str after: The entity ID to fetch after the given one.
    :param str order: The order of the records based on creation time, ASC or DESC; by default ASC
    :param str include: Comma separated additional data to return. Currently supported: total_count
    :return: ApiKeyInfoRespList
             If the method is called asynchronously,
             returns the request thread.
    """
    kwargs['_return_http_data_only'] = True
    if kwargs.get('asynchronous'):
        return self.get_api_keys_of_account_group_with_http_info(account_id, group_id, **kwargs)  # noqa: E501
    else:
        (data) = self.get_api_keys_of_account_group_with_http_info(account_id, group_id, **kwargs)  # noqa: E501
        return data

Get API keys of a group.  # noqa: E501

An endpoint for listing the API keys of the group with details.
**Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups/{groupID}/api-keys -H 'Authorization: Bearer API_KEY'`  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass asynchronous=True
>>> thread = api.get_api_keys_of_account_group(account_id, group_id, asynchronous=True)
>>> result = thread.get()

:param asynchronous bool
:param str account_id: Account ID. (required)
:param str group_id: The ID of the group whose API keys are retrieved. (required)
:param int limit: The number of results to return (2-1000), default is 50.
:param str after: The entity ID to fetch after the given one.
:param str order: The order of the records based on creation time, ASC or DESC; by default ASC
:param str include: Comma separated additional data to return. Currently supported: total_count
:return: ApiKeyInfoRespList
         If the method is called asynchronously,
         returns the request thread.

Below is the the instruction that describes the task:
### Input:
Get API keys of a group.  # noqa: E501

An endpoint for listing the API keys of the group with details.
**Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups/{groupID}/api-keys -H 'Authorization: Bearer API_KEY'`  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass asynchronous=True
>>> thread = api.get_api_keys_of_account_group(account_id, group_id, asynchronous=True)
>>> result = thread.get()

:param asynchronous bool
:param str account_id: Account ID. (required)
:param str group_id: The ID of the group whose API keys are retrieved. (required)
:param int limit: The number of results to return (2-1000), default is 50.
:param str after: The entity ID to fetch after the given one.
:param str order: The order of the records based on creation time, ASC or DESC; by default ASC
:param str include: Comma separated additional data to return. Currently supported: total_count
:return: ApiKeyInfoRespList
         If the method is called asynchronously,
         returns the request thread.
### Response:
def get_api_keys_of_account_group(self, account_id, group_id, **kwargs):  # noqa: E501
    """Get API keys of a group.  # noqa: E501

    An endpoint for listing the API keys of the group with details.
    **Example usage:** `curl https://api.us-east-1.mbedcloud.com/v3/accounts/{accountID}/policy-groups/{groupID}/api-keys -H 'Authorization: Bearer API_KEY'`  # noqa: E501
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass asynchronous=True
    >>> thread = api.get_api_keys_of_account_group(account_id, group_id, asynchronous=True)
    >>> result = thread.get()

    :param asynchronous bool
    :param str account_id: Account ID. (required)
    :param str group_id: The ID of the group whose API keys are retrieved. (required)
    :param int limit: The number of results to return (2-1000), default is 50.
    :param str after: The entity ID to fetch after the given one.
    :param str order: The order of the records based on creation time, ASC or DESC; by default ASC
    :param str include: Comma separated additional data to return. Currently supported: total_count
    :return: ApiKeyInfoRespList
             If the method is called asynchronously,
             returns the request thread.
    """
    kwargs['_return_http_data_only'] = True
    if kwargs.get('asynchronous'):
        return self.get_api_keys_of_account_group_with_http_info(account_id, group_id, **kwargs)  # noqa: E501
    else:
        (data) = self.get_api_keys_of_account_group_with_http_info(account_id, group_id, **kwargs)  # noqa: E501
        return data
def set_monitor_callback(self, callback, monitor_all=False):
    """Install callback for monitor.

    Parameters
    ----------
    callback : function
        Takes a string and an NDArrayHandle.
    monitor_all : bool, default False
        If true, monitor both input and output, otherwise monitor output only.

    Examples
    --------
    >>> def mon_callback(*args, **kwargs):
    >>>     print("Do your stuff here.")
    >>>
    >>> texe.set_monitor_callback(mon_callback)
    """
    cb_type = ctypes.CFUNCTYPE(None, ctypes.c_char_p, NDArrayHandle, ctypes.c_void_p)
    self._monitor_callback = cb_type(_monitor_callback_wrapper(callback))
    check_call(_LIB.MXExecutorSetMonitorCallbackEX(
        self.handle,
        self._monitor_callback,
        None,
        ctypes.c_int(monitor_all)))

Install callback for monitor.

Parameters
----------
callback : function
    Takes a string and an NDArrayHandle.
monitor_all : bool, default False
    If true, monitor both input and output, otherwise monitor output only.

Examples
--------
>>> def mon_callback(*args, **kwargs):
>>>     print("Do your stuff here.")
>>>
>>> texe.set_monitor_callback(mon_callback)

Below is the the instruction that describes the task:
### Input:
Install callback for monitor.

Parameters
----------
callback : function
    Takes a string and an NDArrayHandle.
monitor_all : bool, default False
    If true, monitor both input and output, otherwise monitor output only.

Examples
--------
>>> def mon_callback(*args, **kwargs):
>>>     print("Do your stuff here.")
>>>
>>> texe.set_monitor_callback(mon_callback)
### Response:
def set_monitor_callback(self, callback, monitor_all=False):
    """Install callback for monitor.

    Parameters
    ----------
    callback : function
        Takes a string and an NDArrayHandle.
    monitor_all : bool, default False
        If true, monitor both input and output, otherwise monitor output only.

    Examples
    --------
    >>> def mon_callback(*args, **kwargs):
    >>>     print("Do your stuff here.")
    >>>
    >>> texe.set_monitor_callback(mon_callback)
    """
    cb_type = ctypes.CFUNCTYPE(None, ctypes.c_char_p, NDArrayHandle, ctypes.c_void_p)
    self._monitor_callback = cb_type(_monitor_callback_wrapper(callback))
    check_call(_LIB.MXExecutorSetMonitorCallbackEX(
        self.handle,
        self._monitor_callback,
        None,
        ctypes.c_int(monitor_all)))
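The `ctypes` callback registration in the record above can be exercised without MXNet. This standalone sketch (the `CB_TYPE` prototype and all names are illustrative, not from the library) shows the pattern: `CFUNCTYPE` turns a Python function into a C-callable pointer, and a reference to the wrapper must be kept alive, as the executor does via `self._monitor_callback`.

```python
import ctypes

seen = []

# CFUNCTYPE(restype, *argtypes) builds a C function prototype; calling the
# prototype on a Python function returns a C-callable wrapper object.
CB_TYPE = ctypes.CFUNCTYPE(None, ctypes.c_char_p)

def py_callback(name):
    # A c_char_p argument arrives in the Python callback as bytes.
    seen.append(name.decode())

# Keep a reference to the wrapper; if it is garbage-collected while the
# C side still holds the pointer, the process can crash.
c_callback = CB_TYPE(py_callback)
c_callback(b"fc1_output")
print(seen)  # → ['fc1_output']
```

The wrapper is also callable from Python, which is what the last two lines use to demonstrate the round trip without a C library.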
def add_experiences(self, curr_all_info: AllBrainInfo, next_all_info: AllBrainInfo, take_action_outputs):
    """
    Adds experiences to each agent's experience history.
    :param curr_all_info: Dictionary of all current brains and corresponding BrainInfo.
    :param next_all_info: Dictionary of all current brains and corresponding BrainInfo.
    :param take_action_outputs: The outputs of the Policy's get_action method.
    """
    self.trainer_metrics.start_experience_collection_timer()
    if take_action_outputs:
        self.stats['Policy/Value Estimate'].append(take_action_outputs['value'].mean())
        self.stats['Policy/Entropy'].append(take_action_outputs['entropy'].mean())
        self.stats['Policy/Learning Rate'].append(take_action_outputs['learning_rate'])

    curr_info = curr_all_info[self.brain_name]
    next_info = next_all_info[self.brain_name]

    for agent_id in curr_info.agents:
        self.training_buffer[agent_id].last_brain_info = curr_info
        self.training_buffer[agent_id].last_take_action_outputs = take_action_outputs

    if curr_info.agents != next_info.agents:
        curr_to_use = self.construct_curr_info(next_info)
    else:
        curr_to_use = curr_info

    intrinsic_rewards = self.policy.get_intrinsic_rewards(curr_to_use, next_info)

    for agent_id in next_info.agents:
        stored_info = self.training_buffer[agent_id].last_brain_info
        stored_take_action_outputs = self.training_buffer[agent_id].last_take_action_outputs
        if stored_info is not None:
            idx = stored_info.agents.index(agent_id)
            next_idx = next_info.agents.index(agent_id)
            if not stored_info.local_done[idx]:
                for i, _ in enumerate(stored_info.visual_observations):
                    self.training_buffer[agent_id]['visual_obs%d' % i].append(
                        stored_info.visual_observations[i][idx])
                    self.training_buffer[agent_id]['next_visual_obs%d' % i].append(
                        next_info.visual_observations[i][next_idx])
                if self.policy.use_vec_obs:
                    self.training_buffer[agent_id]['vector_obs'].append(stored_info.vector_observations[idx])
                    self.training_buffer[agent_id]['next_vector_in'].append(
                        next_info.vector_observations[next_idx])
                if self.policy.use_recurrent:
                    if stored_info.memories.shape[1] == 0:
                        stored_info.memories = np.zeros((len(stored_info.agents), self.policy.m_size))
                    self.training_buffer[agent_id]['memory'].append(stored_info.memories[idx])
                actions = stored_take_action_outputs['action']
                if self.policy.use_continuous_act:
                    actions_pre = stored_take_action_outputs['pre_action']
                    self.training_buffer[agent_id]['actions_pre'].append(actions_pre[idx])
                    epsilons = stored_take_action_outputs['random_normal_epsilon']
                    self.training_buffer[agent_id]['random_normal_epsilon'].append(
                        epsilons[idx])
                else:
                    self.training_buffer[agent_id]['action_mask'].append(
                        stored_info.action_masks[idx], padding_value=1)
                a_dist = stored_take_action_outputs['log_probs']
                value = stored_take_action_outputs['value']
                self.training_buffer[agent_id]['actions'].append(actions[idx])
                self.training_buffer[agent_id]['prev_action'].append(stored_info.previous_vector_actions[idx])
                self.training_buffer[agent_id]['masks'].append(1.0)
                if self.use_curiosity:
                    self.training_buffer[agent_id]['rewards'].append(next_info.rewards[next_idx] +
                                                                     intrinsic_rewards[next_idx])
                else:
                    self.training_buffer[agent_id]['rewards'].append(next_info.rewards[next_idx])
                self.training_buffer[agent_id]['action_probs'].append(a_dist[idx])
                self.training_buffer[agent_id]['value_estimates'].append(value[idx][0])
            if agent_id not in self.cumulative_rewards:
                self.cumulative_rewards[agent_id] = 0
            self.cumulative_rewards[agent_id] += next_info.rewards[next_idx]
            if self.use_curiosity:
                if agent_id not in self.intrinsic_rewards:
                    self.intrinsic_rewards[agent_id] = 0
                self.intrinsic_rewards[agent_id] += intrinsic_rewards[next_idx]
            if not next_info.local_done[next_idx]:
                if agent_id not in self.episode_steps:
                    self.episode_steps[agent_id] = 0
                self.episode_steps[agent_id] += 1
    self.trainer_metrics.end_experience_collection_timer()

Adds experiences to each agent's experience history.
:param curr_all_info: Dictionary of all current brains and corresponding BrainInfo.
:param next_all_info: Dictionary of all current brains and corresponding BrainInfo.
:param take_action_outputs: The outputs of the Policy's get_action method.

Below is the the instruction that describes the task:
### Input:
Adds experiences to each agent's experience history.
:param curr_all_info: Dictionary of all current brains and corresponding BrainInfo.
:param next_all_info: Dictionary of all current brains and corresponding BrainInfo.
:param take_action_outputs: The outputs of the Policy's get_action method.
### Response:
def add_experiences(self, curr_all_info: AllBrainInfo, next_all_info: AllBrainInfo, take_action_outputs):
    """
    Adds experiences to each agent's experience history.
    :param curr_all_info: Dictionary of all current brains and corresponding BrainInfo.
    :param next_all_info: Dictionary of all current brains and corresponding BrainInfo.
    :param take_action_outputs: The outputs of the Policy's get_action method.
    """
    self.trainer_metrics.start_experience_collection_timer()
    if take_action_outputs:
        self.stats['Policy/Value Estimate'].append(take_action_outputs['value'].mean())
        self.stats['Policy/Entropy'].append(take_action_outputs['entropy'].mean())
        self.stats['Policy/Learning Rate'].append(take_action_outputs['learning_rate'])

    curr_info = curr_all_info[self.brain_name]
    next_info = next_all_info[self.brain_name]

    for agent_id in curr_info.agents:
        self.training_buffer[agent_id].last_brain_info = curr_info
        self.training_buffer[agent_id].last_take_action_outputs = take_action_outputs

    if curr_info.agents != next_info.agents:
        curr_to_use = self.construct_curr_info(next_info)
    else:
        curr_to_use = curr_info

    intrinsic_rewards = self.policy.get_intrinsic_rewards(curr_to_use, next_info)

    for agent_id in next_info.agents:
        stored_info = self.training_buffer[agent_id].last_brain_info
        stored_take_action_outputs = self.training_buffer[agent_id].last_take_action_outputs
        if stored_info is not None:
            idx = stored_info.agents.index(agent_id)
            next_idx = next_info.agents.index(agent_id)
            if not stored_info.local_done[idx]:
                for i, _ in enumerate(stored_info.visual_observations):
                    self.training_buffer[agent_id]['visual_obs%d' % i].append(
                        stored_info.visual_observations[i][idx])
                    self.training_buffer[agent_id]['next_visual_obs%d' % i].append(
                        next_info.visual_observations[i][next_idx])
                if self.policy.use_vec_obs:
                    self.training_buffer[agent_id]['vector_obs'].append(stored_info.vector_observations[idx])
                    self.training_buffer[agent_id]['next_vector_in'].append(
                        next_info.vector_observations[next_idx])
                if self.policy.use_recurrent:
                    if stored_info.memories.shape[1] == 0:
                        stored_info.memories = np.zeros((len(stored_info.agents), self.policy.m_size))
                    self.training_buffer[agent_id]['memory'].append(stored_info.memories[idx])
                actions = stored_take_action_outputs['action']
                if self.policy.use_continuous_act:
                    actions_pre = stored_take_action_outputs['pre_action']
                    self.training_buffer[agent_id]['actions_pre'].append(actions_pre[idx])
                    epsilons = stored_take_action_outputs['random_normal_epsilon']
                    self.training_buffer[agent_id]['random_normal_epsilon'].append(
                        epsilons[idx])
                else:
                    self.training_buffer[agent_id]['action_mask'].append(
                        stored_info.action_masks[idx], padding_value=1)
                a_dist = stored_take_action_outputs['log_probs']
                value = stored_take_action_outputs['value']
                self.training_buffer[agent_id]['actions'].append(actions[idx])
                self.training_buffer[agent_id]['prev_action'].append(stored_info.previous_vector_actions[idx])
                self.training_buffer[agent_id]['masks'].append(1.0)
                if self.use_curiosity:
                    self.training_buffer[agent_id]['rewards'].append(next_info.rewards[next_idx] +
                                                                     intrinsic_rewards[next_idx])
                else:
                    self.training_buffer[agent_id]['rewards'].append(next_info.rewards[next_idx])
                self.training_buffer[agent_id]['action_probs'].append(a_dist[idx])
                self.training_buffer[agent_id]['value_estimates'].append(value[idx][0])
            if agent_id not in self.cumulative_rewards:
                self.cumulative_rewards[agent_id] = 0
            self.cumulative_rewards[agent_id] += next_info.rewards[next_idx]
            if self.use_curiosity:
                if agent_id not in self.intrinsic_rewards:
                    self.intrinsic_rewards[agent_id] = 0
                self.intrinsic_rewards[agent_id] += intrinsic_rewards[next_idx]
            if not next_info.local_done[next_idx]:
                if agent_id not in self.episode_steps:
                    self.episode_steps[agent_id] = 0
                self.episode_steps[agent_id] += 1
    self.trainer_metrics.end_experience_collection_timer()
def _get_macros(chain):
    """Get all macros of a chain
    Cut '$' char and create a dict with the following structure::

        { 'MacroSTR1' : {'val': '', 'type': 'unknown'}
          'MacroSTR2' : {'val': '', 'type': 'unknown'}
        }

    :param chain: chain to parse
    :type chain: str
    :return: dict with macro parsed as key
    :rtype: dict
    """
    regex = re.compile(r'(\$)')
    elts = regex.split(chain)
    macros = {}
    in_macro = False
    for elt in elts:
        if elt == '$':
            in_macro = not in_macro
        elif in_macro:
            macros[elt] = {'val': '', 'type': 'unknown'}
    return macros

Get all macros of a chain
Cut '$' char and create a dict with the following structure::

    { 'MacroSTR1' : {'val': '', 'type': 'unknown'}
      'MacroSTR2' : {'val': '', 'type': 'unknown'}
    }

:param chain: chain to parse
:type chain: str
:return: dict with macro parsed as key
:rtype: dict

Below is the the instruction that describes the task:
### Input:
Get all macros of a chain
Cut '$' char and create a dict with the following structure::

    { 'MacroSTR1' : {'val': '', 'type': 'unknown'}
      'MacroSTR2' : {'val': '', 'type': 'unknown'}
    }

:param chain: chain to parse
:type chain: str
:return: dict with macro parsed as key
:rtype: dict
### Response:
def _get_macros(chain):
    """Get all macros of a chain
    Cut '$' char and create a dict with the following structure::

        { 'MacroSTR1' : {'val': '', 'type': 'unknown'}
          'MacroSTR2' : {'val': '', 'type': 'unknown'}
        }

    :param chain: chain to parse
    :type chain: str
    :return: dict with macro parsed as key
    :rtype: dict
    """
    regex = re.compile(r'(\$)')
    elts = regex.split(chain)
    macros = {}
    in_macro = False
    for elt in elts:
        if elt == '$':
            in_macro = not in_macro
        elif in_macro:
            macros[elt] = {'val': '', 'type': 'unknown'}
    return macros
def dispatch_hook(cls, _pkt, _underlayer=None, *args, **kargs):
    """dispatch_hook to choose among different registered payloads"""
    for klass in cls._payload_class:
        if hasattr(klass, "can_handle") and \
                klass.can_handle(_pkt, _underlayer):
            return klass
    print("DCE/RPC payload class not found or undefined (using Raw)")
    return Raw

dispatch_hook to choose among different registered payloads

Below is the the instruction that describes the task:
### Input:
dispatch_hook to choose among different registered payloads
### Response:
def dispatch_hook(cls, _pkt, _underlayer=None, *args, **kargs):
    """dispatch_hook to choose among different registered payloads"""
    for klass in cls._payload_class:
        if hasattr(klass, "can_handle") and \
                klass.can_handle(_pkt, _underlayer):
            return klass
    print("DCE/RPC payload class not found or undefined (using Raw)")
    return Raw
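The `dispatch_hook` above relies on Scapy's class registry (`cls._payload_class`, `Raw`); the mechanism itself — ask each registered class whether it `can_handle` the bytes, fall back to a raw default — can be sketched standalone. All class names and the discriminator byte below are invented for illustration, not Scapy's.

```python
class Raw:
    """Fallback when no registered class claims the payload."""

class BindPayload:
    @staticmethod
    def can_handle(pkt, underlayer):
        # Hypothetical discriminator: first byte 0x0b marks this payload type.
        return pkt[:1] == b'\x0b'

_payload_class = [BindPayload]

def dispatch_hook(pkt, underlayer=None):
    for klass in _payload_class:
        if hasattr(klass, "can_handle") and klass.can_handle(pkt, underlayer):
            return klass
    return Raw

print(dispatch_hook(b'\x0b\x00').__name__)  # → BindPayload
print(dispatch_hook(b'\xff').__name__)      # → Raw
```

The `hasattr` guard matters: registered classes without a `can_handle` method are simply skipped rather than raising.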
def is_extension_type(arr):
    """
    Check whether an array-like is of a pandas extension class instance.

    Extension classes include categoricals, pandas sparse objects (i.e.
    classes represented within the pandas library and not ones external
    to it like scipy sparse matrices), and datetime-like arrays.

    Parameters
    ----------
    arr : array-like
        The array-like to check.

    Returns
    -------
    boolean
        Whether or not the array-like is of a pandas extension class instance.

    Examples
    --------
    >>> is_extension_type([1, 2, 3])
    False
    >>> is_extension_type(np.array([1, 2, 3]))
    False
    >>>
    >>> cat = pd.Categorical([1, 2, 3])
    >>>
    >>> is_extension_type(cat)
    True
    >>> is_extension_type(pd.Series(cat))
    True
    >>> is_extension_type(pd.SparseArray([1, 2, 3]))
    True
    >>> is_extension_type(pd.SparseSeries([1, 2, 3]))
    True
    >>>
    >>> from scipy.sparse import bsr_matrix
    >>> is_extension_type(bsr_matrix([1, 2, 3]))
    False
    >>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
    False
    >>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
    True
    >>>
    >>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
    >>> s = pd.Series([], dtype=dtype)
    >>> is_extension_type(s)
    True
    """
    if is_categorical(arr):
        return True
    elif is_sparse(arr):
        return True
    elif is_datetime64tz_dtype(arr):
        return True
    return False

Check whether an array-like is of a pandas extension class instance.

Extension classes include categoricals, pandas sparse objects (i.e.
classes represented within the pandas library and not ones external
to it like scipy sparse matrices), and datetime-like arrays.

Parameters
----------
arr : array-like
    The array-like to check.

Returns
-------
boolean
    Whether or not the array-like is of a pandas extension class instance.

Examples
--------
>>> is_extension_type([1, 2, 3])
False
>>> is_extension_type(np.array([1, 2, 3]))
False
>>>
>>> cat = pd.Categorical([1, 2, 3])
>>>
>>> is_extension_type(cat)
True
>>> is_extension_type(pd.Series(cat))
True
>>> is_extension_type(pd.SparseArray([1, 2, 3]))
True
>>> is_extension_type(pd.SparseSeries([1, 2, 3]))
True
>>>
>>> from scipy.sparse import bsr_matrix
>>> is_extension_type(bsr_matrix([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>>
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_extension_type(s)
True

Below is the the instruction that describes the task:
### Input:
Check whether an array-like is of a pandas extension class instance.

Extension classes include categoricals, pandas sparse objects (i.e.
classes represented within the pandas library and not ones external
to it like scipy sparse matrices), and datetime-like arrays.

Parameters
----------
arr : array-like
    The array-like to check.

Returns
-------
boolean
    Whether or not the array-like is of a pandas extension class instance.

Examples
--------
>>> is_extension_type([1, 2, 3])
False
>>> is_extension_type(np.array([1, 2, 3]))
False
>>>
>>> cat = pd.Categorical([1, 2, 3])
>>>
>>> is_extension_type(cat)
True
>>> is_extension_type(pd.Series(cat))
True
>>> is_extension_type(pd.SparseArray([1, 2, 3]))
True
>>> is_extension_type(pd.SparseSeries([1, 2, 3]))
True
>>>
>>> from scipy.sparse import bsr_matrix
>>> is_extension_type(bsr_matrix([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>>
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_extension_type(s)
True
### Response:
def is_extension_type(arr):
    """
    Check whether an array-like is of a pandas extension class instance.

    Extension classes include categoricals, pandas sparse objects (i.e.
    classes represented within the pandas library and not ones external
    to it like scipy sparse matrices), and datetime-like arrays.

    Parameters
    ----------
    arr : array-like
        The array-like to check.

    Returns
    -------
    boolean
        Whether or not the array-like is of a pandas extension class instance.

    Examples
    --------
    >>> is_extension_type([1, 2, 3])
    False
    >>> is_extension_type(np.array([1, 2, 3]))
    False
    >>>
    >>> cat = pd.Categorical([1, 2, 3])
    >>>
    >>> is_extension_type(cat)
    True
    >>> is_extension_type(pd.Series(cat))
    True
    >>> is_extension_type(pd.SparseArray([1, 2, 3]))
    True
    >>> is_extension_type(pd.SparseSeries([1, 2, 3]))
    True
    >>>
    >>> from scipy.sparse import bsr_matrix
    >>> is_extension_type(bsr_matrix([1, 2, 3]))
    False
    >>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
    False
    >>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
    True
    >>>
    >>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
    >>> s = pd.Series([], dtype=dtype)
    >>> is_extension_type(s)
    True
    """
    if is_categorical(arr):
        return True
    elif is_sparse(arr):
        return True
    elif is_datetime64tz_dtype(arr):
        return True
    return False
def line(h1: Histogram1D, **kwargs) -> dict:
    """Line plot of 1D histogram values.

    Points are horizontally placed in bin centers.

    Parameters
    ----------
    h1 : physt.histogram1d.Histogram1D
        Dimensionality of histogram for which it is applicable
    """
    lw = kwargs.pop("lw", DEFAULT_STROKE_WIDTH)
    mark_template = [{
        "type": "line",
        "encode": {
            "enter": {
                "x": {"scale": "xscale", "field": "x"},
                "y": {"scale": "yscale", "field": "y"},
                "stroke": {"scale": "series", "field": "c"},
                "strokeWidth": {"value": lw}
            }
        },
        "from": {"data": "series"},
    }]
    vega = _scatter_or_line(h1, mark_template=mark_template, kwargs=kwargs)
    return vega

Line plot of 1D histogram values.

Points are horizontally placed in bin centers.

Parameters
----------
h1 : physt.histogram1d.Histogram1D
    Dimensionality of histogram for which it is applicable

Below is the the instruction that describes the task:
### Input:
Line plot of 1D histogram values.

Points are horizontally placed in bin centers.

Parameters
----------
h1 : physt.histogram1d.Histogram1D
    Dimensionality of histogram for which it is applicable
### Response:
def line(h1: Histogram1D, **kwargs) -> dict:
    """Line plot of 1D histogram values.

    Points are horizontally placed in bin centers.

    Parameters
    ----------
    h1 : physt.histogram1d.Histogram1D
        Dimensionality of histogram for which it is applicable
    """
    lw = kwargs.pop("lw", DEFAULT_STROKE_WIDTH)
    mark_template = [{
        "type": "line",
        "encode": {
            "enter": {
                "x": {"scale": "xscale", "field": "x"},
                "y": {"scale": "yscale", "field": "y"},
                "stroke": {"scale": "series", "field": "c"},
                "strokeWidth": {"value": lw}
            }
        },
        "from": {"data": "series"},
    }]
    vega = _scatter_or_line(h1, mark_template=mark_template, kwargs=kwargs)
    return vega
def train(self, root=''):
    """
    Trains our Language Model.

    :param root: Path to training data.
    """
    self.trainer = Train(root=root)
    corpus = self.trainer.get_corpus()
    # Show loaded Languages
    #print 'Lang Set: ' + ' '.join(train.get_lang_set())
    for item in corpus:
        self.lm.add_doc(doc_id=item[0], doc_terms=self._readfile(item[1]))
    # Save training timestamp
    self.training_timestamp = self.trainer.get_last_modified()

Trains our Language Model.

:param root: Path to training data.

Below is the the instruction that describes the task:
### Input:
Trains our Language Model.

:param root: Path to training data.
### Response:
def train(self, root=''):
    """
    Trains our Language Model.

    :param root: Path to training data.
    """
    self.trainer = Train(root=root)
    corpus = self.trainer.get_corpus()
    # Show loaded Languages
    #print 'Lang Set: ' + ' '.join(train.get_lang_set())
    for item in corpus:
        self.lm.add_doc(doc_id=item[0], doc_terms=self._readfile(item[1]))
    # Save training timestamp
    self.training_timestamp = self.trainer.get_last_modified()
async def mount(self, mount_point, *, mount_options=None):
    """Mount this partition."""
    self._data = await self._handler.mount(
        system_id=self.block_device.node.system_id,
        device_id=self.block_device.id, id=self.id,
        mount_point=mount_point, mount_options=mount_options)

Mount this partition.

Below is the the instruction that describes the task:
### Input:
Mount this partition.
### Response:
async def mount(self, mount_point, *, mount_options=None):
    """Mount this partition."""
    self._data = await self._handler.mount(
        system_id=self.block_device.node.system_id,
        device_id=self.block_device.id, id=self.id,
        mount_point=mount_point, mount_options=mount_options)
def parse_server_addr(str_addr, default_port=26000):
    """Parse address and returns host and port

    Args:
        str_addr --- string that contains server ip or hostname and
            optionally port

    Returns:
        tuple (host, port)

    Examples:
    >>> parse_server_addr('127.0.0.1:26006')
    ('127.0.0.1', 26006)
    >>> parse_server_addr('[2001:db8:85a3:8d3:1319:8a2e:370:7348]:26006')
    ('2001:db8:85a3:8d3:1319:8a2e:370:7348', 26006)
    >>> parse_server_addr('[2001:db8:85a3:8d3:1319:8a2e:370:7348]')
    ('2001:db8:85a3:8d3:1319:8a2e:370:7348', 26000)
    >>> parse_server_addr('localhost:123')
    ('localhost', 123)
    >>> parse_server_addr('localhost:1d23')
    Traceback (most recent call last):
        ...
    ValueError: Bad address string "localhost:1d23"
    """
    m = ADDR_STR_RE.match(str_addr)
    if m is None:
        raise ValueError('Bad address string "{0}"'.format(str_addr))
    dct = m.groupdict()
    port = dct.get('port')
    if port is None:
        port = default_port
    else:
        port = int(port)  # Caution: could raise ValueError or TypeError
    if port == 0:
        raise ValueError("Port can't be zero")
    host = dct['host'] if dct['host'] else dct['host6']
    return host, port

Parse address and returns host and port

Args:
    str_addr --- string that contains server ip or hostname and
        optionally port

Returns:
    tuple (host, port)

Examples:
>>> parse_server_addr('127.0.0.1:26006')
('127.0.0.1', 26006)
>>> parse_server_addr('[2001:db8:85a3:8d3:1319:8a2e:370:7348]:26006')
('2001:db8:85a3:8d3:1319:8a2e:370:7348', 26006)
>>> parse_server_addr('[2001:db8:85a3:8d3:1319:8a2e:370:7348]')
('2001:db8:85a3:8d3:1319:8a2e:370:7348', 26000)
>>> parse_server_addr('localhost:123')
('localhost', 123)
>>> parse_server_addr('localhost:1d23')
Traceback (most recent call last):
    ...
ValueError: Bad address string "localhost:1d23"

Below is the the instruction that describes the task:
### Input:
Parse address and returns host and port

Args:
    str_addr --- string that contains server ip or hostname and
        optionally port

Returns:
    tuple (host, port)

Examples:
>>> parse_server_addr('127.0.0.1:26006')
('127.0.0.1', 26006)
>>> parse_server_addr('[2001:db8:85a3:8d3:1319:8a2e:370:7348]:26006')
('2001:db8:85a3:8d3:1319:8a2e:370:7348', 26006)
>>> parse_server_addr('[2001:db8:85a3:8d3:1319:8a2e:370:7348]')
('2001:db8:85a3:8d3:1319:8a2e:370:7348', 26000)
>>> parse_server_addr('localhost:123')
('localhost', 123)
>>> parse_server_addr('localhost:1d23')
Traceback (most recent call last):
    ...
ValueError: Bad address string "localhost:1d23"
### Response:
def parse_server_addr(str_addr, default_port=26000):
    """Parse address and returns host and port

    Args:
        str_addr --- string that contains server ip or hostname and
            optionally port

    Returns:
        tuple (host, port)

    Examples:
    >>> parse_server_addr('127.0.0.1:26006')
    ('127.0.0.1', 26006)
    >>> parse_server_addr('[2001:db8:85a3:8d3:1319:8a2e:370:7348]:26006')
    ('2001:db8:85a3:8d3:1319:8a2e:370:7348', 26006)
    >>> parse_server_addr('[2001:db8:85a3:8d3:1319:8a2e:370:7348]')
    ('2001:db8:85a3:8d3:1319:8a2e:370:7348', 26000)
    >>> parse_server_addr('localhost:123')
    ('localhost', 123)
    >>> parse_server_addr('localhost:1d23')
    Traceback (most recent call last):
        ...
    ValueError: Bad address string "localhost:1d23"
    """
    m = ADDR_STR_RE.match(str_addr)
    if m is None:
        raise ValueError('Bad address string "{0}"'.format(str_addr))
    dct = m.groupdict()
    port = dct.get('port')
    if port is None:
        port = default_port
    else:
        port = int(port)  # Caution: could raise ValueError or TypeError
    if port == 0:
        raise ValueError("Port can't be zero")
    host = dct['host'] if dct['host'] else dct['host6']
    return host, port
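The record leaves `ADDR_STR_RE` undefined. A hypothetical reconstruction that satisfies the doctests (the pattern below is a guess, not the project's actual regex) lets the function run standalone: a plain host or a bracketed IPv6 literal, followed by an optional `:port`.

```python
import re

# Assumed shape of ADDR_STR_RE — a reconstruction, not the original:
# either a host with no brackets/colons, or [bracketed IPv6], then :port?
ADDR_STR_RE = re.compile(
    r'^(?:(?P<host>[^\[\]:]+)|\[(?P<host6>[^\[\]]+)\])(?::(?P<port>\d+))?$')

def parse_server_addr(str_addr, default_port=26000):
    m = ADDR_STR_RE.match(str_addr)
    if m is None:
        raise ValueError('Bad address string "{0}"'.format(str_addr))
    dct = m.groupdict()
    port = int(dct['port']) if dct['port'] is not None else default_port
    if port == 0:
        raise ValueError("Port can't be zero")
    return (dct['host'] if dct['host'] else dct['host6'], port)

print(parse_server_addr('127.0.0.1:26006'))   # → ('127.0.0.1', 26006)
print(parse_server_addr('[2001:db8::7348]'))  # → ('2001:db8::7348', 26000)
```

Because the `host` branch excludes `:`, a malformed port such as `'localhost:1d23'` fails the whole match and raises `ValueError`, as the doctest expects.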
def sens_atmos_send(self, TempAmbient, Humidity, force_mavlink1=False):
    '''
    Atmospheric sensors (temperature, humidity, ...)

    TempAmbient : Ambient temperature [degrees Celsius] (float)
    Humidity    : Relative humidity [%] (float)

    '''
    return self.send(self.sens_atmos_encode(TempAmbient, Humidity), force_mavlink1=force_mavlink1)

Atmospheric sensors (temperature, humidity, ...)

TempAmbient : Ambient temperature [degrees Celsius] (float)
Humidity    : Relative humidity [%] (float)

Below is the the instruction that describes the task:
### Input:
Atmospheric sensors (temperature, humidity, ...)

TempAmbient : Ambient temperature [degrees Celsius] (float)
Humidity    : Relative humidity [%] (float)
### Response:
def sens_atmos_send(self, TempAmbient, Humidity, force_mavlink1=False):
    '''
    Atmospheric sensors (temperature, humidity, ...)

    TempAmbient : Ambient temperature [degrees Celsius] (float)
    Humidity    : Relative humidity [%] (float)

    '''
    return self.send(self.sens_atmos_encode(TempAmbient, Humidity), force_mavlink1=force_mavlink1)
def get_ss_regions(assembly, ss_types):
    """Returns an Assembly containing Polymers for each region of structure.

    Parameters
    ----------
    assembly : ampal.Assembly
        `Assembly` object to be searched secondary structure regions.
    ss_types : list
        List of secondary structure tags to be separate i.e. ['H']
        would return helices, ['H', 'E'] would return helices and
        strands.

    Returns
    -------
    fragments : Assembly
        `Assembly` containing a `Polymer` for each region of specified
        secondary structure.
    """
    if not any(map(lambda x: 'ss_regions' in x.tags, assembly)):
        raise ValueError(
            'This assembly does not have any tagged secondary structure '
            'regions. Use `ampal.dssp.tag_dssp_data` to add the tags.'
        )
    fragments = Assembly()
    for polypeptide in assembly:
        if 'ss_regions' in polypeptide.tags:
            for start, end, ss_type in polypeptide.tags['ss_regions']:
                if ss_type in ss_types:
                    fragment = polypeptide.get_slice_from_res_id(start, end)
                    fragments.append(fragment)
    if not fragments:
        raise ValueError('No regions matching that secondary structure type'
                         ' have been found. Use standard DSSP labels.')
    return fragments

Returns an Assembly containing Polymers for each region of structure.

Parameters
----------
assembly : ampal.Assembly
    `Assembly` object to be searched secondary structure regions.
ss_types : list
    List of secondary structure tags to be separate i.e. ['H']
    would return helices, ['H', 'E'] would return helices and
    strands.

Returns
-------
fragments : Assembly
    `Assembly` containing a `Polymer` for each region of specified
    secondary structure.

Below is the the instruction that describes the task:
### Input:
Returns an Assembly containing Polymers for each region of structure.

Parameters
----------
assembly : ampal.Assembly
    `Assembly` object to be searched secondary structure regions.
ss_types : list
    List of secondary structure tags to be separate i.e. ['H']
    would return helices, ['H', 'E'] would return helices and
    strands.

Returns
-------
fragments : Assembly
    `Assembly` containing a `Polymer` for each region of specified
    secondary structure.
### Response:
def get_ss_regions(assembly, ss_types):
    """Returns an Assembly containing Polymers for each region of structure.

    Parameters
    ----------
    assembly : ampal.Assembly
        `Assembly` object to be searched secondary structure regions.
    ss_types : list
        List of secondary structure tags to be separate i.e. ['H']
        would return helices, ['H', 'E'] would return helices and
        strands.

    Returns
    -------
    fragments : Assembly
        `Assembly` containing a `Polymer` for each region of specified
        secondary structure.
    """
    if not any(map(lambda x: 'ss_regions' in x.tags, assembly)):
        raise ValueError(
            'This assembly does not have any tagged secondary structure '
            'regions. Use `ampal.dssp.tag_dssp_data` to add the tags.'
        )
    fragments = Assembly()
    for polypeptide in assembly:
        if 'ss_regions' in polypeptide.tags:
            for start, end, ss_type in polypeptide.tags['ss_regions']:
                if ss_type in ss_types:
                    fragment = polypeptide.get_slice_from_res_id(start, end)
                    fragments.append(fragment)
    if not fragments:
        raise ValueError('No regions matching that secondary structure type'
                         ' have been found. Use standard DSSP labels.')
    return fragments
def ParseInput(self, a_file): """Consumes input extracting definitions. Args: a_file: The file like stream to parse. Raises: PDDMError if there are any issues. """ input_lines = a_file.read().splitlines() self.ParseLines(input_lines)
Consumes input extracting definitions. Args: a_file: The file like stream to parse. Raises: PDDMError if there are any issues.
Below is the instruction that describes the task: ### Input: Consumes input extracting definitions. Args: a_file: The file like stream to parse. Raises: PDDMError if there are any issues. ### Response: def ParseInput(self, a_file): """Consumes input extracting definitions. Args: a_file: The file like stream to parse. Raises: PDDMError if there are any issues. """ input_lines = a_file.read().splitlines() self.ParseLines(input_lines)
def shell(self, name='default', site=None, **kwargs): """ Opens a SQL shell to the given database, assuming the configured database and user support this feature. """ r = self.database_renderer(name=name, site=site) self.write_pgpass(name=name, site=site, root=True) db_name = kwargs.get('db_name') if db_name: r.env.db_name = db_name r.run('/bin/bash -i -c "psql --username={db_root_username} --host={db_host} --dbname={db_name}"') else: r.run('/bin/bash -i -c "psql --username={db_root_username} --host={db_host}"')
Opens a SQL shell to the given database, assuming the configured database and user support this feature.
Below is the instruction that describes the task: ### Input: Opens a SQL shell to the given database, assuming the configured database and user support this feature. ### Response: def shell(self, name='default', site=None, **kwargs): """ Opens a SQL shell to the given database, assuming the configured database and user support this feature. """ r = self.database_renderer(name=name, site=site) self.write_pgpass(name=name, site=site, root=True) db_name = kwargs.get('db_name') if db_name: r.env.db_name = db_name r.run('/bin/bash -i -c "psql --username={db_root_username} --host={db_host} --dbname={db_name}"') else: r.run('/bin/bash -i -c "psql --username={db_root_username} --host={db_host}"')
def add_publication_info( self, year=None, cnum=None, artid=None, page_end=None, page_start=None, journal_issue=None, journal_title=None, journal_volume=None, pubinfo_freetext=None, material=None, parent_record=None, parent_isbn=None, ): """Add publication info. :param year: year of publication :type year: integer :param cnum: inspire conference number :type cnum: string :param artid: article id :type artid: string :param page_end: final page for the article :type page_end: string :param page_start: initial page for the article :type page_start: string :param journal_issue: issue of the journal where the document has been published :type journal_issue: string :param journal_title: title of the journal where the document has been published :type journal_title: string :param journal_volume: volume of the journal where the document has been published :type journal_volume: string :param pubinfo_freetext: Unstructured text describing the publication information. :type pubinfo_freetext: string :param material: material of the article :type material: string :param parent_record: reference for the parent record :type parent_record: string :param parent_isbn: isbn for the parent record :type parent_isbn: string """ # If only journal title is present, and no other fields, assume the # paper was submitted, but not yet published if journal_title and all( not field for field in (cnum, artid, journal_issue, journal_volume, page_start, page_end)): self.add_public_note('Submitted to {}'.format(journal_title)) return publication_item = {} for key in ('cnum', 'artid', 'page_end', 'page_start', 'journal_issue', 'journal_title', 'journal_volume', 'year', 'pubinfo_freetext', 'material'): if locals()[key] is not None: publication_item[key] = locals()[key] if parent_record is not None: parent_item = {'$ref': parent_record} publication_item['parent_record'] = parent_item if parent_isbn is not None: publication_item['parent_isbn'] = normalize_isbn(parent_isbn) if page_start and page_end: 
try: self.add_number_of_pages( int(page_end) - int(page_start) + 1 ) except (TypeError, ValueError): pass self._append_to('publication_info', publication_item) if is_citeable(self.record['publication_info']): self.set_citeable(True)
Add publication info. :param year: year of publication :type year: integer :param cnum: inspire conference number :type cnum: string :param artid: article id :type artid: string :param page_end: final page for the article :type page_end: string :param page_start: initial page for the article :type page_start: string :param journal_issue: issue of the journal where the document has been published :type journal_issue: string :param journal_title: title of the journal where the document has been published :type journal_title: string :param journal_volume: volume of the journal where the document has been published :type journal_volume: string :param pubinfo_freetext: Unstructured text describing the publication information. :type pubinfo_freetext: string :param material: material of the article :type material: string :param parent_record: reference for the parent record :type parent_record: string :param parent_isbn: isbn for the parent record :type parent_isbn: string
Below is the instruction that describes the task: ### Input: Add publication info. :param year: year of publication :type year: integer :param cnum: inspire conference number :type cnum: string :param artid: article id :type artid: string :param page_end: final page for the article :type page_end: string :param page_start: initial page for the article :type page_start: string :param journal_issue: issue of the journal where the document has been published :type journal_issue: string :param journal_title: title of the journal where the document has been published :type journal_title: string :param journal_volume: volume of the journal where the document has been published :type journal_volume: string :param pubinfo_freetext: Unstructured text describing the publication information. :type pubinfo_freetext: string :param material: material of the article :type material: string :param parent_record: reference for the parent record :type parent_record: string :param parent_isbn: isbn for the parent record :type parent_isbn: string ### Response: def add_publication_info( self, year=None, cnum=None, artid=None, page_end=None, page_start=None, journal_issue=None, journal_title=None, journal_volume=None, pubinfo_freetext=None, material=None, parent_record=None, parent_isbn=None, ): """Add publication info.
:param year: year of publication :type year: integer :param cnum: inspire conference number :type cnum: string :param artid: article id :type artid: string :param page_end: final page for the article :type page_end: string :param page_start: initial page for the article :type page_start: string :param journal_issue: issue of the journal where the document has been published :type journal_issue: string :param journal_title: title of the journal where the document has been published :type journal_title: string :param journal_volume: volume of the journal where the document has been published :type journal_volume: string :param pubinfo_freetext: Unstructured text describing the publication information. :type pubinfo_freetext: string :param material: material of the article :type material: string :param parent_record: reference for the parent record :type parent_record: string :param parent_isbn: isbn for the parent record :type parent_isbn: string """ # If only journal title is present, and no other fields, assume the # paper was submitted, but not yet published if journal_title and all( not field for field in (cnum, artid, journal_issue, journal_volume, page_start, page_end)): self.add_public_note('Submitted to {}'.format(journal_title)) return publication_item = {} for key in ('cnum', 'artid', 'page_end', 'page_start', 'journal_issue', 'journal_title', 'journal_volume', 'year', 'pubinfo_freetext', 'material'): if locals()[key] is not None: publication_item[key] = locals()[key] if parent_record is not None: parent_item = {'$ref': parent_record} publication_item['parent_record'] = parent_item if parent_isbn is not None: publication_item['parent_isbn'] = normalize_isbn(parent_isbn) if page_start and page_end: try: self.add_number_of_pages( int(page_end) - int(page_start) + 1 ) except (TypeError, ValueError): pass self._append_to('publication_info', publication_item) if is_citeable(self.record['publication_info']): self.set_citeable(True)
def parse_options(): """ parse_options() -> opts, args Parse any command-line options given returning both the parsed options and arguments. https://docs.python.org/2/library/optparse.html """ parser = optparse.OptionParser(usage=USAGE, version=ontospy.VERSION) parser.add_option("-p", "--port", action="store", type="int", default=DEFAULT_PORT, dest="port", help="A number specifying which port to use for the server.") opts, args = parser.parse_args() # if not opts.all and not opts.query: # parser.print_help() # sys.exit(0) return opts, args
parse_options() -> opts, args Parse any command-line options given returning both the parsed options and arguments. https://docs.python.org/2/library/optparse.html
Below is the instruction that describes the task: ### Input: parse_options() -> opts, args Parse any command-line options given returning both the parsed options and arguments. https://docs.python.org/2/library/optparse.html ### Response: def parse_options(): """ parse_options() -> opts, args Parse any command-line options given returning both the parsed options and arguments. https://docs.python.org/2/library/optparse.html """ parser = optparse.OptionParser(usage=USAGE, version=ontospy.VERSION) parser.add_option("-p", "--port", action="store", type="int", default=DEFAULT_PORT, dest="port", help="A number specifying which port to use for the server.") opts, args = parser.parse_args() # if not opts.all and not opts.query: # parser.print_help() # sys.exit(0) return opts, args
def getPermissionText(self, t, m): ''' returns the permission textual representation of a specified permission bit/object type ''' try: return self.rights[t][m]['TEXT'] except KeyError: raise CommandExecutionError(( 'No right "{0}". It should be one of the following: {1}') .format(m, ', '.join(self.rights[t])))
returns the permission textual representation of a specified permission bit/object type
Below is the instruction that describes the task: ### Input: returns the permission textual representation of a specified permission bit/object type ### Response: def getPermissionText(self, t, m): ''' returns the permission textual representation of a specified permission bit/object type ''' try: return self.rights[t][m]['TEXT'] except KeyError: raise CommandExecutionError(( 'No right "{0}". It should be one of the following: {1}') .format(m, ', '.join(self.rights[t])))
def RunInstaller(): """Run all registered installers. Run all the current installers and then exit the process. """ try: os.makedirs(os.path.dirname(config.CONFIG["Installer.logfile"])) except OSError: pass # Always log to the installer logfile at debug level. This way if our # installer fails we can send detailed diagnostics. handler = logging.FileHandler(config.CONFIG["Installer.logfile"], mode="wb") handler.setLevel(logging.DEBUG) # Add this to the root logger. logging.getLogger().addHandler(handler) # Ordinarily when the client starts up, the local volatile # configuration is read. However, when running the installer, we # need to ensure that only the installer configuration is used so # nothing gets overridden by local settings. We therefore must reload # the configuration from the flag and ignore the Config.writeback # location. config.CONFIG.Initialize(filename=flags.FLAGS.config, reset=True) config.CONFIG.AddContext(contexts.INSTALLER_CONTEXT, "Context applied when we run the client installer.") logging.warning("Starting installation procedure for GRR client.") try: Installer().Init() except Exception as e: # pylint: disable=broad-except # Ouch! we failed to install... Not a lot we can do # here - just log the error and give up. logging.exception("Installation failed: %s", e) # Error return status. sys.exit(-1) # Exit successfully. sys.exit(0)
Run all registered installers. Run all the current installers and then exit the process.
Below is the instruction that describes the task: ### Input: Run all registered installers. Run all the current installers and then exit the process. ### Response: def RunInstaller(): """Run all registered installers. Run all the current installers and then exit the process. """ try: os.makedirs(os.path.dirname(config.CONFIG["Installer.logfile"])) except OSError: pass # Always log to the installer logfile at debug level. This way if our # installer fails we can send detailed diagnostics. handler = logging.FileHandler(config.CONFIG["Installer.logfile"], mode="wb") handler.setLevel(logging.DEBUG) # Add this to the root logger. logging.getLogger().addHandler(handler) # Ordinarily when the client starts up, the local volatile # configuration is read. However, when running the installer, we # need to ensure that only the installer configuration is used so # nothing gets overridden by local settings. We therefore must reload # the configuration from the flag and ignore the Config.writeback # location. config.CONFIG.Initialize(filename=flags.FLAGS.config, reset=True) config.CONFIG.AddContext(contexts.INSTALLER_CONTEXT, "Context applied when we run the client installer.") logging.warning("Starting installation procedure for GRR client.") try: Installer().Init() except Exception as e: # pylint: disable=broad-except # Ouch! we failed to install... Not a lot we can do # here - just log the error and give up. logging.exception("Installation failed: %s", e) # Error return status. sys.exit(-1) # Exit successfully. sys.exit(0)
def shift(self, modelResult): """Shift the model result and return the new instance. Queues up the T(i+1) prediction value and emits a T(i) input/prediction pair, if possible. E.g., if the previous T(i-1) iteration was learn-only, then we would not have a T(i) prediction in our FIFO and would not be able to emit a meaningful input/prediction pair. :param modelResult: A :class:`~.nupic.frameworks.opf.opf_utils.ModelResult` instance to shift. :return: A :class:`~.nupic.frameworks.opf.opf_utils.ModelResult` instance that has been shifted """ inferencesToWrite = {} if self._inferenceBuffer is None: maxDelay = InferenceElement.getMaxDelay(modelResult.inferences) self._inferenceBuffer = collections.deque(maxlen=maxDelay + 1) self._inferenceBuffer.appendleft(copy.deepcopy(modelResult.inferences)) for inferenceElement, inference in modelResult.inferences.iteritems(): if isinstance(inference, dict): inferencesToWrite[inferenceElement] = {} for key, _ in inference.iteritems(): delay = InferenceElement.getTemporalDelay(inferenceElement, key) if len(self._inferenceBuffer) > delay: prevInference = self._inferenceBuffer[delay][inferenceElement][key] inferencesToWrite[inferenceElement][key] = prevInference else: inferencesToWrite[inferenceElement][key] = None else: delay = InferenceElement.getTemporalDelay(inferenceElement) if len(self._inferenceBuffer) > delay: inferencesToWrite[inferenceElement] = ( self._inferenceBuffer[delay][inferenceElement]) else: if type(inference) in (list, tuple): inferencesToWrite[inferenceElement] = [None] * len(inference) else: inferencesToWrite[inferenceElement] = None shiftedResult = ModelResult(rawInput=modelResult.rawInput, sensorInput=modelResult.sensorInput, inferences=inferencesToWrite, metrics=modelResult.metrics, predictedFieldIdx=modelResult.predictedFieldIdx, predictedFieldName=modelResult.predictedFieldName) return shiftedResult
Shift the model result and return the new instance. Queues up the T(i+1) prediction value and emits a T(i) input/prediction pair, if possible. E.g., if the previous T(i-1) iteration was learn-only, then we would not have a T(i) prediction in our FIFO and would not be able to emit a meaningful input/prediction pair. :param modelResult: A :class:`~.nupic.frameworks.opf.opf_utils.ModelResult` instance to shift. :return: A :class:`~.nupic.frameworks.opf.opf_utils.ModelResult` instance that has been shifted
Below is the instruction that describes the task: ### Input: Shift the model result and return the new instance. Queues up the T(i+1) prediction value and emits a T(i) input/prediction pair, if possible. E.g., if the previous T(i-1) iteration was learn-only, then we would not have a T(i) prediction in our FIFO and would not be able to emit a meaningful input/prediction pair. :param modelResult: A :class:`~.nupic.frameworks.opf.opf_utils.ModelResult` instance to shift. :return: A :class:`~.nupic.frameworks.opf.opf_utils.ModelResult` instance that has been shifted ### Response: def shift(self, modelResult): """Shift the model result and return the new instance. Queues up the T(i+1) prediction value and emits a T(i) input/prediction pair, if possible. E.g., if the previous T(i-1) iteration was learn-only, then we would not have a T(i) prediction in our FIFO and would not be able to emit a meaningful input/prediction pair. :param modelResult: A :class:`~.nupic.frameworks.opf.opf_utils.ModelResult` instance to shift.
:return: A :class:`~.nupic.frameworks.opf.opf_utils.ModelResult` instance that has been shifted """ inferencesToWrite = {} if self._inferenceBuffer is None: maxDelay = InferenceElement.getMaxDelay(modelResult.inferences) self._inferenceBuffer = collections.deque(maxlen=maxDelay + 1) self._inferenceBuffer.appendleft(copy.deepcopy(modelResult.inferences)) for inferenceElement, inference in modelResult.inferences.iteritems(): if isinstance(inference, dict): inferencesToWrite[inferenceElement] = {} for key, _ in inference.iteritems(): delay = InferenceElement.getTemporalDelay(inferenceElement, key) if len(self._inferenceBuffer) > delay: prevInference = self._inferenceBuffer[delay][inferenceElement][key] inferencesToWrite[inferenceElement][key] = prevInference else: inferencesToWrite[inferenceElement][key] = None else: delay = InferenceElement.getTemporalDelay(inferenceElement) if len(self._inferenceBuffer) > delay: inferencesToWrite[inferenceElement] = ( self._inferenceBuffer[delay][inferenceElement]) else: if type(inference) in (list, tuple): inferencesToWrite[inferenceElement] = [None] * len(inference) else: inferencesToWrite[inferenceElement] = None shiftedResult = ModelResult(rawInput=modelResult.rawInput, sensorInput=modelResult.sensorInput, inferences=inferencesToWrite, metrics=modelResult.metrics, predictedFieldIdx=modelResult.predictedFieldIdx, predictedFieldName=modelResult.predictedFieldName) return shiftedResult
def min_edit_distance( source: Sequence[T], target: Sequence[T], ins_cost: Callable[..., int] = lambda _x: 1, del_cost: Callable[..., int] = lambda _x: 1, sub_cost: Callable[..., int] = lambda x, y: 0 if x == y else 1) -> int: """Calculates the minimum edit distance between two sequences. Uses the Levenshtein weighting as a default, but offers keyword arguments to supply functions to measure the costs for editing with different elements. Args: ins_cost: A function describing the cost of inserting a given char del_cost: A function describing the cost of deleting a given char sub_cost: A function describing the cost of substituting one char for another Returns: The edit distance between the two input sequences. """ # Initialize an m+1 by n+1 array. Note that the strings start from index 1, # with index 0 being used to denote the empty string. n = len(target) m = len(source) distance = np.zeros((m+1, n+1), dtype=np.int16) # Initialize the zeroth row and column to be the distance from the empty # string. for i in range(1, m+1): distance[i, 0] = distance[i-1, 0] + ins_cost(source[i-1]) for j in range(1, n+1): distance[0, j] = distance[0, j-1] + ins_cost(target[j-1]) # Do the dynamic programming to fill in the matrix with the edit distances. for j in range(1, n+1): for i in range(1, m+1): distance[i, j] = min( distance[i-1, j] + ins_cost(source[i-1]), distance[i-1, j-1] + sub_cost(source[i-1],target[j-1]), distance[i, j-1] + del_cost(target[j-1])) return int(distance[len(source), len(target)])
Calculates the minimum edit distance between two sequences. Uses the Levenshtein weighting as a default, but offers keyword arguments to supply functions to measure the costs for editing with different elements. Args: ins_cost: A function describing the cost of inserting a given char del_cost: A function describing the cost of deleting a given char sub_cost: A function describing the cost of substituting one char for another Returns: The edit distance between the two input sequences.
Below is the instruction that describes the task: ### Input: Calculates the minimum edit distance between two sequences. Uses the Levenshtein weighting as a default, but offers keyword arguments to supply functions to measure the costs for editing with different elements. Args: ins_cost: A function describing the cost of inserting a given char del_cost: A function describing the cost of deleting a given char sub_cost: A function describing the cost of substituting one char for another Returns: The edit distance between the two input sequences. ### Response: def min_edit_distance( source: Sequence[T], target: Sequence[T], ins_cost: Callable[..., int] = lambda _x: 1, del_cost: Callable[..., int] = lambda _x: 1, sub_cost: Callable[..., int] = lambda x, y: 0 if x == y else 1) -> int: """Calculates the minimum edit distance between two sequences. Uses the Levenshtein weighting as a default, but offers keyword arguments to supply functions to measure the costs for editing with different elements. Args: ins_cost: A function describing the cost of inserting a given char del_cost: A function describing the cost of deleting a given char sub_cost: A function describing the cost of substituting one char for another Returns: The edit distance between the two input sequences. """ # Initialize an m+1 by n+1 array. Note that the strings start from index 1, # with index 0 being used to denote the empty string. n = len(target) m = len(source) distance = np.zeros((m+1, n+1), dtype=np.int16) # Initialize the zeroth row and column to be the distance from the empty # string. for i in range(1, m+1): distance[i, 0] = distance[i-1, 0] + ins_cost(source[i-1]) for j in range(1, n+1): distance[0, j] = distance[0, j-1] + ins_cost(target[j-1]) # Do the dynamic programming to fill in the matrix with the edit distances.
for j in range(1, n+1): for i in range(1, m+1): distance[i, j] = min( distance[i-1, j] + ins_cost(source[i-1]), distance[i-1, j-1] + sub_cost(source[i-1],target[j-1]), distance[i, j-1] + del_cost(target[j-1])) return int(distance[len(source), len(target)])
def indices(self, names, axis=None): """get the row and col indices of names. If axis is None, two ndarrays are returned, corresponding to the indices of names for each axis Parameters ---------- names : iterable column and/or row names axis : (int) (optional) the axis to search. Returns ------- numpy.ndarray : numpy.ndarray indices of names. """ return Matrix.find_rowcol_indices(names,self.row_names,self.col_names,axis=axis)
get the row and col indices of names. If axis is None, two ndarrays are returned, corresponding to the indices of names for each axis Parameters ---------- names : iterable column and/or row names axis : (int) (optional) the axis to search. Returns ------- numpy.ndarray : numpy.ndarray indices of names.
Below is the instruction that describes the task: ### Input: get the row and col indices of names. If axis is None, two ndarrays are returned, corresponding to the indices of names for each axis Parameters ---------- names : iterable column and/or row names axis : (int) (optional) the axis to search. Returns ------- numpy.ndarray : numpy.ndarray indices of names. ### Response: def indices(self, names, axis=None): """get the row and col indices of names. If axis is None, two ndarrays are returned, corresponding to the indices of names for each axis Parameters ---------- names : iterable column and/or row names axis : (int) (optional) the axis to search. Returns ------- numpy.ndarray : numpy.ndarray indices of names. """ return Matrix.find_rowcol_indices(names,self.row_names,self.col_names,axis=axis)
def parse(target, trace=False, **kwargs): """Parse the given target. If it is a file-like object, then parse its contents. If given a string, perform one of the following actions - If the string is a valid file path, then open and parse it - If the string is a valid directory path, then recursively look for files ending in .ly or .ily - Otherwise parse the string directly. """ # Beware! This function, that actually is the core of all the # business, is written to minimize the responsibilities of each # chunk of code, keeping things simple. Performance may degrade # because of this, but without actual measurements the simplest # choice is the best one. if hasattr(target, 'read'): # It's a file-like object file_content = target.read() return parse(file_content, trace, **kwargs) if os.path.isfile(target): if target.endswith(".ily") or target.endswith(".ly"): console.display("Parsing", target) with io.open(target, "r", encoding="utf-8") as fp: return parse(fp, trace, filename=target, **kwargs) else: return [] if os.path.isdir(target): docs = [] logging.info("Parsing directory {}", target) for root, _, files in os.walk(target): for f in files: fname = os.path.join(root, f) file_docs = parse(fname, trace, **kwargs) docs.extend(file_docs) return docs # We were given a text, so parse it directly metrics = kwargs.get("metrics", None) if metrics is not None: metrics.record_file(target) docs = [] parser = LilyParser(parseinfo=True) try: parser.parse(target, 'lilypond', semantics=DocumentationSemantics(docs), filename=kwargs.get("filename", None), trace=trace) except FailedParse as err: logging.warn(err) if metrics is not None: metrics.record_error(err) except RuntimeError as err: logging.warn(err) if metrics is not None: metrics.record_error(err) return docs
Parse the given target. If it is a file-like object, then parse its contents. If given a string, perform one of the following actions - If the string is a valid file path, then open and parse it - If the string is a valid directory path, then recursively look for files ending in .ly or .ily - Otherwise parse the string directly.
Below is the instruction that describes the task: ### Input: Parse the given target. If it is a file-like object, then parse its contents. If given a string, perform one of the following actions - If the string is a valid file path, then open and parse it - If the string is a valid directory path, then recursively look for files ending in .ly or .ily - Otherwise parse the string directly. ### Response: def parse(target, trace=False, **kwargs): """Parse the given target. If it is a file-like object, then parse its contents. If given a string, perform one of the following actions - If the string is a valid file path, then open and parse it - If the string is a valid directory path, then recursively look for files ending in .ly or .ily - Otherwise parse the string directly. """ # Beware! This function, that actually is the core of all the # business, is written to minimize the responsibilities of each # chunk of code, keeping things simple. Performance may degrade # because of this, but without actual measurements the simplest # choice is the best one.
if hasattr(target, 'read'): # It's a file-like object file_content = target.read() return parse(file_content, trace, **kwargs) if os.path.isfile(target): if target.endswith(".ily") or target.endswith(".ly"): console.display("Parsing", target) with io.open(target, "r", encoding="utf-8") as fp: return parse(fp, trace, filename=target, **kwargs) else: return [] if os.path.isdir(target): docs = [] logging.info("Parsing directory {}", target) for root, _, files in os.walk(target): for f in files: fname = os.path.join(root, f) file_docs = parse(fname, trace, **kwargs) docs.extend(file_docs) return docs # We were given a text, so parse it directly metrics = kwargs.get("metrics", None) if metrics is not None: metrics.record_file(target) docs = [] parser = LilyParser(parseinfo=True) try: parser.parse(target, 'lilypond', semantics=DocumentationSemantics(docs), filename=kwargs.get("filename", None), trace=trace) except FailedParse as err: logging.warn(err) if metrics is not None: metrics.record_error(err) except RuntimeError as err: logging.warn(err) if metrics is not None: metrics.record_error(err) return docs
def make_required_folders(self): """Makes all folders declared in the config if they do not exist. """ for folder in [ self.pending_folder, self.usb_incoming_folder, self.outgoing_folder, self.incoming_folder, self.archive_folder, self.tmp_folder, self.log_folder, ]: if not os.path.exists(folder): os.makedirs(folder)
Makes all folders declared in the config if they do not exist.
Below is the instruction that describes the task: ### Input: Makes all folders declared in the config if they do not exist. ### Response: def make_required_folders(self): """Makes all folders declared in the config if they do not exist. """ for folder in [ self.pending_folder, self.usb_incoming_folder, self.outgoing_folder, self.incoming_folder, self.archive_folder, self.tmp_folder, self.log_folder, ]: if not os.path.exists(folder): os.makedirs(folder)
def balance(self): """Adds corresponding empty attributes or null geometry records depending on which type of record was created to make sure all three files are in synch.""" while self.recNum > self.shpNum: self.null() while self.recNum < self.shpNum: self.record()
Adds corresponding empty attributes or null geometry records depending on which type of record was created to make sure all three files are in synch.
Below is the instruction that describes the task: ### Input: Adds corresponding empty attributes or null geometry records depending on which type of record was created to make sure all three files are in synch. ### Response: def balance(self): """Adds corresponding empty attributes or null geometry records depending on which type of record was created to make sure all three files are in synch.""" while self.recNum > self.shpNum: self.null() while self.recNum < self.shpNum: self.record()
def RemoveClientLabels(self, client): """Removes all labels for a given client object. Args: client: A VFSGRRClient record. """ keywords = [] for label in client.GetLabelsNames(): keyword = self._NormalizeKeyword(utils.SmartStr(label)) # This might actually delete a keyword with the same name as the label (if # there is one). Usually the client keywords will be rebuilt after the # deletion of the old labels though, so this can only destroy historic # index data; normal search functionality will not be affected. keywords.append(keyword) keywords.append("label:%s" % keyword) self.RemoveKeywordsForName(self._ClientIdFromURN(client.urn), keywords)
Removes all labels for a given client object. Args: client: A VFSGRRClient record.
Below is the instruction that describes the task: ### Input: Removes all labels for a given client object. Args: client: A VFSGRRClient record. ### Response: def RemoveClientLabels(self, client): """Removes all labels for a given client object. Args: client: A VFSGRRClient record. """ keywords = [] for label in client.GetLabelsNames(): keyword = self._NormalizeKeyword(utils.SmartStr(label)) # This might actually delete a keyword with the same name as the label (if # there is one). Usually the client keywords will be rebuilt after the # deletion of the old labels though, so this can only destroy historic # index data; normal search functionality will not be affected. keywords.append(keyword) keywords.append("label:%s" % keyword) self.RemoveKeywordsForName(self._ClientIdFromURN(client.urn), keywords)
def _batch_interp_with_gather_nd(x, x_ref_min, x_ref_max, y_ref, nd,
                                 fill_value, batch_dims):
    """N-D interpolation that works with leading batch dims."""
    dtype = x.dtype

    # In this function,
    #   x.shape = [A1, ..., An, D, nd], where n = batch_dims
    # and
    #   y_ref.shape = [A1, ..., An, C1, C2,..., Cnd, B1,...,BM]
    # y_ref[A1, ..., An, i1,...,ind] is a shape [B1,...,BM] Tensor with the value
    # at index [i1,...,ind] in the interpolation table.
    # x_ref_min and x_ref_max have shapes [A1, ..., An, nd].

    # ny[k] is number of y reference points in interp dim k.
    ny = tf.cast(tf.shape(input=y_ref)[batch_dims:batch_dims + nd], dtype)

    # Map [x_ref_min, x_ref_max] to [0, ny - 1].
    # This is the (fractional) index of x.
    # x_idx_unclipped[A1, ..., An, d, k] is the fractional index into dim k of
    # interpolation table for the dth x value.
    x_ref_min_expanded = tf.expand_dims(x_ref_min, axis=-2)
    x_ref_max_expanded = tf.expand_dims(x_ref_max, axis=-2)
    x_idx_unclipped = (ny - 1) * (x - x_ref_min_expanded) / (
        x_ref_max_expanded - x_ref_min_expanded)

    # Wherever x is NaN, x_idx_unclipped will be NaN as well.
    # Keep track of the nan indices here (so we can impute NaN later).
    # Also eliminate any NaN indices, since there is not NaN in 32bit.
    nan_idx = tf.math.is_nan(x_idx_unclipped)
    x_idx_unclipped = tf.where(nan_idx, tf.zeros_like(x_idx_unclipped),
                               x_idx_unclipped)

    # x_idx.shape = [A1, ..., An, D, nd]
    x_idx = tf.clip_by_value(x_idx_unclipped, tf.zeros((), dtype=dtype), ny - 1)

    # Get the index above and below x_idx.
    # Naively we could set idx_below = floor(x_idx), idx_above = ceil(x_idx),
    # however, this results in idx_below == idx_above whenever x is on a grid.
    # This in turn results in y_ref_below == y_ref_above, and then the gradient
    # at this point is zero. So here we "jitter" one of idx_below, idx_above,
    # so that they are at different values. This jittering does not affect the
    # interpolated value, but does make the gradient nonzero (unless of course
    # the y_ref values are the same).
    idx_below = tf.floor(x_idx)
    idx_above = tf.minimum(idx_below + 1, ny - 1)
    idx_below = tf.maximum(idx_above - 1, 0)

    # These are the values of y_ref corresponding to above/below indices.
    # idx_below_int32.shape = x.shape[:-1] + [nd]
    idx_below_int32 = tf.cast(idx_below, dtype=tf.int32)
    idx_above_int32 = tf.cast(idx_above, dtype=tf.int32)

    # idx_below_list is a length nd list of shape x.shape[:-1] int32 tensors.
    idx_below_list = tf.unstack(idx_below_int32, axis=-1)
    idx_above_list = tf.unstack(idx_above_int32, axis=-1)

    # Use t to get a convex combination of the below/above values.
    # t.shape = [A1, ..., An, D, nd]
    t = x_idx - idx_below

    # x, and tensors shaped like x, need to be added to, and selected with
    # (using tf.where) the output y. This requires appending singletons.
    def _expand_x_fn(tensor):
        # Reshape tensor to tensor.shape + [1] * M.
        extended_shape = tf.concat([
            tf.shape(input=tensor),
            tf.ones_like(tf.shape(input=y_ref)[batch_dims + nd:])
        ], axis=0)
        return tf.reshape(tensor, extended_shape)

    # Now, t.shape = [A1, ..., An, D, nd] + [1] * (rank(y_ref) - nd - batch_dims)
    t = _expand_x_fn(t)
    s = 1 - t

    # Re-insert NaN wherever x was NaN.
    nan_idx = _expand_x_fn(nan_idx)
    t = tf.where(nan_idx,
                 tf.fill(tf.shape(input=t), tf.constant(np.nan, dtype)), t)

    terms = []
    # Our work above has located x's fractional index inside a cube of above/below
    # indices. The distance to the below indices is t, and to the above indices
    # is s.
    # Drawing lines from x to the cube walls, we get 2**nd smaller cubes. Each
    # term in the result is a product of a reference point, gathered from y_ref,
    # multiplied by a volume. The volume is that of the cube opposite to the
    # reference point. E.g. if the reference point is below x in every axis, the
    # volume is that of the cube with corner above x in every axis, s[0]*...*s[nd]
    # We could probably do this with one massive gather, but that would be very
    # unreadable and un-debuggable. It also would create a large Tensor.
    for zero_ones_list in _binary_count(nd):
        gather_from_y_ref_idx = []
        opposite_volume_t_idx = []
        opposite_volume_s_idx = []
        for k, zero_or_one in enumerate(zero_ones_list):
            if zero_or_one == 0:
                # If the kth iterate has zero_or_one = 0,
                # Will gather from the "below" reference point along axis k.
                gather_from_y_ref_idx.append(idx_below_list[k])
                # Now append the index to gather for computing opposite_volume.
                # This could be done by initializing opposite_volume to 1, then here:
                #   opposite_volume *= tf.gather(s, indices=k, axis=tf.rank(x) - 1)
                # but that puts a gather in the "inner loop." Better to append the
                # index and do one larger gather down below.
                opposite_volume_s_idx.append(k)
            else:
                gather_from_y_ref_idx.append(idx_above_list[k])
                # Append an index to gather, having the same effect as
                #   opposite_volume *= tf.gather(t, indices=k, axis=tf.rank(x) - 1)
                opposite_volume_t_idx.append(k)

        # Compute opposite_volume (volume of cube opposite the ref point):
        # Recall t.shape = s.shape = [D, nd] + [1, ..., 1]
        # Gather from t and s along the "nd" axis, which is rank(x) - 1.
        ov_axis = tf.rank(x) - 1
        opposite_volume = (
            tf.reduce_prod(
                input_tensor=tf.gather(
                    t,
                    indices=tf.cast(opposite_volume_t_idx, dtype=tf.int32),
                    axis=ov_axis),
                axis=ov_axis) *
            tf.reduce_prod(
                input_tensor=tf.gather(
                    s,
                    indices=tf.cast(opposite_volume_s_idx, dtype=tf.int32),
                    axis=ov_axis),
                axis=ov_axis)
        )  # pyformat: disable

        y_ref_pt = tf.gather_nd(
            y_ref, tf.stack(gather_from_y_ref_idx, axis=-1),
            batch_dims=batch_dims)

        terms.append(y_ref_pt * opposite_volume)

    y = tf.math.add_n(terms)

    if tf.debugging.is_numeric_tensor(fill_value):
        # Recall x_idx_unclipped.shape = [D, nd],
        # so here we check if it was out of bounds in any of the nd dims.
        # Thus, oob_idx.shape = [D].
        oob_idx = tf.reduce_any(
            input_tensor=(x_idx_unclipped < 0) | (x_idx_unclipped > ny - 1),
            axis=-1)

        # Now, y.shape = [D, B1,...,BM], so we'll have to broadcast oob_idx.
        oob_idx = _expand_x_fn(oob_idx)  # Shape [D, 1,...,1]
        oob_idx |= tf.fill(tf.shape(input=y), False)
        y = tf.where(oob_idx, tf.fill(tf.shape(input=y), fill_value), y)

    return y
N-D interpolation that works with leading batch dims.
Below is the instruction that describes the task:
### Input:
N-D interpolation that works with leading batch dims.
### Response:
def _batch_interp_with_gather_nd(x, x_ref_min, x_ref_max, y_ref, nd,
                                 fill_value, batch_dims):
    """N-D interpolation that works with leading batch dims."""
    dtype = x.dtype

    # In this function,
    #   x.shape = [A1, ..., An, D, nd], where n = batch_dims
    # and
    #   y_ref.shape = [A1, ..., An, C1, C2,..., Cnd, B1,...,BM]
    # y_ref[A1, ..., An, i1,...,ind] is a shape [B1,...,BM] Tensor with the value
    # at index [i1,...,ind] in the interpolation table.
    # x_ref_min and x_ref_max have shapes [A1, ..., An, nd].

    # ny[k] is number of y reference points in interp dim k.
    ny = tf.cast(tf.shape(input=y_ref)[batch_dims:batch_dims + nd], dtype)

    # Map [x_ref_min, x_ref_max] to [0, ny - 1].
    # This is the (fractional) index of x.
    # x_idx_unclipped[A1, ..., An, d, k] is the fractional index into dim k of
    # interpolation table for the dth x value.
    x_ref_min_expanded = tf.expand_dims(x_ref_min, axis=-2)
    x_ref_max_expanded = tf.expand_dims(x_ref_max, axis=-2)
    x_idx_unclipped = (ny - 1) * (x - x_ref_min_expanded) / (
        x_ref_max_expanded - x_ref_min_expanded)

    # Wherever x is NaN, x_idx_unclipped will be NaN as well.
    # Keep track of the nan indices here (so we can impute NaN later).
    # Also eliminate any NaN indices, since there is not NaN in 32bit.
    nan_idx = tf.math.is_nan(x_idx_unclipped)
    x_idx_unclipped = tf.where(nan_idx, tf.zeros_like(x_idx_unclipped),
                               x_idx_unclipped)

    # x_idx.shape = [A1, ..., An, D, nd]
    x_idx = tf.clip_by_value(x_idx_unclipped, tf.zeros((), dtype=dtype), ny - 1)

    # Get the index above and below x_idx.
    # Naively we could set idx_below = floor(x_idx), idx_above = ceil(x_idx),
    # however, this results in idx_below == idx_above whenever x is on a grid.
    # This in turn results in y_ref_below == y_ref_above, and then the gradient
    # at this point is zero. So here we "jitter" one of idx_below, idx_above,
    # so that they are at different values. This jittering does not affect the
    # interpolated value, but does make the gradient nonzero (unless of course
    # the y_ref values are the same).
    idx_below = tf.floor(x_idx)
    idx_above = tf.minimum(idx_below + 1, ny - 1)
    idx_below = tf.maximum(idx_above - 1, 0)

    # These are the values of y_ref corresponding to above/below indices.
    # idx_below_int32.shape = x.shape[:-1] + [nd]
    idx_below_int32 = tf.cast(idx_below, dtype=tf.int32)
    idx_above_int32 = tf.cast(idx_above, dtype=tf.int32)

    # idx_below_list is a length nd list of shape x.shape[:-1] int32 tensors.
    idx_below_list = tf.unstack(idx_below_int32, axis=-1)
    idx_above_list = tf.unstack(idx_above_int32, axis=-1)

    # Use t to get a convex combination of the below/above values.
    # t.shape = [A1, ..., An, D, nd]
    t = x_idx - idx_below

    # x, and tensors shaped like x, need to be added to, and selected with
    # (using tf.where) the output y. This requires appending singletons.
    def _expand_x_fn(tensor):
        # Reshape tensor to tensor.shape + [1] * M.
        extended_shape = tf.concat([
            tf.shape(input=tensor),
            tf.ones_like(tf.shape(input=y_ref)[batch_dims + nd:])
        ], axis=0)
        return tf.reshape(tensor, extended_shape)

    # Now, t.shape = [A1, ..., An, D, nd] + [1] * (rank(y_ref) - nd - batch_dims)
    t = _expand_x_fn(t)
    s = 1 - t

    # Re-insert NaN wherever x was NaN.
    nan_idx = _expand_x_fn(nan_idx)
    t = tf.where(nan_idx,
                 tf.fill(tf.shape(input=t), tf.constant(np.nan, dtype)), t)

    terms = []
    # Our work above has located x's fractional index inside a cube of above/below
    # indices. The distance to the below indices is t, and to the above indices
    # is s.
    # Drawing lines from x to the cube walls, we get 2**nd smaller cubes. Each
    # term in the result is a product of a reference point, gathered from y_ref,
    # multiplied by a volume. The volume is that of the cube opposite to the
    # reference point. E.g. if the reference point is below x in every axis, the
    # volume is that of the cube with corner above x in every axis, s[0]*...*s[nd]
    # We could probably do this with one massive gather, but that would be very
    # unreadable and un-debuggable. It also would create a large Tensor.
    for zero_ones_list in _binary_count(nd):
        gather_from_y_ref_idx = []
        opposite_volume_t_idx = []
        opposite_volume_s_idx = []
        for k, zero_or_one in enumerate(zero_ones_list):
            if zero_or_one == 0:
                # If the kth iterate has zero_or_one = 0,
                # Will gather from the "below" reference point along axis k.
                gather_from_y_ref_idx.append(idx_below_list[k])
                # Now append the index to gather for computing opposite_volume.
                # This could be done by initializing opposite_volume to 1, then here:
                #   opposite_volume *= tf.gather(s, indices=k, axis=tf.rank(x) - 1)
                # but that puts a gather in the "inner loop." Better to append the
                # index and do one larger gather down below.
                opposite_volume_s_idx.append(k)
            else:
                gather_from_y_ref_idx.append(idx_above_list[k])
                # Append an index to gather, having the same effect as
                #   opposite_volume *= tf.gather(t, indices=k, axis=tf.rank(x) - 1)
                opposite_volume_t_idx.append(k)

        # Compute opposite_volume (volume of cube opposite the ref point):
        # Recall t.shape = s.shape = [D, nd] + [1, ..., 1]
        # Gather from t and s along the "nd" axis, which is rank(x) - 1.
        ov_axis = tf.rank(x) - 1
        opposite_volume = (
            tf.reduce_prod(
                input_tensor=tf.gather(
                    t,
                    indices=tf.cast(opposite_volume_t_idx, dtype=tf.int32),
                    axis=ov_axis),
                axis=ov_axis) *
            tf.reduce_prod(
                input_tensor=tf.gather(
                    s,
                    indices=tf.cast(opposite_volume_s_idx, dtype=tf.int32),
                    axis=ov_axis),
                axis=ov_axis)
        )  # pyformat: disable

        y_ref_pt = tf.gather_nd(
            y_ref, tf.stack(gather_from_y_ref_idx, axis=-1),
            batch_dims=batch_dims)

        terms.append(y_ref_pt * opposite_volume)

    y = tf.math.add_n(terms)

    if tf.debugging.is_numeric_tensor(fill_value):
        # Recall x_idx_unclipped.shape = [D, nd],
        # so here we check if it was out of bounds in any of the nd dims.
        # Thus, oob_idx.shape = [D].
        oob_idx = tf.reduce_any(
            input_tensor=(x_idx_unclipped < 0) | (x_idx_unclipped > ny - 1),
            axis=-1)

        # Now, y.shape = [D, B1,...,BM], so we'll have to broadcast oob_idx.
        oob_idx = _expand_x_fn(oob_idx)  # Shape [D, 1,...,1]
        oob_idx |= tf.fill(tf.shape(input=y), False)
        y = tf.where(oob_idx, tf.fill(tf.shape(input=y), fill_value), y)

    return y
async def get_first_search_result(self, term: str):
    """Get first search result.

    This function will parse the information from the link that
    search_novel_updates returns and then return it as a dictionary

    :param term: The novel to search for and parse
    """
    # Uses the other method in the class
    # to search the search page for the actual page that we want
    to_parse = await self.get_search_page(term)
    async with self.session.get(to_parse) as response:
        # If the response is OK
        if response.status == 200:
            # The information to parse
            parse_info = BeautifulSoup(await response.text(), 'lxml')
            # Artists,
            # defined up here so we can account for if it is None,
            # e.g. for web novels etc.
            artists = parse_info.find('a', class_='genre', id='artiststag')
            # English publisher,
            # defined here so we can account for it if None,
            # e.g. for works unlicensed in English
            english_publisher = parse_info.find('a', class_='genre',
                                                id='myepub')
            # Publisher,
            # defined here so we can account for it if it's None,
            # e.g. not published
            publisher = parse_info.find('a', class_='genre', id='myopub')
            # Accounting for if Artists/English Publisher/Publisher is None
            if artists is not None:
                artists = artists.string
            if english_publisher is not None:
                try:
                    english_publisher = english_publisher.children.string
                except AttributeError:
                    # english publisher's children tag is not string.
                    english_publisher = list(english_publisher.children)
                    if len(english_publisher) == 1:
                        english_publisher = english_publisher[0]
            if publisher is not None:
                publisher = publisher.string
            # The data to return to the user, in a dictionary
            no_img_found_url = 'http://www.novelupdates.com/img/noimagefound.jpg'
            data = {
                'title': self._get_title(parse_info=parse_info),
                'cover': (
                    None
                    if parse_info.find('img').get('src') == no_img_found_url
                    else parse_info.find('img').get('src')
                ),
                'type': parse_info.find('a', class_='genre type').string,
                'genre': (
                    [x.string for x in
                     list(parse_info.find_all('div',
                                              id='seriesgenre')[0].children)
                     if len(x.string.strip()) > 0]
                ),
                'tags': (
                    [x.string for x in
                     list(parse_info.find_all('div',
                                              id='showtags')[0].children)
                     if len(x.string.strip()) > 0]
                ),
                'language': parse_info.find('a', class_='genre lang').string,
                'authors': list(
                    set([x.string
                         for x in parse_info.find_all('a', id='authtag')])
                ),
                'artists': artists,
                'year': parse_info.find('div', id='edityear').string.strip(),
                'novel_status': self._get_novel_status(parse_info=parse_info),
                'licensed': (
                    True
                    if parse_info.find(
                        'div', id='showlicensed').string.strip() == 'Yes'
                    else False
                ),
                'completely_translated': (
                    True
                    if len(list(parse_info.find(
                        'div', id='showtranslated').descendants)) > 1
                    else False
                ),
                'publisher': publisher,
                'english_publisher': english_publisher,
                'description': (
                    ' '.join(
                        [x.string.strip() for x in
                         list(parse_info.find(
                             'div', id='editdescription').children)
                         if x.string.strip()]
                    )
                ),
                'aliases': self._get_aliases(parse_info=parse_info),
                'link': to_parse,
                'related_series': self._get_related_series(
                    parse_info=parse_info),
            }
            # Returning the dictionary with all of the information
            # from novelupdates that we parsed
            return data
        else:
            # Raise an error with the response status
            raise aiohttp.ClientResponseError(response.status)
Get first search result.

This function will parse the information from the link that
search_novel_updates returns and then return it as a dictionary

:param term: The novel to search for and parse
Below is the instruction that describes the task:
### Input:
Get first search result.

This function will parse the information from the link that
search_novel_updates returns and then return it as a dictionary

:param term: The novel to search for and parse
### Response:
async def get_first_search_result(self, term: str):
    """Get first search result.

    This function will parse the information from the link that
    search_novel_updates returns and then return it as a dictionary

    :param term: The novel to search for and parse
    """
    # Uses the other method in the class
    # to search the search page for the actual page that we want
    to_parse = await self.get_search_page(term)
    async with self.session.get(to_parse) as response:
        # If the response is OK
        if response.status == 200:
            # The information to parse
            parse_info = BeautifulSoup(await response.text(), 'lxml')
            # Artists,
            # defined up here so we can account for if it is None,
            # e.g. for web novels etc.
            artists = parse_info.find('a', class_='genre', id='artiststag')
            # English publisher,
            # defined here so we can account for it if None,
            # e.g. for works unlicensed in English
            english_publisher = parse_info.find('a', class_='genre',
                                                id='myepub')
            # Publisher,
            # defined here so we can account for it if it's None,
            # e.g. not published
            publisher = parse_info.find('a', class_='genre', id='myopub')
            # Accounting for if Artists/English Publisher/Publisher is None
            if artists is not None:
                artists = artists.string
            if english_publisher is not None:
                try:
                    english_publisher = english_publisher.children.string
                except AttributeError:
                    # english publisher's children tag is not string.
                    english_publisher = list(english_publisher.children)
                    if len(english_publisher) == 1:
                        english_publisher = english_publisher[0]
            if publisher is not None:
                publisher = publisher.string
            # The data to return to the user, in a dictionary
            no_img_found_url = 'http://www.novelupdates.com/img/noimagefound.jpg'
            data = {
                'title': self._get_title(parse_info=parse_info),
                'cover': (
                    None
                    if parse_info.find('img').get('src') == no_img_found_url
                    else parse_info.find('img').get('src')
                ),
                'type': parse_info.find('a', class_='genre type').string,
                'genre': (
                    [x.string for x in
                     list(parse_info.find_all('div',
                                              id='seriesgenre')[0].children)
                     if len(x.string.strip()) > 0]
                ),
                'tags': (
                    [x.string for x in
                     list(parse_info.find_all('div',
                                              id='showtags')[0].children)
                     if len(x.string.strip()) > 0]
                ),
                'language': parse_info.find('a', class_='genre lang').string,
                'authors': list(
                    set([x.string
                         for x in parse_info.find_all('a', id='authtag')])
                ),
                'artists': artists,
                'year': parse_info.find('div', id='edityear').string.strip(),
                'novel_status': self._get_novel_status(parse_info=parse_info),
                'licensed': (
                    True
                    if parse_info.find(
                        'div', id='showlicensed').string.strip() == 'Yes'
                    else False
                ),
                'completely_translated': (
                    True
                    if len(list(parse_info.find(
                        'div', id='showtranslated').descendants)) > 1
                    else False
                ),
                'publisher': publisher,
                'english_publisher': english_publisher,
                'description': (
                    ' '.join(
                        [x.string.strip() for x in
                         list(parse_info.find(
                             'div', id='editdescription').children)
                         if x.string.strip()]
                    )
                ),
                'aliases': self._get_aliases(parse_info=parse_info),
                'link': to_parse,
                'related_series': self._get_related_series(
                    parse_info=parse_info),
            }
            # Returning the dictionary with all of the information
            # from novelupdates that we parsed
            return data
        else:
            # Raise an error with the response status
            raise aiohttp.ClientResponseError(response.status)
def create_fwrule(kwargs=None, call=None):
    '''
    Create a GCE firewall rule. The 'default' network is used if not
    specified.

    CLI Example:

    .. code-block:: bash

        salt-cloud -f create_fwrule gce name=allow-http allow=tcp:80
    '''
    if call != 'function':
        raise SaltCloudSystemExit(
            'The create_fwrule function must be called with -f or --function.'
        )

    if not kwargs or 'name' not in kwargs:
        log.error(
            'A name must be specified when creating a firewall rule.'
        )
        return False
    if 'allow' not in kwargs:
        log.error(
            'Must use "allow" to specify allowed protocols/ports.'
        )
        return False

    name = kwargs['name']
    network_name = kwargs.get('network', 'default')
    allow = _parse_allow(kwargs['allow'])
    src_range = kwargs.get('src_range', '0.0.0.0/0')
    src_tags = kwargs.get('src_tags', None)
    dst_tags = kwargs.get('dst_tags', None)

    if src_range:
        src_range = src_range.split(',')
    if src_tags:
        src_tags = src_tags.split(',')
    if dst_tags:
        dst_tags = dst_tags.split(',')

    conn = get_conn()

    __utils__['cloud.fire_event'](
        'event',
        'create firewall',
        'salt/cloud/firewall/creating',
        args={
            'name': name,
            'network': network_name,
            'allow': kwargs['allow'],
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    fwrule = conn.ex_create_firewall(
        name, allow,
        network=network_name,
        source_ranges=src_range,
        source_tags=src_tags,
        target_tags=dst_tags
    )

    __utils__['cloud.fire_event'](
        'event',
        'created firewall',
        'salt/cloud/firewall/created',
        args={
            'name': name,
            'network': network_name,
            'allow': kwargs['allow'],
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    return _expand_item(fwrule)
Create a GCE firewall rule. The 'default' network is used if not
specified.

CLI Example:

.. code-block:: bash

    salt-cloud -f create_fwrule gce name=allow-http allow=tcp:80
Below is the instruction that describes the task:
### Input:
Create a GCE firewall rule. The 'default' network is used if not
specified.

CLI Example:

.. code-block:: bash

    salt-cloud -f create_fwrule gce name=allow-http allow=tcp:80
### Response:
def create_fwrule(kwargs=None, call=None):
    '''
    Create a GCE firewall rule. The 'default' network is used if not
    specified.

    CLI Example:

    .. code-block:: bash

        salt-cloud -f create_fwrule gce name=allow-http allow=tcp:80
    '''
    if call != 'function':
        raise SaltCloudSystemExit(
            'The create_fwrule function must be called with -f or --function.'
        )

    if not kwargs or 'name' not in kwargs:
        log.error(
            'A name must be specified when creating a firewall rule.'
        )
        return False
    if 'allow' not in kwargs:
        log.error(
            'Must use "allow" to specify allowed protocols/ports.'
        )
        return False

    name = kwargs['name']
    network_name = kwargs.get('network', 'default')
    allow = _parse_allow(kwargs['allow'])
    src_range = kwargs.get('src_range', '0.0.0.0/0')
    src_tags = kwargs.get('src_tags', None)
    dst_tags = kwargs.get('dst_tags', None)

    if src_range:
        src_range = src_range.split(',')
    if src_tags:
        src_tags = src_tags.split(',')
    if dst_tags:
        dst_tags = dst_tags.split(',')

    conn = get_conn()

    __utils__['cloud.fire_event'](
        'event',
        'create firewall',
        'salt/cloud/firewall/creating',
        args={
            'name': name,
            'network': network_name,
            'allow': kwargs['allow'],
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    fwrule = conn.ex_create_firewall(
        name, allow,
        network=network_name,
        source_ranges=src_range,
        source_tags=src_tags,
        target_tags=dst_tags
    )

    __utils__['cloud.fire_event'](
        'event',
        'created firewall',
        'salt/cloud/firewall/created',
        args={
            'name': name,
            'network': network_name,
            'allow': kwargs['allow'],
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    return _expand_item(fwrule)
def get_user_venues(self, id, **data):
    """
    GET /users/:id/venues/
    Returns a paginated response of :format:`venue` objects that are owned
    by the user.
    """
    return self.get("/users/{0}/venues/".format(id), data=data)
GET /users/:id/venues/
Returns a paginated response of :format:`venue` objects that are owned
by the user.
Below is the instruction that describes the task:
### Input:
GET /users/:id/venues/
Returns a paginated response of :format:`venue` objects that are owned
by the user.
### Response:
def get_user_venues(self, id, **data):
    """
    GET /users/:id/venues/
    Returns a paginated response of :format:`venue` objects that are owned
    by the user.
    """
    return self.get("/users/{0}/venues/".format(id), data=data)
def dcnm_network_create_event(self, network_info):
    """Process network create event from DCNM."""
    # 1. Add network info to database before sending request to
    # neutron to create the network.

    # Check if network is already created.
    pre_seg_id = network_info.get('segmentation_id')
    pre_project_name = network_info.get('project_name')
    pre_partition_name = network_info.get('partition_name')
    if not pre_seg_id or not pre_partition_name or not pre_project_name:
        LOG.error('Invalid network event: %s', network_info)
        return

    # Check if partition name is the one that openstack created.
    if pre_partition_name != self.cfg.dcnm.default_partition_name:
        LOG.error('Failed to create network. Partition %(part)s is '
                  'not %(os_part)s which is created by openstack.',
                  {'part': pre_partition_name,
                   'os_part': self.cfg.dcnm.default_partition_name})
        return

    query_net = self.get_network_by_segid(pre_seg_id)
    if query_net:
        # The network is already created no need to process the event.
        LOG.info('dcnm_network_create_event: network %(name)s was '
                 'created. Ignoring processing the event.',
                 {'name': query_net.name})
        return

    dcnm_net_info = self.dcnm_client.get_network(pre_project_name,
                                                 pre_seg_id)
    if not dcnm_net_info:
        LOG.info('No network details for %(org)s and %(segid)s',
                 {'org': pre_project_name, 'segid': pre_seg_id})
        return

    net_id = utils.get_uuid()
    pseg_id = dcnm_net_info.get('segmentId')
    seg_id = self._get_segmentation_id(net_id, pseg_id, 'DCNM')
    cfgp = dcnm_net_info.get('profileName')
    net_name = dcnm_net_info.get('networkName')
    fwd_mod = self.dcnm_client.config_profile_fwding_mode_get(cfgp)
    tenant_name = dcnm_net_info.get('organizationName')
    tenant_id = self.get_project_id(tenant_name)

    # Get the subnet details.
    subnet = dcnm_net_info.get('dhcpScope')
    if not subnet:
        # The dhcpScope is not provided. Calculating the cidr based on
        # gateway ip and netmask.
        gw_addr = dcnm_net_info.get('gateway')
        net_mask = dcnm_net_info.get('netmaskLength')
        cidr = utils.make_cidr(gw_addr, net_mask)
        if not cidr:
            LOG.error('Failed to create network: '
                      'cidr is None for %(gw)s %(mask)s',
                      {'gw': gw_addr, 'mask': net_mask})
            return
        subnet = dict(gateway=gw_addr, subnet=cidr)

    # Check if parameters are provided.
    if not (net_name and tenant_id and seg_id and subnet):
        LOG.error('Invalid value: network %(name)s tenant_id '
                  '%(tenant_id)s segmentation_id %(seg_id)s '
                  'subnet %(subnet)s.',
                  {'name': net_name, 'tenant_id': tenant_id,
                   'seg_id': seg_id, 'subnet': subnet})
        return

    # Update network cache and add the network to the database.
    net_ext_name = self.cfg.dcnm.dcnm_net_ext
    self.network[net_id] = dict(segmentation_id=seg_id,
                                config_profile=cfgp,
                                fwd_mod=fwd_mod,
                                tenant_id=tenant_id,
                                name=net_name + net_ext_name,
                                id=net_id,
                                source='DCNM')
    self.add_network_db(net_id, self.network[net_id], 'DCNM',
                        constants.RESULT_SUCCESS)

    # 2. Send network create request to neutron
    try:
        # With create_network (called below), the same request comes as
        # notification and it will be processed in the
        # create_network_event. The request should not be processed as it
        # is already processed here.
        # The only way to decide whether it is for a new network or not is
        # the segmentation_id (DCNM does not have uuid for network) which
        # is unique. For that reason it is needed to send segmentation_id
        # when creating network in openstack.
        # Moreover, we are using network_type=local and for that reason
        # provider:segmentation_id cannot be added as parameter when
        # creating network. One solution is to embed segmentation_id in the
        # network name. Then, when processing the notification, if the
        # request is from DCNM, the segmentation_id will be extracted from
        # network name. With that create_network_event can decide to
        # process or deny an event.
        updated_net_name = net_name + net_ext_name + str(seg_id)
        body = {'network': {'name': updated_net_name,
                            'tenant_id': tenant_id,
                            'admin_state_up': True}}
        dcnm_net = self.neutronclient.create_network(
            body=body).get('network')
        net_id = dcnm_net.get('id')
    except Exception as exc:
        # Failed to create network, do clean up.
        # Remove the entry from database and local cache.
        del self.network[net_id]
        self.delete_network_db(net_id)
        LOG.exception('dcnm_network_create_event: Failed to create '
                      '%(network)s. Reason %(err)s.',
                      {'network': body, 'err': str(exc)})
        return

    LOG.debug('dcnm_network_create_event: Created network %(network)s', (
        body))

    # 3. Send subnet create request to neutron.
    pool = subnet.get('ipRange')
    allocation_pools = []
    if pool:
        allocation_pools = [{'start': s, 'end': e}
                            for s, e in [p.split('-')
                                         for p in pool.split(',')]]
    try:
        body = {'subnet': {'cidr': subnet.get('subnet'),
                           'gateway_ip': subnet.get('gateway'),
                           'ip_version': 4,
                           'network_id': net_id,
                           'tenant_id': tenant_id,
                           'enable_dhcp': not self.dcnm_dhcp,
                           'allocation_pools': allocation_pools,
                           }}
        if not self.dcnm_dhcp:
            body.get('subnet').pop('allocation_pools')
        # Send request to create subnet in neutron.
        LOG.debug('Creating subnet %(subnet)s for DCNM request.', body)
        dcnm_subnet = self.neutronclient.create_subnet(
            body=body).get('subnet')
        subnet_id = dcnm_subnet.get('id')
        # Update subnet cache.
        self.subnet[subnet_id] = {}
        self.subnet[subnet_id].update(body.get('subnet'))
    except Exception as exc:
        # Failed to create network, do clean up if necessary.
        LOG.exception('Failed to create subnet %(subnet)s for DCNM '
                      'request. Error %(err)s',
                      {'subnet': body['subnet'], 'err': str(exc)})

    LOG.debug('dcnm_network_create_event: Created subnet %(subnet)s', (
        body))
Process network create event from DCNM.
Below is the the instruction that describes the task: ### Input: Process network create event from DCNM. ### Response: def dcnm_network_create_event(self, network_info): """Process network create event from DCNM.""" # 1. Add network info to database before sending request to # neutron to create the network. # Check if network is already created. pre_seg_id = network_info.get('segmentation_id') pre_project_name = network_info.get('project_name') pre_partition_name = network_info.get('partition_name') if not pre_seg_id or not pre_partition_name or not pre_project_name: LOG.error('Invalid network event: %s', network_info) return # Check if partition name is the one that openstack created. if pre_partition_name != self.cfg.dcnm.default_partition_name: LOG.error('Failed to create network. Partition %(part)s is ' 'not %(os_part)s which is created by openstack.', {'part': pre_partition_name, 'os_part': self.cfg.dcnm.default_partition_name}) return query_net = self.get_network_by_segid(pre_seg_id) if query_net: # The network is already created no need to process the event. LOG.info('dcnm_network_create_event: network %(name)s was ' 'created. Ignoring processing the event.', {'name': query_net.name}) return dcnm_net_info = self.dcnm_client.get_network(pre_project_name, pre_seg_id) if not dcnm_net_info: LOG.info('No network details for %(org)s and %(segid)s', {'org': pre_project_name, 'segid': pre_seg_id}) return net_id = utils.get_uuid() pseg_id = dcnm_net_info.get('segmentId') seg_id = self._get_segmentation_id(net_id, pseg_id, 'DCNM') cfgp = dcnm_net_info.get('profileName') net_name = dcnm_net_info.get('networkName') fwd_mod = self.dcnm_client.config_profile_fwding_mode_get(cfgp) tenant_name = dcnm_net_info.get('organizationName') tenant_id = self.get_project_id(tenant_name) # Get the subnet details. subnet = dcnm_net_info.get('dhcpScope') if not subnet: # The dhcpScope is not provided. Calculating the cidr based on # gateway ip and netmask. 
gw_addr = dcnm_net_info.get('gateway') net_mask = dcnm_net_info.get('netmaskLength') cidr = utils.make_cidr(gw_addr, net_mask) if not cidr: LOG.error('Failed to create network: ' 'cidr is None for %(gw)s %(mask)s', {'gw': gw_addr, 'mask': net_mask}) return subnet = dict(gateway=gw_addr, subnet=cidr) # Check if parameters are provided. if not (net_name and tenant_id and seg_id and subnet): LOG.error('Invalid value: network %(name)s tenant_id ' '%(tenant_id)s segmentation_id %(seg_id)s ' 'subnet %(subnet)s.', {'name': net_name, 'tenant_id': tenant_id, 'seg_id': seg_id, 'subnet': subnet}) return # Update network cache and add the network to the database. net_ext_name = self.cfg.dcnm.dcnm_net_ext self.network[net_id] = dict(segmentation_id=seg_id, config_profile=cfgp, fwd_mod=fwd_mod, tenant_id=tenant_id, name=net_name + net_ext_name, id=net_id, source='DCNM') self.add_network_db(net_id, self.network[net_id], 'DCNM', constants.RESULT_SUCCESS) # 2. Send network create request to neutron try: # With create_network (called below), the same request comes as # notification and it will be processed in the # create_network_event. The request should not be processed as it # is already processed here. # The only way to decide whether it is for a new network or not is # the segmentation_id (DCNM does not have uuid for network) which # is unique. For that reason it is needed to send segmentation_id # when creating network in openstack. # Moreover, we are using network_type=local and for that reason # provider:segmentation_id cannot be added as parameter when # creating network. One solution is to embed segmentation_id in the # network name. Then, when processing the notification, if the # request is from DCNM, the segmentation_id will be extracted from # network name. With that create_network_event can decide to # process or deny an event. 
updated_net_name = net_name + net_ext_name + str(seg_id) body = {'network': {'name': updated_net_name, 'tenant_id': tenant_id, 'admin_state_up': True}} dcnm_net = self.neutronclient.create_network( body=body).get('network') net_id = dcnm_net.get('id') except Exception as exc: # Failed to create network, do clean up. # Remove the entry from database and local cache. del self.network[net_id] self.delete_network_db(net_id) LOG.exception('dcnm_network_create_event: Failed to create ' '%(network)s. Reason %(err)s.', {'network': body, 'err': str(exc)}) return LOG.debug('dcnm_network_create_event: Created network %(network)s', ( body)) # 3. Send subnet create request to neutron. pool = subnet.get('ipRange') allocation_pools = [] if pool: allocation_pools = [{'start': s, 'end': e} for s, e in [p.split('-') for p in pool.split(',')]] try: body = {'subnet': {'cidr': subnet.get('subnet'), 'gateway_ip': subnet.get('gateway'), 'ip_version': 4, 'network_id': net_id, 'tenant_id': tenant_id, 'enable_dhcp': not self.dcnm_dhcp, 'allocation_pools': allocation_pools, }} if not self.dcnm_dhcp: body.get('subnet').pop('allocation_pools') # Send request to create subnet in neutron. LOG.debug('Creating subnet %(subnet)s for DCNM request.', body) dcnm_subnet = self.neutronclient.create_subnet( body=body).get('subnet') subnet_id = dcnm_subnet.get('id') # Update subnet cache. self.subnet[subnet_id] = {} self.subnet[subnet_id].update(body.get('subnet')) except Exception as exc: # Failed to create network, do clean up if necessary. LOG.exception('Failed to create subnet %(subnet)s for DCNM ' 'request. Error %(err)s', {'subnet': body['subnet'], 'err': str(exc)}) LOG.debug('dcnm_network_create_event: Created subnet %(subnet)s', ( body))
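The handler above leans on a `utils.make_cidr` helper that is not shown in this sample; the sketch below is an assumption of what such a helper might do, built only on the stdlib `ipaddress` module (the name, signature, and None-on-failure convention are inferred from how the handler uses it):

```python
import ipaddress

def make_cidr(gw_addr, net_mask):
    # Derive the network CIDR from a gateway address and a netmask length,
    # returning None on bad input -- mirroring the None check in the handler.
    try:
        net = ipaddress.ip_network('{}/{}'.format(gw_addr, net_mask),
                                   strict=False)
    except ValueError:
        return None
    return str(net)

print(make_cidr('10.0.0.1', 24))   # 10.0.0.0/24
print(make_cidr('not-an-ip', 24))  # None
```

`strict=False` is what lets a host address such as the gateway stand in for its network; with the default `strict=True`, `10.0.0.1/24` would raise because host bits are set.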
def licenses( ctx, summary=False, from_classifier=False, with_system=False, with_authors=False, with_urls=False, ): """ List dependency licenses. """ licenses_command = "pip-licenses --order=license" report.info(ctx, "package.licenses", "listing licenses of package dependencies") if summary: report.debug(ctx, "package.licenses", "summarizing licenses") licenses_command += " --summary" if from_classifier: report.debug(ctx, "package.licenses", "reporting from classifiers") licenses_command += " --from-classifier" if with_system: report.debug(ctx, "package.licenses", "including system packages") licenses_command += " --with-system" if with_authors: report.debug(ctx, "package.licenses", "including package authors") licenses_command += " --with-authors" if with_urls: report.debug(ctx, "package.licenses", "including package urls") licenses_command += " --with-urls" ctx.run(licenses_command)
List dependency licenses.
Below is the instruction that describes the task: ### Input: List dependency licenses. ### Response: def licenses( ctx, summary=False, from_classifier=False, with_system=False, with_authors=False, with_urls=False, ): """ List dependency licenses. """ licenses_command = "pip-licenses --order=license" report.info(ctx, "package.licenses", "listing licenses of package dependencies") if summary: report.debug(ctx, "package.licenses", "summarizing licenses") licenses_command += " --summary" if from_classifier: report.debug(ctx, "package.licenses", "reporting from classifiers") licenses_command += " --from-classifier" if with_system: report.debug(ctx, "package.licenses", "including system packages") licenses_command += " --with-system" if with_authors: report.debug(ctx, "package.licenses", "including package authors") licenses_command += " --with-authors" if with_urls: report.debug(ctx, "package.licenses", "including package urls") licenses_command += " --with-urls" ctx.run(licenses_command)
def _derY(self,x,y): ''' Returns the derivative with respect to y of the interpolated function at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY. ''' x_pos, y_pos = self.findSector(x,y) alpha, beta = self.findCoords(x,y,x_pos,y_pos) # Get four corners data for each point xA = self.x_values[x_pos,y_pos] xB = self.x_values[x_pos+1,y_pos] xC = self.x_values[x_pos,y_pos+1] xD = self.x_values[x_pos+1,y_pos+1] yA = self.y_values[x_pos,y_pos] yB = self.y_values[x_pos+1,y_pos] yC = self.y_values[x_pos,y_pos+1] yD = self.y_values[x_pos+1,y_pos+1] fA = self.f_values[x_pos,y_pos] fB = self.f_values[x_pos+1,y_pos] fC = self.f_values[x_pos,y_pos+1] fD = self.f_values[x_pos+1,y_pos+1] # Calculate components of the alpha,beta --> x,y delta translation matrix alpha_x = (1-beta)*(xB-xA) + beta*(xD-xC) alpha_y = (1-beta)*(yB-yA) + beta*(yD-yC) beta_x = (1-alpha)*(xC-xA) + alpha*(xD-xB) beta_y = (1-alpha)*(yC-yA) + alpha*(yD-yB) # Invert the delta translation matrix into x,y --> alpha,beta det = alpha_x*beta_y - beta_x*alpha_y y_alpha = -beta_x/det y_beta = alpha_x/det # Calculate the derivative of f w.r.t. alpha and beta dfda = (1-beta)*(fB-fA) + beta*(fD-fC) dfdb = (1-alpha)*(fC-fA) + alpha*(fD-fB) # Calculate the derivative with respect to y (and return it) dfdy = y_alpha*dfda + y_beta*dfdb return dfdy
Returns the derivative with respect to y of the interpolated function at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
Below is the instruction that describes the task: ### Input: Returns the derivative with respect to y of the interpolated function at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY. ### Response: def _derY(self,x,y): ''' Returns the derivative with respect to y of the interpolated function at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY. ''' x_pos, y_pos = self.findSector(x,y) alpha, beta = self.findCoords(x,y,x_pos,y_pos) # Get four corners data for each point xA = self.x_values[x_pos,y_pos] xB = self.x_values[x_pos+1,y_pos] xC = self.x_values[x_pos,y_pos+1] xD = self.x_values[x_pos+1,y_pos+1] yA = self.y_values[x_pos,y_pos] yB = self.y_values[x_pos+1,y_pos] yC = self.y_values[x_pos,y_pos+1] yD = self.y_values[x_pos+1,y_pos+1] fA = self.f_values[x_pos,y_pos] fB = self.f_values[x_pos+1,y_pos] fC = self.f_values[x_pos,y_pos+1] fD = self.f_values[x_pos+1,y_pos+1] # Calculate components of the alpha,beta --> x,y delta translation matrix alpha_x = (1-beta)*(xB-xA) + beta*(xD-xC) alpha_y = (1-beta)*(yB-yA) + beta*(yD-yC) beta_x = (1-alpha)*(xC-xA) + alpha*(xD-xB) beta_y = (1-alpha)*(yC-yA) + alpha*(yD-yB) # Invert the delta translation matrix into x,y --> alpha,beta det = alpha_x*beta_y - beta_x*alpha_y y_alpha = -beta_x/det y_beta = alpha_x/det # Calculate the derivative of f w.r.t. alpha and beta dfda = (1-beta)*(fB-fA) + beta*(fD-fC) dfdb = (1-alpha)*(fC-fA) + alpha*(fD-fB) # Calculate the derivative with respect to y (and return it) dfdy = y_alpha*dfda + y_beta*dfdb return dfdy
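The chain-rule construction in `_derY` can be sanity-checked on the simplest geometry, a rectangular unit cell, where `alpha = x` and `beta = y` so the translation matrix is the identity and the y-derivative collapses to the `dfdb` term. A minimal numeric check with illustrative corner values:

```python
def bilinear(x, y, fA, fB, fC, fD):
    # Bilinear interpolant on the unit square with the corner layout above:
    # A=(0,0), B=(1,0), C=(0,1), D=(1,1).
    return fA*(1 - x)*(1 - y) + fB*x*(1 - y) + fC*(1 - x)*y + fD*x*y

def dfdy_unit_cell(x, fA, fB, fC, fD):
    # On a rectangular unit cell y_alpha = 0 and y_beta = 1, so dfdy
    # reduces to dfdb = (1-alpha)*(fC-fA) + alpha*(fD-fB).
    return (1 - x)*(fC - fA) + x*(fD - fB)

fA, fB, fC, fD = 0.0, 1.0, 2.0, 4.0
analytic = dfdy_unit_cell(0.5, fA, fB, fC, fD)
h = 1e-6
numeric = (bilinear(0.5, 0.5 + h, fA, fB, fC, fD)
           - bilinear(0.5, 0.5 - h, fA, fB, fC, fD)) / (2 * h)
print(analytic)  # 2.5
```

Because the interpolant is linear in y at fixed x, the central difference agrees with the analytic value to rounding error.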
def _find_pair(self, protocol, remote_candidate): """ Find a candidate pair in the check list. """ for pair in self._check_list: if (pair.protocol == protocol and pair.remote_candidate == remote_candidate): return pair return None
Find a candidate pair in the check list.
Below is the instruction that describes the task: ### Input: Find a candidate pair in the check list. ### Response: def _find_pair(self, protocol, remote_candidate): """ Find a candidate pair in the check list. """ for pair in self._check_list: if (pair.protocol == protocol and pair.remote_candidate == remote_candidate): return pair return None
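Outside its class, the same linear-scan lookup can be sketched with plain objects (the class and field names here are illustrative stand-ins, not part of the original API):

```python
class CandidatePair:
    def __init__(self, protocol, remote_candidate):
        self.protocol = protocol
        self.remote_candidate = remote_candidate

def find_pair(check_list, protocol, remote_candidate):
    # Return the first pair matching both fields, or None -- the same
    # contract as the method above.
    for pair in check_list:
        if (pair.protocol == protocol
                and pair.remote_candidate == remote_candidate):
            return pair
    return None

pairs = [CandidatePair("udp", "cand-a"), CandidatePair("udp", "cand-b")]
match = find_pair(pairs, "udp", "cand-b")
print(match.remote_candidate)             # cand-b
print(find_pair(pairs, "tcp", "cand-a"))  # None
```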
def _add_differential_expression_attributes(self, att_ind_start, att_mappings): """Add differential expression information to the attribute mapping dictionary. :param int att_ind_start: Start index for enumerating the attributes. :param dict att_mappings: Dictionary of mappings between vertices and enumerated attributes. :return: End index for attribute enumeration. """ up_regulated_ind = self.graph.vs.select(up_regulated_eq=True).indices down_regulated_ind = self.graph.vs.select(down_regulated_eq=True).indices rest_ind = self.graph.vs.select(diff_expressed_eq=False).indices self._add_attribute_values(att_ind_start + 1, att_mappings, up_regulated_ind) self._add_attribute_values(att_ind_start + 2, att_mappings, down_regulated_ind) self._add_attribute_values(att_ind_start + 3, att_mappings, rest_ind) return att_ind_start + 4
Add differential expression information to the attribute mapping dictionary. :param int att_ind_start: Start index for enumerating the attributes. :param dict att_mappings: Dictionary of mappings between vertices and enumerated attributes. :return: End index for attribute enumeration.
Below is the instruction that describes the task: ### Input: Add differential expression information to the attribute mapping dictionary. :param int att_ind_start: Start index for enumerating the attributes. :param dict att_mappings: Dictionary of mappings between vertices and enumerated attributes. :return: End index for attribute enumeration. ### Response: def _add_differential_expression_attributes(self, att_ind_start, att_mappings): """Add differential expression information to the attribute mapping dictionary. :param int att_ind_start: Start index for enumerating the attributes. :param dict att_mappings: Dictionary of mappings between vertices and enumerated attributes. :return: End index for attribute enumeration. """ up_regulated_ind = self.graph.vs.select(up_regulated_eq=True).indices down_regulated_ind = self.graph.vs.select(down_regulated_eq=True).indices rest_ind = self.graph.vs.select(diff_expressed_eq=False).indices self._add_attribute_values(att_ind_start + 1, att_mappings, up_regulated_ind) self._add_attribute_values(att_ind_start + 2, att_mappings, down_regulated_ind) self._add_attribute_values(att_ind_start + 3, att_mappings, rest_ind) return att_ind_start + 4
def update_udi(self): """Update udi.""" self.chain.connection.log("Parsing inventory") # TODO: Maybe validate if udi is complete self.udi = parse_inventory(self.inventory_text)
Update udi.
Below is the instruction that describes the task: ### Input: Update udi. ### Response: def update_udi(self): """Update udi.""" self.chain.connection.log("Parsing inventory") # TODO: Maybe validate if udi is complete self.udi = parse_inventory(self.inventory_text)
def wd_db996(self, value=None): """ Corresponds to IDD Field `wd_db996` most frequent wind direction corresponding to mean wind speed coincident with 99.6% dry-bulb temperature degrees from north (east = 90 deg) Args: value (float): value for IDD Field `wd_db996` Unit: deg if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value """ if value is not None: try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float ' 'for field `wd_db996`'.format(value)) self._wd_db996 = value
Corresponds to IDD Field `wd_db996` most frequent wind direction corresponding to mean wind speed coincident with 99.6% dry-bulb temperature degrees from north (east = 90 deg) Args: value (float): value for IDD Field `wd_db996` Unit: deg if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
Below is the instruction that describes the task: ### Input: Corresponds to IDD Field `wd_db996` most frequent wind direction corresponding to mean wind speed coincident with 99.6% dry-bulb temperature degrees from north (east = 90 deg) Args: value (float): value for IDD Field `wd_db996` Unit: deg if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value ### Response: def wd_db996(self, value=None): """ Corresponds to IDD Field `wd_db996` most frequent wind direction corresponding to mean wind speed coincident with 99.6% dry-bulb temperature degrees from north (east = 90 deg) Args: value (float): value for IDD Field `wd_db996` Unit: deg if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value """ if value is not None: try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float ' 'for field `wd_db996`'.format(value)) self._wd_db996 = value
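The setter above follows a validate-then-assign pattern that recurs across these IDD field setters; stripped of the class, it can be sketched as a free function (the function name is illustrative, and None passes through unchecked as a missing value, exactly as in the setter):

```python
def validate_idd_float(value, field_name):
    # Treat None as a missing value and pass it through unchecked.
    if value is None:
        return None
    try:
        return float(value)
    except ValueError:
        raise ValueError('value {} must be of type float '
                         'for field `{}`'.format(value, field_name))

print(validate_idd_float('180', 'wd_db996'))  # 180.0
print(validate_idd_float(None, 'wd_db996'))   # None
```

Note that `float()` also accepts ints and numeric strings, so callers can hand the raw parsed field straight to the validator.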
def send_exit_status(self, status): """ Send the exit status of an executed command to the client. (This really only makes sense in server mode.) Many clients expect to get some sort of status code back from an executed command after it completes. @param status: the exit code of the process @type status: int @since: 1.2 """ # in many cases, the channel will not still be open here. # that's fine. m = Message() m.add_byte(chr(MSG_CHANNEL_REQUEST)) m.add_int(self.remote_chanid) m.add_string('exit-status') m.add_boolean(False) m.add_int(status) self.transport._send_user_message(m)
Send the exit status of an executed command to the client. (This really only makes sense in server mode.) Many clients expect to get some sort of status code back from an executed command after it completes. @param status: the exit code of the process @type status: int @since: 1.2
Below is the instruction that describes the task: ### Input: Send the exit status of an executed command to the client. (This really only makes sense in server mode.) Many clients expect to get some sort of status code back from an executed command after it completes. @param status: the exit code of the process @type status: int @since: 1.2 ### Response: def send_exit_status(self, status): """ Send the exit status of an executed command to the client. (This really only makes sense in server mode.) Many clients expect to get some sort of status code back from an executed command after it completes. @param status: the exit code of the process @type status: int @since: 1.2 """ # in many cases, the channel will not still be open here. # that's fine. m = Message() m.add_byte(chr(MSG_CHANNEL_REQUEST)) m.add_int(self.remote_chanid) m.add_string('exit-status') m.add_boolean(False) m.add_int(status) self.transport._send_user_message(m)
def find_connected_pores(self, throats=[], flatten=False, mode='union'): r""" Return a list of pores connected to the given list of throats Parameters ---------- throats : array_like List of throat numbers flatten : boolean, optional If ``True`` a 1D array of unique pore numbers is returned. If ``False`` (default) each location in the returned array contains a sub-array of neighboring pores for each input throat, in the order they were sent. mode : string Specifies logic to filter the resulting list. Options are: **'or'** : (default) All neighbors of the input throats. This is also known as the 'union' in set theory or 'any' in boolean logic. Both keywords are accepted and treated as 'or'. **'xor'** : Only neighbors of one and only one input throat. This is useful for finding the sites that are not shared by any of the input throats. **'xnor'** : Neighbors that are shared by two or more input throats. This is equivalent to finding all neighbors with 'or', minus those found with 'xor', and is useful for finding neighbors that the inputs have in common. **'and'** : Only neighbors shared by all input throats. This is also known as 'intersection' in set theory and (sometimes) as 'all' in boolean logic. Both keywords are accepted and treated as 'and'. Returns ------- 1D array (if ``flatten`` is ``True``) or ndarray of arrays (if ``flatten`` is ``False``) Examples -------- >>> import openpnm as op >>> pn = op.network.Cubic(shape=[5, 5, 5]) >>> Ps = pn.find_connected_pores(throats=[0, 1]) >>> print(Ps) [[0 1] [1 2]] >>> Ps = pn.find_connected_pores(throats=[0, 1], flatten=True) >>> print(Ps) [0 1 2] """ Ts = self._parse_indices(throats) am = self.get_adjacency_matrix(fmt='coo') pores = topotools.find_connected_sites(bonds=Ts, am=am, flatten=flatten, logic=mode) return pores
r""" Return a list of pores connected to the given list of throats Parameters ---------- throats : array_like List of throat numbers flatten : boolean, optional If ``True`` a 1D array of unique pore numbers is returned. If ``False`` (default) each location in the returned array contains a sub-array of neighboring pores for each input throat, in the order they were sent. mode : string Specifies logic to filter the resulting list. Options are: **'or'** : (default) All neighbors of the input throats. This is also known as the 'union' in set theory or 'any' in boolean logic. Both keywords are accepted and treated as 'or'. **'xor'** : Only neighbors of one and only one input throat. This is useful for finding the sites that are not shared by any of the input throats. **'xnor'** : Neighbors that are shared by two or more input throats. This is equivalent to finding all neighbors with 'or', minus those found with 'xor', and is useful for finding neighbors that the inputs have in common. **'and'** : Only neighbors shared by all input throats. This is also known as 'intersection' in set theory and (sometimes) as 'all' in boolean logic. Both keywords are accepted and treated as 'and'. Returns ------- 1D array (if ``flatten`` is ``True``) or ndarray of arrays (if ``flatten`` is ``False``) Examples -------- >>> import openpnm as op >>> pn = op.network.Cubic(shape=[5, 5, 5]) >>> Ps = pn.find_connected_pores(throats=[0, 1]) >>> print(Ps) [[0 1] [1 2]] >>> Ps = pn.find_connected_pores(throats=[0, 1], flatten=True) >>> print(Ps) [0 1 2]
Below is the instruction that describes the task: ### Input: r""" Return a list of pores connected to the given list of throats Parameters ---------- throats : array_like List of throat numbers flatten : boolean, optional If ``True`` a 1D array of unique pore numbers is returned. If ``False`` (default) each location in the returned array contains a sub-array of neighboring pores for each input throat, in the order they were sent. mode : string Specifies logic to filter the resulting list. Options are: **'or'** : (default) All neighbors of the input throats. This is also known as the 'union' in set theory or 'any' in boolean logic. Both keywords are accepted and treated as 'or'. **'xor'** : Only neighbors of one and only one input throat. This is useful for finding the sites that are not shared by any of the input throats. **'xnor'** : Neighbors that are shared by two or more input throats. This is equivalent to finding all neighbors with 'or', minus those found with 'xor', and is useful for finding neighbors that the inputs have in common. **'and'** : Only neighbors shared by all input throats. This is also known as 'intersection' in set theory and (sometimes) as 'all' in boolean logic. Both keywords are accepted and treated as 'and'. Returns ------- 1D array (if ``flatten`` is ``True``) or ndarray of arrays (if ``flatten`` is ``False``) Examples -------- >>> import openpnm as op >>> pn = op.network.Cubic(shape=[5, 5, 5]) >>> Ps = pn.find_connected_pores(throats=[0, 1]) >>> print(Ps) [[0 1] [1 2]] >>> Ps = pn.find_connected_pores(throats=[0, 1], flatten=True) >>> print(Ps) [0 1 2] ### Response: def find_connected_pores(self, throats=[], flatten=False, mode='union'): r""" Return a list of pores connected to the given list of throats Parameters ---------- throats : array_like List of throat numbers flatten : boolean, optional If ``True`` a 1D array of unique pore numbers is returned.
If ``False`` (default) each location in the returned array contains a sub-array of neighboring pores for each input throat, in the order they were sent. mode : string Specifies logic to filter the resulting list. Options are: **'or'** : (default) All neighbors of the input throats. This is also known as the 'union' in set theory or 'any' in boolean logic. Both keywords are accepted and treated as 'or'. **'xor'** : Only neighbors of one and only one input throat. This is useful for finding the sites that are not shared by any of the input throats. **'xnor'** : Neighbors that are shared by two or more input throats. This is equivalent to finding all neighbors with 'or', minus those found with 'xor', and is useful for finding neighbors that the inputs have in common. **'and'** : Only neighbors shared by all input throats. This is also known as 'intersection' in set theory and (sometimes) as 'all' in boolean logic. Both keywords are accepted and treated as 'and'. Returns ------- 1D array (if ``flatten`` is ``True``) or ndarray of arrays (if ``flatten`` is ``False``) Examples -------- >>> import openpnm as op >>> pn = op.network.Cubic(shape=[5, 5, 5]) >>> Ps = pn.find_connected_pores(throats=[0, 1]) >>> print(Ps) [[0 1] [1 2]] >>> Ps = pn.find_connected_pores(throats=[0, 1], flatten=True) >>> print(Ps) [0 1 2] """ Ts = self._parse_indices(throats) am = self.get_adjacency_matrix(fmt='coo') pores = topotools.find_connected_sites(bonds=Ts, am=am, flatten=flatten, logic=mode) return pores
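The union/flatten behavior in the docstring example can be reproduced with a plain-list sketch, where `conns[t]` holds the pair of pores joined by throat `t` (no OpenPNM adjacency matrix involved; the function name and data layout are illustrative):

```python
def connected_pores(conns, throats, flatten=False):
    # conns[t] is the (pore_i, pore_j) pair joined by throat t.
    pairs = [list(conns[t]) for t in throats]
    if not flatten:
        return pairs
    # 'or'/'union' logic: unique pores over all selected throats.
    return sorted({p for pair in pairs for p in pair})

# Throats 0..2 of a 1D chain of pores 0-1-2-3.
conns = [(0, 1), (1, 2), (2, 3)]
print(connected_pores(conns, [0, 1]))                # [[0, 1], [1, 2]]
print(connected_pores(conns, [0, 1], flatten=True))  # [0, 1, 2]
```

This matches the docstring's two example outputs: nested pairs without flattening, unique sorted pores with it.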
def help_center_user_articles(self, id, **kwargs): "https://developer.zendesk.com/rest_api/docs/help_center/articles#list-articles" api_path = "/api/v2/help_center/users/{id}/articles.json" api_path = api_path.format(id=id) return self.call(api_path, **kwargs)
https://developer.zendesk.com/rest_api/docs/help_center/articles#list-articles
Below is the instruction that describes the task: ### Input: https://developer.zendesk.com/rest_api/docs/help_center/articles#list-articles ### Response: def help_center_user_articles(self, id, **kwargs): "https://developer.zendesk.com/rest_api/docs/help_center/articles#list-articles" api_path = "/api/v2/help_center/users/{id}/articles.json" api_path = api_path.format(id=id) return self.call(api_path, **kwargs)
def to_ele(x): "Convert and return the :class:`~xml.etree.ElementTree.Element` for the XML document *x*. If *x* is already an :class:`~xml.etree.ElementTree.Element` simply returns that." if sys.version < '3': return x if etree.iselement(x) else etree.fromstring(x, parser=parser) else: return x if etree.iselement(x) else etree.fromstring(x.encode('UTF-8'), parser=parser)
Convert and return the :class:`~xml.etree.ElementTree.Element` for the XML document *x*. If *x* is already an :class:`~xml.etree.ElementTree.Element` simply returns that.
Below is the instruction that describes the task: ### Input: Convert and return the :class:`~xml.etree.ElementTree.Element` for the XML document *x*. If *x* is already an :class:`~xml.etree.ElementTree.Element` simply returns that. ### Response: def to_ele(x): "Convert and return the :class:`~xml.etree.ElementTree.Element` for the XML document *x*. If *x* is already an :class:`~xml.etree.ElementTree.Element` simply returns that." if sys.version < '3': return x if etree.iselement(x) else etree.fromstring(x, parser=parser) else: return x if etree.iselement(x) else etree.fromstring(x.encode('UTF-8'), parser=parser)
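A Python 3-only version of the same idempotent conversion can be sketched with the stdlib parser, dropping the module-level `parser` instance and the Python 2 branch (this is a simplification of the original, not a drop-in replacement):

```python
import xml.etree.ElementTree as ET

def to_ele(x):
    # Return x unchanged if it is already an Element; otherwise parse it.
    return x if ET.iselement(x) else ET.fromstring(x)

root = to_ele('<config><item>1</item></config>')
print(root.tag)              # config
print(to_ele(root) is root)  # True
```

The `iselement` guard is what makes the function safe to call on either a raw XML string or an already-parsed tree.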