Dataset columns:
  code      : string, lengths 75 – 104k
  docstring : string, lengths 1 – 46.9k
  text      : string, lengths 164 – 112k
def delete_kubernetes_role(self, role, mount_point='kubernetes'):
    """DELETE /auth/<mount_point>/role/:role

    :param role: Name of the role.
    :type role: str
    :param mount_point: The "path" the k8s auth backend was mounted on.
        Vault currently defaults to "kubernetes".
    :type mount_point: str
    :return: Will be an empty body with a 204 status code upon success.
    :rtype: requests.Response
    """
    url = 'v1/auth/{0}/role/{1}'.format(mount_point, role)
    return self._adapter.delete(url)
DELETE /auth/<mount_point>/role/:role :param role: Name of the role. :type role: str :param mount_point: The "path" the k8s auth backend was mounted on. Vault currently defaults to "kubernetes". :type mount_point: str :return: Will be an empty body with a 204 status code upon success. :rtype: requests.Response
Below is the instruction that describes the task:
### Input:
DELETE /auth/<mount_point>/role/:role :param role: Name of the role. :type role: str :param mount_point: The "path" the k8s auth backend was mounted on. Vault currently defaults to "kubernetes". :type mount_point: str :return: Will be an empty body with a 204 status code upon success. :rtype: requests.Response
### Response:
def delete_kubernetes_role(self, role, mount_point='kubernetes'):
    """DELETE /auth/<mount_point>/role/:role

    :param role: Name of the role.
    :type role: str
    :param mount_point: The "path" the k8s auth backend was mounted on.
        Vault currently defaults to "kubernetes".
    :type mount_point: str
    :return: Will be an empty body with a 204 status code upon success.
    :rtype: requests.Response
    """
    url = 'v1/auth/{0}/role/{1}'.format(mount_point, role)
    return self._adapter.delete(url)
def _TSat_P(P):
    """Define the saturated line, T=f(P)

    Parameters
    ----------
    P : float
        Pressure, [MPa]

    Returns
    -------
    T : float
        Temperature, [K]

    Notes
    -----
    Raise :class:`NotImplementedError` if input isn't in limit:

        * 0.00061121 ≤ P ≤ 22.064

    References
    ----------
    IAPWS, Revised Release on the IAPWS Industrial Formulation 1997 for the
    Thermodynamic Properties of Water and Steam August 2007,
    http://www.iapws.org/relguide/IF97-Rev.html, Eq 31

    Examples
    --------
    >>> _TSat_P(10)
    584.149488
    """
    # Check input parameters
    if P < 611.212677/1e6 or P > 22.064:
        raise NotImplementedError("Incoming out of bound")

    n = [0, 0.11670521452767E+04, -0.72421316703206E+06, -0.17073846940092E+02,
         0.12020824702470E+05, -0.32325550322333E+07, 0.14915108613530E+02,
         -0.48232657361591E+04, 0.40511340542057E+06, -0.23855557567849E+00,
         0.65017534844798E+03]
    beta = P**0.25
    E = beta**2 + n[3]*beta + n[6]
    F = n[1]*beta**2 + n[4]*beta + n[7]
    G = n[2]*beta**2 + n[5]*beta + n[8]
    D = 2*G/(-F - (F**2 - 4*E*G)**0.5)
    return (n[10] + D - ((n[10]+D)**2 - 4*(n[9]+n[10]*D))**0.5)/2
Define the saturated line, T=f(P) Parameters ---------- P : float Pressure, [MPa] Returns ------- T : float Temperature, [K] Notes ------ Raise :class:`NotImplementedError` if input isn't in limit: * 0.00061121 ≤ P ≤ 22.064 References ---------- IAPWS, Revised Release on the IAPWS Industrial Formulation 1997 for the Thermodynamic Properties of Water and Steam August 2007, http://www.iapws.org/relguide/IF97-Rev.html, Eq 31 Examples -------- >>> _TSat_P(10) 584.149488
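The doctest above can be verified with a standalone run; the coefficients and formula below are restated verbatim from `_TSat_P` (under a hypothetical module-level name, `tsat_p`) so the snippet executes on its own:

```python
def tsat_p(P):
    # IF97 Eq 31 backward equation; coefficients copied from _TSat_P above.
    n = [0, 0.11670521452767E+04, -0.72421316703206E+06, -0.17073846940092E+02,
         0.12020824702470E+05, -0.32325550322333E+07, 0.14915108613530E+02,
         -0.48232657361591E+04, 0.40511340542057E+06, -0.23855557567849E+00,
         0.65017534844798E+03]
    beta = P**0.25
    E = beta**2 + n[3]*beta + n[6]
    F = n[1]*beta**2 + n[4]*beta + n[7]
    G = n[2]*beta**2 + n[5]*beta + n[8]
    D = 2*G/(-F - (F**2 - 4*E*G)**0.5)
    return (n[10] + D - ((n[10]+D)**2 - 4*(n[9]+n[10]*D))**0.5)/2

# Saturation temperature at 10 MPa, in K (IF97 verification value)
print(round(tsat_p(10), 6))
```

The value 584.149488 K at 10 MPa is one of the published IF97 verification points, so it makes a good regression check for the implementation.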
def _get_team_results(self, team_result_html):
    """
    Extract the winning or losing team's name and abbreviation.

    Depending on which team's data field is passed (either the winner or
    loser), return the name and abbreviation of that team to denote which
    team won and which lost the game.

    Parameters
    ----------
    team_result_html : PyQuery object
        A PyQuery object representing either the winning or losing team's
        data field within the boxscore.

    Returns
    -------
    tuple
        Returns a tuple of the team's name followed by the abbreviation.
    """
    link = [i for i in team_result_html('td a').items()]
    # If there are no links, the boxscore is likely malformed and can't be
    # parsed. In this case, the boxscore should be skipped.
    if len(link) < 1:
        return None
    name, abbreviation = self._get_name(link[0])
    return name, abbreviation
Extract the winning or losing team's name and abbreviation. Depending on which team's data field is passed (either the winner or loser), return the name and abbreviation of that team to denote which team won and which lost the game. Parameters ---------- team_result_html : PyQuery object A PyQuery object representing either the winning or losing team's data field within the boxscore. Returns ------- tuple Returns a tuple of the team's name followed by the abbreviation.
def create_client(self, client_id=None, client_secret=None, uaa=None):
    """
    Create a client and add it to the manifest.

    :param client_id: The client id used to authenticate as a client in UAA.
    :param client_secret: The secret password used by a client to
        authenticate and generate a UAA token.
    :param uaa: The UAA to create the client with.
    """
    if not uaa:
        uaa = predix.admin.uaa.UserAccountAuthentication()

    # Client id and secret can be generated if not provided as arguments
    if not client_id:
        client_id = uaa._create_id()
    if not client_secret:
        client_secret = uaa._create_secret()

    uaa.create_client(client_id, client_secret)
    uaa.add_client_to_manifest(client_id, client_secret, self)
Create a client and add it to the manifest. :param client_id: The client id used to authenticate as a client in UAA. :param client_secret: The secret password used by a client to authenticate and generate a UAA token. :param uaa: The UAA to create client with
def add_failure(self):
    """
    Add a failure event with the current timestamp.
    """
    failure_time = time.time()
    if not self.first_failure_time:
        self.first_failure_time = failure_time
    self.failures.append(failure_time)
Add a failure event with the current timestamp.
def apply_mask(matrix, mask_pattern, matrix_size, is_encoding_region):
    """\
    Applies the provided mask pattern on the `matrix`.

    ISO/IEC 18004:2015(E) -- 7.8.2 Data mask patterns (page 50)

    :param tuple matrix: A tuple of bytearrays
    :param mask_pattern: A mask pattern (a function)
    :param int matrix_size: width or height of the matrix
    :param is_encoding_region: A function which returns ``True`` iff the
        row index / col index belongs to the data region.
    """
    for i in range(matrix_size):
        for j in range(matrix_size):
            if is_encoding_region(i, j):
                matrix[i][j] ^= mask_pattern(i, j)
\ Applies the provided mask pattern on the `matrix`. ISO/IEC 18004:2015(E) -- 7.8.2 Data mask patterns (page 50) :param tuple matrix: A tuple of bytearrays :param mask_pattern: A mask pattern (a function) :param int matrix_size: width or height of the matrix :param is_encoding_region: A function which returns ``True`` iff the row index / col index belongs to the data region.
def expand_python_version(version):
    """
    Expand Python versions to all identifiers used on PyPI.

    >>> expand_python_version('3.5')
    ['3.5', 'py3', 'py2.py3', 'cp35']
    """
    if not re.match(r"^\d\.\d$", version):
        return [version]

    major, minor = version.split(".")
    patterns = [
        "{major}.{minor}",
        "cp{major}{minor}",
        "py{major}",
        "py{major}.{minor}",
        "py{major}{minor}",
        "source",
        "py2.py3",
    ]
    return set(pattern.format(major=major, minor=minor) for pattern in patterns)
Expand Python versions to all identifiers used on PyPI. >>> expand_python_version('3.5') ['3.5', 'py3', 'py2.py3', 'cp35']
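Note that the doctest in `expand_python_version` shows a four-element list, but the function actually returns a *set* covering all seven patterns. A standalone check (the function is restated here with its `re` import so it runs on its own):

```python
import re


def expand_python_version(version):
    """Expand a Python version to all identifiers used on PyPI."""
    if not re.match(r"^\d\.\d$", version):
        return [version]
    major, minor = version.split(".")
    patterns = ["{major}.{minor}", "cp{major}{minor}", "py{major}",
                "py{major}.{minor}", "py{major}{minor}", "source", "py2.py3"]
    return set(p.format(major=major, minor=minor) for p in patterns)


print(sorted(expand_python_version('3.5')))
# ['3.5', 'cp35', 'py2.py3', 'py3', 'py3.5', 'py35', 'source']
```

One quirk worth knowing: the `^\d\.\d$` pattern only matches single-digit minor versions, so `'3.10'` falls through the regex and comes back unexpanded as `['3.10']`.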
def rgb_to_sv(rgb):
    '''Convert an RGB image or array of RGB colors to saturation and value,
    returning each one as a separate 32-bit floating point array or value.
    '''
    if not isinstance(rgb, np.ndarray):
        rgb = np.array(rgb)

    axis = len(rgb.shape) - 1
    cmax = rgb.max(axis=axis).astype(np.float32)
    cmin = rgb.min(axis=axis).astype(np.float32)
    delta = cmax - cmin

    saturation = delta.astype(np.float32) / cmax.astype(np.float32)
    saturation = np.where(cmax == 0, 0, saturation)
    value = cmax / 255.0

    return saturation, value
Convert an RGB image or array of RGB colors to saturation and value, returning each one as a separate 32-bit floating point array or value.
def imap_async(self, func, iterable, chunksize=None, callback=None):
    """A variant of the imap() method which returns an ApplyResult object
    that provides an iterator (next method(timeout) available).

    If callback is specified then it should be a callable which accepts a
    single argument. When the resulting iterator becomes ready, callback
    is applied to it (unless the call failed). callback should complete
    immediately since otherwise the thread which handles the results will
    get blocked."""
    apply_result = ApplyResult(callback=callback)
    collector = OrderedResultCollector(apply_result, as_iterator=True)
    self._create_sequences(func, iterable, chunksize, collector)
    return apply_result
A variant of the imap() method which returns an ApplyResult object that provides an iterator (next method(timeout) available). If callback is specified then it should be a callable which accepts a single argument. When the resulting iterator becomes ready, callback is applied to it (unless the call failed). callback should complete immediately since otherwise the thread which handles the results will get blocked.
def uint16_gt(a: int, b: int) -> bool:
    """
    Return a > b under modular (wraparound) comparison of 16-bit
    unsigned integers.
    """
    half_mod = 0x8000
    return (((a < b) and ((b - a) > half_mod)) or
            ((a > b) and ((a - b) < half_mod)))
Return a > b.
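A quick sanity check of the wraparound semantics (the function is restated verbatim, minus type hints, so the snippet runs standalone): `a` counts as greater than `b` when it is less than half the modulus (0x8000) ahead of `b`, which lets counters compare correctly across the wrap at 0xFFFF.

```python
def uint16_gt(a, b):
    # "Greater than" under 16-bit serial-number arithmetic: a is newer
    # than b if it is within half the modulus ahead, allowing wraparound.
    half_mod = 0x8000
    return (((a < b) and ((b - a) > half_mod)) or
            ((a > b) and ((a - b) < half_mod)))


print(uint16_gt(5, 3))        # plain comparison -> True
print(uint16_gt(1, 0xFFFF))   # 1 is "after" 0xFFFF across the wrap -> True
print(uint16_gt(3, 5))        # -> False
```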
def render_html_tree(tree):
    """
    Renders the given HTML tree, and strips any wrapping that was applied
    in get_html_tree().

    You should avoid further processing of the given tree after calling
    this method because we modify namespaced tags here.
    """
    # Restore any tag names that were changed in get_html_tree()
    for el in tree.iter():
        if '__tag_name' in el.attrib:
            actual_tag_name = el.attrib.pop('__tag_name')
            el.tag = actual_tag_name

    html = lxml.html.tostring(tree, encoding='utf8').decode('utf8')
    return strip_wrapping(html)
Renders the given HTML tree, and strips any wrapping that was applied in get_html_tree(). You should avoid further processing of the given tree after calling this method because we modify namespaced tags here.
def is_pathogenic(pvs, ps_terms, pm_terms, pp_terms):
    """Check if the criteria for Pathogenic are fulfilled

    The following are descriptions of the Pathogenic classification from
    the ACMG paper:

    Pathogenic
      (i) 1 Very strong (PVS1) AND
        (a) ≥1 Strong (PS1–PS4) OR
        (b) ≥2 Moderate (PM1–PM6) OR
        (c) 1 Moderate (PM1–PM6) and 1 supporting (PP1–PP5) OR
        (d) ≥2 Supporting (PP1–PP5)
      (ii) ≥2 Strong (PS1–PS4) OR
      (iii) 1 Strong (PS1–PS4) AND
        (a) ≥3 Moderate (PM1–PM6) OR
        (b) 2 Moderate (PM1–PM6) AND ≥2 Supporting (PP1–PP5) OR
        (c) 1 Moderate (PM1–PM6) AND ≥4 supporting (PP1–PP5)

    Args:
        pvs(bool): Pathogenic Very Strong
        ps_terms(list(str)): Pathogenic Strong terms
        pm_terms(list(str)): Pathogenic Moderate terms
        pp_terms(list(str)): Pathogenic Supporting terms

    Returns:
        bool: if classification indicates Pathogenic level
    """
    if pvs:
        # Pathogenic (i)(a):
        if ps_terms:
            return True
        if pm_terms:
            # Pathogenic (i)(c):
            if pp_terms:
                return True
            # Pathogenic (i)(b):
            if len(pm_terms) >= 2:
                return True
        # Pathogenic (i)(d):
        if len(pp_terms) >= 2:
            return True

    if ps_terms:
        # Pathogenic (ii):
        if len(ps_terms) >= 2:
            return True
        # Pathogenic (iii)(a):
        if pm_terms:
            if len(pm_terms) >= 3:
                return True
            elif len(pm_terms) >= 2:
                if len(pp_terms) >= 2:
                    return True
            elif len(pp_terms) >= 4:
                return True
    return False
Check if the criteria for Pathogenic are fulfilled The following are descriptions of the Pathogenic classification from the ACMG paper: Pathogenic (i) 1 Very strong (PVS1) AND (a) ≥1 Strong (PS1–PS4) OR (b) ≥2 Moderate (PM1–PM6) OR (c) 1 Moderate (PM1–PM6) and 1 supporting (PP1–PP5) OR (d) ≥2 Supporting (PP1–PP5) (ii) ≥2 Strong (PS1–PS4) OR (iii) 1 Strong (PS1–PS4) AND (a)≥3 Moderate (PM1–PM6) OR (b)2 Moderate (PM1–PM6) AND ≥2 Supporting (PP1–PP5) OR (c)1 Moderate (PM1–PM6) AND ≥4 supporting (PP1–PP5) Args: pvs(bool): Pathogenic Very Strong ps_terms(list(str)): Pathogenic Strong terms pm_terms(list(str)): Pathogenic Moderate terms pp_terms(list(str)): Pathogenic Supporting terms Returns: bool: if classification indicates Pathogenic level
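The branch structure above maps one-to-one onto the ACMG combination rules, which makes it easy to spot-check. The classifier is restated here (with the docstring dropped and the rule labels as inline comments) so a few representative combinations can be run standalone:

```python
def is_pathogenic(pvs, ps_terms, pm_terms, pp_terms):
    # Restated from the function above for a standalone check.
    if pvs:
        if ps_terms:                      # (i)(a): PVS + >=1 Strong
            return True
        if pm_terms:
            if pp_terms:                  # (i)(c): PVS + 1 Moderate + 1 Supporting
                return True
            if len(pm_terms) >= 2:        # (i)(b): PVS + >=2 Moderate
                return True
        if len(pp_terms) >= 2:            # (i)(d): PVS + >=2 Supporting
            return True
    if ps_terms:
        if len(ps_terms) >= 2:            # (ii): >=2 Strong
            return True
        if pm_terms:
            if len(pm_terms) >= 3:        # (iii)(a)
                return True
            elif len(pm_terms) >= 2:
                if len(pp_terms) >= 2:    # (iii)(b)
                    return True
            elif len(pp_terms) >= 4:      # (iii)(c)
                return True
    return False


print(is_pathogenic(True, ['PS1'], [], []))          # (i)(a) -> True
print(is_pathogenic(False, ['PS1', 'PS2'], [], []))  # (ii)   -> True
print(is_pathogenic(False, [], ['PM1', 'PM2'], []))  # two Moderates alone -> False
```

The last call shows the important negative case: Moderate evidence on its own never reaches Pathogenic without a Very Strong or Strong criterion.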
def handle_markdown(value):
    md = markdown(
        value,
        extensions=[
            'markdown.extensions.fenced_code',
            'codehilite',
        ]
    )
    # For some unknown reason markdown wraps the value in <p> tags.
    # Currently there doesn't seem to be an extension to turn this off.
    open_tag = '<p>'
    close_tag = '</p>'
    if md.startswith(open_tag) and md.endswith(close_tag):
        md = md[len(open_tag):-len(close_tag)]
    return mark_safe(md)
For some unknown reason markdown wraps the value in <p> tags. Currently there doesn't seem to be an extension to turn this off.
def continuous(self, *args):
    """
    Set fields to be continuous.

    :rtype: DataFrame

    :Example:

    >>> # Table schema is create table test(f1 double, f2 string)
    >>> # Original continuity: f1=DISCRETE, f2=DISCRETE
    >>> # Now we want to set ``f1`` and ``f2`` into continuous
    >>> new_ds = df.continuous('f1 f2')
    """
    new_df = copy_df(self)
    fields = _render_field_set(args)
    self._assert_ml_fields_valid(*fields)
    new_df._perform_operation(op.FieldContinuityOperation(
        dict((_get_field_name(f), True) for f in fields)))
    return new_df
Set fields to be continuous. :rtype: DataFrame :Example: >>> # Table schema is create table test(f1 double, f2 string) >>> # Original continuity: f1=DISCRETE, f2=DISCRETE >>> # Now we want to set ``f1`` and ``f2`` into continuous >>> new_ds = df.continuous('f1 f2')
def weighted_average_to_nodes(x1, x2, data, interpolator):
    """
    Weighted average of scattered data to the nodal points of a
    triangulation using the barycentric coordinates as weightings.

    Parameters
    ----------
    x1, x2 : 1D arrays
        arrays of x,y or lon, lat (radians)
    data : 1D array
        of data to be lumped to the node locations
    interpolator : a stripy.Triangulation or stripy.sTriangulation object
        which defines the node locations and their triangulation

    Returns
    -------
    grid : 1D array
        containing the results of the weighted average
    norm : 1D array
        of the normalisation used to compute `grid`
    count : 1D int array
        of number of points that contribute anything to a given node
    """
    import numpy as np

    # The original bound the accumulator as `gridded_data` but summed into
    # `grid` below; the name is unified to `grid` so the function runs.
    grid = np.zeros(interpolator.npoints)
    norm = np.zeros(interpolator.npoints)
    # dtype=int rather than the removed np.int alias (NumPy >= 1.24)
    count = np.zeros(interpolator.npoints, dtype=int)

    bcc, nodes = interpolator.containing_simplex_and_bcc(x1, x2)

    # Beware vectorising the reduction operation !!
    for i in range(0, len(data)):
        grid[nodes[i][0]] += bcc[i][0] * data[i]
        grid[nodes[i][1]] += bcc[i][1] * data[i]
        grid[nodes[i][2]] += bcc[i][2] * data[i]

        norm[nodes[i][0]] += bcc[i][0]
        norm[nodes[i][1]] += bcc[i][1]
        norm[nodes[i][2]] += bcc[i][2]

        count[nodes[i][0]] += 1
        count[nodes[i][1]] += 1
        count[nodes[i][2]] += 1

    grid[np.where(norm > 0.0)] /= norm[np.where(norm > 0.0)]

    return grid, norm, count
Weighted average of scattered data to the nodal points of a triangulation using the barycentric coordinates as weightings. Parameters ---------- x1, x2 : 1D arrays arrays of x,y or lon, lat (radians) data : 1D array of data to be lumped to the node locations interpolator : a stripy.Triangulation or stripy.sTriangulation object which defines the node locations and their triangulation Returns ------- grid : 1D array containing the results of the weighted average norm : 1D array of the normalisation used to compute `grid` count : 1D int array of number of points that contribute anything to a given node
def drive(self) -> DriveChannel:
    """Return the primary drive channel of this qubit."""
    if self._drives:
        return self._drives[0]
    else:
        raise PulseError("No drive channels in q[%d]" % self._index)
Return the primary drive channel of this qubit.
def get_extent(self, filename, locations):
    """Obtain a SourceRange from this translation unit.

    The bounds of the SourceRange must ultimately be defined by a start
    and end SourceLocation. For the locations argument, you can pass:

      - 2 SourceLocation instances in a 2-tuple or list.
      - 2 int file offsets via a 2-tuple or list.
      - 2 2-tuple or lists of (line, column) pairs in a 2-tuple or list.

    e.g.

    get_extent('foo.c', (5, 10))
    get_extent('foo.c', ((1, 1), (1, 15)))
    """
    f = self.get_file(filename)

    if len(locations) < 2:
        raise Exception('Must pass object with at least 2 elements')

    start_location, end_location = locations

    if hasattr(start_location, '__len__'):
        start_location = SourceLocation.from_position(
            self, f, start_location[0], start_location[1])
    elif isinstance(start_location, int):
        start_location = SourceLocation.from_offset(self, f, start_location)

    if hasattr(end_location, '__len__'):
        end_location = SourceLocation.from_position(
            self, f, end_location[0], end_location[1])
    elif isinstance(end_location, int):
        end_location = SourceLocation.from_offset(self, f, end_location)

    assert isinstance(start_location, SourceLocation)
    assert isinstance(end_location, SourceLocation)

    return SourceRange.from_locations(start_location, end_location)
Obtain a SourceRange from this translation unit. The bounds of the SourceRange must ultimately be defined by a start and end SourceLocation. For the locations argument, you can pass: - 2 SourceLocation instances in a 2-tuple or list. - 2 int file offsets via a 2-tuple or list. - 2 2-tuple or lists of (line, column) pairs in a 2-tuple or list. e.g. get_extent('foo.c', (5, 10)) get_extent('foo.c', ((1, 1), (1, 15)))
Below is the instruction that describes the task: ### Input: Obtain a SourceRange from this translation unit. The bounds of the SourceRange must ultimately be defined by a start and end SourceLocation. For the locations argument, you can pass: - 2 SourceLocation instances in a 2-tuple or list. - 2 int file offsets via a 2-tuple or list. - 2 2-tuple or lists of (line, column) pairs in a 2-tuple or list. e.g. get_extent('foo.c', (5, 10)) get_extent('foo.c', ((1, 1), (1, 15))) ### Response: def get_extent(self, filename, locations):
        """Obtain a SourceRange from this translation unit.

        The bounds of the SourceRange must ultimately be defined by a start and
        end SourceLocation. For the locations argument, you can pass:

        - 2 SourceLocation instances in a 2-tuple or list.

        - 2 int file offsets via a 2-tuple or list.

        - 2 2-tuple or lists of (line, column) pairs in a 2-tuple or list.

        e.g.

        get_extent('foo.c', (5, 10))
        get_extent('foo.c', ((1, 1), (1, 15)))
        """
        f = self.get_file(filename)

        if len(locations) < 2:
            raise Exception('Must pass object with at least 2 elements')

        start_location, end_location = locations

        if hasattr(start_location, '__len__'):
            start_location = SourceLocation.from_position(self, f,
                start_location[0], start_location[1])
        elif isinstance(start_location, int):
            start_location = SourceLocation.from_offset(self, f,
                start_location)

        if hasattr(end_location, '__len__'):
            end_location = SourceLocation.from_position(self, f,
                end_location[0], end_location[1])
        elif isinstance(end_location, int):
            end_location = SourceLocation.from_offset(self, f, end_location)

        assert isinstance(start_location, SourceLocation)
        assert isinstance(end_location, SourceLocation)

        return SourceRange.from_locations(start_location, end_location)
async def add_alternative(self, alt, timeout=OTGW_DEFAULT_TIMEOUT): """ Add the specified Data-ID to the list of alternative commands to send to the boiler instead of a Data-ID that is known to be unsupported by the boiler. Alternative Data-IDs will always be sent to the boiler in a Read-Data request message with the data-value set to zero. The table of alternative Data-IDs is stored in non-volatile memory so it will persist even if the gateway has been powered off. Data-ID values from 1 to 255 are allowed. Return the ID that was added to the list, or None on failure. This method is a coroutine """ cmd = OTGW_CMD_ADD_ALT alt = int(alt) if alt < 1 or alt > 255: return None ret = await self._wait_for_cmd(cmd, alt, timeout) if ret is not None: return int(ret)
Add the specified Data-ID to the list of alternative commands to send to the boiler instead of a Data-ID that is known to be unsupported by the boiler. Alternative Data-IDs will always be sent to the boiler in a Read-Data request message with the data-value set to zero. The table of alternative Data-IDs is stored in non-volatile memory so it will persist even if the gateway has been powered off. Data-ID values from 1 to 255 are allowed. Return the ID that was added to the list, or None on failure. This method is a coroutine
Below is the instruction that describes the task: ### Input: Add the specified Data-ID to the list of alternative commands to send to the boiler instead of a Data-ID that is known to be unsupported by the boiler. Alternative Data-IDs will always be sent to the boiler in a Read-Data request message with the data-value set to zero. The table of alternative Data-IDs is stored in non-volatile memory so it will persist even if the gateway has been powered off. Data-ID values from 1 to 255 are allowed. Return the ID that was added to the list, or None on failure. This method is a coroutine ### Response: async def add_alternative(self, alt, timeout=OTGW_DEFAULT_TIMEOUT):
        """
        Add the specified Data-ID to the list of alternative commands
        to send to the boiler instead of a Data-ID that is known to be
        unsupported by the boiler. Alternative Data-IDs will always be
        sent to the boiler in a Read-Data request message with the
        data-value set to zero. The table of alternative Data-IDs is
        stored in non-volatile memory so it will persist even if the
        gateway has been powered off. Data-ID values from 1 to 255 are
        allowed.
        Return the ID that was added to the list, or None on failure.

        This method is a coroutine
        """
        cmd = OTGW_CMD_ADD_ALT
        alt = int(alt)
        if alt < 1 or alt > 255:
            return None
        ret = await self._wait_for_cmd(cmd, alt, timeout)
        if ret is not None:
            return int(ret)
def delete(self):
        """
        Adds a check to make sure that the snapshot is able to be deleted.
        """
        if self.status not in ("available", "error"):
            raise exc.SnapshotNotAvailable("Snapshot must be in 'available' "
                    "or 'error' status before deleting. Current status: %s"
                    % self.status)
        # When there are more than one snapshot for a given volume, attempting to
        # delete them all will throw a 409 exception. This will help by retrying
        # such an error once after a RETRY_INTERVAL second delay.
        try:
            super(CloudBlockStorageSnapshot, self).delete()
        except exc.ClientException as e:
            if "Request conflicts with in-progress 'DELETE" in str(e):
                time.sleep(RETRY_INTERVAL)
                # Try again; if it fails, oh, well...
                super(CloudBlockStorageSnapshot, self).delete()
Adds a check to make sure that the snapshot is able to be deleted.
Below is the instruction that describes the task: ### Input: Adds a check to make sure that the snapshot is able to be deleted. ### Response: def delete(self):
        """
        Adds a check to make sure that the snapshot is able to be deleted.
        """
        if self.status not in ("available", "error"):
            raise exc.SnapshotNotAvailable("Snapshot must be in 'available' "
                    "or 'error' status before deleting. Current status: %s"
                    % self.status)
        # When there are more than one snapshot for a given volume, attempting to
        # delete them all will throw a 409 exception. This will help by retrying
        # such an error once after a RETRY_INTERVAL second delay.
        try:
            super(CloudBlockStorageSnapshot, self).delete()
        except exc.ClientException as e:
            if "Request conflicts with in-progress 'DELETE" in str(e):
                time.sleep(RETRY_INTERVAL)
                # Try again; if it fails, oh, well...
                super(CloudBlockStorageSnapshot, self).delete()
def build(self): """Builds the barcode pattern from `self.ean`. :returns: The pattern as string :rtype: String """ code = _ean.EDGE[:] pattern = _ean.LEFT_PATTERN[int(self.ean[0])] for i, number in enumerate(self.ean[1:7]): code += _ean.CODES[pattern[i]][int(number)] code += _ean.MIDDLE for number in self.ean[7:]: code += _ean.CODES['C'][int(number)] code += _ean.EDGE return [code]
Builds the barcode pattern from `self.ean`. :returns: The pattern as string :rtype: String
Below is the instruction that describes the task: ### Input: Builds the barcode pattern from `self.ean`. :returns: The pattern as string :rtype: String ### Response: def build(self):
        """Builds the barcode pattern from `self.ean`.

        :returns: The pattern as string
        :rtype: String
        """
        code = _ean.EDGE[:]
        pattern = _ean.LEFT_PATTERN[int(self.ean[0])]
        for i, number in enumerate(self.ean[1:7]):
            code += _ean.CODES[pattern[i]][int(number)]
        code += _ean.MIDDLE
        for number in self.ean[7:]:
            code += _ean.CODES['C'][int(number)]
        code += _ean.EDGE
        return [code]
def has_option(self, section, option): """Check for the existence of a given option in a given section. If the specified `section' is None or an empty string, DEFAULT is assumed. If the specified `section' does not exist, returns False.""" if not section or section == self.default_section: option = self.optionxform(option) return option in self._defaults elif section not in self._sections: return False else: option = self.optionxform(option) return (option in self._sections[section] or option in self._defaults)
Check for the existence of a given option in a given section. If the specified `section' is None or an empty string, DEFAULT is assumed. If the specified `section' does not exist, returns False.
Below is the instruction that describes the task: ### Input: Check for the existence of a given option in a given section. If the specified `section' is None or an empty string, DEFAULT is assumed. If the specified `section' does not exist, returns False. ### Response: def has_option(self, section, option):
        """Check for the existence of a given option in a given section.
        If the specified `section' is None or an empty string, DEFAULT is
        assumed. If the specified `section' does not exist, returns False."""
        if not section or section == self.default_section:
            option = self.optionxform(option)
            return option in self._defaults
        elif section not in self._sections:
            return False
        else:
            option = self.optionxform(option)
            return (option in self._sections[section]
                    or option in self._defaults)
def start(self): """Start the game.""" # For old-fashioned players, accept five-letter truncations like # "inven" instead of insisting on full words like "inventory". for key, value in list(self.vocabulary.items()): if isinstance(key, str) and len(key) > 5: self.vocabulary[key[:5]] = value # Set things going. self.chest_room = self.rooms[114] self.bottle.contents = self.water self.yesno(self.messages[65], self.start2)
Start the game.
Below is the instruction that describes the task: ### Input: Start the game. ### Response: def start(self):
        """Start the game."""

        # For old-fashioned players, accept five-letter truncations like
        # "inven" instead of insisting on full words like "inventory".

        for key, value in list(self.vocabulary.items()):
            if isinstance(key, str) and len(key) > 5:
                self.vocabulary[key[:5]] = value

        # Set things going.

        self.chest_room = self.rooms[114]
        self.bottle.contents = self.water
        self.yesno(self.messages[65], self.start2)
def IsRunning(self): """Returns True if there's a currently running iteration of this job.""" current_urn = self.Get(self.Schema.CURRENT_FLOW_URN) if not current_urn: return False try: current_flow = aff4.FACTORY.Open( urn=current_urn, aff4_type=flow.GRRFlow, token=self.token, mode="r") except aff4.InstantiationError: # This isn't a flow, something went really wrong, clear it out. logging.error("Unable to open cron job run: %s", current_urn) self.DeleteAttribute(self.Schema.CURRENT_FLOW_URN) self.Flush() return False return current_flow.GetRunner().IsRunning()
Returns True if there's a currently running iteration of this job.
Below is the instruction that describes the task: ### Input: Returns True if there's a currently running iteration of this job. ### Response: def IsRunning(self):
    """Returns True if there's a currently running iteration of this job."""
    current_urn = self.Get(self.Schema.CURRENT_FLOW_URN)
    if not current_urn:
      return False
    try:
      current_flow = aff4.FACTORY.Open(
          urn=current_urn, aff4_type=flow.GRRFlow, token=self.token, mode="r")
    except aff4.InstantiationError:
      # This isn't a flow, something went really wrong, clear it out.
      logging.error("Unable to open cron job run: %s", current_urn)
      self.DeleteAttribute(self.Schema.CURRENT_FLOW_URN)
      self.Flush()
      return False

    return current_flow.GetRunner().IsRunning()
def _load_results(self, container_id): """ load results from recent build :return: BuildResults """ if self.temp_dir: dt = DockerTasker() # FIXME: load results only when requested # results_path = os.path.join(self.temp_dir, RESULTS_JSON) # df_path = os.path.join(self.temp_dir, 'Dockerfile') # try: # with open(results_path, 'r') as results_fp: # results = json.load(results_fp, cls=BuildResultsJSONDecoder) # except (IOError, OSError) as ex: # logger.error("Can't open results: '%s'", repr(ex)) # for l in self.dt.logs(self.build_container_id, stream=False): # logger.debug(l.strip()) # raise RuntimeError("Can't open results: '%s'" % repr(ex)) # results.dockerfile = open(df_path, 'r').read() results = BuildResults() results.build_logs = dt.logs(container_id, stream=False) results.container_id = container_id return results
load results from recent build :return: BuildResults
Below is the instruction that describes the task: ### Input: load results from recent build :return: BuildResults ### Response: def _load_results(self, container_id):
        """
        load results from recent build

        :return: BuildResults
        """
        if self.temp_dir:
            dt = DockerTasker()
            # FIXME: load results only when requested
            # results_path = os.path.join(self.temp_dir, RESULTS_JSON)
            # df_path = os.path.join(self.temp_dir, 'Dockerfile')
            # try:
            #     with open(results_path, 'r') as results_fp:
            #         results = json.load(results_fp, cls=BuildResultsJSONDecoder)
            # except (IOError, OSError) as ex:
            #     logger.error("Can't open results: '%s'", repr(ex))
            #     for l in self.dt.logs(self.build_container_id, stream=False):
            #         logger.debug(l.strip())
            #     raise RuntimeError("Can't open results: '%s'" % repr(ex))
            # results.dockerfile = open(df_path, 'r').read()
            results = BuildResults()
            results.build_logs = dt.logs(container_id, stream=False)
            results.container_id = container_id
            return results
def national_significant_number(numobj): """Gets the national significant number of a phone number. Note that a national significant number doesn't contain a national prefix or any formatting. Arguments: numobj -- The PhoneNumber object for which the national significant number is needed. Returns the national significant number of the PhoneNumber object passed in. """ # If leading zero(s) have been set, we prefix this now. Note this is not a # national prefix. national_number = U_EMPTY_STRING if numobj.italian_leading_zero: num_zeros = numobj.number_of_leading_zeros if num_zeros is None: num_zeros = 1 if num_zeros > 0: national_number = U_ZERO * num_zeros national_number += str(numobj.national_number) return national_number
Gets the national significant number of a phone number. Note that a national significant number doesn't contain a national prefix or any formatting. Arguments: numobj -- The PhoneNumber object for which the national significant number is needed. Returns the national significant number of the PhoneNumber object passed in.
Below is the instruction that describes the task: ### Input: Gets the national significant number of a phone number. Note that a national significant number doesn't contain a national prefix or any formatting. Arguments: numobj -- The PhoneNumber object for which the national significant number is needed. Returns the national significant number of the PhoneNumber object passed in. ### Response: def national_significant_number(numobj):
    """Gets the national significant number of a phone number.

    Note that a national significant number doesn't contain a national prefix
    or any formatting.

    Arguments:
    numobj -- The PhoneNumber object for which the national significant number
              is needed.

    Returns the national significant number of the PhoneNumber object passed
    in.
    """
    # If leading zero(s) have been set, we prefix this now. Note this is not a
    # national prefix.
    national_number = U_EMPTY_STRING
    if numobj.italian_leading_zero:
        num_zeros = numobj.number_of_leading_zeros
        if num_zeros is None:
            num_zeros = 1
        if num_zeros > 0:
            national_number = U_ZERO * num_zeros
    national_number += str(numobj.national_number)
    return national_number
async def get_vm(self, vm_id): ''' Get VM :arg vm_id: string :returns vm: object ''' result = await self.nova.servers.get(vm_id) return self._map_vm_structure(result["server"])
Get VM :arg vm_id: string :returns vm: object
Below is the instruction that describes the task: ### Input: Get VM :arg vm_id: string :returns vm: object ### Response: async def get_vm(self, vm_id):
        '''
        Get VM

        :arg vm_id: string
        :returns vm: object
        '''
        result = await self.nova.servers.get(vm_id)
        return self._map_vm_structure(result["server"])
def flatten(it): """ Flattens any iterable From: http://stackoverflow.com/questions/11503065/python-function-to-flatten-generator-containing-another-generator :param it: Iterator, iterator to flatten :return: Generator, A generator of the flattened values """ for x in it: if isinstance(x, collections.Iterable) and not isinstance(x, str): for y in flatten(x): yield y else: yield x
Flattens any iterable From: http://stackoverflow.com/questions/11503065/python-function-to-flatten-generator-containing-another-generator :param it: Iterator, iterator to flatten :return: Generator, A generator of the flattened values
Below is the instruction that describes the task: ### Input: Flattens any iterable From: http://stackoverflow.com/questions/11503065/python-function-to-flatten-generator-containing-another-generator :param it: Iterator, iterator to flatten :return: Generator, A generator of the flattened values ### Response: def flatten(it):
    """
    Flattens any iterable
    From: http://stackoverflow.com/questions/11503065/python-function-to-flatten-generator-containing-another-generator

    :param it: Iterator, iterator to flatten
    :return: Generator, A generator of the flattened values
    """
    for x in it:
        if isinstance(x, collections.Iterable) and not isinstance(x, str):
            for y in flatten(x):
                yield y
        else:
            yield x
def delete(self, infohash_list): """ Delete torrents. :param infohash_list: Single or list() of infohashes. """ data = self._process_infohash_list(infohash_list) return self._post('command/delete', data=data)
Delete torrents. :param infohash_list: Single or list() of infohashes.
Below is the instruction that describes the task: ### Input: Delete torrents. :param infohash_list: Single or list() of infohashes. ### Response: def delete(self, infohash_list):
        """
        Delete torrents.

        :param infohash_list: Single or list() of infohashes.
        """
        data = self._process_infohash_list(infohash_list)
        return self._post('command/delete', data=data)
def AddVSSProcessingOptions(self, argument_group):
    """Adds the VSS processing options to the argument group.

    Args:
      argument_group (argparse._ArgumentGroup): argparse argument group.
    """
    argument_group.add_argument(
        '--no_vss', '--no-vss', dest='no_vss', action='store_true',
        default=False, help=(
            'Do not scan for Volume Shadow Snapshots (VSS). This means that '
            'Volume Shadow Snapshots (VSS) are not processed.'))

    argument_group.add_argument(
        '--vss_only', '--vss-only', dest='vss_only', action='store_true',
        default=False, help=(
            'Do not process the current volume if Volume Shadow Snapshots '
            '(VSS) have been selected.'))

    argument_group.add_argument(
        '--vss_stores', '--vss-stores', dest='vss_stores', action='store',
        type=str, default=None, help=(
            'Define Volume Shadow Snapshots (VSS) stores that need to be '
            'processed. A range of stores can be defined as: "3..5". '
            'Multiple stores can be defined as: "1,3,5" (a list of comma '
            'separated values). Ranges and lists can also be combined as: '
            '"1,3..5". The first store is 1. All stores can be defined as: '
            '"all".'))
Adds the VSS processing options to the argument group. Args: argument_group (argparse._ArgumentGroup): argparse argument group.
Below is the instruction that describes the task: ### Input: Adds the VSS processing options to the argument group. Args: argument_group (argparse._ArgumentGroup): argparse argument group. ### Response: def AddVSSProcessingOptions(self, argument_group):
    """Adds the VSS processing options to the argument group.

    Args:
      argument_group (argparse._ArgumentGroup): argparse argument group.
    """
    argument_group.add_argument(
        '--no_vss', '--no-vss', dest='no_vss', action='store_true',
        default=False, help=(
            'Do not scan for Volume Shadow Snapshots (VSS). This means that '
            'Volume Shadow Snapshots (VSS) are not processed.'))

    argument_group.add_argument(
        '--vss_only', '--vss-only', dest='vss_only', action='store_true',
        default=False, help=(
            'Do not process the current volume if Volume Shadow Snapshots '
            '(VSS) have been selected.'))

    argument_group.add_argument(
        '--vss_stores', '--vss-stores', dest='vss_stores', action='store',
        type=str, default=None, help=(
            'Define Volume Shadow Snapshots (VSS) stores that need to be '
            'processed. A range of stores can be defined as: "3..5". '
            'Multiple stores can be defined as: "1,3,5" (a list of comma '
            'separated values). Ranges and lists can also be combined as: '
            '"1,3..5". The first store is 1. All stores can be defined as: '
            '"all".'))
def _request_login(self, login, password): """Sends Login request""" return self._request_internal("Login", login=login, password=password)
Sends Login request
Below is the instruction that describes the task: ### Input: Sends Login request ### Response: def _request_login(self, login, password):
        """Sends Login request"""
        return self._request_internal("Login",
                                      login=login,
                                      password=password)
def _send_stream_features(self): """Send stream <features/>. [receiving entity only]""" self.features = self._make_stream_features() self._write_element(self.features)
Send stream <features/>. [receiving entity only]
Below is the instruction that describes the task: ### Input: Send stream <features/>. [receiving entity only] ### Response: def _send_stream_features(self):
        """Send stream <features/>.

        [receiving entity only]"""
        self.features = self._make_stream_features()
        self._write_element(self.features)
def bounds_at_zoom(self, zoom=None): """ Return process bounds for zoom level. Parameters ---------- zoom : integer or list Returns ------- process bounds : tuple left, bottom, right, top """ return () if self.area_at_zoom(zoom).is_empty else Bounds( *self.area_at_zoom(zoom).bounds)
Return process bounds for zoom level. Parameters ---------- zoom : integer or list Returns ------- process bounds : tuple left, bottom, right, top
Below is the instruction that describes the task: ### Input: Return process bounds for zoom level. Parameters ---------- zoom : integer or list Returns ------- process bounds : tuple left, bottom, right, top ### Response: def bounds_at_zoom(self, zoom=None):
        """
        Return process bounds for zoom level.

        Parameters
        ----------
        zoom : integer or list

        Returns
        -------
        process bounds : tuple
            left, bottom, right, top
        """
        return () if self.area_at_zoom(zoom).is_empty else Bounds(
            *self.area_at_zoom(zoom).bounds)
def private_key_to_address(private_key: Union[str, bytes]) -> ChecksumAddress: """ Converts a private key to an Ethereum address. """ if isinstance(private_key, str): private_key_bytes = to_bytes(hexstr=private_key) else: private_key_bytes = private_key pk = PrivateKey(private_key_bytes) return public_key_to_address(pk.public_key)
Converts a private key to an Ethereum address.
Below is the instruction that describes the task: ### Input: Converts a private key to an Ethereum address. ### Response: def private_key_to_address(private_key: Union[str, bytes]) -> ChecksumAddress:
    """ Converts a private key to an Ethereum address. """
    if isinstance(private_key, str):
        private_key_bytes = to_bytes(hexstr=private_key)
    else:
        private_key_bytes = private_key
    pk = PrivateKey(private_key_bytes)
    return public_key_to_address(pk.public_key)
def help(self, message, plugin=None): """help: the normal help you're reading.""" # help_data = self.load("help_files") selected_modules = help_modules = self.load("help_modules") self.say("Sure thing, %s." % message.sender.handle) help_text = "Here's what I know how to do:" if plugin and plugin in help_modules: help_text = "Here's what I know how to do about %s:" % plugin selected_modules = dict() selected_modules[plugin] = help_modules[plugin] for k in sorted(selected_modules, key=lambda x: x[0]): help_data = selected_modules[k] if help_data: help_text += "<br/><br/><b>%s</b>:" % k for line in help_data: if line: if ":" in line: line = "&nbsp; <b>%s</b>%s" % (line[:line.find(":")], line[line.find(":"):]) help_text += "<br/> %s" % line self.say(help_text, html=True)
help: the normal help you're reading.
Below is the instruction that describes the task: ### Input: help: the normal help you're reading. ### Response: def help(self, message, plugin=None):
        """help: the normal help you're reading."""
        # help_data = self.load("help_files")
        selected_modules = help_modules = self.load("help_modules")

        self.say("Sure thing, %s." % message.sender.handle)

        help_text = "Here's what I know how to do:"

        if plugin and plugin in help_modules:
            help_text = "Here's what I know how to do about %s:" % plugin
            selected_modules = dict()
            selected_modules[plugin] = help_modules[plugin]

        for k in sorted(selected_modules, key=lambda x: x[0]):
            help_data = selected_modules[k]
            if help_data:
                help_text += "<br/><br/><b>%s</b>:" % k
                for line in help_data:
                    if line:
                        if ":" in line:
                            line = "&nbsp; <b>%s</b>%s" % (line[:line.find(":")], line[line.find(":"):])
                        help_text += "<br/> %s" % line

        self.say(help_text, html=True)
def _init_client(self, from_archive=False): """Init client""" return ConduitClient(self.url, self.api_token, self.max_retries, self.sleep_time, self.archive, from_archive)
Init client
Below is the instruction that describes the task: ### Input: Init client ### Response: def _init_client(self, from_archive=False):
        """Init client"""
        return ConduitClient(self.url, self.api_token,
                             self.max_retries, self.sleep_time,
                             self.archive, from_archive)
def wrap(cls, value): ''' Some property types need to wrap their values in special containers, etc. ''' if isinstance(value, list): if isinstance(value, PropertyValueList): return value else: return PropertyValueList(value) else: return value
Some property types need to wrap their values in special containers, etc.
Below is the instruction that describes the task: ### Input: Some property types need to wrap their values in special containers, etc. ### Response: def wrap(cls, value):
        ''' Some property types need to wrap their values in special
        containers, etc.

        '''
        if isinstance(value, list):
            if isinstance(value, PropertyValueList):
                return value
            else:
                return PropertyValueList(value)
        else:
            return value
def _get_stddevs(self, stddev_types, rrup): """ Return standard deviations as defined in equation 3.5.5-2 page 151 """ assert all(stddev_type in self.DEFINED_FOR_STANDARD_DEVIATION_TYPES for stddev_type in stddev_types) std = np.zeros_like(rrup) std[rrup <= 20] = 0.23 idx = (rrup > 20) & (rrup <= 30) std[idx] = 0.23 - 0.03 * np.log10(rrup[idx] / 20) / np.log10(30. / 20.) std[rrup > 30] = 0.20 # convert from log10 to ln std = np.log(10 ** std) return [std for stddev_type in stddev_types]
Return standard deviations as defined in equation 3.5.5-2 page 151
Below is the instruction that describes the task: ### Input: Return standard deviations as defined in equation 3.5.5-2 page 151 ### Response: def _get_stddevs(self, stddev_types, rrup):
        """
        Return standard deviations as defined in equation 3.5.5-2 page 151
        """
        assert all(stddev_type in self.DEFINED_FOR_STANDARD_DEVIATION_TYPES
                   for stddev_type in stddev_types)
        std = np.zeros_like(rrup)
        std[rrup <= 20] = 0.23
        idx = (rrup > 20) & (rrup <= 30)
        std[idx] = 0.23 - 0.03 * np.log10(rrup[idx] / 20) / np.log10(30. / 20.)
        std[rrup > 30] = 0.20
        # convert from log10 to ln
        std = np.log(10 ** std)
        return [std for stddev_type in stddev_types]
def add_eager_constraints(self, models): """ Set the constraints for an eager load of the relation. :type models: list """ key = "%s.%s" % (self._related.get_table(), self._other_key) self._query.where_in(key, self._get_eager_model_keys(models))
Set the constraints for an eager load of the relation. :type models: list
Below is the instruction that describes the task: ### Input: Set the constraints for an eager load of the relation. :type models: list ### Response: def add_eager_constraints(self, models):
        """
        Set the constraints for an eager load of the relation.

        :type models: list
        """
        key = "%s.%s" % (self._related.get_table(), self._other_key)

        self._query.where_in(key, self._get_eager_model_keys(models))
def getMyPlexAccount(opts=None):  # pragma: no cover
    """ Helper function tries to get a MyPlex Account instance by checking
        the following locations for a username and password. This is useful
        to create user-friendly command line tools.
        1. command-line options (opts).
        2. environment variables and config.ini
        3. Prompt on the command line.
    """
    from plexapi import CONFIG
    from plexapi.myplex import MyPlexAccount
    # 1. Check command-line options
    if opts and opts.username and opts.password:
        print('Authenticating with Plex.tv as %s..' % opts.username)
        return MyPlexAccount(opts.username, opts.password)
    # 2. Check Plexconfig (environment variables and config.ini)
    config_username = CONFIG.get('auth.myplex_username')
    config_password = CONFIG.get('auth.myplex_password')
    if config_username and config_password:
        print('Authenticating with Plex.tv as %s..' % config_username)
        return MyPlexAccount(config_username, config_password)
    # 3. Prompt for username and password on the command line
    username = input('What is your plex.tv username: ')
    password = getpass('What is your plex.tv password: ')
    print('Authenticating with Plex.tv as %s..' % username)
    return MyPlexAccount(username, password)
Helper function tries to get a MyPlex Account instance by checking
        the following locations for a username and password. This is useful
        to create user-friendly command line tools.
        1. command-line options (opts).
        2. environment variables and config.ini
        3. Prompt on the command line.
Below is the instruction that describes the task: ### Input: Helper function tries to get a MyPlex Account instance by checking the following locations for a username and password. This is useful to create user-friendly command line tools. 1. command-line options (opts). 2. environment variables and config.ini 3. Prompt on the command line. ### Response: def getMyPlexAccount(opts=None):  # pragma: no cover
    """ Helper function tries to get a MyPlex Account instance by checking
        the following locations for a username and password. This is useful
        to create user-friendly command line tools.
        1. command-line options (opts).
        2. environment variables and config.ini
        3. Prompt on the command line.
    """
    from plexapi import CONFIG
    from plexapi.myplex import MyPlexAccount
    # 1. Check command-line options
    if opts and opts.username and opts.password:
        print('Authenticating with Plex.tv as %s..' % opts.username)
        return MyPlexAccount(opts.username, opts.password)
    # 2. Check Plexconfig (environment variables and config.ini)
    config_username = CONFIG.get('auth.myplex_username')
    config_password = CONFIG.get('auth.myplex_password')
    if config_username and config_password:
        print('Authenticating with Plex.tv as %s..' % config_username)
        return MyPlexAccount(config_username, config_password)
    # 3. Prompt for username and password on the command line
    username = input('What is your plex.tv username: ')
    password = getpass('What is your plex.tv password: ')
    print('Authenticating with Plex.tv as %s..' % username)
    return MyPlexAccount(username, password)
def extract_code(end_mark, current_str, str_array, line_num): '''Extract a multi-line string from a string array, up to a specified end marker. Args: end_mark (str): The end mark string to match for. current_str (str): The first line of the string array. str_array (list): An array of strings (lines). line_num (int): The current offset into the array. Returns: Extended string up to line with end marker. ''' if end_mark not in current_str: reached_end = False line_num += 1 while reached_end is False: next_line = str_array[line_num] if end_mark in next_line: reached_end = True else: line_num += 1 current_str += next_line clean_str = current_str.split(end_mark)[0] return {'current_str': clean_str, 'line_num': line_num}
Extract a multi-line string from a string array, up to a specified end marker. Args: end_mark (str): The end mark string to match for. current_str (str): The first line of the string array. str_array (list): An array of strings (lines). line_num (int): The current offset into the array. Returns: Extended string up to line with end marker.
Below is the instruction that describes the task: ### Input: Extract a multi-line string from a string array, up to a specified end marker. Args: end_mark (str): The end mark string to match for. current_str (str): The first line of the string array. str_array (list): An array of strings (lines). line_num (int): The current offset into the array. Returns: Extended string up to line with end marker. ### Response: def extract_code(end_mark, current_str, str_array, line_num):
    '''Extract a multi-line string from a string array,
    up to a specified end marker.

    Args:
        end_mark (str): The end mark string to match for.
        current_str (str): The first line of the string array.
        str_array (list): An array of strings (lines).
        line_num (int): The current offset into the array.

    Returns:
        Extended string up to line with end marker.
    '''
    if end_mark not in current_str:
        reached_end = False
        line_num += 1
        while reached_end is False:
            next_line = str_array[line_num]
            if end_mark in next_line:
                reached_end = True
            else:
                line_num += 1
            current_str += next_line
    clean_str = current_str.split(end_mark)[0]
    return {'current_str': clean_str, 'line_num': line_num}
def _prm_read_table(self, table_or_group, full_name):
    """Reads a non-nested PyTables table column by column and creates a new
    ObjectTable for the loaded data.

    :param table_or_group: PyTables table to read from or a group containing subtables.
    :param full_name: Full name of the parameter or result whose data is to be loaded
    :return: Data to be loaded
    """
    try:
        result_table = None
        if self._all_get_from_attrs(table_or_group, HDF5StorageService.SPLIT_TABLE):
            table_name = table_or_group._v_name
            data_type_table_name = table_name + '__' + HDF5StorageService.STORAGE_TYPE
            data_type_table = table_or_group._v_children[data_type_table_name]
            data_type_dict = {}
            for row in data_type_table:
                fieldname = row['field_name'].decode('utf-8')
                data_type_dict[fieldname] = row['data_type'].decode('utf-8')
            for sub_table in table_or_group:
                sub_table_name = sub_table._v_name
                if sub_table_name == data_type_table_name:
                    continue
                for colname in sub_table.colnames:
                    # Read data column by column
                    col = sub_table.col(colname)
                    data_list = list(col)
                    prefix = HDF5StorageService.FORMATTED_COLUMN_PREFIX % colname
                    for idx, data in enumerate(data_list):
                        # Recall original type of data
                        data, type_changed = self._all_recall_native_type(
                            data, PTItemMock(data_type_dict), prefix)
                        if type_changed:
                            data_list[idx] = data
                        else:
                            break
                    # Construct or insert into an ObjectTable
                    if result_table is None:
                        result_table = ObjectTable(data={colname: data_list})
                    else:
                        result_table[colname] = data_list
        else:
            for colname in table_or_group.colnames:
                # Read data column by column
                col = table_or_group.col(colname)
                data_list = list(col)
                prefix = HDF5StorageService.FORMATTED_COLUMN_PREFIX % colname
                for idx, data in enumerate(data_list):
                    # Recall original type of data
                    data, type_changed = self._all_recall_native_type(
                        data, table_or_group, prefix)
                    if type_changed:
                        data_list[idx] = data
                    else:
                        break
                # Construct or insert into an ObjectTable
                if result_table is None:
                    result_table = ObjectTable(data={colname: data_list})
                else:
                    result_table[colname] = data_list
        return result_table
    except:
        self._logger.error(
            'Failed loading `%s` of `%s`.' % (table_or_group._v_name, full_name))
        raise
Reads a non-nested PyTables table column by column and creates a new ObjectTable for the loaded data. :param table_or_group: PyTables table to read from or a group containing subtables. :param full_name: Full name of the parameter or result whose data is to be loaded :return: Data to be loaded
def save(self):
    """Update or insert a Todo item."""
    req = datastore.CommitRequest()
    req.mode = datastore.CommitRequest.NON_TRANSACTIONAL
    req.mutations.add().upsert.CopyFrom(self.to_proto())
    resp = datastore.commit(req)
    if not self.id:
        self.id = resp.mutation_results[0].key.path[-1].id
    return self
Update or insert a Todo item.
def tabulate(self, restricted_predicted_column_indices=[], restricted_predicted_column_names=[], dataset_name=None):
    '''Returns summary analysis from the dataframe as a DataTable object.

    DataTables are wrapped pandas dataframes which can be combined if they have
    the same width. This is useful for combining multiple analyses. DataTables
    can be printed to terminal as a tabular string using their representation
    function (i.e. print(data_table)).

    This function (tabulate) looks at specific analysis; this class
    (DatasetDataFrame) can be subclassed for custom tabulation.'''
    self._analyze()
    data_series = self.get_series_names(column_indices=restricted_predicted_column_indices,
                                        column_names=restricted_predicted_column_names)

    # Determine the multi-index headers
    group_names = []
    for l in self.index_layers:
        group_names.append(l)

    # Set up the table headers
    headers = ['Dataset'] + group_names + ['n', 'R', 'rho', 'MAE', 'Fraction correct',
                                           'FC sign', 'SB sensitivity', 'SB specificity']
    table_rows = []
    for dseries in data_series:
        if isinstance(dseries, tuple):
            dseries_l = list(dseries)
        else:
            assert(isinstance(dseries, basestring))
            dseries_l = [dseries]
        results = []
        assert(len(self.index_layers) == len(dseries))
        if self.analysis.get(dseries, {}).get('partial') and self.analysis.get(dseries, {}).get('full'):
            results.append((dseries_l[:-1] + [dseries_l[-1] + '*'], self.analysis[dseries]['partial']))
            results.append((dseries_l[:-1] + [dseries_l[-1]], self.analysis[dseries]['full']))
        elif self.analysis.get(dseries, {}).get('partial'):
            results.append((dseries_l[:-1] + [dseries_l[-1] + '*'], self.analysis[dseries]['partial']))
        elif self.analysis.get(dseries, {}).get('full'):
            results = [(dseries, self.analysis[dseries]['full'])]
        for result in results:
            n = result[1]['data']['n']
            R = result[1]['data']['pearsonr'][0]
            rho = result[1]['data']['spearmanr'][0]
            mae = result[1]['data']['MAE']
            fraction_correct = result[1]['data']['fraction_correct']
            accuracy = result[1]['data']['accuracy']
            SBSensitivity = '{0:.3f} / {1}'.format(result[1]['data']['significant_beneficient_sensitivity'][0],
                                                   result[1]['data']['significant_beneficient_sensitivity'][1])
            SBSpecificity = '{0:.3f} / {1}'.format(result[1]['data']['significant_beneficient_specificity'][0],
                                                   result[1]['data']['significant_beneficient_specificity'][1])
            method = result[0]
            if isinstance(method, tuple):
                method = list(method)
            table_rows.append([dataset_name or self.reference_dataset_name] + method +
                              [n, R, rho, mae, fraction_correct, accuracy, SBSensitivity, SBSpecificity])

    # Convert the lists into a (wrapped) pandas dataframe to make use of the
    # pandas formatting code to save reinventing the wheel...
    return DataTable(pandas.DataFrame(table_rows, columns=headers), self.index_layers)
Returns summary analysis from the dataframe as a DataTable object. DataTables are wrapped pandas dataframes which can be combined if they have the same width. This is useful for combining multiple analyses. DataTables can be printed to terminal as a tabular string using their representation function (i.e. print(data_table)). This function (tabulate) looks at specific analysis; this class (DatasetDataFrame) can be subclassed for custom tabulation.
def aws(client, path, opt):
    """Renders a shell environment snippet with AWS information"""
    try:
        creds = client.read(path)
    except hvac.exceptions.InternalServerError as vault_exception:
        # this is how old vault behaves
        if vault_exception.errors[0].find('unsupported path') > 0:
            emsg = "Invalid AWS path. Did you forget the" \
                   " credential type and role?"
            raise aomi.exceptions.AomiFile(emsg)
        else:
            raise
    # this is how new vault behaves
    if not creds:
        emsg = "Invalid AWS path. Did you forget the" \
               " credential type and role?"
        raise aomi.exceptions.AomiFile(emsg)
    renew_secret(client, creds, opt)
    if creds and 'data' in creds:
        print("AWS_ACCESS_KEY_ID=\"%s\"" % creds['data']['access_key'])
        print("AWS_SECRET_ACCESS_KEY=\"%s\"" % creds['data']['secret_key'])
        if 'security_token' in creds['data'] \
           and creds['data']['security_token']:
            token = creds['data']['security_token']
            print("AWS_SECURITY_TOKEN=\"%s\"" % token)
    else:
        client.revoke_self_token()
        e_msg = "Unable to generate AWS credentials from %s" % path
        raise aomi.exceptions.VaultData(e_msg)
    if opt.export:
        print("export AWS_ACCESS_KEY_ID")
        print("export AWS_SECRET_ACCESS_KEY")
        if 'security_token' in creds['data'] \
           and creds['data']['security_token']:
            print("export AWS_SECURITY_TOKEN")
Renders a shell environment snippet with AWS information
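The snippet-building part of `aws` can be exercised without a Vault server by faking the response dict. The credential values below are made-up placeholders, not real keys:

```python
# Hypothetical Vault response shape, mirroring what client.read(path) returns.
creds = {'data': {'access_key': 'AKIAEXAMPLE',
                  'secret_key': 'wJalrEXAMPLE',
                  'security_token': 'FQoGEXAMPLE'}}

lines = ['AWS_ACCESS_KEY_ID="%s"' % creds['data']['access_key'],
         'AWS_SECRET_ACCESS_KEY="%s"' % creds['data']['secret_key']]
# The security token line is emitted only when the key exists and is truthy,
# matching the guard in the function above.
if creds['data'].get('security_token'):
    lines.append('AWS_SECURITY_TOKEN="%s"' % creds['data']['security_token'])
snippet = '\n'.join(lines)
```

`snippet` is then ready to be `eval`'d by a shell, optionally followed by `export` lines when `opt.export` is set.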
def split(cls, dataset, start, end, datatype, **kwargs):
    """
    Splits a multi-interface Dataset into regular Datasets using
    regular tabular interfaces.
    """
    objs = []
    if datatype is None:
        for d in dataset.data[start:end]:
            objs.append(dataset.clone(d, datatype=cls.subtypes))
        return objs
    elif not dataset.data:
        return objs
    ds = cls._inner_dataset_template(dataset)
    for d in dataset.data:
        ds.data = d
        if datatype == 'array':
            obj = ds.array(**kwargs)
        elif datatype == 'dataframe':
            obj = ds.dframe(**kwargs)
        elif datatype == 'columns':
            if ds.interface.datatype == 'dictionary':
                obj = dict(ds.data)
            else:
                obj = ds.columns(**kwargs)
        else:
            raise ValueError("%s datatype not supported" % datatype)
        objs.append(obj)
    return objs
Splits a multi-interface Dataset into regular Datasets using regular tabular interfaces.
def fuzzy_get_value(obj, approximate_key, default=None, **kwargs):
    """ Like fuzzy_get, but assume the obj is dict-like and return the value without the key

    Notes:
      Argument order is in reverse order relative to `fuzzywuzzy.process.extractOne()`
        but in the same order as get(self, key) method on dicts

    Arguments:
      obj (dict-like): object to run the get method on using the key that is most
        similar to one within the dict
      approximate_key (str): key to look for a fuzzy match within the dict keys
      default (obj): the value to return if a similar key cannot be found in the `possible_keys`
      similarity (str): fractional similarity between the approximate_key and the dict key
        (0.9 means 90% of characters must be identical)
      tuple_joiner (str): Character to use as delimiter/joiner between tuple elements.
        Used to create keys of any tuples to be able to use fuzzywuzzy string matching on it.
      key_and_value (bool): Whether to return both the key and its value (True) or just
        the value (False). Default is the same behavior as dict.get
        (i.e. key_and_value=False)
      dict_keys (list of str): if you already have a set of keys to search, this will
        save this function a little time and RAM

    Examples:
      >>> fuzzy_get_value({'seller': 2.7, 'sailor': set('e')}, 'sail') == set(['e'])
      True
      >>> fuzzy_get_value({'seller': 2.7, 'sailor': set('e'), 'camera': object()}, 'SLR')
      2.7
      >>> fuzzy_get_value({'seller': 2.7, 'sailor': set('e'), 'camera': object()}, 'I') == set(['e'])
      True
      >>> fuzzy_get_value({'word': tuple('word'), 'noun': tuple('noun')}, 'woh!', similarity=.3)
      ('w', 'o', 'r', 'd')
      >>> df = pd.DataFrame(np.arange(6*2).reshape(2,6), columns=('alpha','beta','omega','begin','life','end'))
      >>> fuzzy_get_value(df, 'life')[0], fuzzy_get(df, 'omega')[0]
      (4, 2)
    """
    dict_obj = OrderedDict(obj)
    try:
        return dict_obj[list(dict_obj.keys())[int(approximate_key)]]
    except (ValueError, IndexError):
        pass
    return fuzzy_get(dict_obj, approximate_key, key_and_value=False, **kwargs)
Like fuzzy_get, but assume the obj is dict-like and return the value without the key Notes: Argument order is in reverse order relative to `fuzzywuzzy.process.extractOne()` but in the same order as get(self, key) method on dicts Arguments: obj (dict-like): object to run the get method on using the key that is most similar to one within the dict approximate_key (str): key to look for a fuzzy match within the dict keys default (obj): the value to return if a similar key cannot be found in the `possible_keys` similarity (str): fractional similarity between the approximate_key and the dict key (0.9 means 90% of characters must be identical) tuple_joiner (str): Character to use as delimiter/joiner between tuple elements. Used to create keys of any tuples to be able to use fuzzywuzzy string matching on it. key_and_value (bool): Whether to return both the key and its value (True) or just the value (False). Default is the same behavior as dict.get (i.e. key_and_value=False) dict_keys (list of str): if you already have a set of keys to search, this will save this function a little time and RAM Examples: >>> fuzzy_get_value({'seller': 2.7, 'sailor': set('e')}, 'sail') == set(['e']) True >>> fuzzy_get_value({'seller': 2.7, 'sailor': set('e'), 'camera': object()}, 'SLR') 2.7 >>> fuzzy_get_value({'seller': 2.7, 'sailor': set('e'), 'camera': object()}, 'I') == set(['e']) True >>> fuzzy_get_value({'word': tuple('word'), 'noun': tuple('noun')}, 'woh!', similarity=.3) ('w', 'o', 'r', 'd') >>> df = pd.DataFrame(np.arange(6*2).reshape(2,6), columns=('alpha','beta','omega','begin','life','end')) >>> fuzzy_get_value(df, 'life')[0], fuzzy_get(df, 'omega')[0] (4, 2)
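Before any fuzzy matching, `fuzzy_get_value` tries an integer fast path: a key that parses as an int selects the value at that position in key order. That path can be isolated and checked on its own (`numeric_get` is a stripped-down stand-in, not the library function):

```python
from collections import OrderedDict

def numeric_get(dict_obj, approximate_key):
    # Mirrors the integer-index fast path of fuzzy_get_value: a key that
    # parses as an int is treated as a position in the dict's key order.
    try:
        return dict_obj[list(dict_obj.keys())[int(approximate_key)]]
    except (ValueError, IndexError):
        return None  # the real function falls through to fuzzy matching here

d = OrderedDict([('seller', 2.7), ('sailor', 'e')])
numeric_get(d, '1')     # positional lookup -> value stored under 'sailor'
numeric_get(d, 'sail')  # not an int -> None (fuzzy matching would take over)
```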
def _calculate_cluster_distance(end_iter):
    """Compute allowed distance for clustering based on end confidence intervals.
    """
    out = []
    sizes = []
    for x in end_iter:
        out.append(x)
        sizes.append(x.end1 - x.start1)
        sizes.append(x.end2 - x.start2)
    distance = sum(sizes) // len(sizes)
    return distance, out
Compute allowed distance for clustering based on end confidence intervals.
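The distance is simply the mean interval width (integer division) over both ends of every record. A self-contained sketch, using a hypothetical namedtuple in place of whatever record type the real iterator yields:

```python
from collections import namedtuple

# Hypothetical stand-in for the interval records the function iterates over.
Pair = namedtuple('Pair', 'start1 end1 start2 end2')

def calculate_cluster_distance(end_iter):
    # Same logic as _calculate_cluster_distance above.
    out, sizes = [], []
    for x in end_iter:
        out.append(x)
        sizes.append(x.end1 - x.start1)
        sizes.append(x.end2 - x.start2)
    distance = sum(sizes) // len(sizes)
    return distance, out

pairs = [Pair(100, 110, 200, 220), Pair(50, 60, 300, 330)]
# widths are 10, 20, 10, 30 -> floor(70 / 4) = 17
distance, records = calculate_cluster_distance(pairs)
```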
def create_new_address_for_user(self, user_id):
    """Create a new bitcoin address to accept payments for a User.

    This is a convenience wrapper around `get_child` that helps you do
    the right thing. This method always creates a public, non-prime
    address that can be generated from a BIP32 public key on an
    insecure server."""
    max_id = 0x80000000
    if user_id < 0 or user_id > max_id:
        raise ValueError(
            "Invalid UserID. Must be between 0 and %s" % max_id)
    return self.get_child(user_id, is_prime=False, as_private=False)
Create a new bitcoin address to accept payments for a User. This is a convenience wrapper around `get_child` that helps you do the right thing. This method always creates a public, non-prime address that can be generated from a BIP32 public key on an insecure server.
def add_rect(img, box, color=None, thickness=1):
    """
    Draws a bounding box inside the image.

    :param img: Input image
    :param box: Box object that defines the bounding box.
    :param color: Color of the box
    :param thickness: Thickness of line
    :return: Rectangle added image
    """
    if color is None:
        color = COL_GRAY
    box = box.to_int()
    cv.rectangle(img, box.top_left(), box.bottom_right(), color, thickness)
Draws a bounding box inside the image. :param img: Input image :param box: Box object that defines the bounding box. :param color: Color of the box :param thickness: Thickness of line :return: Rectangle added image
def create_namespaced_config_map(self, namespace, body, **kwargs):  # noqa: E501
    """create_namespaced_config_map  # noqa: E501

    create a ConfigMap  # noqa: E501
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass async_req=True
    >>> thread = api.create_namespaced_config_map(namespace, body, async_req=True)
    >>> result = thread.get()

    :param async_req bool
    :param str namespace: object name and auth scope, such as for teams and projects (required)
    :param V1ConfigMap body: (required)
    :param bool include_uninitialized: If true, partially initialized resources are included in the response.
    :param str pretty: If 'true', then the output is pretty printed.
    :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
    :return: V1ConfigMap
             If the method is called asynchronously,
             returns the request thread.
    """
    kwargs['_return_http_data_only'] = True
    if kwargs.get('async_req'):
        return self.create_namespaced_config_map_with_http_info(namespace, body, **kwargs)  # noqa: E501
    else:
        (data) = self.create_namespaced_config_map_with_http_info(namespace, body, **kwargs)  # noqa: E501
        return data
create_namespaced_config_map # noqa: E501 create a ConfigMap # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.create_namespaced_config_map(namespace, body, async_req=True) >>> result = thread.get() :param async_req bool :param str namespace: object name and auth scope, such as for teams and projects (required) :param V1ConfigMap body: (required) :param bool include_uninitialized: If true, partially initialized resources are included in the response. :param str pretty: If 'true', then the output is pretty printed. :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed :return: V1ConfigMap If the method is called asynchronously, returns the request thread.
def remove(self, item): """ Transactional implementation of :func:`Set.remove(item) <hazelcast.proxy.set.Set.remove>` :param item: (object), the specified item to be deleted. :return: (bool), ``true`` if item is removed successfully, ``false`` otherwise. """ check_not_none(item, "item can't be none") return self._encode_invoke(transactional_set_remove_codec, item=self._to_data(item))
Transactional implementation of :func:`Set.remove(item) <hazelcast.proxy.set.Set.remove>` :param item: (object), the specified item to be deleted. :return: (bool), ``true`` if item is removed successfully, ``false`` otherwise.
Below is the instruction that describes the task: ### Input: Transactional implementation of :func:`Set.remove(item) <hazelcast.proxy.set.Set.remove>` :param item: (object), the specified item to be deleted. :return: (bool), ``true`` if item is removed successfully, ``false`` otherwise. ### Response: def remove(self, item): """ Transactional implementation of :func:`Set.remove(item) <hazelcast.proxy.set.Set.remove>` :param item: (object), the specified item to be deleted. :return: (bool), ``true`` if item is removed successfully, ``false`` otherwise. """ check_not_none(item, "item can't be none") return self._encode_invoke(transactional_set_remove_codec, item=self._to_data(item))
def sgt(self, other): """Compares two equal-sized BinWords, treating them as signed integers, and returning True if the first is bigger. """ self._check_match(other) return self.to_sint() > other.to_sint()
Compares two equal-sized BinWords, treating them as signed integers, and returning True if the first is bigger.
Below is the instruction that describes the task: ### Input: Compares two equal-sized BinWords, treating them as signed integers, and returning True if the first is bigger. ### Response: def sgt(self, other): """Compares two equal-sized BinWords, treating them as signed integers, and returning True if the first is bigger. """ self._check_match(other) return self.to_sint() > other.to_sint()
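The two's-complement comparison that `sgt` relies on can be sketched in plain Python. This is an illustrative stand-in, not the BinWord API: `to_sint` here is a free function taking an explicit bit width, whereas the record's method reads the width from the object.

```python
def to_sint(value: int, width: int) -> int:
    """Interpret an unsigned `width`-bit pattern as a two's-complement signed integer."""
    sign_bit = 1 << (width - 1)
    return value - (1 << width) if value & sign_bit else value

def sgt(a: int, b: int, width: int) -> bool:
    """Signed greater-than for two equal-width bit patterns."""
    return to_sint(a, width) > to_sint(b, width)

# 0xFF is -1 as a signed 8-bit value, so it is not greater than 1
print(sgt(0xFF, 0x01, 8))  # False
print(sgt(0x01, 0xFF, 8))  # True
```

Note that an unsigned comparison of the same bit patterns would give the opposite answer, which is exactly why `sgt` exists alongside plain `>`.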
def get_authorisation_url(self, reset=False): """ Initialises the OAuth2 Process by asking the auth server for a login URL. Once called, the user can login by being redirected to the url returned by this function. If there is an error during authorisation, None is returned.""" if reset: self.auth_url = None if not self.auth_url: try: oauth = OAuth2Session(self.client_id,redirect_uri=self.redirect_url) self.auth_url,self.state = oauth.authorization_url(self.auth_base_url) except Exception: #print("Unexpected error:", sys.exc_info()[0]) #print("Could not get Authorisation Url!") return None return self.auth_url
Initialises the OAuth2 Process by asking the auth server for a login URL. Once called, the user can login by being redirected to the url returned by this function. If there is an error during authorisation, None is returned.
Below is the instruction that describes the task: ### Input: Initialises the OAuth2 Process by asking the auth server for a login URL. Once called, the user can login by being redirected to the url returned by this function. If there is an error during authorisation, None is returned. ### Response: def get_authorisation_url(self, reset=False): """ Initialises the OAuth2 Process by asking the auth server for a login URL. Once called, the user can login by being redirected to the url returned by this function. If there is an error during authorisation, None is returned.""" if reset: self.auth_url = None if not self.auth_url: try: oauth = OAuth2Session(self.client_id,redirect_uri=self.redirect_url) self.auth_url,self.state = oauth.authorization_url(self.auth_base_url) except Exception: #print("Unexpected error:", sys.exc_info()[0]) #print("Could not get Authorisation Url!") return None return self.auth_url
def Search(self, text): """Search the text for our value.""" if isinstance(text, rdfvalue.RDFString): text = str(text) return self._regex.search(text)
Search the text for our value.
Below is the instruction that describes the task: ### Input: Search the text for our value. ### Response: def Search(self, text): """Search the text for our value.""" if isinstance(text, rdfvalue.RDFString): text = str(text) return self._regex.search(text)
def upper_underscore(string, prefix='', suffix=''): """ Generate an underscore-separated upper-case identifier. Useful for constants. Takes a string, prefix, and optional suffix. `prefix` can be set to `''`, though be careful - without a prefix, the function will throw `InvalidIdentifier` when your string starts with a number. Example: >>> upper_underscore("This is a constant", prefix='') 'THIS_IS_A_CONSTANT' """ return require_valid(append_underscore_if_keyword('_'.join( word.upper() for word in en.words(' '.join([prefix, string, suffix]))) ))
Generate an underscore-separated upper-case identifier. Useful for constants. Takes a string, prefix, and optional suffix. `prefix` can be set to `''`, though be careful - without a prefix, the function will throw `InvalidIdentifier` when your string starts with a number. Example: >>> upper_underscore("This is a constant", prefix='') 'THIS_IS_A_CONSTANT'
Below is the instruction that describes the task: ### Input: Generate an underscore-separated upper-case identifier. Useful for constants. Takes a string, prefix, and optional suffix. `prefix` can be set to `''`, though be careful - without a prefix, the function will throw `InvalidIdentifier` when your string starts with a number. Example: >>> upper_underscore("This is a constant", prefix='') 'THIS_IS_A_CONSTANT' ### Response: def upper_underscore(string, prefix='', suffix=''): """ Generate an underscore-separated upper-case identifier. Useful for constants. Takes a string, prefix, and optional suffix. `prefix` can be set to `''`, though be careful - without a prefix, the function will throw `InvalidIdentifier` when your string starts with a number. Example: >>> upper_underscore("This is a constant", prefix='') 'THIS_IS_A_CONSTANT' """ return require_valid(append_underscore_if_keyword('_'.join( word.upper() for word in en.words(' '.join([prefix, string, suffix]))) ))
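The record's implementation depends on helpers not shown here (`en.words`, `require_valid`, `append_underscore_if_keyword`). A rough stdlib-only approximation of the same behaviour, with the tokenizer swapped for a regex and the validity check reduced to the leading-digit rule, looks like this:

```python
import re

def upper_underscore(string: str, prefix: str = '', suffix: str = '') -> str:
    """Stdlib sketch: split on non-alphanumerics, join in upper case.

    Raises ValueError (standing in for InvalidIdentifier) when the result
    would start with a digit, e.g. when no prefix is given and the string
    begins with a number.
    """
    words = re.findall(r'[A-Za-z0-9]+', ' '.join([prefix, string, suffix]))
    ident = '_'.join(word.upper() for word in words)
    if ident and ident[0].isdigit():
        raise ValueError("identifier cannot start with a digit")
    return ident

print(upper_underscore("This is a constant"))          # THIS_IS_A_CONSTANT
print(upper_underscore("timeout", prefix="http"))      # HTTP_TIMEOUT
```

Passing a prefix such as `"http"` is the documented way to avoid the leading-digit failure for strings like `"9 lives"`.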
def remove_repo_from_team(self, auth, team_id, repo_name): """ Remove repo from team. :param auth.Authentication auth: authentication object, must be admin-level :param str team_id: Team's id :param str repo_name: Name of the repo to be removed from the team :raises NetworkFailure: if there is an error communicating with the server :raises ApiFailure: if the request cannot be serviced """ url = "/admin/teams/{t}/repos/{r}".format(t=team_id, r=repo_name) self.delete(url, auth=auth)
Remove repo from team. :param auth.Authentication auth: authentication object, must be admin-level :param str team_id: Team's id :param str repo_name: Name of the repo to be removed from the team :raises NetworkFailure: if there is an error communicating with the server :raises ApiFailure: if the request cannot be serviced
Below is the instruction that describes the task: ### Input: Remove repo from team. :param auth.Authentication auth: authentication object, must be admin-level :param str team_id: Team's id :param str repo_name: Name of the repo to be removed from the team :raises NetworkFailure: if there is an error communicating with the server :raises ApiFailure: if the request cannot be serviced ### Response: def remove_repo_from_team(self, auth, team_id, repo_name): """ Remove repo from team. :param auth.Authentication auth: authentication object, must be admin-level :param str team_id: Team's id :param str repo_name: Name of the repo to be removed from the team :raises NetworkFailure: if there is an error communicating with the server :raises ApiFailure: if the request cannot be serviced """ url = "/admin/teams/{t}/repos/{r}".format(t=team_id, r=repo_name) self.delete(url, auth=auth)
def sweep(ABF,sweep=None,rainbow=True,alpha=None,protocol=False,color='b', continuous=False,offsetX=0,offsetY=0,minutes=False, decimate=None,newFigure=False): """ Load a particular sweep then plot it. If sweep is None or False, just plot current dataX/dataY. If rainbow, it'll make it color coded prettily. """ if len(pylab.get_fignums())==0 or newFigure: new(ABF,True) if offsetY>0: pylab.grid(None) # figure which sweeps to plot if sweep is None: sweeps=[ABF.currentSweep] if not ABF.currentSweep: sweeps=[0] elif sweep=="all": sweeps=range(0,ABF.sweeps) elif type(sweep) in [int,float]: sweeps=[int(sweep)] elif type(sweep) is list: sweeps=sweep else: print("DONT KNOW WHAT TO DO WITH THIS SWEEPS!!!\n",type(sweep),sweep) #figure out offsets: if continuous: offsetX=ABF.sweepInterval # determine the colors to use colors=[color]*len(sweeps) #default to blue if rainbow and len(sweeps)>1: for i in range(len(sweeps)): colors[i]=ABF.colormap[i] if alpha is None and len(sweeps)==1: alpha=1 if rainbow and alpha is None: alpha=.5 # correct for alpha if alpha is None: alpha=1 # conversion to minutes? if minutes == False: minutes=1 else: minutes=60 pylab.xlabel("minutes") ABF.decimateMethod=decimate # do the plotting of each sweep for i in range(len(sweeps)): ABF.setSweep(sweeps[i]) if protocol: pylab.plot((np.array(ABF.protoX)/ABF.rate+offsetX*i)/minutes, ABF.protoY+offsetY*i, alpha=alpha,color=colors[i]) else: pylab.plot((ABF.dataX+offsetX*i)/minutes, ABF.dataY+offsetY*i,alpha=alpha,color=colors[i]) ABF.decimateMethod=None pylab.margins(0,.02)
Load a particular sweep then plot it. If sweep is None or False, just plot current dataX/dataY. If rainbow, it'll make it color coded prettily.
Below is the instruction that describes the task: ### Input: Load a particular sweep then plot it. If sweep is None or False, just plot current dataX/dataY. If rainbow, it'll make it color coded prettily. ### Response: def sweep(ABF,sweep=None,rainbow=True,alpha=None,protocol=False,color='b', continuous=False,offsetX=0,offsetY=0,minutes=False, decimate=None,newFigure=False): """ Load a particular sweep then plot it. If sweep is None or False, just plot current dataX/dataY. If rainbow, it'll make it color coded prettily. """ if len(pylab.get_fignums())==0 or newFigure: new(ABF,True) if offsetY>0: pylab.grid(None) # figure which sweeps to plot if sweep is None: sweeps=[ABF.currentSweep] if not ABF.currentSweep: sweeps=[0] elif sweep=="all": sweeps=range(0,ABF.sweeps) elif type(sweep) in [int,float]: sweeps=[int(sweep)] elif type(sweep) is list: sweeps=sweep else: print("DONT KNOW WHAT TO DO WITH THIS SWEEPS!!!\n",type(sweep),sweep) #figure out offsets: if continuous: offsetX=ABF.sweepInterval # determine the colors to use colors=[color]*len(sweeps) #default to blue if rainbow and len(sweeps)>1: for i in range(len(sweeps)): colors[i]=ABF.colormap[i] if alpha is None and len(sweeps)==1: alpha=1 if rainbow and alpha is None: alpha=.5 # correct for alpha if alpha is None: alpha=1 # conversion to minutes? if minutes == False: minutes=1 else: minutes=60 pylab.xlabel("minutes") ABF.decimateMethod=decimate # do the plotting of each sweep for i in range(len(sweeps)): ABF.setSweep(sweeps[i]) if protocol: pylab.plot((np.array(ABF.protoX)/ABF.rate+offsetX*i)/minutes, ABF.protoY+offsetY*i, alpha=alpha,color=colors[i]) else: pylab.plot((ABF.dataX+offsetX*i)/minutes, ABF.dataY+offsetY*i,alpha=alpha,color=colors[i]) ABF.decimateMethod=None pylab.margins(0,.02)
def _read_header(self, header_str): """Reads metadata from the header.""" # regular expressions re_float = '[-+]?(\d+(\.\d*)?|\.\d+)([eE][-+]?\d+)?' re_uint = '\d+' re_binning = '{d} in (?P<nbins>' + re_uint + ') bin[ s] ' re_binning += 'of (?P<binwidth>' + re_float + ') {unit}' # map of dimensions and units dim_units = { 'X': 'cm', 'Y': 'cm', 'Z': 'cm', 'R': 'cm', 'Phi': 'deg', 'Theta': 'deg', } # retrieve binning info self.dimensions = [] for line in header_str.splitlines(): for dim, unit in dim_units.items(): re_tmp = re_binning.format(d=dim, unit=unit) regex = re.compile(re_tmp) match = regex.search(line) if match: N = int(match.group('nbins')) width = float(match.group('binwidth')) dimension = BinnedDimension(dim, unit, N, width) self.dimensions.append(dimension) # retrieve scored quantity info re_score_unit = '# (?P<quant>.+) \( (?P<unit>.+) \) : (?P<stats>.+)' re_score_unitless = '# (?P<quant>.+) : (?P<stats>.+)' regex_unit = re.compile(re_score_unit) regex_unitless = re.compile(re_score_unitless) for line in header_str.splitlines(): match = regex_unit.search(line) if match: self.quantity = match.group('quant') self.unit = match.group('unit') self.statistics = match.group('stats').split() break match = regex_unitless.search(line) if match: self.quantity = match.group('quant') self.unit = None self.statistics = match.group('stats').split() break
Reads metadata from the header.
Below is the instruction that describes the task: ### Input: Reads metadata from the header. ### Response: def _read_header(self, header_str): """Reads metadata from the header.""" # regular expressions re_float = '[-+]?(\d+(\.\d*)?|\.\d+)([eE][-+]?\d+)?' re_uint = '\d+' re_binning = '{d} in (?P<nbins>' + re_uint + ') bin[ s] ' re_binning += 'of (?P<binwidth>' + re_float + ') {unit}' # map of dimensions and units dim_units = { 'X': 'cm', 'Y': 'cm', 'Z': 'cm', 'R': 'cm', 'Phi': 'deg', 'Theta': 'deg', } # retrieve binning info self.dimensions = [] for line in header_str.splitlines(): for dim, unit in dim_units.items(): re_tmp = re_binning.format(d=dim, unit=unit) regex = re.compile(re_tmp) match = regex.search(line) if match: N = int(match.group('nbins')) width = float(match.group('binwidth')) dimension = BinnedDimension(dim, unit, N, width) self.dimensions.append(dimension) # retrieve scored quantity info re_score_unit = '# (?P<quant>.+) \( (?P<unit>.+) \) : (?P<stats>.+)' re_score_unitless = '# (?P<quant>.+) : (?P<stats>.+)' regex_unit = re.compile(re_score_unit) regex_unitless = re.compile(re_score_unitless) for line in header_str.splitlines(): match = regex_unit.search(line) if match: self.quantity = match.group('quant') self.unit = match.group('unit') self.statistics = match.group('stats').split() break match = regex_unitless.search(line) if match: self.quantity = match.group('quant') self.unit = None self.statistics = match.group('stats').split() break
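The binning pattern built in `_read_header` can be exercised on its own. The header line below is illustrative, shaped to match the regex's expectations rather than copied from a real output file:

```python
import re

# Same building blocks as in _read_header
re_float = r'[-+]?(\d+(\.\d*)?|\.\d+)([eE][-+]?\d+)?'
re_uint = r'\d+'
re_binning = '{d} in (?P<nbins>' + re_uint + ') bin[ s] '
re_binning += 'of (?P<binwidth>' + re_float + ') {unit}'

# Instantiate the template for the X dimension (unit: cm)
pattern = re.compile(re_binning.format(d='X', unit='cm'))

line = '# X in 40 bins of 0.5 cm'
match = pattern.search(line)
print(match.group('nbins'), match.group('binwidth'))  # 40 0.5
```

Note `bin[ s]` matches either "bins" or "bin " (a space), so both singular and plural header lines are handled by the same template.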
def csv_to_numpy(string_like, dtype=None): # type: (str) -> np.array """Convert a CSV object to a numpy array. Args: string_like (str): CSV string. dtype (dtype, optional): Data type of the resulting array. If None, the dtypes will be determined by the contents of each column, individually. This argument can only be used to 'upcast' the array. For downcasting, use the .astype(t) method. Returns: (np.array): numpy array """ stream = StringIO(string_like) return np.genfromtxt(stream, dtype=dtype, delimiter=',')
Convert a CSV object to a numpy array. Args: string_like (str): CSV string. dtype (dtype, optional): Data type of the resulting array. If None, the dtypes will be determined by the contents of each column, individually. This argument can only be used to 'upcast' the array. For downcasting, use the .astype(t) method. Returns: (np.array): numpy array
Below is the instruction that describes the task: ### Input: Convert a CSV object to a numpy array. Args: string_like (str): CSV string. dtype (dtype, optional): Data type of the resulting array. If None, the dtypes will be determined by the contents of each column, individually. This argument can only be used to 'upcast' the array. For downcasting, use the .astype(t) method. Returns: (np.array): numpy array ### Response: def csv_to_numpy(string_like, dtype=None): # type: (str) -> np.array """Convert a CSV object to a numpy array. Args: string_like (str): CSV string. dtype (dtype, optional): Data type of the resulting array. If None, the dtypes will be determined by the contents of each column, individually. This argument can only be used to 'upcast' the array. For downcasting, use the .astype(t) method. Returns: (np.array): numpy array """ stream = StringIO(string_like) return np.genfromtxt(stream, dtype=dtype, delimiter=',')
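The same wrap-a-string-in-a-stream pattern works without numpy. This is a dependency-free analogue using the stdlib `csv` module (an illustrative sketch, not part of the original API; `csv_to_rows` and its `cast` parameter are made up for this example):

```python
import csv
from io import StringIO

def csv_to_rows(string_like, cast=float):
    """Parse a CSV string into a list of rows, casting each cell.

    Mirrors csv_to_numpy's approach: wrap the string in StringIO so the
    parser can treat it as a file-like stream.
    """
    reader = csv.reader(StringIO(string_like))
    return [[cast(cell) for cell in row] for row in reader]

rows = csv_to_rows("1,2,3\n4,5,6")
print(rows)  # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```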
def paga_expression_entropies(adata) -> List[float]: """Compute the median expression entropy for each node-group. Parameters ---------- adata : AnnData Annotated data matrix. Returns ------- Entropies of median expressions for each node. """ from scipy.stats import entropy groups_order, groups_masks = utils.select_groups( adata, key=adata.uns['paga']['groups']) entropies = [] for mask in groups_masks: X_mask = adata.X[mask].todense() x_median = np.nanmedian(X_mask, axis=1,overwrite_input=True) x_probs = (x_median - np.nanmin(x_median)) / (np.nanmax(x_median) - np.nanmin(x_median)) entropies.append(entropy(x_probs)) return entropies
Compute the median expression entropy for each node-group. Parameters ---------- adata : AnnData Annotated data matrix. Returns ------- Entropies of median expressions for each node.
Below is the instruction that describes the task: ### Input: Compute the median expression entropy for each node-group. Parameters ---------- adata : AnnData Annotated data matrix. Returns ------- Entropies of median expressions for each node. ### Response: def paga_expression_entropies(adata) -> List[float]: """Compute the median expression entropy for each node-group. Parameters ---------- adata : AnnData Annotated data matrix. Returns ------- Entropies of median expressions for each node. """ from scipy.stats import entropy groups_order, groups_masks = utils.select_groups( adata, key=adata.uns['paga']['groups']) entropies = [] for mask in groups_masks: X_mask = adata.X[mask].todense() x_median = np.nanmedian(X_mask, axis=1,overwrite_input=True) x_probs = (x_median - np.nanmin(x_median)) / (np.nanmax(x_median) - np.nanmin(x_median)) entropies.append(entropy(x_probs)) return entropies
def get_pixel(framebuf, x, y): """Get the color of a given pixel""" index = (y >> 3) * framebuf.stride + x offset = y & 0x07 return (framebuf.buf[index] >> offset) & 0x01
Get the color of a given pixel
Below is the instruction that describes the task: ### Input: Get the color of a given pixel ### Response: def get_pixel(framebuf, x, y): """Get the color of a given pixel""" index = (y >> 3) * framebuf.stride + x offset = y & 0x07 return (framebuf.buf[index] >> offset) & 0x01
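The indexing math in `get_pixel` assumes an MVLSB-style layout: each byte stores a vertical strip of 8 pixels, `y >> 3` picks the byte-row and `y & 0x07` the bit within the byte. A minimal stand-in frame buffer makes this concrete (the `Frame` class here is hypothetical, providing only the two attributes the function reads):

```python
class Frame:
    """Minimal stand-in exposing the .buf and .stride attributes get_pixel uses."""
    def __init__(self, buf, stride):
        self.buf = buf
        self.stride = stride

def get_pixel(framebuf, x, y):
    """Each byte covers pixels (x, 8k..8k+7); bit y & 7 selects the row in the byte."""
    index = (y >> 3) * framebuf.stride + x
    offset = y & 0x07
    return (framebuf.buf[index] >> offset) & 0x01

# 4-pixel-wide buffer, one byte-row: 0x04 sets bit 2 of column 1,
# so pixel (1, 2) is on and its neighbours are off
fb = Frame(bytearray([0x00, 0x04, 0x00, 0x00]), stride=4)
print(get_pixel(fb, 1, 2))  # 1
print(get_pixel(fb, 1, 3))  # 0
```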
def sg_arg(): r"""Gets current command line options Returns: tf.sg_opt instance that is updated with current command line options. """ if not tf.app.flags.FLAGS.__dict__['__parsed']: tf.app.flags.FLAGS._parse_flags() return tf.sg_opt(tf.app.flags.FLAGS.__dict__['__flags'])
r"""Gets current command line options Returns: tf.sg_opt instance that is updated with current command line options.
Below is the instruction that describes the task: ### Input: r"""Gets current command line options Returns: tf.sg_opt instance that is updated with current command line options. ### Response: def sg_arg(): r"""Gets current command line options Returns: tf.sg_opt instance that is updated with current command line options. """ if not tf.app.flags.FLAGS.__dict__['__parsed']: tf.app.flags.FLAGS._parse_flags() return tf.sg_opt(tf.app.flags.FLAGS.__dict__['__flags'])
def do_gate(self, gate: Gate) -> 'AbstractQuantumSimulator': """ Perform a gate. :return: ``self`` to support method chaining. """ unitary = lifted_gate(gate=gate, n_qubits=self.n_qubits) self.density = unitary.dot(self.density).dot(np.conj(unitary).T) return self
Perform a gate. :return: ``self`` to support method chaining.
Below is the instruction that describes the task: ### Input: Perform a gate. :return: ``self`` to support method chaining. ### Response: def do_gate(self, gate: Gate) -> 'AbstractQuantumSimulator': """ Perform a gate. :return: ``self`` to support method chaining. """ unitary = lifted_gate(gate=gate, n_qubits=self.n_qubits) self.density = unitary.dot(self.density).dot(np.conj(unitary).T) return self
def filter_hidden_frames(self): """Remove the frames according to the paste spec.""" for group in self.groups: group.filter_hidden_frames() self.frames[:] = [frame for group in self.groups for frame in group.frames]
Remove the frames according to the paste spec.
Below is the instruction that describes the task: ### Input: Remove the frames according to the paste spec. ### Response: def filter_hidden_frames(self): """Remove the frames according to the paste spec.""" for group in self.groups: group.filter_hidden_frames() self.frames[:] = [frame for group in self.groups for frame in group.frames]
def plan_tr(p, *args, **kwargs): ''' plan_tr(p, ...) yields a copy of plan p in which the afferent and efferent values of its functions have been translated. The translation is found from merging the list of 0 or more dictionary arguments given left-to-right followed by the keyword arguments. If the plan that is given is not a plan object explicitly, calc_tr will attempt to coerce it to one. ''' if not is_plan(p): p = plan(p) return p.tr(*args, **kwargs)
plan_tr(p, ...) yields a copy of plan p in which the afferent and efferent values of its functions have been translated. The translation is found from merging the list of 0 or more dictionary arguments given left-to-right followed by the keyword arguments. If the plan that is given is not a plan object explicitly, calc_tr will attempt to coerce it to one.
Below is the instruction that describes the task: ### Input: plan_tr(p, ...) yields a copy of plan p in which the afferent and efferent values of its functions have been translated. The translation is found from merging the list of 0 or more dictionary arguments given left-to-right followed by the keyword arguments. If the plan that is given is not a plan object explicitly, calc_tr will attempt to coerce it to one. ### Response: def plan_tr(p, *args, **kwargs): ''' plan_tr(p, ...) yields a copy of plan p in which the afferent and efferent values of its functions have been translated. The translation is found from merging the list of 0 or more dictionary arguments given left-to-right followed by the keyword arguments. If the plan that is given is not a plan object explicitly, calc_tr will attempt to coerce it to one. ''' if not is_plan(p): p = plan(p) return p.tr(*args, **kwargs)
def match_note_onsets(ref_intervals, est_intervals, onset_tolerance=0.05, strict=False): """Compute a maximum matching between reference and estimated notes, only taking note onsets into account. Given two note sequences represented by ``ref_intervals`` and ``est_intervals`` (see :func:`mir_eval.io.load_valued_intervals`), we seek the largest set of correspondences ``(i,j)`` such that the onset of reference note ``i`` is within ``onset_tolerance`` of the onset of estimated note ``j``. Every reference note is matched against at most one estimated note. Note there are separate functions :func:`match_note_offsets` and :func:`match_notes` for matching notes based on offsets only or based on onset, offset, and pitch, respectively. This is because the rules for matching note onsets and matching note offsets are different. Parameters ---------- ref_intervals : np.ndarray, shape=(n,2) Array of reference notes time intervals (onset and offset times) est_intervals : np.ndarray, shape=(m,2) Array of estimated notes time intervals (onset and offset times) onset_tolerance : float > 0 The tolerance for an estimated note's onset deviating from the reference note's onset, in seconds. Default is 0.05 (50 ms). strict : bool If ``strict=False`` (the default), threshold checks for onset matching are performed using ``<=`` (less than or equal). If ``strict=True``, the threshold checks are performed using ``<`` (less than). Returns ------- matching : list of tuples A list of matched reference and estimated notes. ``matching[i] == (i, j)`` where reference note ``i`` matches estimated note ``j``. """ # set the comparison function if strict: cmp_func = np.less else: cmp_func = np.less_equal # check for onset matches onset_distances = np.abs(np.subtract.outer(ref_intervals[:, 0], est_intervals[:, 0])) # Round distances to a target precision to avoid the situation where # if the distance is exactly 50ms (and strict=False) it erroneously # doesn't match the notes because of precision issues. 
onset_distances = np.around(onset_distances, decimals=N_DECIMALS) onset_hit_matrix = cmp_func(onset_distances, onset_tolerance) # find hits hits = np.where(onset_hit_matrix) # Construct the graph input # Flip graph so that 'matching' is a list of tuples where the first item # in each tuple is the reference note index, and the second item is the # estimated note index. G = {} for ref_i, est_i in zip(*hits): if est_i not in G: G[est_i] = [] G[est_i].append(ref_i) # Compute the maximum matching matching = sorted(util._bipartite_match(G).items()) return matching
Compute a maximum matching between reference and estimated notes, only taking note onsets into account. Given two note sequences represented by ``ref_intervals`` and ``est_intervals`` (see :func:`mir_eval.io.load_valued_intervals`), we seek the largest set of correspondences ``(i,j)`` such that the onset of reference note ``i`` is within ``onset_tolerance`` of the onset of estimated note ``j``. Every reference note is matched against at most one estimated note. Note there are separate functions :func:`match_note_offsets` and :func:`match_notes` for matching notes based on offsets only or based on onset, offset, and pitch, respectively. This is because the rules for matching note onsets and matching note offsets are different. Parameters ---------- ref_intervals : np.ndarray, shape=(n,2) Array of reference notes time intervals (onset and offset times) est_intervals : np.ndarray, shape=(m,2) Array of estimated notes time intervals (onset and offset times) onset_tolerance : float > 0 The tolerance for an estimated note's onset deviating from the reference note's onset, in seconds. Default is 0.05 (50 ms). strict : bool If ``strict=False`` (the default), threshold checks for onset matching are performed using ``<=`` (less than or equal). If ``strict=True``, the threshold checks are performed using ``<`` (less than). Returns ------- matching : list of tuples A list of matched reference and estimated notes. ``matching[i] == (i, j)`` where reference note ``i`` matches estimated note ``j``.
Below is the instruction that describes the task: ### Input: Compute a maximum matching between reference and estimated notes, only taking note onsets into account. Given two note sequences represented by ``ref_intervals`` and ``est_intervals`` (see :func:`mir_eval.io.load_valued_intervals`), we seek the largest set of correspondences ``(i,j)`` such that the onset of reference note ``i`` is within ``onset_tolerance`` of the onset of estimated note ``j``. Every reference note is matched against at most one estimated note. Note there are separate functions :func:`match_note_offsets` and :func:`match_notes` for matching notes based on offsets only or based on onset, offset, and pitch, respectively. This is because the rules for matching note onsets and matching note offsets are different. Parameters ---------- ref_intervals : np.ndarray, shape=(n,2) Array of reference notes time intervals (onset and offset times) est_intervals : np.ndarray, shape=(m,2) Array of estimated notes time intervals (onset and offset times) onset_tolerance : float > 0 The tolerance for an estimated note's onset deviating from the reference note's onset, in seconds. Default is 0.05 (50 ms). strict : bool If ``strict=False`` (the default), threshold checks for onset matching are performed using ``<=`` (less than or equal). If ``strict=True``, the threshold checks are performed using ``<`` (less than). Returns ------- matching : list of tuples A list of matched reference and estimated notes. ``matching[i] == (i, j)`` where reference note ``i`` matches estimated note ``j``. ### Response: def match_note_onsets(ref_intervals, est_intervals, onset_tolerance=0.05, strict=False): """Compute a maximum matching between reference and estimated notes, only taking note onsets into account. 
Given two note sequences represented by ``ref_intervals`` and ``est_intervals`` (see :func:`mir_eval.io.load_valued_intervals`), we seek the largest set of correspondences ``(i,j)`` such that the onset of reference note ``i`` is within ``onset_tolerance`` of the onset of estimated note ``j``. Every reference note is matched against at most one estimated note. Note there are separate functions :func:`match_note_offsets` and :func:`match_notes` for matching notes based on offsets only or based on onset, offset, and pitch, respectively. This is because the rules for matching note onsets and matching note offsets are different. Parameters ---------- ref_intervals : np.ndarray, shape=(n,2) Array of reference notes time intervals (onset and offset times) est_intervals : np.ndarray, shape=(m,2) Array of estimated notes time intervals (onset and offset times) onset_tolerance : float > 0 The tolerance for an estimated note's onset deviating from the reference note's onset, in seconds. Default is 0.05 (50 ms). strict : bool If ``strict=False`` (the default), threshold checks for onset matching are performed using ``<=`` (less than or equal). If ``strict=True``, the threshold checks are performed using ``<`` (less than). Returns ------- matching : list of tuples A list of matched reference and estimated notes. ``matching[i] == (i, j)`` where reference note ``i`` matches estimated note ``j``. """ # set the comparison function if strict: cmp_func = np.less else: cmp_func = np.less_equal # check for onset matches onset_distances = np.abs(np.subtract.outer(ref_intervals[:, 0], est_intervals[:, 0])) # Round distances to a target precision to avoid the situation where # if the distance is exactly 50ms (and strict=False) it erroneously # doesn't match the notes because of precision issues. 
onset_distances = np.around(onset_distances, decimals=N_DECIMALS) onset_hit_matrix = cmp_func(onset_distances, onset_tolerance) # find hits hits = np.where(onset_hit_matrix) # Construct the graph input # Flip graph so that 'matching' is a list of tuples where the first item # in each tuple is the reference note index, and the second item is the # estimated note index. G = {} for ref_i, est_i in zip(*hits): if est_i not in G: G[est_i] = [] G[est_i].append(ref_i) # Compute the maximum matching matching = sorted(util._bipartite_match(G).items()) return matching
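The record computes a true maximum bipartite matching via mir_eval's internal utility. The core idea can be illustrated with a much simpler greedy sketch in plain Python (greedy matching is not guaranteed to be maximum, so this is a didactic approximation, not the library's algorithm):

```python
def greedy_onset_match(ref_onsets, est_onsets, tol=0.05):
    """Greedily pair each reference onset with the nearest unused
    estimated onset within `tol` seconds.

    Returns (i, j) tuples: reference index i matched to estimated index j.
    """
    matching, used = [], set()
    for i, r in enumerate(ref_onsets):
        candidates = [(abs(r - e), j) for j, e in enumerate(est_onsets)
                      if j not in used and abs(r - e) <= tol]
        if candidates:
            _, j = min(candidates)   # nearest available estimate
            matching.append((i, j))
            used.add(j)
    return matching

# ref onset 0.50 has no estimate within 50 ms, so it goes unmatched
print(greedy_onset_match([0.10, 0.50, 1.00], [0.12, 0.97]))  # [(0, 0), (2, 1)]
```

Each estimated onset is consumed at most once (`used`), mirroring the at-most-one-match constraint in the docstring above.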
def clear(self): """Clear all keys from the comment.""" for i in list(self._internal): self._internal.remove(i)
Clear all keys from the comment.
Below is the instruction that describes the task: ### Input: Clear all keys from the comment. ### Response: def clear(self): """Clear all keys from the comment.""" for i in list(self._internal): self._internal.remove(i)
def decode_jwt(encoded_token, secret, algorithm, identity_claim_key, user_claims_key): """ Decodes an encoded JWT :param encoded_token: The encoded JWT string to decode :param secret: Secret key used to encode the JWT :param algorithm: Algorithm used to encode the JWT :param identity_claim_key: expected key that contains the identity :param user_claims_key: expected key that contains the user claims :return: Dictionary containing contents of the JWT """ # This call verifies the exp, iat, and nbf claims data = jwt.decode(encoded_token, secret, algorithms=[algorithm]) # Make sure that any custom claims we expect in the token are present if 'jti' not in data: raise JWTDecodeError("Missing claim: jti") if identity_claim_key not in data: raise JWTDecodeError("Missing claim: {}".format(identity_claim_key)) if 'type' not in data or data['type'] not in ('refresh', 'access'): raise JWTDecodeError("Missing or invalid claim: type") if user_claims_key not in data: data[user_claims_key] = {} return data
Decodes an encoded JWT :param encoded_token: The encoded JWT string to decode :param secret: Secret key used to encode the JWT :param algorithm: Algorithm used to encode the JWT :param identity_claim_key: expected key that contains the identity :param user_claims_key: expected key that contains the user claims :return: Dictionary containing contents of the JWT
Below is the instruction that describes the task: ### Input: Decodes an encoded JWT :param encoded_token: The encoded JWT string to decode :param secret: Secret key used to encode the JWT :param algorithm: Algorithm used to encode the JWT :param identity_claim_key: expected key that contains the identity :param user_claims_key: expected key that contains the user claims :return: Dictionary containing contents of the JWT ### Response: def decode_jwt(encoded_token, secret, algorithm, identity_claim_key, user_claims_key): """ Decodes an encoded JWT :param encoded_token: The encoded JWT string to decode :param secret: Secret key used to encode the JWT :param algorithm: Algorithm used to encode the JWT :param identity_claim_key: expected key that contains the identity :param user_claims_key: expected key that contains the user claims :return: Dictionary containing contents of the JWT """ # This call verifies the exp, iat, and nbf claims data = jwt.decode(encoded_token, secret, algorithms=[algorithm]) # Make sure that any custom claims we expect in the token are present if 'jti' not in data: raise JWTDecodeError("Missing claim: jti") if identity_claim_key not in data: raise JWTDecodeError("Missing claim: {}".format(identity_claim_key)) if 'type' not in data or data['type'] not in ('refresh', 'access'): raise JWTDecodeError("Missing or invalid claim: type") if user_claims_key not in data: data[user_claims_key] = {} return data
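The custom-claim checks that follow `jwt.decode` are plain dictionary validation, so they can be exercised without a JWT library. The sketch below reproduces just that part (the `JWTDecodeError` class here is a local stand-in for the library's exception):

```python
class JWTDecodeError(Exception):
    pass

def validate_claims(data, identity_claim_key='identity',
                    user_claims_key='user_claims'):
    """Mirror the post-decode checks: jti, identity, and type claims
    must be present; user claims default to an empty dict."""
    if 'jti' not in data:
        raise JWTDecodeError("Missing claim: jti")
    if identity_claim_key not in data:
        raise JWTDecodeError("Missing claim: {}".format(identity_claim_key))
    if 'type' not in data or data['type'] not in ('refresh', 'access'):
        raise JWTDecodeError("Missing or invalid claim: type")
    if user_claims_key not in data:
        data[user_claims_key] = {}
    return data

token = {'jti': 'abc', 'identity': 'alice', 'type': 'access'}
print(validate_claims(token)['user_claims'])  # → {}
```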
def setup_smp(self): """ setup observations from PEST-style SMP file pairs """ if self.obssim_smp_pairs is None: return if len(self.obssim_smp_pairs) == 2: if isinstance(self.obssim_smp_pairs[0],str): self.obssim_smp_pairs = [self.obssim_smp_pairs] for obs_smp,sim_smp in self.obssim_smp_pairs: self.log("processing {0} and {1} smp files".format(obs_smp,sim_smp)) if not os.path.exists(obs_smp): self.logger.lraise("couldn't find obs smp: {0}".format(obs_smp)) if not os.path.exists(sim_smp): self.logger.lraise("couldn't find sim smp: {0}".format(sim_smp)) new_obs_smp = os.path.join(self.m.model_ws, os.path.split(obs_smp)[-1]) shutil.copy2(obs_smp,new_obs_smp) new_sim_smp = os.path.join(self.m.model_ws, os.path.split(sim_smp)[-1]) shutil.copy2(sim_smp,new_sim_smp) pyemu.smp_utils.smp_to_ins(new_sim_smp)
setup observations from PEST-style SMP file pairs
Below is the instruction that describes the task: ### Input: setup observations from PEST-style SMP file pairs ### Response: def setup_smp(self): """ setup observations from PEST-style SMP file pairs """ if self.obssim_smp_pairs is None: return if len(self.obssim_smp_pairs) == 2: if isinstance(self.obssim_smp_pairs[0],str): self.obssim_smp_pairs = [self.obssim_smp_pairs] for obs_smp,sim_smp in self.obssim_smp_pairs: self.log("processing {0} and {1} smp files".format(obs_smp,sim_smp)) if not os.path.exists(obs_smp): self.logger.lraise("couldn't find obs smp: {0}".format(obs_smp)) if not os.path.exists(sim_smp): self.logger.lraise("couldn't find sim smp: {0}".format(sim_smp)) new_obs_smp = os.path.join(self.m.model_ws, os.path.split(obs_smp)[-1]) shutil.copy2(obs_smp,new_obs_smp) new_sim_smp = os.path.join(self.m.model_ws, os.path.split(sim_smp)[-1]) shutil.copy2(sim_smp,new_sim_smp) pyemu.smp_utils.smp_to_ins(new_sim_smp)
def from_ashrae_revised_clear_sky(cls, location, monthly_tau_beam, monthly_tau_diffuse, timestep=1, is_leap_year=False): """Create a wea object representing an ASHRAE Revised Clear Sky ("Tau Model") ASHRAE Revised Clear Skies are intended to determine peak solar load and sizing parameters for HVAC systems. The revised clear sky is currently the default recommended sky model used to autosize HVAC systems in EnergyPlus. For more information on the ASHRAE Revised Clear Sky model, see the EnergyPlus Engineering Reference: https://bigladdersoftware.com/epx/docs/8-9/engineering-reference/climate-calculations.html Args: location: Ladybug location object. monthly_tau_beam: A list of 12 float values indicating the beam optical depth of the sky at each month of the year. monthly_tau_diffuse: A list of 12 float values indicating the diffuse optical depth of the sky at each month of the year. timestep: An optional integer to set the number of time steps per hour. Default is 1 for one value per hour. is_leap_year: A boolean to indicate if values are representing a leap year. Default is False. 
""" # extract metadata metadata = {'source': location.source, 'country': location.country, 'city': location.city} # create sunpath and get altitude at every timestep of the year sp = Sunpath.from_location(location) sp.is_leap_year = is_leap_year altitudes = [[] for i in range(12)] dates = cls._get_datetimes(timestep, is_leap_year) for t_date in dates: sun = sp.calculate_sun_from_date_time(t_date) altitudes[sun.datetime.month - 1].append(sun.altitude) # run all of the months through the ashrae_revised_clear_sky model direct_norm, diffuse_horiz = [], [] for i_mon, alt_list in enumerate(altitudes): dir_norm_rad, dif_horiz_rad = ashrae_revised_clear_sky( alt_list, monthly_tau_beam[i_mon], monthly_tau_diffuse[i_mon]) direct_norm.extend(dir_norm_rad) diffuse_horiz.extend(dif_horiz_rad) direct_norm_rad, diffuse_horiz_rad = \ cls._get_data_collections(direct_norm, diffuse_horiz, metadata, timestep, is_leap_year) return cls(location, direct_norm_rad, diffuse_horiz_rad, timestep, is_leap_year)
Create a wea object representing an ASHRAE Revised Clear Sky ("Tau Model") ASHRAE Revised Clear Skies are intended to determine peak solar load and sizing parameters for HVAC systems. The revised clear sky is currently the default recommended sky model used to autosize HVAC systems in EnergyPlus. For more information on the ASHRAE Revised Clear Sky model, see the EnergyPlus Engineering Reference: https://bigladdersoftware.com/epx/docs/8-9/engineering-reference/climate-calculations.html Args: location: Ladybug location object. monthly_tau_beam: A list of 12 float values indicating the beam optical depth of the sky at each month of the year. monthly_tau_diffuse: A list of 12 float values indicating the diffuse optical depth of the sky at each month of the year. timestep: An optional integer to set the number of time steps per hour. Default is 1 for one value per hour. is_leap_year: A boolean to indicate if values are representing a leap year. Default is False.
Below is the instruction that describes the task: ### Input: Create a wea object representing an ASHRAE Revised Clear Sky ("Tau Model") ASHRAE Revised Clear Skies are intended to determine peak solar load and sizing parameters for HVAC systems. The revised clear sky is currently the default recommended sky model used to autosize HVAC systems in EnergyPlus. For more information on the ASHRAE Revised Clear Sky model, see the EnergyPlus Engineering Reference: https://bigladdersoftware.com/epx/docs/8-9/engineering-reference/climate-calculations.html Args: location: Ladybug location object. monthly_tau_beam: A list of 12 float values indicating the beam optical depth of the sky at each month of the year. monthly_tau_diffuse: A list of 12 float values indicating the diffuse optical depth of the sky at each month of the year. timestep: An optional integer to set the number of time steps per hour. Default is 1 for one value per hour. is_leap_year: A boolean to indicate if values are representing a leap year. Default is False. ### Response: def from_ashrae_revised_clear_sky(cls, location, monthly_tau_beam, monthly_tau_diffuse, timestep=1, is_leap_year=False): """Create a wea object representing an ASHRAE Revised Clear Sky ("Tau Model") ASHRAE Revised Clear Skies are intended to determine peak solar load and sizing parameters for HVAC systems. The revised clear sky is currently the default recommended sky model used to autosize HVAC systems in EnergyPlus. For more information on the ASHRAE Revised Clear Sky model, see the EnergyPlus Engineering Reference: https://bigladdersoftware.com/epx/docs/8-9/engineering-reference/climate-calculations.html Args: location: Ladybug location object. monthly_tau_beam: A list of 12 float values indicating the beam optical depth of the sky at each month of the year. monthly_tau_diffuse: A list of 12 float values indicating the diffuse optical depth of the sky at each month of the year. 
timestep: An optional integer to set the number of time steps per hour. Default is 1 for one value per hour. is_leap_year: A boolean to indicate if values are representing a leap year. Default is False. """ # extract metadata metadata = {'source': location.source, 'country': location.country, 'city': location.city} # create sunpath and get altitude at every timestep of the year sp = Sunpath.from_location(location) sp.is_leap_year = is_leap_year altitudes = [[] for i in range(12)] dates = cls._get_datetimes(timestep, is_leap_year) for t_date in dates: sun = sp.calculate_sun_from_date_time(t_date) altitudes[sun.datetime.month - 1].append(sun.altitude) # run all of the months through the ashrae_revised_clear_sky model direct_norm, diffuse_horiz = [], [] for i_mon, alt_list in enumerate(altitudes): dir_norm_rad, dif_horiz_rad = ashrae_revised_clear_sky( alt_list, monthly_tau_beam[i_mon], monthly_tau_diffuse[i_mon]) direct_norm.extend(dir_norm_rad) diffuse_horiz.extend(dif_horiz_rad) direct_norm_rad, diffuse_horiz_rad = \ cls._get_data_collections(direct_norm, diffuse_horiz, metadata, timestep, is_leap_year) return cls(location, direct_norm_rad, diffuse_horiz_rad, timestep, is_leap_year)
def create_dispatcher(self): """ Return a dispatcher for configured channels. """ before_context = max(self.args.before_context, self.args.context) after_context = max(self.args.after_context, self.args.context) if self.args.files_with_match is not None or self.args.count or self.args.only_matching or self.args.quiet: # Sending of log lines disabled by arguments return UnbufferedDispatcher(self._channels) elif before_context == 0 and after_context == 0: # Don't need line buffering return UnbufferedDispatcher(self._channels) elif self.args.thread: return ThreadedDispatcher(self._channels, before_context, after_context) else: return LineBufferDispatcher(self._channels, before_context, after_context)
Return a dispatcher for configured channels.
Below is the instruction that describes the task: ### Input: Return a dispatcher for configured channels. ### Response: def create_dispatcher(self): """ Return a dispatcher for configured channels. """ before_context = max(self.args.before_context, self.args.context) after_context = max(self.args.after_context, self.args.context) if self.args.files_with_match is not None or self.args.count or self.args.only_matching or self.args.quiet: # Sending of log lines disabled by arguments return UnbufferedDispatcher(self._channels) elif before_context == 0 and after_context == 0: # Don't need line buffering return UnbufferedDispatcher(self._channels) elif self.args.thread: return ThreadedDispatcher(self._channels, before_context, after_context) else: return LineBufferDispatcher(self._channels, before_context, after_context)
def _num_players(self): """Compute number of players, both human and computer.""" self._player_num = 0 self._computer_num = 0 for player in self._header.scenario.game_settings.player_info: if player.type == 'human': self._player_num += 1 elif player.type == 'computer': self._computer_num += 1
Compute number of players, both human and computer.
Below is the instruction that describes the task: ### Input: Compute number of players, both human and computer. ### Response: def _num_players(self): """Compute number of players, both human and computer.""" self._player_num = 0 self._computer_num = 0 for player in self._header.scenario.game_settings.player_info: if player.type == 'human': self._player_num += 1 elif player.type == 'computer': self._computer_num += 1
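Tallying players by type is a textbook use of `collections.Counter`, which condenses the counting loop above into one pass (the `player_info` here is a stand-in list of objects with a `type` attribute, not the real parsed header):

```python
from collections import Counter
from types import SimpleNamespace

def count_players(player_info):
    """Tally human and computer players in one pass."""
    counts = Counter(p.type for p in player_info)
    # Counter returns 0 for absent keys, so no pre-initialization is needed.
    return counts['human'], counts['computer']

players = [SimpleNamespace(type=t)
           for t in ('human', 'computer', 'human', 'closed')]
print(count_players(players))  # → (2, 1)
```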
def validate_plugin(self, plugin_class, experimental=False): """ Verifies that the plugin_class should execute under this policy """ valid_subclasses = [IndependentPlugin] + self.valid_subclasses if experimental: valid_subclasses += [ExperimentalPlugin] return any(issubclass(plugin_class, class_) for class_ in valid_subclasses)
Verifies that the plugin_class should execute under this policy
Below is the instruction that describes the task: ### Input: Verifies that the plugin_class should execute under this policy ### Response: def validate_plugin(self, plugin_class, experimental=False): """ Verifies that the plugin_class should execute under this policy """ valid_subclasses = [IndependentPlugin] + self.valid_subclasses if experimental: valid_subclasses += [ExperimentalPlugin] return any(issubclass(plugin_class, class_) for class_ in valid_subclasses)
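The `any(issubclass(...))` idiom above accepts a plugin if it derives from any class on an allow-list, with experimental plugins opt-in. A self-contained sketch with stand-in plugin base classes (the class names below are illustrative, not the real sos plugin hierarchy):

```python
class IndependentPlugin: pass
class ExperimentalPlugin: pass
class RedHatPlugin(IndependentPlugin): pass  # hypothetical policy-specific base

def validate_plugin(plugin_class, valid_subclasses, experimental=False):
    """Accept plugin_class if it subclasses any allowed base."""
    allowed = [IndependentPlugin] + valid_subclasses
    if experimental:
        allowed += [ExperimentalPlugin]
    return any(issubclass(plugin_class, c) for c in allowed)

class MyPlugin(RedHatPlugin): pass
class Experimental(ExperimentalPlugin): pass

print(validate_plugin(MyPlugin, [RedHatPlugin]))             # → True
print(validate_plugin(Experimental, [RedHatPlugin]))         # → False
print(validate_plugin(Experimental, [RedHatPlugin], True))   # → True
```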
def get_mac_acl_for_intf_input_interface_name(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") get_mac_acl_for_intf = ET.Element("get_mac_acl_for_intf") config = get_mac_acl_for_intf input = ET.SubElement(get_mac_acl_for_intf, "input") interface_name = ET.SubElement(input, "interface-name") interface_name.text = kwargs.pop('interface_name') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def get_mac_acl_for_intf_input_interface_name(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") get_mac_acl_for_intf = ET.Element("get_mac_acl_for_intf") config = get_mac_acl_for_intf input = ET.SubElement(get_mac_acl_for_intf, "input") interface_name = ET.SubElement(input, "interface-name") interface_name.text = kwargs.pop('interface_name') callback = kwargs.pop('callback', self._callback) return callback(config)
def write_message(self, msg, timeout=None): """Write an arbitrary message (of one of the types above). For the host side implementation, this will only ever be a DataMessage, but it's implemented generically enough here that you could use FilesyncTransport to implement the device side if you wanted. Args: msg: The message to send, must be one of the types above. timeout: timeouts.PolledTimeout to use for the operation. """ replace_dict = {'command': self.CMD_TO_WIRE[msg.command]} if msg.has_data: # Swap out data for the data length for the wire. data = msg[-1] replace_dict[msg._fields[-1]] = len(data) self.stream.write(struct.pack(msg.struct_format, *msg._replace(**replace_dict)), timeout) if msg.has_data: self.stream.write(data, timeout)
Write an arbitrary message (of one of the types above). For the host side implementation, this will only ever be a DataMessage, but it's implemented generically enough here that you could use FilesyncTransport to implement the device side if you wanted. Args: msg: The message to send, must be one of the types above. timeout: timeouts.PolledTimeout to use for the operation.
Below is the instruction that describes the task: ### Input: Write an arbitrary message (of one of the types above). For the host side implementation, this will only ever be a DataMessage, but it's implemented generically enough here that you could use FilesyncTransport to implement the device side if you wanted. Args: msg: The message to send, must be one of the types above. timeout: timeouts.PolledTimeout to use for the operation. ### Response: def write_message(self, msg, timeout=None): """Write an arbitrary message (of one of the types above). For the host side implementation, this will only ever be a DataMessage, but it's implemented generically enough here that you could use FilesyncTransport to implement the device side if you wanted. Args: msg: The message to send, must be one of the types above. timeout: timeouts.PolledTimeout to use for the operation. """ replace_dict = {'command': self.CMD_TO_WIRE[msg.command]} if msg.has_data: # Swap out data for the data length for the wire. data = msg[-1] replace_dict[msg._fields[-1]] = len(data) self.stream.write(struct.pack(msg.struct_format, *msg._replace(**replace_dict)), timeout) if msg.has_data: self.stream.write(data, timeout)
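The trick in `write_message` — swap the data payload for its length via `_replace`, pack the fixed-size header, then write the payload separately — can be shown with plain `struct` and a namedtuple. The format string and command table below are illustrative stand-ins, not the real filesync wire format:

```python
import struct
from collections import namedtuple

DataMessage = namedtuple('DataMessage', ['command', 'data'])
STRUCT_FORMAT = '<4sI'            # illustrative: 4-byte command + uint32 length
CMD_TO_WIRE = {'DATA': b'DATA'}

def pack_message(msg):
    """Pack the header with the payload length, then append the payload."""
    header = struct.pack(
        STRUCT_FORMAT,
        *msg._replace(command=CMD_TO_WIRE[msg.command], data=len(msg.data)))
    return header + msg.data

wire = pack_message(DataMessage('DATA', b'hello'))
print(wire)  # → b'DATA\x05\x00\x00\x00hello'
```

Because `_replace` returns a new tuple, the original message is untouched; only the copy handed to `struct.pack` carries the length in the data slot.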
def get(self): """Return current profiler statistics.""" sort = self.get_argument('sort', 'cum_time') count = self.get_argument('count', 20) strip_dirs = self.get_argument('strip_dirs', True) error = '' sorts = ('num_calls', 'cum_time', 'total_time', 'cum_time_per_call', 'total_time_per_call') if sort not in sorts: error += "Invalid `sort` '%s', must be in %s." % (sort, sorts) try: count = int(count) except (ValueError, TypeError): error += "Can't cast `count` '%s' to int." % count if count <= 0: count = None strip_dirs = str(strip_dirs).lower() not in ('false', 'no', 'none', 'null', '0', '') if error: self.write({'error': error}) self.set_status(400) self.finish() return try: statistics = get_profiler_statistics(sort, count, strip_dirs) self.write({'statistics': statistics}) self.set_status(200) except TypeError: logger.exception('Error while retrieving profiler statistics') self.write({'error': 'No stats available. Start and stop the profiler before trying to retrieve stats.'}) self.set_status(404) self.finish()
Return current profiler statistics.
Below is the instruction that describes the task: ### Input: Return current profiler statistics. ### Response: def get(self): """Return current profiler statistics.""" sort = self.get_argument('sort', 'cum_time') count = self.get_argument('count', 20) strip_dirs = self.get_argument('strip_dirs', True) error = '' sorts = ('num_calls', 'cum_time', 'total_time', 'cum_time_per_call', 'total_time_per_call') if sort not in sorts: error += "Invalid `sort` '%s', must be in %s." % (sort, sorts) try: count = int(count) except (ValueError, TypeError): error += "Can't cast `count` '%s' to int." % count if count <= 0: count = None strip_dirs = str(strip_dirs).lower() not in ('false', 'no', 'none', 'null', '0', '') if error: self.write({'error': error}) self.set_status(400) self.finish() return try: statistics = get_profiler_statistics(sort, count, strip_dirs) self.write({'statistics': statistics}) self.set_status(200) except TypeError: logger.exception('Error while retrieving profiler statistics') self.write({'error': 'No stats available. Start and stop the profiler before trying to retrieve stats.'}) self.set_status(404) self.finish()
def scores_to_preds(self, threshold, use_probs = True): """ use_probs : boolean, default True if True, use probabilities for predictions, else use scores. """ self.threshold = threshold if use_probs: if self.probs is None: raise DataError("Probabilities are not available to make " "predictions.") else: word = "probabilities" scores = self.probs else: if self.scores is None: raise DataError("Scores are not available to make predictions.") else: word = "scores" scores = self.scores if threshold > np.max(scores) or threshold < np.min(scores): warnings.warn("Threshold {} is outside the range of the " "{}.".format(self.threshold, word)) if self.preds is not None: warnings.warn("Overwriting predictions") self.preds = (scores >= threshold)*1
use_probs : boolean, default True if True, use probabilities for predictions, else use scores.
Below is the instruction that describes the task: ### Input: use_probs : boolean, default True if True, use probabilities for predictions, else use scores. ### Response: def scores_to_preds(self, threshold, use_probs = True): """ use_probs : boolean, default True if True, use probabilities for predictions, else use scores. """ self.threshold = threshold if use_probs: if self.probs is None: raise DataError("Probabilities are not available to make " "predictions.") else: word = "probabilities" scores = self.probs else: if self.scores is None: raise DataError("Scores are not available to make predictions.") else: word = "scores" scores = self.scores if threshold > np.max(scores) or threshold < np.min(scores): warnings.warn("Threshold {} is outside the range of the " "{}.".format(self.threshold, word)) if self.preds is not None: warnings.warn("Overwriting predictions") self.preds = (scores >= threshold)*1
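The core of `scores_to_preds` is the elementwise binarization `(scores >= threshold) * 1`. A NumPy-free sketch of the same inclusive-threshold semantics, including the out-of-range warning (simplified here to a `warnings.warn` on a plain list):

```python
import warnings

def threshold_preds(scores, threshold):
    """Binarize scores: 1 where score >= threshold, else 0."""
    lo, hi = min(scores), max(scores)
    if not (lo <= threshold <= hi):
        warnings.warn("Threshold {} is outside the range [{}, {}]."
                      .format(threshold, lo, hi))
    return [1 if s >= threshold else 0 for s in scores]

print(threshold_preds([0.1, 0.5, 0.9, 0.5], 0.5))  # → [0, 1, 1, 1]
```

Note the threshold is inclusive, matching `>=` in the original; a score exactly at the threshold is predicted positive.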
def post_comments(self, post_id, after='', order="chronological", filter="stream", fields=None, **params): """ :param post_id: :param after: :param order: Can be 'ranked', 'chronological', 'reverse_chronological' :param filter: Can be 'stream', 'toplevel' :param fields: Can be 'id', 'application', 'attachment', 'can_comment', 'can_remove', 'can_hide', 'can_like', 'can_reply_privately', 'comments', 'comment_count', 'created_time', 'from', 'likes', 'like_count', 'live_broadcast_timestamp', 'message', 'message_tags', 'object', 'parent', 'private_reply_conversation', 'user_likes' :param params: :return: """ if fields: fields = ",".join(fields) parameters = {"access_token": self.key, "after": after, "order": order, "fields": fields, "filter": filter} parameters = self.merge_params(parameters, params) return self.api_call('%s/comments' % post_id, parameters)
:param post_id: :param after: :param order: Can be 'ranked', 'chronological', 'reverse_chronological' :param filter: Can be 'stream', 'toplevel' :param fields: Can be 'id', 'application', 'attachment', 'can_comment', 'can_remove', 'can_hide', 'can_like', 'can_reply_privately', 'comments', 'comment_count', 'created_time', 'from', 'likes', 'like_count', 'live_broadcast_timestamp', 'message', 'message_tags', 'object', 'parent', 'private_reply_conversation', 'user_likes' :param params: :return:
Below is the instruction that describes the task: ### Input: :param post_id: :param after: :param order: Can be 'ranked', 'chronological', 'reverse_chronological' :param filter: Can be 'stream', 'toplevel' :param fields: Can be 'id', 'application', 'attachment', 'can_comment', 'can_remove', 'can_hide', 'can_like', 'can_reply_privately', 'comments', 'comment_count', 'created_time', 'from', 'likes', 'like_count', 'live_broadcast_timestamp', 'message', 'message_tags', 'object', 'parent', 'private_reply_conversation', 'user_likes' :param params: :return: ### Response: def post_comments(self, post_id, after='', order="chronological", filter="stream", fields=None, **params): """ :param post_id: :param after: :param order: Can be 'ranked', 'chronological', 'reverse_chronological' :param filter: Can be 'stream', 'toplevel' :param fields: Can be 'id', 'application', 'attachment', 'can_comment', 'can_remove', 'can_hide', 'can_like', 'can_reply_privately', 'comments', 'comment_count', 'created_time', 'from', 'likes', 'like_count', 'live_broadcast_timestamp', 'message', 'message_tags', 'object', 'parent', 'private_reply_conversation', 'user_likes' :param params: :return: """ if fields: fields = ",".join(fields) parameters = {"access_token": self.key, "after": after, "order": order, "fields": fields, "filter": filter} parameters = self.merge_params(parameters, params) return self.api_call('%s/comments' % post_id, parameters)
def OnTogglePlay(self, event): """Toggles the video status between play and hold""" if self.player.get_state() == vlc.State.Playing: self.player.pause() else: self.player.play() event.Skip()
Toggles the video status between play and hold
Below is the instruction that describes the task: ### Input: Toggles the video status between play and hold ### Response: def OnTogglePlay(self, event): """Toggles the video status between play and hold""" if self.player.get_state() == vlc.State.Playing: self.player.pause() else: self.player.play() event.Skip()
def keys(self): """Keys Returns a list of the node names in the parent Returns: list """ if hasattr(self._nodes, 'iterkeys'): return self._nodes.keys() else: return tuple(self._nodes.keys())
Keys Returns a list of the node names in the parent Returns: list
Below is the instruction that describes the task: ### Input: Keys Returns a list of the node names in the parent Returns: list ### Response: def keys(self): """Keys Returns a list of the node names in the parent Returns: list """ if hasattr(self._nodes, 'iterkeys'): return self._nodes.keys() else: return tuple(self._nodes.keys())
def computed_fitting_parameters(self): """ A list identical to what is set with `scipy_data_fitting.Fit.fitting_parameters`, but in each dictionary, the key `value` is added with the fitted value of the quantity. The reported value is scaled by the inverse prefix. """ fitted_parameters = [] for (i, v) in enumerate(self.fitting_parameters): param = v.copy() param['value'] = self.fitted_parameters[i] * prefix_factor(param)**(-1) fitted_parameters.append(param) return fitted_parameters
A list identical to what is set with `scipy_data_fitting.Fit.fitting_parameters`, but in each dictionary, the key `value` is added with the fitted value of the quantity. The reported value is scaled by the inverse prefix.
Below is the instruction that describes the task: ### Input: A list identical to what is set with `scipy_data_fitting.Fit.fitting_parameters`, but in each dictionary, the key `value` is added with the fitted value of the quantity. The reported value is scaled by the inverse prefix. ### Response: def computed_fitting_parameters(self): """ A list identical to what is set with `scipy_data_fitting.Fit.fitting_parameters`, but in each dictionary, the key `value` is added with the fitted value of the quantity. The reported value is scaled by the inverse prefix. """ fitted_parameters = [] for (i, v) in enumerate(self.fitting_parameters): param = v.copy() param['value'] = self.fitted_parameters[i] * prefix_factor(param)**(-1) fitted_parameters.append(param) return fitted_parameters
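The `prefix_factor(param) ** (-1)` scaling rescales each fitted value back into the units declared for that parameter. A sketch of the pattern with a hypothetical SI-prefix lookup standing in for the library's `prefix_factor` (the prefix table and dict keys below are assumptions for illustration):

```python
PREFIXES = {'milli': 1e-3, 'kilo': 1e3, '': 1.0}  # hypothetical lookup

def prefix_factor(param):
    return PREFIXES[param.get('prefix', '')]

def computed_fitting_parameters(fitting_parameters, fitted_values):
    """Attach each fitted value, scaled by the inverse prefix factor."""
    out = []
    for param, value in zip(fitting_parameters, fitted_values):
        p = param.copy()  # copy so the input dicts stay untouched
        p['value'] = value * prefix_factor(p) ** (-1)
        out.append(p)
    return out

params = [{'name': 'tau', 'prefix': 'milli'}, {'name': 'gain'}]
result = computed_fitting_parameters(params, [0.002, 3.0])
print([p['value'] for p in result])
```

So a fit that returns 0.002 (in base units) for a milli-prefixed parameter is reported as roughly 2.0 milli-units.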
def getdict(self, crop=True): """Get final dictionary. If ``crop`` is ``True``, apply :func:`.cnvrep.bcrop` to returned array. """ global mp_D_Y0 D = mp_D_Y0 if crop: D = cr.bcrop(D, self.dstep.cri.dsz, self.dstep.cri.dimN) return D
Get final dictionary. If ``crop`` is ``True``, apply :func:`.cnvrep.bcrop` to returned array.
Below is the instruction that describes the task: ### Input: Get final dictionary. If ``crop`` is ``True``, apply :func:`.cnvrep.bcrop` to returned array. ### Response: def getdict(self, crop=True): """Get final dictionary. If ``crop`` is ``True``, apply :func:`.cnvrep.bcrop` to returned array. """ global mp_D_Y0 D = mp_D_Y0 if crop: D = cr.bcrop(D, self.dstep.cri.dsz, self.dstep.cri.dimN) return D
def allocated_chunks(self): """ Returns an iterator over all the allocated chunks in the heap. """ raise NotImplementedError("%s not implemented for %s" % (self.allocated_chunks.__func__.__name__, self.__class__.__name__))
Returns an iterator over all the allocated chunks in the heap.
Below is the instruction that describes the task: ### Input: Returns an iterator over all the allocated chunks in the heap. ### Response: def allocated_chunks(self): """ Returns an iterator over all the allocated chunks in the heap. """ raise NotImplementedError("%s not implemented for %s" % (self.allocated_chunks.__func__.__name__, self.__class__.__name__))
def import_txt(cls, txt_file, feed, filter_func=None): '''Import from the GTFS text file''' # Setup the conversion from GTFS to Django Format # Conversion functions def no_convert(value): return value def date_convert(value): return datetime.strptime(value, '%Y%m%d') def bool_convert(value): return (value == '1') def char_convert(value): return (value or '') def null_convert(value): return (value or None) def point_convert(value): """Convert latitude / longitude, strip leading +.""" if value.startswith('+'): return value[1:] else: return (value or 0.0) cache = {} def default_convert(field): def get_value_or_default(value): if value == '' or value is None: return field.get_default() else: return value return get_value_or_default def instance_convert(field, feed, rel_name): def get_instance(value): if value.strip(): related = field.related_model key1 = "{}:{}".format(related.__name__, rel_name) key2 = text_type(value) # Load existing objects if key1 not in cache: pairs = related.objects.filter( **{related._rel_to_feed: feed}).values_list( rel_name, 'id') cache[key1] = dict((text_type(x), i) for x, i in pairs) # Create new? 
if key2 not in cache[key1]: kwargs = { related._rel_to_feed: feed, rel_name: value} cache[key1][key2] = related.objects.create( **kwargs).id return cache[key1][key2] else: return None return get_instance # Check unique fields column_names = [c for c, _ in cls._column_map] for unique_field in cls._unique_fields: assert unique_field in column_names, \ '{} not in {}'.format(unique_field, column_names) # Map of field_name to converters from GTFS to Django format val_map = dict() name_map = dict() point_map = dict() for csv_name, field_pattern in cls._column_map: # Separate the local field name from foreign columns if '__' in field_pattern: field_base, rel_name = field_pattern.split('__', 1) field_name = field_base + '_id' else: field_name = field_base = field_pattern # Use the field name in the name mapping name_map[csv_name] = field_name # Is it a point field? point_match = re_point.match(field_name) if point_match: field = None else: field = cls._meta.get_field(field_base) # Pick a conversion function for the field if point_match: converter = point_convert elif isinstance(field, models.DateField): converter = date_convert elif isinstance(field, models.BooleanField): converter = bool_convert elif isinstance(field, models.CharField): converter = char_convert elif field.is_relation: converter = instance_convert(field, feed, rel_name) assert not isinstance(field, models.ManyToManyField) elif field.null: converter = null_convert elif field.has_default(): converter = default_convert(field) else: converter = no_convert if point_match: index = int(point_match.group('index')) point_map[csv_name] = (index, converter) else: val_map[csv_name] = converter # Read and convert the source txt csv_reader = reader(txt_file, skipinitialspace=True) unique_line = dict() count = 0 first = True extra_counts = defaultdict(int) new_objects = [] for row in csv_reader: if first: # Read the columns columns = row if columns[0].startswith(CSV_BOM): columns[0] = columns[0][len(CSV_BOM):] first = 
False continue if filter_func and not filter_func(zip(columns, row)): continue if not row: continue # Read a data row fields = dict() point_coords = [None, None] ukey_values = {} if cls._rel_to_feed == 'feed': fields['feed'] = feed for column_name, value in zip(columns, row): if column_name not in name_map: val = null_convert(value) if val is not None: fields.setdefault('extra_data', {})[column_name] = val extra_counts[column_name] += 1 elif column_name in val_map: fields[name_map[column_name]] = val_map[column_name](value) else: assert column_name in point_map pos, converter = point_map[column_name] point_coords[pos] = converter(value) # Is it part of the unique key? if column_name in cls._unique_fields: ukey_values[column_name] = value # Join the lat/long into a point if point_map: assert point_coords[0] and point_coords[1] fields['point'] = "POINT(%s)" % (' '.join(point_coords)) # Is the item unique? ukey = tuple(ukey_values.get(u) for u in cls._unique_fields) if ukey in unique_line: logger.warning( '%s line %d is a duplicate of line %d, not imported.', cls._filename, csv_reader.line_num, unique_line[ukey]) continue else: unique_line[ukey] = csv_reader.line_num # Create after accumulating a batch new_objects.append(cls(**fields)) if len(new_objects) % batch_size == 0: # pragma: no cover cls.objects.bulk_create(new_objects) count += len(new_objects) logger.info( "Imported %d %s", count, cls._meta.verbose_name_plural) new_objects = [] # Create remaining objects if new_objects: cls.objects.bulk_create(new_objects) # Take note of extra fields if extra_counts: extra_columns = feed.meta.setdefault( 'extra_columns', {}).setdefault(cls.__name__, []) for column in columns: if column in extra_counts and column not in extra_columns: extra_columns.append(column) feed.save() return len(unique_line)
Import from the GTFS text file
Below is the instruction that describes the task: ### Input: Import from the GTFS text file ### Response: def import_txt(cls, txt_file, feed, filter_func=None): '''Import from the GTFS text file''' # Setup the conversion from GTFS to Django Format # Conversion functions def no_convert(value): return value def date_convert(value): return datetime.strptime(value, '%Y%m%d') def bool_convert(value): return (value == '1') def char_convert(value): return (value or '') def null_convert(value): return (value or None) def point_convert(value): """Convert latitude / longitude, strip leading +.""" if value.startswith('+'): return value[1:] else: return (value or 0.0) cache = {} def default_convert(field): def get_value_or_default(value): if value == '' or value is None: return field.get_default() else: return value return get_value_or_default def instance_convert(field, feed, rel_name): def get_instance(value): if value.strip(): related = field.related_model key1 = "{}:{}".format(related.__name__, rel_name) key2 = text_type(value) # Load existing objects if key1 not in cache: pairs = related.objects.filter( **{related._rel_to_feed: feed}).values_list( rel_name, 'id') cache[key1] = dict((text_type(x), i) for x, i in pairs) # Create new? 
if key2 not in cache[key1]: kwargs = { related._rel_to_feed: feed, rel_name: value} cache[key1][key2] = related.objects.create( **kwargs).id return cache[key1][key2] else: return None return get_instance # Check unique fields column_names = [c for c, _ in cls._column_map] for unique_field in cls._unique_fields: assert unique_field in column_names, \ '{} not in {}'.format(unique_field, column_names) # Map of field_name to converters from GTFS to Django format val_map = dict() name_map = dict() point_map = dict() for csv_name, field_pattern in cls._column_map: # Separate the local field name from foreign columns if '__' in field_pattern: field_base, rel_name = field_pattern.split('__', 1) field_name = field_base + '_id' else: field_name = field_base = field_pattern # Use the field name in the name mapping name_map[csv_name] = field_name # Is it a point field? point_match = re_point.match(field_name) if point_match: field = None else: field = cls._meta.get_field(field_base) # Pick a conversion function for the field if point_match: converter = point_convert elif isinstance(field, models.DateField): converter = date_convert elif isinstance(field, models.BooleanField): converter = bool_convert elif isinstance(field, models.CharField): converter = char_convert elif field.is_relation: converter = instance_convert(field, feed, rel_name) assert not isinstance(field, models.ManyToManyField) elif field.null: converter = null_convert elif field.has_default(): converter = default_convert(field) else: converter = no_convert if point_match: index = int(point_match.group('index')) point_map[csv_name] = (index, converter) else: val_map[csv_name] = converter # Read and convert the source txt csv_reader = reader(txt_file, skipinitialspace=True) unique_line = dict() count = 0 first = True extra_counts = defaultdict(int) new_objects = [] for row in csv_reader: if first: # Read the columns columns = row if columns[0].startswith(CSV_BOM): columns[0] = columns[0][len(CSV_BOM):] first = 
False continue if filter_func and not filter_func(zip(columns, row)): continue if not row: continue # Read a data row fields = dict() point_coords = [None, None] ukey_values = {} if cls._rel_to_feed == 'feed': fields['feed'] = feed for column_name, value in zip(columns, row): if column_name not in name_map: val = null_convert(value) if val is not None: fields.setdefault('extra_data', {})[column_name] = val extra_counts[column_name] += 1 elif column_name in val_map: fields[name_map[column_name]] = val_map[column_name](value) else: assert column_name in point_map pos, converter = point_map[column_name] point_coords[pos] = converter(value) # Is it part of the unique key? if column_name in cls._unique_fields: ukey_values[column_name] = value # Join the lat/long into a point if point_map: assert point_coords[0] and point_coords[1] fields['point'] = "POINT(%s)" % (' '.join(point_coords)) # Is the item unique? ukey = tuple(ukey_values.get(u) for u in cls._unique_fields) if ukey in unique_line: logger.warning( '%s line %d is a duplicate of line %d, not imported.', cls._filename, csv_reader.line_num, unique_line[ukey]) continue else: unique_line[ukey] = csv_reader.line_num # Create after accumulating a batch new_objects.append(cls(**fields)) if len(new_objects) % batch_size == 0: # pragma: no cover cls.objects.bulk_create(new_objects) count += len(new_objects) logger.info( "Imported %d %s", count, cls._meta.verbose_name_plural) new_objects = [] # Create remaining objects if new_objects: cls.objects.bulk_create(new_objects) # Take note of extra fields if extra_counts: extra_columns = feed.meta.setdefault( 'extra_columns', {}).setdefault(cls.__name__, []) for column in columns: if column in extra_counts and column not in extra_columns: extra_columns.append(column) feed.save() return len(unique_line)
def set_ev_cls(ev_cls, dispatchers=None): """ A decorator for Ryu application to declare an event handler. Decorated method will become an event handler. ev_cls is an event class whose instances this RyuApp wants to receive. dispatchers argument specifies one of the following negotiation phases (or a list of them) for which events should be generated for this handler. Note that, in case an event changes the phase, the phase before the change is used to check the interest. .. tabularcolumns:: |l|L| =========================================== =============================== Negotiation phase Description =========================================== =============================== ryu.controller.handler.HANDSHAKE_DISPATCHER Sending and waiting for hello message ryu.controller.handler.CONFIG_DISPATCHER Version negotiated and sent features-request message ryu.controller.handler.MAIN_DISPATCHER Switch-features message received and sent set-config message ryu.controller.handler.DEAD_DISPATCHER Disconnect from the peer. Or disconnecting due to some unrecoverable errors. =========================================== =============================== """ def _set_ev_cls_dec(handler): if 'callers' not in dir(handler): handler.callers = {} for e in _listify(ev_cls): handler.callers[e] = _Caller(_listify(dispatchers), e.__module__) return handler return _set_ev_cls_dec
A decorator for Ryu application to declare an event handler. Decorated method will become an event handler. ev_cls is an event class whose instances this RyuApp wants to receive. dispatchers argument specifies one of the following negotiation phases (or a list of them) for which events should be generated for this handler. Note that, in case an event changes the phase, the phase before the change is used to check the interest. .. tabularcolumns:: |l|L| =========================================== =============================== Negotiation phase Description =========================================== =============================== ryu.controller.handler.HANDSHAKE_DISPATCHER Sending and waiting for hello message ryu.controller.handler.CONFIG_DISPATCHER Version negotiated and sent features-request message ryu.controller.handler.MAIN_DISPATCHER Switch-features message received and sent set-config message ryu.controller.handler.DEAD_DISPATCHER Disconnect from the peer. Or disconnecting due to some unrecoverable errors. =========================================== ===============================
Below is the instruction that describes the task: ### Input: A decorator for Ryu application to declare an event handler. Decorated method will become an event handler. ev_cls is an event class whose instances this RyuApp wants to receive. dispatchers argument specifies one of the following negotiation phases (or a list of them) for which events should be generated for this handler. Note that, in case an event changes the phase, the phase before the change is used to check the interest. .. tabularcolumns:: |l|L| =========================================== =============================== Negotiation phase Description =========================================== =============================== ryu.controller.handler.HANDSHAKE_DISPATCHER Sending and waiting for hello message ryu.controller.handler.CONFIG_DISPATCHER Version negotiated and sent features-request message ryu.controller.handler.MAIN_DISPATCHER Switch-features message received and sent set-config message ryu.controller.handler.DEAD_DISPATCHER Disconnect from the peer. Or disconnecting due to some unrecoverable errors. =========================================== =============================== ### Response: def set_ev_cls(ev_cls, dispatchers=None): """ A decorator for Ryu application to declare an event handler. Decorated method will become an event handler. ev_cls is an event class whose instances this RyuApp wants to receive. dispatchers argument specifies one of the following negotiation phases (or a list of them) for which events should be generated for this handler. Note that, in case an event changes the phase, the phase before the change is used to check the interest. .. 
tabularcolumns:: |l|L| =========================================== =============================== Negotiation phase Description =========================================== =============================== ryu.controller.handler.HANDSHAKE_DISPATCHER Sending and waiting for hello message ryu.controller.handler.CONFIG_DISPATCHER Version negotiated and sent features-request message ryu.controller.handler.MAIN_DISPATCHER Switch-features message received and sent set-config message ryu.controller.handler.DEAD_DISPATCHER Disconnect from the peer. Or disconnecting due to some unrecoverable errors. =========================================== =============================== """ def _set_ev_cls_dec(handler): if 'callers' not in dir(handler): handler.callers = {} for e in _listify(ev_cls): handler.callers[e] = _Caller(_listify(dispatchers), e.__module__) return handler return _set_ev_cls_dec
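The decorator above follows a metadata-attachment pattern: it never wraps the handler, it only records which event classes and dispatcher phases the handler is interested in. The sketch below reproduces that pattern standalone, with simplified stand-ins for Ryu's `_listify` and `_Caller` internals and a made-up `PacketIn` event class; it is an illustration of the technique, not the real Ryu code.

```python
# Minimal sketch of the set_ev_cls pattern: a decorator factory that
# attaches routing metadata to the decorated handler instead of wrapping it.
# _Caller and PacketIn are simplified stand-ins, not Ryu's real classes.
from collections import namedtuple

_Caller = namedtuple("_Caller", ["dispatchers", "ev_source"])


def _listify(may_list):
    # Normalize None / scalar / list into a list, as Ryu's helper does.
    if may_list is None:
        may_list = []
    if not isinstance(may_list, list):
        may_list = [may_list]
    return may_list


def set_ev_cls(ev_cls, dispatchers=None):
    def _set_ev_cls_dec(handler):
        if 'callers' not in dir(handler):
            handler.callers = {}
        for e in _listify(ev_cls):
            handler.callers[e] = _Caller(_listify(dispatchers), e.__module__)
        return handler  # handler is returned unwrapped
    return _set_ev_cls_dec


class PacketIn(object):
    """Hypothetical event class for the demo."""


@set_ev_cls(PacketIn, 'main')
def handler(ev):
    return ev


print(handler.callers[PacketIn].dispatchers)  # ['main']
```

Because the original function object is returned, the dispatcher can later inspect `handler.callers` to route events while calls to the handler behave exactly as before decoration.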
def get_child_for_path(self, path): """Get a child for a given path. Rather than repeated calls to get_child, children can be found by a derivation path. Paths look like: m/0/1'/10 Which is the same as self.get_child(0).get_child(-1).get_child(10) Or, in other words, the 10th publicly derived child of the 1st privately derived child of the 0th publicly derived child of master. You can use either ' or p to denote a prime (that is, privately derived) child. A child that has had its private key stripped can be requested by either passing a capital M or appending '.pub' to the end of the path. These three paths all give the same child that has had its private key scrubbed: M/0/1 m/0/1.pub M/0/1.pub """ path = ensure_str(path) if not path: raise InvalidPathError("%s is not a valid path" % path) # Figure out public/private derivation as_private = True if path.startswith("M"): as_private = False if path.endswith(".pub"): as_private = False path = path[:-4] parts = path.split("/") if len(parts) == 0: raise InvalidPathError() child = self for part in parts: if part.lower() == "m": continue is_prime = None # Let primeness be figured out by the child number if part[-1] in "'p": is_prime = True part = part.replace("'", "").replace("p", "") try: child_number = long_or_int(part) except ValueError: raise InvalidPathError("%s is not a valid path" % path) child = child.get_child(child_number, is_prime) if not as_private: return child.public_copy() return child
Get a child for a given path. Rather than repeated calls to get_child, children can be found by a derivation path. Paths look like: m/0/1'/10 Which is the same as self.get_child(0).get_child(-1).get_child(10) Or, in other words, the 10th publicly derived child of the 1st privately derived child of the 0th publicly derived child of master. You can use either ' or p to denote a prime (that is, privately derived) child. A child that has had its private key stripped can be requested by either passing a capital M or appending '.pub' to the end of the path. These three paths all give the same child that has had its private key scrubbed: M/0/1 m/0/1.pub M/0/1.pub
Below is the instruction that describes the task: ### Input: Get a child for a given path. Rather than repeated calls to get_child, children can be found by a derivation path. Paths look like: m/0/1'/10 Which is the same as self.get_child(0).get_child(-1).get_child(10) Or, in other words, the 10th publicly derived child of the 1st privately derived child of the 0th publicly derived child of master. You can use either ' or p to denote a prime (that is, privately derived) child. A child that has had its private key stripped can be requested by either passing a capital M or appending '.pub' to the end of the path. These three paths all give the same child that has had its private key scrubbed: M/0/1 m/0/1.pub M/0/1.pub ### Response: def get_child_for_path(self, path): """Get a child for a given path. Rather than repeated calls to get_child, children can be found by a derivation path. Paths look like: m/0/1'/10 Which is the same as self.get_child(0).get_child(-1).get_child(10) Or, in other words, the 10th publicly derived child of the 1st privately derived child of the 0th publicly derived child of master. You can use either ' or p to denote a prime (that is, privately derived) child. A child that has had its private key stripped can be requested by either passing a capital M or appending '.pub' to the end of the path. 
These three paths all give the same child that has had its private key scrubbed: M/0/1 m/0/1.pub M/0/1.pub """ path = ensure_str(path) if not path: raise InvalidPathError("%s is not a valid path" % path) # Figure out public/private derivation as_private = True if path.startswith("M"): as_private = False if path.endswith(".pub"): as_private = False path = path[:-4] parts = path.split("/") if len(parts) == 0: raise InvalidPathError() child = self for part in parts: if part.lower() == "m": continue is_prime = None # Let primeness be figured out by the child number if part[-1] in "'p": is_prime = True part = part.replace("'", "").replace("p", "") try: child_number = long_or_int(part) except ValueError: raise InvalidPathError("%s is not a valid path" % path) child = child.get_child(child_number, is_prime) if not as_private: return child.public_copy() return child
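The path syntax handled above can be isolated from the wallet class itself. The sketch below (a hypothetical helper, not part of the original library) parses a derivation path into a public/private flag plus a list of `(child_number, is_prime)` steps, following the same conventions: leading `M` or trailing `.pub` means public, and a trailing `'` or `p` marks a prime (privately derived) child.

```python
# Hedged sketch of just the path-parsing convention described above:
# "m/0/1'/10" -> (as_private, [(child_number, is_prime), ...]).
def parse_derivation_path(path):
    as_private = True
    if path.startswith("M"):
        as_private = False
    if path.endswith(".pub"):
        as_private = False
        path = path[:-4]
    steps = []
    for part in path.split("/"):
        if part.lower() == "m":
            continue  # the master marker is not a derivation step
        is_prime = False
        if part[-1] in "'p":
            is_prime = True
            part = part[:-1]
        steps.append((int(part), is_prime))
    return as_private, steps


print(parse_derivation_path("m/0/1'/10"))  # (True, [(0, False), (1, True), (10, False)])
print(parse_derivation_path("M/0/1"))      # (False, [(0, False), (1, False)])
```

Keeping the parse separate from derivation makes the "three equivalent public paths" property easy to check: `M/0/1`, `m/0/1.pub`, and `M/0/1.pub` all parse to the same steps with `as_private` False.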
def scourUnitlessLength(length, renderer_workaround=False, is_control_point=False): # length is of a numeric type """ Scours the numeric part of a length only. Does not accept units. This is faster than scourLength on elements guaranteed not to contain units. """ if not isinstance(length, Decimal): length = getcontext().create_decimal(str(length)) initial_length = length # reduce numeric precision # plus() corresponds to the unary prefix plus operator and applies context precision and rounding if is_control_point: length = scouringContextC.plus(length) else: length = scouringContext.plus(length) # remove trailing zeroes as we do not care for significance intLength = length.to_integral_value() if length == intLength: length = Decimal(intLength) else: length = length.normalize() # Gather the non-scientific notation version of the coordinate. # Re-quantize from the initial value to prevent unnecessary loss of precision # (e.g. 123.4 should become 123, not 120 or even 100) nonsci = '{0:f}'.format(length) nonsci = '{0:f}'.format(initial_length.quantize(Decimal(nonsci))) if not renderer_workaround: if len(nonsci) > 2 and nonsci[:2] == '0.': nonsci = nonsci[1:] # remove the 0, leave the dot elif len(nonsci) > 3 and nonsci[:3] == '-0.': nonsci = '-' + nonsci[2:] # remove the 0, leave the minus and dot return_value = nonsci # Gather the scientific notation version of the coordinate which # can only be shorter if the length of the number is at least 4 characters (e.g. 1000 = 1e3). if len(nonsci) > 3: # We have to implement this ourselves since both 'normalize()' and 'to_sci_string()' # don't handle negative exponents in a reasonable way (e.g. 0.000001 remains unchanged) exponent = length.adjusted() # how far do we have to shift the dot? length = length.scaleb(-exponent).normalize() # shift the dot and remove potential trailing zeroes sci = six.text_type(length) + 'e' + six.text_type(exponent) if len(sci) < len(nonsci): return_value = sci return return_value
Scours the numeric part of a length only. Does not accept units. This is faster than scourLength on elements guaranteed not to contain units.
Below is the instruction that describes the task: ### Input: Scours the numeric part of a length only. Does not accept units. This is faster than scourLength on elements guaranteed not to contain units. ### Response: def scourUnitlessLength(length, renderer_workaround=False, is_control_point=False): # length is of a numeric type """ Scours the numeric part of a length only. Does not accept units. This is faster than scourLength on elements guaranteed not to contain units. """ if not isinstance(length, Decimal): length = getcontext().create_decimal(str(length)) initial_length = length # reduce numeric precision # plus() corresponds to the unary prefix plus operator and applies context precision and rounding if is_control_point: length = scouringContextC.plus(length) else: length = scouringContext.plus(length) # remove trailing zeroes as we do not care for significance intLength = length.to_integral_value() if length == intLength: length = Decimal(intLength) else: length = length.normalize() # Gather the non-scientific notation version of the coordinate. # Re-quantize from the initial value to prevent unnecessary loss of precision # (e.g. 123.4 should become 123, not 120 or even 100) nonsci = '{0:f}'.format(length) nonsci = '{0:f}'.format(initial_length.quantize(Decimal(nonsci))) if not renderer_workaround: if len(nonsci) > 2 and nonsci[:2] == '0.': nonsci = nonsci[1:] # remove the 0, leave the dot elif len(nonsci) > 3 and nonsci[:3] == '-0.': nonsci = '-' + nonsci[2:] # remove the 0, leave the minus and dot return_value = nonsci # Gather the scientific notation version of the coordinate which # can only be shorter if the length of the number is at least 4 characters (e.g. 1000 = 1e3). if len(nonsci) > 3: # We have to implement this ourselves since both 'normalize()' and 'to_sci_string()' # don't handle negative exponents in a reasonable way (e.g. 0.000001 remains unchanged) exponent = length.adjusted() # how far do we have to shift the dot? 
length = length.scaleb(-exponent).normalize() # shift the dot and remove potential trailing zeroes sci = six.text_type(length) + 'e' + six.text_type(exponent) if len(sci) < len(nonsci): return_value = sci return return_value
def is_in(allowed_values # type: Set ): """ 'Values in' validation_function generator. Returns a validation_function to check that x is in the provided set of allowed values :param allowed_values: a set of allowed values :return: """ def is_in_allowed_values(x): if x in allowed_values: return True else: # raise Failure('is_in: x in ' + str(allowed_values) + ' does not hold for x=' + str(x)) raise NotInAllowedValues(wrong_value=x, allowed_values=allowed_values) is_in_allowed_values.__name__ = 'is_in_{}'.format(allowed_values) return is_in_allowed_values
'Values in' validation_function generator. Returns a validation_function to check that x is in the provided set of allowed values :param allowed_values: a set of allowed values :return:
Below is the instruction that describes the task: ### Input: 'Values in' validation_function generator. Returns a validation_function to check that x is in the provided set of allowed values :param allowed_values: a set of allowed values :return: ### Response: def is_in(allowed_values # type: Set ): """ 'Values in' validation_function generator. Returns a validation_function to check that x is in the provided set of allowed values :param allowed_values: a set of allowed values :return: """ def is_in_allowed_values(x): if x in allowed_values: return True else: # raise Failure('is_in: x in ' + str(allowed_values) + ' does not hold for x=' + str(x)) raise NotInAllowedValues(wrong_value=x, allowed_values=allowed_values) is_in_allowed_values.__name__ = 'is_in_{}'.format(allowed_values) return is_in_allowed_values
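The `is_in` generator above depends on the library's `NotInAllowedValues` exception. A self-contained usage sketch with a stand-in exception class (the real class lives in the source library and may differ) shows how the closure-based validator behaves:

```python
# Usage sketch of the is_in validator-generator pattern.
# NotInAllowedValues here is a stand-in for the library's exception class.
class NotInAllowedValues(ValueError):
    def __init__(self, wrong_value, allowed_values):
        super(NotInAllowedValues, self).__init__(
            "{!r} is not in {!r}".format(wrong_value, allowed_values))


def is_in(allowed_values):
    def is_in_allowed_values(x):
        if x in allowed_values:
            return True
        raise NotInAllowedValues(wrong_value=x, allowed_values=allowed_values)
    is_in_allowed_values.__name__ = 'is_in_{}'.format(allowed_values)
    return is_in_allowed_values


# The returned closure captures allowed_values and carries a readable name.
is_rgb = is_in({'red', 'green', 'blue'})
print(is_rgb('red'))  # True
try:
    is_rgb('yellow')
except NotInAllowedValues as exc:
    print('rejected:', exc)
```

Renaming the inner function via `__name__` is what makes failure reports and debugger output identify *which* validator fired, rather than showing a generic `is_in_allowed_values` for every generated checker.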
def QueueQueryAndOwn(self, queue, lease_seconds, limit, timestamp): """Returns a list of Tasks leased for a certain time. Args: queue: The queue to query from. lease_seconds: The tasks will be leased for this long. limit: Number of values to fetch. timestamp: Range of times for consideration. Returns: A list of GrrMessage() objects leased. """ # Do the real work in a transaction try: lock = DB.LockRetryWrapper(queue, lease_time=lease_seconds) return self._QueueQueryAndOwn( lock.subject, lease_seconds=lease_seconds, limit=limit, timestamp=timestamp) except DBSubjectLockError: # This exception just means that we could not obtain the lock on the queue # so we just return an empty list, let the worker sleep and come back to # fetch more tasks. return [] except Error as e: logging.warning("Datastore exception: %s", e) return []
Returns a list of Tasks leased for a certain time. Args: queue: The queue to query from. lease_seconds: The tasks will be leased for this long. limit: Number of values to fetch. timestamp: Range of times for consideration. Returns: A list of GrrMessage() objects leased.
Below is the instruction that describes the task: ### Input: Returns a list of Tasks leased for a certain time. Args: queue: The queue to query from. lease_seconds: The tasks will be leased for this long. limit: Number of values to fetch. timestamp: Range of times for consideration. Returns: A list of GrrMessage() objects leased. ### Response: def QueueQueryAndOwn(self, queue, lease_seconds, limit, timestamp): """Returns a list of Tasks leased for a certain time. Args: queue: The queue to query from. lease_seconds: The tasks will be leased for this long. limit: Number of values to fetch. timestamp: Range of times for consideration. Returns: A list of GrrMessage() objects leased. """ # Do the real work in a transaction try: lock = DB.LockRetryWrapper(queue, lease_time=lease_seconds) return self._QueueQueryAndOwn( lock.subject, lease_seconds=lease_seconds, limit=limit, timestamp=timestamp) except DBSubjectLockError: # This exception just means that we could not obtain the lock on the queue # so we just return an empty list, let the worker sleep and come back to # fetch more tasks. return [] except Error as e: logging.warning("Datastore exception: %s", e) return []
def _rand_init(x_bounds, x_types, selection_num_starting_points): ''' Random sample some init seed within bounds. ''' return [lib_data.rand(x_bounds, x_types) for i \ in range(0, selection_num_starting_points)]
Random sample some init seed within bounds.
Below is the instruction that describes the task: ### Input: Random sample some init seed within bounds. ### Response: def _rand_init(x_bounds, x_types, selection_num_starting_points): ''' Random sample some init seed within bounds. ''' return [lib_data.rand(x_bounds, x_types) for i \ in range(0, selection_num_starting_points)]
def _get_option(target_obj, key): """ Given a target object and option key, get that option from the target object, either through a get_{key} method or from an attribute directly. """ getter_name = 'get_{key}'.format(**locals()) by_attribute = functools.partial(getattr, target_obj, key) getter = getattr(target_obj, getter_name, by_attribute) return getter()
Given a target object and option key, get that option from the target object, either through a get_{key} method or from an attribute directly.
Below is the instruction that describes the task: ### Input: Given a target object and option key, get that option from the target object, either through a get_{key} method or from an attribute directly. ### Response: def _get_option(target_obj, key): """ Given a target object and option key, get that option from the target object, either through a get_{key} method or from an attribute directly. """ getter_name = 'get_{key}'.format(**locals()) by_attribute = functools.partial(getattr, target_obj, key) getter = getattr(target_obj, getter_name, by_attribute) return getter()
def folderitem(self, obj, item, index): """Service triggered each time an item is iterated in folderitems. The use of this service prevents the extra-loops in child objects. :obj: the instance of the class to be foldered :item: dict containing the properties of the object to be used by the template :index: current index of the item """ item = super(ReferenceSamplesView, self).folderitem(obj, item, index) # ensure we have an object and not a brain obj = api.get_object(obj) url = api.get_url(obj) title = api.get_title(obj) item["Title"] = title item["replace"]["Title"] = get_link(url, value=title) item["allow_edit"] = self.get_editable_columns() # Supported Services supported_services_choices = self.make_supported_services_choices(obj) item["choices"]["SupportedServices"] = supported_services_choices # Position item["Position"] = "new" item["choices"]["Position"] = self.make_position_choices() return item
Service triggered each time an item is iterated in folderitems. The use of this service prevents the extra-loops in child objects. :obj: the instance of the class to be foldered :item: dict containing the properties of the object to be used by the template :index: current index of the item
Below is the instruction that describes the task: ### Input: Service triggered each time an item is iterated in folderitems. The use of this service prevents the extra-loops in child objects. :obj: the instance of the class to be foldered :item: dict containing the properties of the object to be used by the template :index: current index of the item ### Response: def folderitem(self, obj, item, index): """Service triggered each time an item is iterated in folderitems. The use of this service prevents the extra-loops in child objects. :obj: the instance of the class to be foldered :item: dict containing the properties of the object to be used by the template :index: current index of the item """ item = super(ReferenceSamplesView, self).folderitem(obj, item, index) # ensure we have an object and not a brain obj = api.get_object(obj) url = api.get_url(obj) title = api.get_title(obj) item["Title"] = title item["replace"]["Title"] = get_link(url, value=title) item["allow_edit"] = self.get_editable_columns() # Supported Services supported_services_choices = self.make_supported_services_choices(obj) item["choices"]["SupportedServices"] = supported_services_choices # Position item["Position"] = "new" item["choices"]["Position"] = self.make_position_choices() return item
def add_attachment(message, attachment, rfc2231=True): '''Attach an attachment to a message as a side effect. Arguments: message: MIMEMultipart instance. attachment: Attachment instance. ''' data = attachment.read() part = MIMEBase('application', 'octet-stream') part.set_payload(data) encoders.encode_base64(part) filename = attachment.name if rfc2231 else Header(attachment.name).encode() part.add_header('Content-Disposition', 'attachment', filename=filename) message.attach(part)
Attach an attachment to a message as a side effect. Arguments: message: MIMEMultipart instance. attachment: Attachment instance.
Below is the instruction that describes the task: ### Input: Attach an attachment to a message as a side effect. Arguments: message: MIMEMultipart instance. attachment: Attachment instance. ### Response: def add_attachment(message, attachment, rfc2231=True): '''Attach an attachment to a message as a side effect. Arguments: message: MIMEMultipart instance. attachment: Attachment instance. ''' data = attachment.read() part = MIMEBase('application', 'octet-stream') part.set_payload(data) encoders.encode_base64(part) filename = attachment.name if rfc2231 else Header(attachment.name).encode() part.add_header('Content-Disposition', 'attachment', filename=filename) message.attach(part)
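The `add_attachment` function above relies on an `Attachment` object from its host library. A runnable sketch using only the standard library's `email` package, with the attachment replaced by raw bytes plus an explicit name (an assumption made for the demo), shows the same MIME construction:

```python
# Standard-library sketch of add_attachment; the Attachment object is
# replaced here by (data, name) arguments for the sake of the demo.
from email import encoders
from email.header import Header
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart


def add_attachment(message, data, name, rfc2231=True):
    part = MIMEBase('application', 'octet-stream')
    part.set_payload(data)
    encoders.encode_base64(part)  # base64-encode and set the CTE header
    # RFC 2231 filenames are passed through; otherwise RFC 2047-encode.
    filename = name if rfc2231 else Header(name).encode()
    part.add_header('Content-Disposition', 'attachment', filename=filename)
    message.attach(part)  # mutates the message in place


msg = MIMEMultipart()
add_attachment(msg, b'hello world', 'notes.txt')
part = msg.get_payload()[0]
print(part.get_filename())              # 'notes.txt'
print(part.get_payload(decode=True))    # b'hello world'
```

Like the original, this works purely by side effect: the caller's `MIMEMultipart` gains an `application/octet-stream` part whose payload round-trips through base64.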
def dynamodb_autoscaling_policy(tables): """Policy to allow AutoScaling a list of DynamoDB tables.""" return Policy( Statement=[ Statement( Effect=Allow, Resource=dynamodb_arns(tables), Action=[ dynamodb.DescribeTable, dynamodb.UpdateTable, ] ), Statement( Effect=Allow, Resource=['*'], Action=[ cloudwatch.PutMetricAlarm, cloudwatch.DescribeAlarms, cloudwatch.GetMetricStatistics, cloudwatch.SetAlarmState, cloudwatch.DeleteAlarms, ] ), ] )
Policy to allow AutoScaling a list of DynamoDB tables.
Below is the instruction that describes the task: ### Input: Policy to allow AutoScaling a list of DynamoDB tables. ### Response: def dynamodb_autoscaling_policy(tables): """Policy to allow AutoScaling a list of DynamoDB tables.""" return Policy( Statement=[ Statement( Effect=Allow, Resource=dynamodb_arns(tables), Action=[ dynamodb.DescribeTable, dynamodb.UpdateTable, ] ), Statement( Effect=Allow, Resource=['*'], Action=[ cloudwatch.PutMetricAlarm, cloudwatch.DescribeAlarms, cloudwatch.GetMetricStatistics, cloudwatch.SetAlarmState, cloudwatch.DeleteAlarms, ] ), ] )
def process_request(self, request): """Called on each request, before Django decides which view to execute. :type request: :class:`~django.http.request.HttpRequest` :param request: Django http request. """ # Do not trace if the url is blacklisted if utils.disable_tracing_url(request.path, self.blacklist_paths): return # Add the request to thread local execution_context.set_opencensus_attr( REQUEST_THREAD_LOCAL_KEY, request) execution_context.set_opencensus_attr( 'blacklist_hostnames', self.blacklist_hostnames) try: # Start tracing this request span_context = self.propagator.from_headers( _DjangoMetaWrapper(_get_django_request().META)) # Reload the tracer with the new span context tracer = tracer_module.Tracer( span_context=span_context, sampler=self.sampler, exporter=self.exporter, propagator=self.propagator) # Span name is being set at process_view span = tracer.start_span() span.span_kind = span_module.SpanKind.SERVER tracer.add_attribute_to_current_span( attribute_key=HTTP_METHOD, attribute_value=request.method) tracer.add_attribute_to_current_span( attribute_key=HTTP_URL, attribute_value=str(request.path)) # Add the span to thread local # in some cases (exceptions, timeouts) currentspan in # response event will be one of a child spans. # let's keep reference to 'django' span and # use it in response event execution_context.set_opencensus_attr( SPAN_THREAD_LOCAL_KEY, span) except Exception: # pragma: NO COVER log.error('Failed to trace request', exc_info=True)
Called on each request, before Django decides which view to execute. :type request: :class:`~django.http.request.HttpRequest` :param request: Django http request.
Below is the instruction that describes the task: ### Input: Called on each request, before Django decides which view to execute. :type request: :class:`~django.http.request.HttpRequest` :param request: Django http request. ### Response: def process_request(self, request): """Called on each request, before Django decides which view to execute. :type request: :class:`~django.http.request.HttpRequest` :param request: Django http request. """ # Do not trace if the url is blacklisted if utils.disable_tracing_url(request.path, self.blacklist_paths): return # Add the request to thread local execution_context.set_opencensus_attr( REQUEST_THREAD_LOCAL_KEY, request) execution_context.set_opencensus_attr( 'blacklist_hostnames', self.blacklist_hostnames) try: # Start tracing this request span_context = self.propagator.from_headers( _DjangoMetaWrapper(_get_django_request().META)) # Reload the tracer with the new span context tracer = tracer_module.Tracer( span_context=span_context, sampler=self.sampler, exporter=self.exporter, propagator=self.propagator) # Span name is being set at process_view span = tracer.start_span() span.span_kind = span_module.SpanKind.SERVER tracer.add_attribute_to_current_span( attribute_key=HTTP_METHOD, attribute_value=request.method) tracer.add_attribute_to_current_span( attribute_key=HTTP_URL, attribute_value=str(request.path)) # Add the span to thread local # in some cases (exceptions, timeouts) currentspan in # response event will be one of a child spans. # let's keep reference to 'django' span and # use it in response event execution_context.set_opencensus_attr( SPAN_THREAD_LOCAL_KEY, span) except Exception: # pragma: NO COVER log.error('Failed to trace request', exc_info=True)
def competition_submissions(self, competition): """ get the list of Submission for a particular competition Parameters ========== competition: the name of the competition """ submissions_result = self.process_response( self.competitions_submissions_list_with_http_info(id=competition)) return [Submission(s) for s in submissions_result]
get the list of Submission for a particular competition Parameters ========== competition: the name of the competition
Below is the instruction that describes the task:
### Input:
get the list of Submission for a particular competition

Parameters
==========
competition: the name of the competition
### Response:
def competition_submissions(self, competition):
    """ get the list of Submission for a particular competition

        Parameters
        ==========
        competition: the name of the competition
    """
    submissions_result = self.process_response(
        self.competitions_submissions_list_with_http_info(id=competition))
    return [Submission(s) for s in submissions_result]
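The one-line pattern in the response above, wrapping each raw API row in a result class, can be sketched like this (`Submission` here is a hypothetical stand-in for the real Kaggle type, which parses many more fields):

```python
# Wrap plain dicts returned by an API into small typed objects, so callers
# get attribute access instead of key lookups.
class Submission:
    def __init__(self, data):
        self.ref = data.get("ref")
        self.status = data.get("status")

def wrap_submissions(raw_rows):
    """Apply the same list-comprehension wrap as competition_submissions."""
    return [Submission(row) for row in raw_rows]
```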
def get_zcta_ids(state=None): """ Get ids of all supported ZCTAs, optionally by state. Parameters ---------- state : str, optional Select zipcodes only from this state or territory, given as 2-letter abbreviation (e.g., ``'CA'``, ``'PR'``). Returns ------- results : list of str List of all supported selected ZCTA IDs. """ conn = metadata_db_connection_proxy.get_connection() cur = conn.cursor() if state is None: cur.execute( """ select zcta_id from zcta_metadata """ ) else: cur.execute( """ select zcta_id from zcta_metadata where state = ? """, (state,), ) return [row[0] for row in cur.fetchall()]
Get ids of all supported ZCTAs, optionally by state. Parameters ---------- state : str, optional Select zipcodes only from this state or territory, given as 2-letter abbreviation (e.g., ``'CA'``, ``'PR'``). Returns ------- results : list of str List of all supported selected ZCTA IDs.
Below is the instruction that describes the task:
### Input:
Get ids of all supported ZCTAs, optionally by state.

Parameters
----------
state : str, optional
    Select zipcodes only from this state or territory, given as 2-letter
    abbreviation (e.g., ``'CA'``, ``'PR'``).

Returns
-------
results : list of str
    List of all supported selected ZCTA IDs.
### Response:
def get_zcta_ids(state=None):
    """ Get ids of all supported ZCTAs, optionally by state.

    Parameters
    ----------
    state : str, optional
        Select zipcodes only from this state or territory, given as 2-letter
        abbreviation (e.g., ``'CA'``, ``'PR'``).

    Returns
    -------
    results : list of str
        List of all supported selected ZCTA IDs.
    """
    conn = metadata_db_connection_proxy.get_connection()
    cur = conn.cursor()
    if state is None:
        cur.execute(
            """
            select zcta_id from zcta_metadata
            """
        )
    else:
        cur.execute(
            """
            select zcta_id from zcta_metadata where state = ?
            """,
            (state,),
        )
    return [row[0] for row in cur.fetchall()]
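The optional-filter query above can be exercised end to end against an in-memory SQLite database; the schema below is a simplified stand-in for the real `zcta_metadata` table:

```python
import sqlite3

def get_zcta_ids(conn, state=None):
    """Same branch-on-optional-filter pattern as the function above."""
    cur = conn.cursor()
    if state is None:
        cur.execute("select zcta_id from zcta_metadata")
    else:
        # ? parameter binding lets sqlite escape the value safely.
        cur.execute("select zcta_id from zcta_metadata where state = ?",
                    (state,))
    return [row[0] for row in cur.fetchall()]

# Build a tiny in-memory fixture to run the query against.
conn = sqlite3.connect(":memory:")
conn.execute("create table zcta_metadata (zcta_id text, state text)")
conn.executemany("insert into zcta_metadata values (?, ?)",
                 [("90210", "CA"), ("94103", "CA"), ("00901", "PR")])
```

Passing the filter value as a bound parameter rather than interpolating it into the SQL string is what makes the two-branch layout worthwhile.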
def get_body(self, environ=None): """Get the request body.""" body = dict( status=self.code, message=self.description, ) errors = self.get_errors() if self.errors: body['errors'] = errors return json.dumps(body)
Get the request body.
Below is the instruction that describes the task:
### Input:
Get the request body.
### Response:
def get_body(self, environ=None):
    """Get the request body."""
    body = dict(
        status=self.code,
        message=self.description,
    )
    errors = self.get_errors()
    if self.errors:
        body['errors'] = errors
    return json.dumps(body)
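The build-the-error-body logic above reduces to a small pure function; this sketch assumes the same status/message/errors shape but takes plain arguments instead of reading attributes off an exception object:

```python
import json

def build_body(code, description, errors=None):
    """Serialize an error response: status and message always present,
    "errors" attached only when there are any."""
    body = {"status": code, "message": description}
    if errors:
        body["errors"] = errors
    return json.dumps(body)
```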
def expand_cmd_labels(self):
    """Expand make-style variables in cmd parameters.

    Currently:
    $(location <foo>)    Location of one dependency or output file.
    $(locations <foo>)   Space-delimited list of foo's output files.
    $(SRCS)              Space-delimited list of this rule's source files.
    $(OUTS)              Space-delimited list of this rule's output files.
    $(@D)                Full path to the output directory for this rule.
    $@                   Path to the output (single) file for this rule.
    """
    cmd = self.cmd

    def _expand_onesrc():
        """Expand $@ or $(@) to one output file."""
        outs = self.rule.params['outs'] or []
        if len(outs) != 1:
            raise error.TargetBuildFailed(
                self.address,
                '$@ substitution requires exactly one output file, but '
                'this rule has %s of them: %s' % (len(outs), outs))
        else:
            return os.path.join(self.buildroot, self.path_to_this_rule,
                                outs[0])

    # TODO: this function is dumb and way too long
    def _expand_makevar(re_match):
        """Expands one substitution symbol."""
        # Expand $(location foo) and $(locations foo):
        label = None
        tagstr = re_match.groups()[0]
        tag_location = re.match(
            r'\s*location\s+([A-Za-z0-9/\-_:\.]+)\s*', tagstr)
        tag_locations = re.match(
            r'\s*locations\s+([A-Za-z0-9/\-_:\.]+)\s*', tagstr)
        if tag_location:
            label = tag_location.groups()[0]
        elif tag_locations:
            label = tag_locations.groups()[0]
        if label:
            # Is it a filename found in the outputs of this rule?
            if label in self.rule.params['outs']:
                return os.path.join(self.buildroot, self.address.repo,
                                    self.address.path, label)
            # Is it an address found in the deps of this rule?
            addr = self.rule.makeaddress(label)
            if addr not in self.rule.composed_deps():
                raise error.TargetBuildFailed(
                    self.address,
                    '%s is referenced in cmd but is neither an output '
                    'file from this rule nor a dependency of this rule.'
                    % label)
            else:
                paths = [x for x in self.rulefor(addr).output_files]
                if len(paths) == 0:
                    raise error.TargetBuildFailed(
                        self.address,
                        'cmd refers to %s, but it has no output files.'
                        % addr)
                elif len(paths) > 1 and tag_location:
                    raise error.TargetBuildFailed(
                        self.address,
                        'Bad substitution in cmd: Expected exactly one '
                        'file, but %s expands to %s files.' % (
                            addr, len(paths)))
                else:
                    return ' '.join(
                        [os.path.join(self.buildroot, x) for x in paths])
        # Expand $(OUTS):
        elif re.match(r'OUTS', tagstr):
            return ' '.join(
                [os.path.join(self.buildroot, x)
                 for x in self.rule.output_files])
        # Expand $(SRCS):
        elif re.match(r'SRCS', tagstr):
            return ' '.join(os.path.join(self.path_to_this_rule, x)
                            for x in self.rule.params['srcs'] or [])
        # Expand $(@D):
        elif re.match(r'\s*@D\s*', tagstr):
            ruledir = os.path.join(self.buildroot, self.path_to_this_rule)
            return ruledir
        # Expand $(@), $@:
        elif re.match(r'\s*@\s*', tagstr):
            return _expand_onesrc()
        else:
            raise error.TargetBuildFailed(
                self.address,
                '[%s] Unrecognized substitution in cmd: %s' % (
                    self.address, re_match.group()))

    cmd, _ = re.subn(self.paren_tag_re, _expand_makevar, cmd)
    # Match tags starting with $ without parens. Will also catch parens, so
    # this goes after the tag_re substitutions.
    cmd, _ = re.subn(self.noparen_tag_re, _expand_makevar, cmd)
    # Now that we're done looking for $(blabla) and $bla parameters, clean
    # up any $$ escaping:
    cmd, _ = re.subn(r'\$\$', '$', cmd)
    # Maybe try heuristic label expansion? Actually on second thought
    # that's a terrible idea. Use the explicit syntax, you lazy slobs. ;-)
    # TODO: Maybe consider other expansions from the gnu make manual?
    # $^ might be useful.
    # http://www.gnu.org/software/make/manual/html_node/Automatic-Variables.html#Automatic-Variables
    self.cmd = cmd
Expand make-style variables in cmd parameters. Currently: $(location <foo>) Location of one dependency or output file. $(locations <foo>) Space-delimited list of foo's output files. $(SRCS) Space-delimited list of this rule's source files. $(OUTS) Space-delimited list of this rule's output files. $(@D) Full path to the output directory for this rule. $@ Path to the output (single) file for this rule.
Below is the instruction that describes the task:
### Input:
Expand make-style variables in cmd parameters.

Currently:
$(location <foo>)    Location of one dependency or output file.
$(locations <foo>)   Space-delimited list of foo's output files.
$(SRCS)              Space-delimited list of this rule's source files.
$(OUTS)              Space-delimited list of this rule's output files.
$(@D)                Full path to the output directory for this rule.
$@                   Path to the output (single) file for this rule.
### Response:
def expand_cmd_labels(self):
    """Expand make-style variables in cmd parameters.

    Currently:
    $(location <foo>)    Location of one dependency or output file.
    $(locations <foo>)   Space-delimited list of foo's output files.
    $(SRCS)              Space-delimited list of this rule's source files.
    $(OUTS)              Space-delimited list of this rule's output files.
    $(@D)                Full path to the output directory for this rule.
    $@                   Path to the output (single) file for this rule.
    """
    cmd = self.cmd

    def _expand_onesrc():
        """Expand $@ or $(@) to one output file."""
        outs = self.rule.params['outs'] or []
        if len(outs) != 1:
            raise error.TargetBuildFailed(
                self.address,
                '$@ substitution requires exactly one output file, but '
                'this rule has %s of them: %s' % (len(outs), outs))
        else:
            return os.path.join(self.buildroot, self.path_to_this_rule,
                                outs[0])

    # TODO: this function is dumb and way too long
    def _expand_makevar(re_match):
        """Expands one substitution symbol."""
        # Expand $(location foo) and $(locations foo):
        label = None
        tagstr = re_match.groups()[0]
        tag_location = re.match(
            r'\s*location\s+([A-Za-z0-9/\-_:\.]+)\s*', tagstr)
        tag_locations = re.match(
            r'\s*locations\s+([A-Za-z0-9/\-_:\.]+)\s*', tagstr)
        if tag_location:
            label = tag_location.groups()[0]
        elif tag_locations:
            label = tag_locations.groups()[0]
        if label:
            # Is it a filename found in the outputs of this rule?
            if label in self.rule.params['outs']:
                return os.path.join(self.buildroot, self.address.repo,
                                    self.address.path, label)
            # Is it an address found in the deps of this rule?
            addr = self.rule.makeaddress(label)
            if addr not in self.rule.composed_deps():
                raise error.TargetBuildFailed(
                    self.address,
                    '%s is referenced in cmd but is neither an output '
                    'file from this rule nor a dependency of this rule.'
                    % label)
            else:
                paths = [x for x in self.rulefor(addr).output_files]
                if len(paths) == 0:
                    raise error.TargetBuildFailed(
                        self.address,
                        'cmd refers to %s, but it has no output files.'
                        % addr)
                elif len(paths) > 1 and tag_location:
                    raise error.TargetBuildFailed(
                        self.address,
                        'Bad substitution in cmd: Expected exactly one '
                        'file, but %s expands to %s files.' % (
                            addr, len(paths)))
                else:
                    return ' '.join(
                        [os.path.join(self.buildroot, x) for x in paths])
        # Expand $(OUTS):
        elif re.match(r'OUTS', tagstr):
            return ' '.join(
                [os.path.join(self.buildroot, x)
                 for x in self.rule.output_files])
        # Expand $(SRCS):
        elif re.match(r'SRCS', tagstr):
            return ' '.join(os.path.join(self.path_to_this_rule, x)
                            for x in self.rule.params['srcs'] or [])
        # Expand $(@D):
        elif re.match(r'\s*@D\s*', tagstr):
            ruledir = os.path.join(self.buildroot, self.path_to_this_rule)
            return ruledir
        # Expand $(@), $@:
        elif re.match(r'\s*@\s*', tagstr):
            return _expand_onesrc()
        else:
            raise error.TargetBuildFailed(
                self.address,
                '[%s] Unrecognized substitution in cmd: %s' % (
                    self.address, re_match.group()))

    cmd, _ = re.subn(self.paren_tag_re, _expand_makevar, cmd)
    # Match tags starting with $ without parens. Will also catch parens, so
    # this goes after the tag_re substitutions.
    cmd, _ = re.subn(self.noparen_tag_re, _expand_makevar, cmd)
    # Now that we're done looking for $(blabla) and $bla parameters, clean
    # up any $$ escaping:
    cmd, _ = re.subn(r'\$\$', '$', cmd)
    # Maybe try heuristic label expansion? Actually on second thought
    # that's a terrible idea. Use the explicit syntax, you lazy slobs. ;-)
    # TODO: Maybe consider other expansions from the gnu make manual?
    # $^ might be useful.
    # http://www.gnu.org/software/make/manual/html_node/Automatic-Variables.html#Automatic-Variables
    self.cmd = cmd
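The core mechanics of `expand_cmd_labels`, a regex that finds `$(TAG ...)` tokens, a callback that expands each one, and a final pass that collapses `$$` escapes, can be shown in a stripped-down sketch. Only `SRCS` and `OUTS` are handled here; the regex and function names are illustrative, not the rule class's real attributes:

```python
import re

# Matches $(ANYTHING) and captures the tag body between the parens.
PAREN_TAG_RE = re.compile(r'\$\(([^)]*)\)')

def expand(cmd, srcs, outs):
    """Expand $(SRCS)/$(OUTS) in cmd, then collapse $$ escapes."""
    def _expand(match):
        tag = match.group(1).strip()
        if tag == 'SRCS':
            return ' '.join(srcs)
        if tag == 'OUTS':
            return ' '.join(outs)
        raise ValueError('Unrecognized substitution: $(%s)' % tag)

    # re.subn calls _expand once per $(...) token and splices in its return.
    cmd, _ = PAREN_TAG_RE.subn(_expand, cmd)
    # $$ escaping is resolved last, exactly as in expand_cmd_labels.
    cmd, _ = re.subn(r'\$\$', '$', cmd)
    return cmd
```

Doing the `$$` pass last matters: collapsing it first would let a literal `$` pair up with a following `(` and be mis-read as a substitution token.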
def _set_base(self): '''set the API base or default to use Docker Hub. The user is able to set the base, api version, and protocol via a settings file of environment variables: SREGISTRY_NVIDIA_BASE: defaults to nvcr.io SREGISTRY_NVIDIA_TOKEN: defaults to $oauthtoken SREGISTRY_NVIDIA_VERSION: defaults to v2 SREGISTRY_NVIDIA_NO_HTTPS: defaults to not set (so https) ''' base = self._get_setting('SREGISTRY_NVIDIA_BASE') version = self._get_setting('SREGISTRY_NVIDIA_VERSION') if base is None: base = "nvcr.io" if version is None: version = "v2" nohttps = self._get_setting('SREGISTRY_NVIDIA_NOHTTPS') if nohttps is None: nohttps = "https://" else: nohttps = "http://" # <protocol>://<base>/<version> self.base = "%s%s/%s" %(nohttps, base.strip('/'), version)
set the API base or default to use Docker Hub. The user is able to set the base, api version, and protocol via a settings file of environment variables: SREGISTRY_NVIDIA_BASE: defaults to nvcr.io SREGISTRY_NVIDIA_TOKEN: defaults to $oauthtoken SREGISTRY_NVIDIA_VERSION: defaults to v2 SREGISTRY_NVIDIA_NO_HTTPS: defaults to not set (so https)
Below is the instruction that describes the task:
### Input:
set the API base or default to use Docker Hub. The user is able to set
the base, api version, and protocol via a settings file of environment
variables:

SREGISTRY_NVIDIA_BASE: defaults to nvcr.io
SREGISTRY_NVIDIA_TOKEN: defaults to $oauthtoken
SREGISTRY_NVIDIA_VERSION: defaults to v2
SREGISTRY_NVIDIA_NO_HTTPS: defaults to not set (so https)
### Response:
def _set_base(self):
    '''set the API base or default to use Docker Hub. The user is able
       to set the base, api version, and protocol via a settings file
       of environment variables:

       SREGISTRY_NVIDIA_BASE: defaults to nvcr.io
       SREGISTRY_NVIDIA_TOKEN: defaults to $oauthtoken
       SREGISTRY_NVIDIA_VERSION: defaults to v2
       SREGISTRY_NVIDIA_NO_HTTPS: defaults to not set (so https)
    '''
    base = self._get_setting('SREGISTRY_NVIDIA_BASE')
    version = self._get_setting('SREGISTRY_NVIDIA_VERSION')

    if base is None:
        base = "nvcr.io"

    if version is None:
        version = "v2"

    nohttps = self._get_setting('SREGISTRY_NVIDIA_NOHTTPS')
    if nohttps is None:
        nohttps = "https://"
    else:
        nohttps = "http://"

    # <protocol>://<base>/<version>
    self.base = "%s%s/%s" % (nohttps, base.strip('/'), version)
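The settings-with-defaults logic above is easy to isolate; this sketch takes a plain mapping instead of reading real environment variables, so the defaults can be checked directly (`build_base` is an illustrative helper, not part of sregistry):

```python
def build_base(environ):
    """Compose <protocol>://<base>/<version> from optional settings,
    mirroring the default handling in _set_base."""
    base = environ.get('SREGISTRY_NVIDIA_BASE') or 'nvcr.io'
    version = environ.get('SREGISTRY_NVIDIA_VERSION') or 'v2'
    # Presence of the no-https flag (any value) downgrades to http.
    if environ.get('SREGISTRY_NVIDIA_NOHTTPS') is None:
        scheme = 'https://'
    else:
        scheme = 'http://'
    # strip('/') guards against a trailing slash in the configured base.
    return '%s%s/%s' % (scheme, base.strip('/'), version)
```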