Dataset columns: code (string, lengths 75-104k), docstring (string, lengths 1-46.9k), text (string, lengths 164-112k).
from warnings import warn


def equivalence_transform(compound, from_positions, to_positions, add_bond=True):
    """Computes an affine transformation that maps the from_positions to the
    respective to_positions, and applies this transformation to the compound.

    Parameters
    ----------
    compound : mb.Compound
        The Compound to be transformed.
    from_positions : np.ndarray, shape=(n, 3), dtype=float
        Original positions.
    to_positions : np.ndarray, shape=(n, 3), dtype=float
        New positions.
    """
    warn('The `equivalence_transform` function is being phased out in favor of'
         ' `force_overlap`.', DeprecationWarning)
    from mbuild.port import Port

    T = None
    if isinstance(from_positions, (list, tuple)) and isinstance(to_positions, (list, tuple)):
        equivalence_pairs = zip(from_positions, to_positions)
    elif isinstance(from_positions, Port) and isinstance(to_positions, Port):
        equivalence_pairs, T = _choose_correct_port(from_positions, to_positions)
        from_positions.used = True
        to_positions.used = True
    else:
        equivalence_pairs = [(from_positions, to_positions)]

    if not T:
        T = _create_equivalence_transform(equivalence_pairs)
    atom_positions = compound.xyz_with_ports
    atom_positions = T.apply_to(atom_positions)
    compound.xyz_with_ports = atom_positions

    if add_bond:
        if isinstance(from_positions, Port) and isinstance(to_positions, Port):
            if not from_positions.anchor or not to_positions.anchor:
                warn("Attempting to form bond from port that has no anchor")
            else:
                from_positions.anchor.parent.add_bond((from_positions.anchor,
                                                       to_positions.anchor))
                to_positions.anchor.parent.add_bond((from_positions.anchor,
                                                     to_positions.anchor))
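The point-pair mapping that `_create_equivalence_transform` produces can be illustrated with the standard Kabsch least-squares fit. This is a hedged sketch, not mBuild's actual implementation; `rigid_transform` is a made-up helper name:

```python
import numpy as np

def rigid_transform(from_pts, to_pts):
    """Least-squares rotation R and translation t with to_pts ~ from_pts @ R.T + t (Kabsch)."""
    c_from, c_to = from_pts.mean(axis=0), to_pts.mean(axis=0)
    H = (from_pts - c_from).T @ (to_pts - c_to)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_to - R @ c_from
    return R, t

P = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Q = P + np.array([1., 2, 3])                      # a pure translation
R, t = rigid_transform(P, Q)
print(np.allclose(P @ R.T + t, Q))  # → True
```

For a pure translation the recovered rotation is the identity and `t` is the offset itself; the same routine recovers a rotation when the target points are rotated copies.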
def copy(self, site_properties=None, sanitize=False):
    """
    Convenience method to get a copy of the structure, with options to add
    site properties.

    Args:
        site_properties (dict): Properties to add or override. The
            properties are specified in the same way as the constructor,
            i.e., as a dict of the form {property: [values]}. The
            properties should be in the order of the *original* structure
            if you are performing sanitization.
        sanitize (bool): If True, this method will return a sanitized
            structure. Sanitization performs a few things: (i) The sites
            are sorted by electronegativity, (ii) a LLL lattice reduction
            is carried out to obtain a relatively orthogonalized cell,
            (iii) all fractional coords for sites are mapped into the
            unit cell.

    Returns:
        A copy of the Structure, with optionally new site_properties and
        optionally sanitized.
    """
    props = self.site_properties
    if site_properties:
        props.update(site_properties)
    if not sanitize:
        return self.__class__(self._lattice, self.species_and_occu,
                              self.frac_coords, charge=self._charge,
                              site_properties=props)
    else:
        reduced_latt = self._lattice.get_lll_reduced_lattice()
        new_sites = []
        for i, site in enumerate(self):
            frac_coords = reduced_latt.get_fractional_coords(site.coords)
            site_props = {}
            for p in props:
                site_props[p] = props[p][i]
            new_sites.append(PeriodicSite(site.species, frac_coords,
                                          reduced_latt, to_unit_cell=True,
                                          properties=site_props))
        new_sites = sorted(new_sites)
        return self.__class__.from_sites(new_sites, charge=self._charge)
def get_key_to_last_completed_course_block(user, course_key):
    """
    Returns the last block a "user" completed in a course (stated as
    "course_key").

    raises UnavailableCompletionData when the user has not completed blocks
    in the course.
    raises UnavailableCompletionData when the visual progress waffle flag is
    disabled.
    """
    last_completed_block = BlockCompletion.get_latest_block_completed(user, course_key)

    if last_completed_block is not None:
        return last_completed_block.block_key

    raise UnavailableCompletionData(course_key)
def process_presence(self, stanza):
    """Process presence stanza.

    Pass it to a handler of the stanza's type and payload namespace.

    :Parameters:
        - `stanza`: presence stanza to be handled
    """
    stanza_type = stanza.stanza_type
    return self.__try_handlers(self._presence_handlers, stanza, stanza_type)
def changes(new_cmp_dict, old_cmp_dict, id_column, columns):
    """Return a list of dicts describing the changes in rows that exist in
    both dictionaries.

    User must provide an ID column for old_cmp_dict.
    """
    update_ldict = []
    same_keys = set(new_cmp_dict).intersection(set(old_cmp_dict))
    for same_key in same_keys:
        # Use .get() with a sentinel below so rows missing a column are
        # handled gracefully
        old_dict = old_cmp_dict[same_key]
        new_dict = new_cmp_dict[same_key]
        update_dict = {}
        for dict_key in columns:
            old_val = old_dict.get(dict_key, 'NaN')
            new_val = new_dict.get(dict_key, 'NaN')
            if old_val != new_val and new_val != 'NaN':
                if id_column is not None:
                    try:
                        update_dict[id_column] = old_dict[id_column]
                    except KeyError:
                        print("Input Dictionary 'old_cmp_dict' must have ID column")
                update_dict[dict_key] = new_val
        if update_dict:
            update_ldict.append(update_dict)
    return update_ldict
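The diff pattern in `changes` can be exercised on toy data; this standalone sketch mirrors its logic (the row values are invented):

```python
# Rows keyed by id; detect fields whose values changed between snapshots.
old_rows = {1: {"id": 1, "name": "Ann", "city": "Oslo"},
            2: {"id": 2, "name": "Bob", "city": "Rome"}}
new_rows = {1: {"id": 1, "name": "Ann", "city": "Bergen"},
            2: {"id": 2, "name": "Bob", "city": "Rome"},
            3: {"id": 3, "name": "Cam", "city": "Kyiv"}}  # only in new: ignored

updates = []
for key in set(new_rows) & set(old_rows):     # rows present in both snapshots
    changed = {col: new_rows[key][col]
               for col in ("name", "city")
               if new_rows[key][col] != old_rows[key][col]}
    if changed:
        changed["id"] = old_rows[key]["id"]   # carry the ID column along
        updates.append(changed)

print(updates)  # → [{'city': 'Bergen', 'id': 1}]
```

Rows present only in one snapshot are skipped, exactly as the intersection of keys in `changes` does.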
def append(self, items):
    """ Add some items to this ItemList and save the changes to the server

    :param items: the items to add, either as a List of Item objects, an
        ItemList, a List of item URLs as Strings, a single item URL as a
        String, or a single Item object

    :rtype: String
    :returns: the server success message

    :raises: APIError if the API request is not successful
    """
    resp = self.client.add_to_item_list(items, self.url())
    self.refresh()
    return resp
def psffunc(self, *args, **kwargs):
    """Calculates a linescan psf"""
    if self.polychromatic:
        func = psfcalc.calculate_polychrome_linescan_psf
    else:
        func = psfcalc.calculate_linescan_psf
    return func(*args, **kwargs)
def get_interface_switchport_output_switchport_fcoe_port_enabled(self, **kwargs):
    """Auto Generated Code
    """
    config = ET.Element("config")
    get_interface_switchport = ET.Element("get_interface_switchport")
    config = get_interface_switchport
    output = ET.SubElement(get_interface_switchport, "output")
    switchport = ET.SubElement(output, "switchport")
    interface_type_key = ET.SubElement(switchport, "interface-type")
    interface_type_key.text = kwargs.pop('interface_type')
    interface_name_key = ET.SubElement(switchport, "interface-name")
    interface_name_key.text = kwargs.pop('interface_name')
    fcoe_port_enabled = ET.SubElement(switchport, "fcoe-port-enabled")
    fcoe_port_enabled.text = kwargs.pop('fcoe_port_enabled')

    callback = kwargs.pop('callback', self._callback)
    return callback(config)
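The generated method builds its payload with one `SubElement` nesting pattern throughout; a standalone sketch with invented interface values (note that the generated code rebinds `config` to the inner element, so the outer `<config>` wrapper is effectively discarded):

```python
import xml.etree.ElementTree as ET

# Nest child elements under a root and set text leaves, as the generated code does.
config = ET.Element("config")
get_sw = ET.SubElement(config, "get_interface_switchport")
output = ET.SubElement(get_sw, "output")
switchport = ET.SubElement(output, "switchport")
ET.SubElement(switchport, "interface-type").text = "tengigabitethernet"  # invented value
ET.SubElement(switchport, "interface-name").text = "1/0/1"               # invented value

xml = ET.tostring(config, encoding="unicode")
print(xml)
```

`ET.tostring(..., encoding="unicode")` returns a `str`, which is convenient for inspecting the serialized request.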
def getBestMatchingCell(self, c, activeState):
    """Find weakly activated cell in column. Returns index and segment of
    most activated segment above minThreshold.
    """
    # Collect all cells in column c that have at least minThreshold in the
    # most activated segment
    bestActivityInCol = self.minThreshold
    bestSegIdxInCol = -1
    bestCellInCol = -1

    for i in xrange(self.cellsPerColumn):
        maxSegActivity = 0
        maxSegIdx = 0
        for j, s in enumerate(self.cells[c][i]):
            activity = self.getSegmentActivityLevel(s, activeState,
                                                    connectedSynapsesOnly=False)
            if self.verbosity >= 6:
                print " Segment Activity for column", c, "cell", i, \
                      "segment", j, "is", activity
            if activity > maxSegActivity:
                maxSegActivity = activity
                maxSegIdx = j

        if maxSegActivity >= bestActivityInCol:
            bestActivityInCol = maxSegActivity
            bestSegIdxInCol = maxSegIdx
            bestCellInCol = i

    if self.verbosity >= 6:
        print "Best Matching Cell In Col:", bestCellInCol

    if bestCellInCol == -1:
        return (None, None)
    else:
        return bestCellInCol, self.cells[c][bestCellInCol][bestSegIdxInCol]
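The selection loop reduces to a thresholded argmax over (cell, segment) activity levels; a minimal Python 3 sketch with invented numbers:

```python
# Toy activity table: activities[cell][segment] (values are invented).
activities = [[2, 5], [7, 1], [4, 4]]
min_threshold = 6

best_cell, best_seg, best_act = -1, -1, min_threshold
for cell, segs in enumerate(activities):
    # Most active segment within this cell.
    seg, act = max(enumerate(segs), key=lambda p: p[1])
    if act >= best_act:          # >= means later cells win ties, as in the original
        best_cell, best_seg, best_act = cell, seg, act

print(best_cell, best_seg)  # → 1 0
```

Only cell 1 clears the threshold here, so its segment 0 (activity 7) is selected; a `best_cell` of -1 would signal that no cell qualified.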
def update_entries(entries: Entries, data: dict) -> None:
    """Update each entry in the list with some data."""
    # TODO: Is mutating the list okay, making copies is such a pain in the ass
    for entry in entries:
        entry.update(data)
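The TODO about mutation has a cheap answer either way; a sketch of both styles on made-up entries:

```python
# In-place update (what update_entries does) vs. building copies instead.
entries = [{"id": 1}, {"id": 2}]
data = {"seen": True}

for entry in entries:            # mutates the caller's dicts
    entry.update(data)
print(entries)  # → [{'id': 1, 'seen': True}, {'id': 2, 'seen': True}]

# Non-mutating alternative the TODO alludes to: merge into fresh dicts.
fresh = [{**entry, **data} for entry in entries]
```

The comprehension leaves the originals untouched, at the cost of allocating one new dict per entry.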
import numpy as np


def fillna(data, other, join="left", dataset_join="left"):
    """Fill missing values in this object with data from the other object.

    Follows normal broadcasting and alignment rules.

    Parameters
    ----------
    join : {'outer', 'inner', 'left', 'right'}, optional
        Method for joining the indexes of the passed objects along each
        dimension
        - 'outer': use the union of object indexes
        - 'inner': use the intersection of object indexes
        - 'left': use indexes from the first object with each dimension
        - 'right': use indexes from the last object with each dimension
        - 'exact': raise `ValueError` instead of aligning when indexes to
          be aligned are not equal
    dataset_join : {'outer', 'inner', 'left', 'right'}, optional
        Method for joining variables of Dataset objects with mismatched
        data variables.
        - 'outer': take variables from both Dataset objects
        - 'inner': take only overlapped variables
        - 'left': take only variables from the first object
        - 'right': take only variables from the last object
    """
    from .computation import apply_ufunc

    return apply_ufunc(duck_array_ops.fillna, data, other, join=join,
                       dask="allowed", dataset_join=dataset_join,
                       dataset_fill_value=np.nan, keep_attrs=True)
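At the array level, the element-wise operation being applied amounts to a masked `where`; a NumPy-only sketch that leaves out xarray's alignment machinery:

```python
import numpy as np

data = np.array([1.0, np.nan, 3.0, np.nan])
other = np.array([9.0, 9.0, 9.0, 9.0])

# Where data is missing, take the value from other; elsewhere keep data.
filled = np.where(np.isnan(data), other, data)
print(filled)  # → [1. 9. 3. 9.]
```

Broadcasting applies here too: `other` could equally be a scalar fill value.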
async def copy_from_query(self, query, *args, output,
                          timeout=None, format=None, oids=None,
                          delimiter=None, null=None, header=None, quote=None,
                          escape=None, force_quote=None, encoding=None):
    """Copy the results of a query to a file or file-like object.

    :param str query:
        The query to copy the results of.

    :param args:
        Query arguments.

    :param output:
        A :term:`path-like object <python:path-like object>`,
        or a :term:`file-like object <python:file-like object>`, or
        a :term:`coroutine function <python:coroutine function>`
        that takes a ``bytes`` instance as a sole argument.

    :param float timeout:
        Optional timeout value in seconds.

    The remaining keyword arguments are ``COPY`` statement options,
    see `COPY statement documentation`_ for details.

    :return: The status string of the COPY command.

    Example:

    .. code-block:: pycon

        >>> import asyncpg
        >>> import asyncio
        >>> async def run():
        ...     con = await asyncpg.connect(user='postgres')
        ...     result = await con.copy_from_query(
        ...         'SELECT foo, bar FROM mytable WHERE foo > $1', 10,
        ...         output='file.csv', format='csv')
        ...     print(result)
        ...
        >>> asyncio.get_event_loop().run_until_complete(run())
        'COPY 10'

    .. _`COPY statement documentation`:
        https://www.postgresql.org/docs/current/static/sql-copy.html

    .. versionadded:: 0.11.0
    """
    opts = self._format_copy_opts(
        format=format, oids=oids, delimiter=delimiter,
        null=null, header=header, quote=quote, escape=escape,
        force_quote=force_quote, encoding=encoding
    )

    if args:
        query = await utils._mogrify(self, query, args)

    copy_stmt = 'COPY ({query}) TO STDOUT {opts}'.format(
        query=query, opts=opts)

    return await self._copy_out(copy_stmt, output, timeout)
def sample_path(alpha, A, pobs, T=None):
    """ Sample the hidden pathway S from the conditional distribution
    P ( S | Parameters, Observations )

    Parameters
    ----------
    alpha : ndarray((T,N), dtype = float), optional, default = None
        alpha[t,i] is the ith forward coefficient of time t.
    A : ndarray((N,N), dtype = float)
        transition matrix of the hidden states
    pobs : ndarray((T,N), dtype = float)
        pobs[t,i] is the observation probability for observation at time t
        given hidden state i
    T : int
        number of time steps

    Returns
    -------
    S : numpy.array shape (T)
        maximum likelihood hidden path
    """
    if __impl__ == __IMPL_PYTHON__:
        return ip.sample_path(alpha, A, pobs, T=T, dtype=config.dtype)
    elif __impl__ == __IMPL_C__:
        return ic.sample_path(alpha, A, pobs, T=T, dtype=config.dtype)
    else:
        raise RuntimeError('Nonexisting implementation selected: ' + str(__impl__))
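Whichever backend is dispatched to, the per-step draw inside such a sampler is inverse-CDF sampling from a discrete conditional distribution; a self-contained sketch (the probabilities are invented, and this is not the backend's actual code):

```python
import numpy as np

rng = np.random.RandomState(0)
p = np.array([0.2, 0.5, 0.3])          # conditional distribution over 3 states
cdf = np.cumsum(p)                     # [0.2, 0.7, 1.0]

# A uniform draw u selects the first state whose CDF value exceeds u.
draws = np.searchsorted(cdf, rng.uniform(size=10000))
freq = np.bincount(draws, minlength=3) / 10000.0
print(freq)  # roughly [0.2, 0.5, 0.3]
```

Vectorizing the `searchsorted` over agents or trajectories gives the batched state draws used in HMM path samplers.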
def clone(self, clone_member):
    """
    - initialize the replica from an existing member (master or replica)
    - initialize the replica using the replica creation method that works
      without the replication connection (i.e. restore from on-disk base backup)
    """
    self._rewind_state = REWIND_STATUS.INITIAL
    ret = self.create_replica(clone_member) == 0
    if ret:
        self._post_restore()
        self._configure_server_parameters()
    return ret
def getShocks(self):
    '''
    Gets new Markov states and permanent and transitory income shocks for this period.
    Samples from IncomeDstn for each period-state in the cycle.

    Parameters
    ----------
    None

    Returns
    -------
    None
    '''
    # Get new Markov states for each agent
    if self.global_markov:
        base_draws = np.ones(self.AgentCount)*drawUniform(1, seed=self.RNG.randint(0, 2**31-1))
    else:
        base_draws = self.RNG.permutation(np.arange(self.AgentCount, dtype=float)/self.AgentCount + 1.0/(2*self.AgentCount))
    newborn = self.t_age == 0  # Don't change Markov state for those who were just born (unless global_markov)
    MrkvPrev = self.MrkvNow
    MrkvNow = np.zeros(self.AgentCount, dtype=int)
    for t in range(self.T_cycle):
        Cutoffs = np.cumsum(self.MrkvArray[t], axis=1)
        for j in range(self.MrkvArray[t].shape[0]):
            these = np.logical_and(self.t_cycle == t, MrkvPrev == j)
            MrkvNow[these] = np.searchsorted(Cutoffs[j, :], base_draws[these]).astype(int)
    if not self.global_markov:
        MrkvNow[newborn] = MrkvPrev[newborn]
    self.MrkvNow = MrkvNow.astype(int)

    # Now get income shocks for each consumer, by cycle-time and discrete state
    PermShkNow = np.zeros(self.AgentCount)  # Initialize shock arrays
    TranShkNow = np.zeros(self.AgentCount)
    for t in range(self.T_cycle):
        for j in range(self.MrkvArray[t].shape[0]):
            these = np.logical_and(t == self.t_cycle, j == MrkvNow)
            N = np.sum(these)
            if N > 0:
                IncomeDstnNow = self.IncomeDstn[t-1][j]  # set current income distribution
                PermGroFacNow = self.PermGroFac[t-1][j]  # and permanent growth factor
                Indices = np.arange(IncomeDstnNow[0].size)  # just a list of integers
                # Get random draws of income shocks from the discrete distribution
                EventDraws = drawDiscrete(N, X=Indices, P=IncomeDstnNow[0], exact_match=False,
                                          seed=self.RNG.randint(0, 2**31-1))
                PermShkNow[these] = IncomeDstnNow[1][EventDraws]*PermGroFacNow  # permanent "shock" includes expected growth
                TranShkNow[these] = IncomeDstnNow[2][EventDraws]
    newborn = self.t_age == 0
    PermShkNow[newborn] = 1.0
    TranShkNow[newborn] = 1.0
    self.PermShkNow = PermShkNow
    self.TranShkNow = TranShkNow
def convert_adc(value, output_type, max_volts):
    """ Converts the output from the ADC into the desired type. """
    # Every entry in the dispatch table is called with (value, max_volts),
    # so the raw pass-through lambda must accept (and ignore) the second
    # argument; a one-argument lambda would raise a TypeError here.
    return {
        const.ADC_RAW: lambda x, _max_volts: x,
        const.ADC_PERCENTAGE: adc_to_percentage,
        const.ADC_VOLTS: adc_to_volts,
        const.ADC_MILLIVOLTS: adc_to_millivolts
    }[output_type](value, max_volts)
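The dispatch-table pattern above can be exercised with stand-in constants and converters; the names, 10-bit scaling, and formulas below are assumptions for illustration, not the library's actual helpers:

```python
# Hypothetical stand-ins for the `const` values and converter helpers.
ADC_RAW, ADC_PERCENTAGE, ADC_VOLTS, ADC_MILLIVOLTS = range(4)

def adc_to_percentage(value, max_volts, max_raw=1023):
    # Assumed 10-bit ADC: full scale maps to 100%.
    return value / max_raw * 100.0

def adc_to_volts(value, max_volts, max_raw=1023):
    return value / max_raw * max_volts

def adc_to_millivolts(value, max_volts, max_raw=1023):
    return adc_to_volts(value, max_volts) * 1000.0

def convert_adc(value, output_type, max_volts):
    # Every entry takes (value, max_volts) so the dispatch call is uniform.
    return {
        ADC_RAW: lambda v, _mv: v,
        ADC_PERCENTAGE: adc_to_percentage,
        ADC_VOLTS: adc_to_volts,
        ADC_MILLIVOLTS: adc_to_millivolts,
    }[output_type](value, max_volts)
```

Looking up the converter in a dict and calling it in one expression keeps the function branch-free; an unknown `output_type` fails fast with a `KeyError`.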
def batch_results(self, results):
    """Push a batch of output results to the Spark output RDD of ``TFCluster.inference()``.

    Note: this currently expects a one-to-one mapping of input to output data, so the
    length of the ``results`` array should match the length of the previously retrieved
    batch of input data.

    Args:
      :results: array of output data for the equivalent batch of input data.
    """
    logging.debug("batch_results() invoked")
    queue = self.mgr.get_queue(self.qname_out)
    for item in results:
        queue.put(item, block=True)
    logging.debug("batch_results() returning data")
def finalize(self, **kwargs):
    """
    Finalize the drawing setting labels and title.
    """
    # Set the title
    self.set_title('Feature Importances of {} Features using {}'.format(
        len(self.features_), self.name))

    # Set the xlabel
    self.ax.set_xlabel(self._get_xlabel())

    # Remove the ygrid
    self.ax.grid(False, axis='y')

    if self.stack:
        plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left")

    # Ensure we have a tight fit
    plt.tight_layout()
def is_mixed_script(string, allowed_aliases=['COMMON']):
    """Checks if ``string`` contains mixed-scripts content, excluding script
    blocks aliases in ``allowed_aliases``.

    E.g. ``B. C`` is not considered mixed-scripts by default: it contains characters
    from **Latin** and **Common**, but **Common** is excluded by default.

    >>> confusables.is_mixed_script('Abç')
    False
    >>> confusables.is_mixed_script('ρτ.τ')
    False
    >>> confusables.is_mixed_script('ρτ.τ', allowed_aliases=[])
    True
    >>> confusables.is_mixed_script('Alloτ')
    True

    :param string: A unicode string
    :type string: str
    :param allowed_aliases: Script blocks aliases not to consider.
    :type allowed_aliases: list(str)
    :return: Whether ``string`` is considered mixed-scripts or not.
    :rtype: bool
    """
    allowed_aliases = [a.upper() for a in allowed_aliases]
    cats = unique_aliases(string) - set(allowed_aliases)
    return len(cats) > 1
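The same check can be sketched without the library's Unicode block data, using the first word of each character's Unicode name as a crude script alias. This is a toy approximation of `unique_aliases` (an assumption for illustration; the real implementation uses proper script-block tables):

```python
import unicodedata

def unique_aliases(string):
    # Crude script detection: take the first word of each character's
    # Unicode name ('LATIN', 'GREEK', ...); non-letters count as COMMON.
    aliases = set()
    for ch in string:
        if ch.isalpha():
            aliases.add(unicodedata.name(ch).split()[0])
        else:
            aliases.add('COMMON')
    return aliases

def is_mixed_script(string, allowed_aliases=('COMMON',)):
    allowed = {a.upper() for a in allowed_aliases}
    # More than one non-allowed script alias => mixed-script content.
    return len(unique_aliases(string) - allowed) > 1
```

With this sketch, `is_mixed_script('Abç')` is `False` (all Latin) while `is_mixed_script('Alloτ')` is `True` (Latin plus Greek), mirroring the doctests above.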
def calculate_connvectivity_radius(self, amount_clusters, maximum_iterations = 100):
    """!
    @brief Calculates the connectivity radius that allocates the specified amount of
            clusters using the ordering diagram, and marks the borders of the clusters
            using indexes of values of the ordering diagram.
    @details Parameter 'maximum_iterations' is used to protect from hanging when it is
              impossible to allocate the specified number of clusters.

    @param[in] amount_clusters (uint): amount of clusters that should be allocated by
                the calculated connectivity radius.
    @param[in] maximum_iterations (uint): maximum number of iterations for searching
                for a connectivity radius that allocates the specified amount of
                clusters (by default it is restricted to 100 iterations).

    @return (double, list) Value of the connectivity radius and borders of the clusters
             as (radius, borders); radius may be 'None' and borders may be '[]' if a
             connectivity radius hasn't been found within the specified number of
             iterations.

    """
    maximum_distance = max(self.__ordering)

    upper_distance = maximum_distance
    lower_distance = 0.0

    result = None

    amount, borders = self.extract_cluster_amount(maximum_distance)
    if amount <= amount_clusters:
        for _ in range(maximum_iterations):
            radius = (lower_distance + upper_distance) / 2.0

            amount, borders = self.extract_cluster_amount(radius)
            if amount == amount_clusters:
                result = radius
                break
            elif amount == 0:
                break
            elif amount > amount_clusters:
                lower_distance = radius
            elif amount < amount_clusters:
                upper_distance = radius

    return result, borders
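The bisection search at the heart of this method can be sketched stand-alone, with a toy `extract_cluster_amount` replacing the real ordering analysis. The one-dimensional "ordering" and the peak-based border rule below are assumptions for illustration only:

```python
def find_radius(extract_cluster_amount, maximum_distance, amount_clusters,
                maximum_iterations=100):
    """Bisect for a radius at which exactly `amount_clusters` clusters appear."""
    lower, upper = 0.0, maximum_distance
    amount, borders = extract_cluster_amount(maximum_distance)
    if amount <= amount_clusters:  # otherwise unreachable even at max radius
        for _ in range(maximum_iterations):
            radius = (lower + upper) / 2.0
            amount, borders = extract_cluster_amount(radius)
            if amount == amount_clusters:
                return radius, borders
            if amount == 0:
                break
            if amount > amount_clusters:
                lower = radius   # too many clusters -> grow the radius
            else:
                upper = radius   # too few clusters -> shrink the radius
    return None, []

# Toy stand-in: treat values of a 1-D "ordering" above the radius as
# cluster borders, so a larger radius merges clusters (fewer of them).
ordering = [0.1, 0.2, 5.0, 0.1, 0.3, 5.0, 0.2, 5.0, 0.1]

def extract(radius):
    borders = [i for i, d in enumerate(ordering) if d > radius]
    return len(borders) + 1, borders
```

The key invariant driving the bisection is monotonicity: growing the radius can only merge clusters, so the cluster count never increases with the radius.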
def parse(self, data):
    """
    Split and iterate through the datafile to extract genres, tags and points.
    """
    categories = data.split("\n\n")
    reference = {}
    reference_points = {}
    genre_index = []
    tag_index = []
    for category in categories:
        entries = category.strip().split("\n")
        entry_category, entry_points = self._parse_entry(entries[0].lower())
        if entry_category.startswith("#"):
            continue
        for entry in entries:
            entry = entry.lower()
            if not entry:
                continue
            # Comment, ignore
            if entry.startswith("#"):
                continue
            # Handle genre
            if not entry.startswith("-"):
                genre, points = self._parse_entry(entry)
                reference[genre] = entry_category
                reference_points[genre] = points
                genre_index.append(genre)
            # Handle tag
            else:
                tag = entry[1:]
                tag, points = self._parse_entry(tag, limit=9.5)
                reference[tag] = entry_category
                reference_points[tag] = points
                tag_index.append(tag)
    self.reference = reference
    self.genres = genre_index
    self.tags = tag_index
    self.points = reference_points
def _prepare_args_with_initial_simplex(objective_function,
                                       initial_simplex,
                                       objective_at_initial_simplex,
                                       batch_evaluate_objective):
    """Evaluates the objective function at the specified initial simplex."""
    initial_simplex = tf.convert_to_tensor(value=initial_simplex)

    # If d is the dimension of the problem, the number of vertices in the
    # simplex should be d+1. From this, we can infer the number of dimensions
    # as n - 1 where n is the number of vertices specified.
    num_vertices = tf.shape(input=initial_simplex)[0]
    dim = num_vertices - 1
    num_evaluations = 0

    if objective_at_initial_simplex is None:
        objective_at_initial_simplex, n_evals = _evaluate_objective_multiple(
            objective_function, initial_simplex, batch_evaluate_objective)
        num_evaluations += n_evals
    objective_at_initial_simplex = tf.convert_to_tensor(
        value=objective_at_initial_simplex)

    return (dim,
            num_vertices,
            initial_simplex,
            objective_at_initial_simplex,
            num_evaluations)
def cast_to_list(position):
    """Cast the positional argument at given position into a list if not already a list."""
    @wrapt.decorator
    def wrapper(function, instance, args, kwargs):
        if not isinstance(args[position], list):
            args = list(args)
            args[position] = [args[position]]
            args = tuple(args)
        return function(*args, **kwargs)
    return wrapper
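A dependency-free sketch of the same decorator using only `functools`; unlike the `wrapt` version above, this plain-function variant does not get `wrapt`'s special handling of bound-method instances:

```python
import functools

def cast_to_list(position):
    """Wrap the positional argument at `position` in a list if it isn't one."""
    def decorator(function):
        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            if not isinstance(args[position], list):
                args = list(args)                  # tuples are immutable,
                args[position] = [args[position]]  # so rebuild the args
                args = tuple(args)
            return function(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical example function for illustration.
@cast_to_list(0)
def total(values, scale=1):
    return sum(values) * scale
```

With this, `total(5)` and `total([5])` behave identically, which is exactly the normalization the decorator is for.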
def findSequenceOnDisk(cls, pattern, strictPadding=False):
    """
    Search for a specific sequence on disk.

    The padding characters used in the `pattern` are used to filter the
    frame values of the files on disk (if `strictPadding` is True).

    Examples:
        Find sequence matching basename and extension, and a wildcard for
        any frame. Returns bar.1.exr, bar.10.exr, bar.100.exr, bar.1000.exr,
        inclusive:

        >>> findSequenceOnDisk("seq/bar@@@@.exr")

        Find exactly 4-padded sequence, i.e. seq/bar1-100#.exr; returns only
        frames bar1000.exr through bar9999.exr:

        >>> findSequenceOnDisk("seq/bar#.exr", strictPadding=True)

    Args:
        pattern (str): the sequence pattern being searched for
        strictPadding (bool): if True, ignore files with padding length
            different from `pattern`

    Returns:
        str:

    Raises:
        :class:`.FileSeqException`: if no sequence is found on disk
    """
    seq = cls(pattern)

    if seq.frameRange() == '' and seq.padding() == '':
        if os.path.isfile(pattern):
            return seq

    patt = seq.format('{dirname}{basename}*{extension}')

    ext = seq.extension()
    basename = seq.basename()
    pad = seq.padding()

    globbed = iglob(patt)
    if pad and strictPadding:
        globbed = cls._filterByPaddingNum(globbed, seq.zfill())
        pad = cls.conformPadding(pad)

    matches = cls.yield_sequences_in_list(globbed)
    for match in matches:
        if match.basename() == basename and match.extension() == ext:
            if pad and strictPadding:
                match.setPadding(pad)
            return match

    msg = 'no sequence found on disk matching {0}'
    raise FileSeqException(msg.format(pattern))
def do_GET(self):
    """Handles a GET request."""
    thread_local.clock_start = get_time()
    thread_local.status_code = 200
    thread_local.message = None
    thread_local.headers = []
    thread_local.end_headers = []
    thread_local.size = -1
    thread_local.method = 'GET'
    try:
        self.cross_origin_headers()
        if self.handle_special(True, 'GET'):
            return
        SimpleHTTPRequestHandler.do_GET(self)
    except PreventDefaultResponse as pdr:
        if pdr.code:
            self.send_error(pdr.code, pdr.msg)
    except (KeyboardInterrupt, SystemExit):
        raise
    except Exception:
        self.handle_error()
def get_header_guard_dmlc(filename):
    """Get Header Guard Convention for DMLC Projects.

    For headers in include, directly use the path
    For headers in src, use project name plus path

    Examples: with project-name = dmlc
      include/dmlc/timer.h -> DMLC_TIMER_H_
      src/io/libsvm_parser.h -> DMLC_IO_LIBSVM_PARSER_H_
    """
    fileinfo = cpplint.FileInfo(filename)
    file_path_from_root = fileinfo.RepositoryName()
    inc_list = ['include', 'api', 'wrapper']

    if file_path_from_root.find('src/') != -1 and _HELPER.project_name is not None:
        idx = file_path_from_root.find('src/')
        file_path_from_root = _HELPER.project_name + file_path_from_root[idx + 3:]
    else:
        for spath in inc_list:
            prefix = spath + os.sep
            if file_path_from_root.startswith(prefix):
                file_path_from_root = re.sub('^' + prefix, '', file_path_from_root)
                break
    return re.sub(r'[-./\s]', '_', file_path_from_root).upper() + '_'
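The guard-naming rule itself can be isolated from the `cpplint` machinery as a pure path transform. This sketch assumes POSIX-style paths that are already relative to the repository root:

```python
import re

def header_guard(file_path_from_root, project_name='dmlc'):
    # Simplified version of the rule above: src/ paths get the project
    # name prepended, include-style prefixes are stripped outright.
    inc_list = ['include', 'api', 'wrapper']
    idx = file_path_from_root.find('src/')
    if idx != -1 and project_name is not None:
        file_path_from_root = project_name + file_path_from_root[idx + 3:]
    else:
        for spath in inc_list:
            prefix = spath + '/'
            if file_path_from_root.startswith(prefix):
                file_path_from_root = file_path_from_root[len(prefix):]
                break
    # Dashes, dots, slashes and whitespace all collapse to underscores.
    return re.sub(r'[-./\s]', '_', file_path_from_root).upper() + '_'
```

This reproduces the docstring's examples: `include/dmlc/timer.h` becomes `DMLC_TIMER_H_` and `src/io/libsvm_parser.h` becomes `DMLC_IO_LIBSVM_PARSER_H_`.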
def login():
    """View function for login view"""

    form_class = _security.login_form

    if request.is_json:
        form = form_class(MultiDict(request.get_json()))
    else:
        form = form_class(request.form)

    if form.validate_on_submit():
        login_user(form.user, remember=form.remember.data)
        after_this_request(_commit)

        if not request.is_json:
            return redirect(get_post_login_redirect(form.next.data))

    if request.is_json:
        return _render_json(form, include_auth_token=True)

    return _security.render_template(config_value('LOGIN_USER_TEMPLATE'),
                                     login_user_form=form,
                                     **_ctx('login'))
def declare_queue(self, queue_name): """Declare a queue. Has no effect if a queue with the given name has already been declared. Parameters: queue_name(str): The name of the new queue. """ if queue_name not in self.queues: self.emit_before("declare_queue", queue_name) self.queues[queue_name] = Queue() self.emit_after("declare_queue", queue_name) delayed_name = dq_name(queue_name) self.queues[delayed_name] = Queue() self.delay_queues.add(delayed_name) self.emit_after("declare_delay_queue", delayed_name)
Declare a queue. Has no effect if a queue with the given name has already been declared. Parameters: queue_name(str): The name of the new queue.
def scramble_mutation(random, candidate, args): """Return the mutants created by scramble mutation on the candidates. This function performs scramble mutation. It randomly chooses two locations along the candidate and scrambles the values within that slice. .. Arguments: random -- the random number generator object candidate -- the candidate solution args -- a dictionary of keyword arguments Optional keyword arguments in args: - *mutation_rate* -- the rate at which mutation is performed (default 0.1) The mutation rate is applied to the candidate as a whole (i.e., it either mutates or it does not, based on the rate). """ rate = args.setdefault('mutation_rate', 0.1) if random.random() < rate: size = len(candidate) p = random.randint(0, size-1) q = random.randint(0, size-1) p, q = min(p, q), max(p, q) s = candidate[p:q+1] random.shuffle(s) return candidate[:p] + s[::-1] + candidate[q+1:] else: return candidate
Return the mutants created by scramble mutation on the candidates. This function performs scramble mutation. It randomly chooses two locations along the candidate and scrambles the values within that slice. .. Arguments: random -- the random number generator object candidate -- the candidate solution args -- a dictionary of keyword arguments Optional keyword arguments in args: - *mutation_rate* -- the rate at which mutation is performed (default 0.1) The mutation rate is applied to the candidate as a whole (i.e., it either mutates or it does not, based on the rate).
def find_blocked_biomass_precursors(reaction, model): """ Return a list of all biomass precursors that cannot be produced. Parameters ---------- reaction : cobra.core.reaction.Reaction The biomass reaction of the model under investigation. model : cobra.Model The metabolic model under investigation. Returns ------- list Metabolite objects that are reactants of the biomass reaction excluding ATP and H2O that cannot be produced by flux balance analysis. """ LOGGER.debug("Finding blocked biomass precursors") precursors = find_biomass_precursors(model, reaction) blocked_precursors = list() _, ub = helpers.find_bounds(model) for precursor in precursors: with model: dm_rxn = model.add_boundary( precursor, type="safe-demand", reaction_id="safe_demand", lb=0, ub=ub ) flux = helpers.run_fba(model, dm_rxn.id, direction='max') if np.isnan(flux) or abs(flux) < 1E-08: blocked_precursors.append(precursor) return blocked_precursors
Return a list of all biomass precursors that cannot be produced. Parameters ---------- reaction : cobra.core.reaction.Reaction The biomass reaction of the model under investigation. model : cobra.Model The metabolic model under investigation. Returns ------- list Metabolite objects that are reactants of the biomass reaction excluding ATP and H2O that cannot be produced by flux balance analysis.
def console_print_rect( con: tcod.console.Console, x: int, y: int, w: int, h: int, fmt: str ) -> int: """Print a string constrained to a rectangle. If h > 0 and the bottom of the rectangle is reached, the string is truncated. If h = 0, the string is only truncated if it reaches the bottom of the console. Returns: int: The number of lines of text once word-wrapped. .. deprecated:: 8.5 Use :any:`Console.print_rect` instead. """ return int( lib.TCOD_console_printf_rect(_console(con), x, y, w, h, _fmt(fmt)) )
Print a string constrained to a rectangle. If h > 0 and the bottom of the rectangle is reached, the string is truncated. If h = 0, the string is only truncated if it reaches the bottom of the console. Returns: int: The number of lines of text once word-wrapped. .. deprecated:: 8.5 Use :any:`Console.print_rect` instead.
def list_to_json(source_list): """ Serialise all the items in source_list to json """ result = [] for item in source_list: result.append(item.to_json()) return result
Serialise all the items in source_list to json
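`list_to_json` only requires that each item expose a `to_json` method. A tiny hypothetical `Item` class (an assumption; the real element type is not shown) illustrates the pattern, and shows that the loop-and-append body is equivalent to a list comprehension:

```python
class Item:
    """Hypothetical element type: anything with a to_json() method works."""
    def __init__(self, name):
        self.name = name

    def to_json(self):
        return {'name': self.name}

def list_to_json(source_list):
    # equivalent to the loop-and-append version above
    return [item.to_json() for item in source_list]
```

With this, `list_to_json([Item('a'), Item('b')])` produces `[{'name': 'a'}, {'name': 'b'}]`.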
def plot(self, overlay=True, **labels): # pragma: no cover '''Plot all time series in the group.''' pylab = LazyImport.pylab() colours = list('rgbymc') colours_len = len(colours) colours_pos = 0 plots = len(self.groups) for name, series in self.groups.iteritems(): colour = colours[colours_pos % colours_len] colours_pos += 1 if not overlay: pylab.subplot(plots, 1, colours_pos) kwargs = {} if name in labels: name = labels[name] if name is not None: kwargs['label'] = name pylab.plot(series.dates, series.values, '%s-' % colour, **kwargs) if name is not None: pylab.legend() pylab.show()
Plot all time series in the group.
def _update_mean_coords(self, dig, N, centers_sum, **paircoords): """ Update the mean coordinate sums """ if N is None or centers_sum is None: return N.flat[:] += utils.bincount(dig, 1., minlength=N.size) for i, dim in enumerate(self.dims): size = centers_sum[i].size centers_sum[i].flat[:] += utils.bincount(dig, paircoords[dim], minlength=size)
Update the mean coordinate sums
def on_stop(self): """ stop requester """ LOGGER.debug("natsd.Requester.on_stop") self.is_started = False try: LOGGER.debug("natsd.Requester.on_stop - unsubscribe from " + str(self.responseQS)) next(self.nc.unsubscribe(self.responseQS)) except StopIteration as e: pass try: LOGGER.debug("natsd.Requester.on_stop - close nats connection") next(self.nc.close()) except StopIteration as e: pass LOGGER.debug("natsd.Requester.on_stop - nc is closed: " + str(self.nc.is_closed)) try: LOGGER.debug("natsd.Requester.on_stop - cancelling aio tasks loop") loop_to_stop = self.loop for task in asyncio.Task.all_tasks(loop_to_stop): LOGGER.debug("natsd.Requester.on_stop - cancelling task " + str(task)) task.cancel() LOGGER.debug("natsd.Requester.on_stop - stopping aio loop stop") loop_to_stop.stop() count = 0 while loop_to_stop.is_running(): count += 1 if count % 10 == 0: LOGGER.debug("natsd.Requester.on_stop - waiting aio loop to be stopped (" + str(asyncio.Task.all_tasks(loop_to_stop).__len__()) + " tasks left; " + "current task: " + str(asyncio.Task.current_task(loop_to_stop)) + ")") for task in asyncio.Task.all_tasks(loop_to_stop): LOGGER.debug("natsd.Requester.on_stop - cancelling task " + str(task)) task.cancel() time.sleep(1) if count == 120: LOGGER.error("natsd.Requester.on_stop - unable to stop aio loop after 120 sec (" + str(asyncio.Task.all_tasks(loop_to_stop).__len__()) + " tasks left; " + "current task: " + str(asyncio.Task.current_task(loop_to_stop)) + ")") break if not loop_to_stop.is_running(): LOGGER.debug("natsd.Requester.on_stop - close aio loop") loop_to_stop.close() except Exception as e: LOGGER.warn("natsd.Requester.on_stop - exception on aio clean : " + traceback.format_exc())
stop requester
def wait_displayed(element, timeout=None, fail_on_timeout=None): """ Wait until the element becomes visible or the wait times out. Returns true if the element became visible, otherwise false. If timeout is not specified or 0, the element's specific wait timeout is used. :param element: :param timeout: :param fail_on_timeout: :return: """ return wait(lambda: element.is_displayed(), timeout or element.wait_timeout, fail_on_timeout)
Wait until the element becomes visible or the wait times out. Returns true if the element became visible, otherwise false. If timeout is not specified or 0, the element's specific wait timeout is used. :param element: :param timeout: :param fail_on_timeout: :return:
def read_inquiry_scan_activity(sock): """returns the current inquiry scan interval and window, or -1 on failure""" # save current filter old_filter = sock.getsockopt( bluez.SOL_HCI, bluez.HCI_FILTER, 14) # Setup socket filter to receive only events related to the # read_inquiry_mode command flt = bluez.hci_filter_new() opcode = bluez.cmd_opcode_pack(bluez.OGF_HOST_CTL, bluez.OCF_READ_INQ_ACTIVITY) bluez.hci_filter_set_ptype(flt, bluez.HCI_EVENT_PKT) bluez.hci_filter_set_event(flt, bluez.EVT_CMD_COMPLETE) bluez.hci_filter_set_opcode(flt, opcode) sock.setsockopt( bluez.SOL_HCI, bluez.HCI_FILTER, flt ) # first read the current inquiry mode. bluez.hci_send_cmd(sock, bluez.OGF_HOST_CTL, bluez.OCF_READ_INQ_ACTIVITY ) pkt = sock.recv(255) status,interval,window = struct.unpack("!xxxxxxBHH", pkt) interval = bluez.btohs(interval) interval = (interval >> 8) | ( (interval & 0xFF) << 8 ) window = (window >> 8) | ( (window & 0xFF) << 8 ) if status != 0: interval = window = -1 # restore old filter sock.setsockopt( bluez.SOL_HCI, bluez.HCI_FILTER, old_filter ) return interval, window
returns the current inquiry scan interval and window, or -1 on failure
def remap_snps(self, individual, target_assembly, complement_bases=True): """ Remap the SNP coordinates of an individual from one assembly to another. This method uses the assembly map endpoint of the Ensembl REST API service (via ``Resources``'s ``EnsemblRestClient``) to convert SNP coordinates / positions from one assembly to another. After remapping, the coordinates / positions for the individual's SNPs will be that of the target assembly. If the SNPs are already mapped relative to the target assembly, remapping will not be performed. Parameters ---------- target_assembly : {'NCBI36', 'GRCh37', 'GRCh38', 36, 37, 38} assembly to remap to complement_bases : bool complement bases when remapping SNPs to the minus strand Returns ------- chromosomes_remapped : list of str chromosomes remapped; empty if None chromosomes_not_remapped : list of str chromosomes not remapped; empty if None Notes ----- An assembly is also known as a "build." For example: Assembly NCBI36 = Build 36 Assembly GRCh37 = Build 37 Assembly GRCh38 = Build 38 See https://www.ncbi.nlm.nih.gov/assembly for more information about assemblies and remapping.
References ---------- ..[1] Ensembl, Assembly Map Endpoint, http://rest.ensembl.org/documentation/info/assembly_map """ chromosomes_remapped = [] chromosomes_not_remapped = [] snps = individual.snps if snps is None: print("No SNPs to remap") return chromosomes_remapped, chromosomes_not_remapped else: chromosomes_not_remapped = list(snps["chrom"].unique()) valid_assemblies = ["NCBI36", "GRCh37", "GRCh38", 36, 37, 38] if target_assembly not in valid_assemblies: print("Invalid target assembly") return chromosomes_remapped, chromosomes_not_remapped if isinstance(target_assembly, int): if target_assembly == 36: target_assembly = "NCBI36" else: target_assembly = "GRCh" + str(target_assembly) if individual.build == 36: source_assembly = "NCBI36" else: source_assembly = "GRCh" + str(individual.build) if source_assembly == target_assembly: return chromosomes_remapped, chromosomes_not_remapped assembly_mapping_data = self._resources.get_assembly_mapping_data( source_assembly, target_assembly ) if assembly_mapping_data is None: return chromosomes_remapped, chromosomes_not_remapped for chrom in snps["chrom"].unique(): # extract SNPs for this chrom for faster remapping temp = pd.DataFrame(snps.loc[snps["chrom"] == chrom]) temp["remapped"] = False if chrom in assembly_mapping_data: chromosomes_remapped.append(chrom) chromosomes_not_remapped.remove(chrom) mappings = assembly_mapping_data[chrom] else: print( "Chromosome " + chrom + " not remapped; " "removing chromosome from SNPs for consistency" ) snps = snps.drop(snps.loc[snps["chrom"] == chrom].index) continue pos_start = int(temp["pos"].describe()["min"]) pos_end = int(temp["pos"].describe()["max"]) for mapping in mappings["mappings"]: # skip if mapping is outside of range of SNP positions if ( mapping["original"]["end"] <= pos_start or mapping["original"]["start"] >= pos_end ): continue orig_range_len = ( mapping["original"]["end"] - mapping["original"]["start"] ) mapped_range_len = mapping["mapped"]["end"] - mapping["mapped"]["start"] orig_region = mapping["original"]["seq_region_name"] mapped_region = mapping["mapped"]["seq_region_name"] if orig_region != mapped_region: print("discrepant chroms") continue if orig_range_len != mapped_range_len: print("discrepant coords") # observed when mapping NCBI36 -> GRCh38 continue # find the SNPs that are being remapped for this mapping snp_indices = temp.loc[ ~temp["remapped"] & (temp["pos"] >= mapping["original"]["start"]) & (temp["pos"] <= mapping["original"]["end"]) ].index if len(snp_indices) > 0: # remap the SNPs if mapping["mapped"]["strand"] == -1: # flip and (optionally) complement since we're mapping to minus strand diff_from_start = ( temp.loc[snp_indices, "pos"] - mapping["original"]["start"] ) temp.loc[snp_indices, "pos"] = ( mapping["mapped"]["end"] - diff_from_start ) if complement_bases: snps.loc[snp_indices, "genotype"] = temp.loc[ snp_indices, "genotype" ].apply(self._complement_bases) else: # mapping is on same (plus) strand, so just remap based on offset offset = ( mapping["mapped"]["start"] - mapping["original"]["start"] ) temp.loc[snp_indices, "pos"] = temp["pos"] + offset # mark these SNPs as remapped temp.loc[snp_indices, "remapped"] = True # update SNP positions for this chrom snps.loc[temp.index, "pos"] = temp["pos"] individual._set_snps(sort_snps(snps), int(target_assembly[-2:])) return chromosomes_remapped, chromosomes_not_remapped
Remap the SNP coordinates of an individual from one assembly to another. This method uses the assembly map endpoint of the Ensembl REST API service (via ``Resources``'s ``EnsemblRestClient``) to convert SNP coordinates / positions from one assembly to another. After remapping, the coordinates / positions for the individual's SNPs will be that of the target assembly. If the SNPs are already mapped relative to the target assembly, remapping will not be performed. Parameters ---------- target_assembly : {'NCBI36', 'GRCh37', 'GRCh38', 36, 37, 38} assembly to remap to complement_bases : bool complement bases when remapping SNPs to the minus strand Returns ------- chromosomes_remapped : list of str chromosomes remapped; empty if None chromosomes_not_remapped : list of str chromosomes not remapped; empty if None Notes ----- An assembly is also known as a "build." For example: Assembly NCBI36 = Build 36 Assembly GRCh37 = Build 37 Assembly GRCh38 = Build 38 See https://www.ncbi.nlm.nih.gov/assembly for more information about assemblies and remapping. References ---------- ..[1] Ensembl, Assembly Map Endpoint, http://rest.ensembl.org/documentation/info/assembly_map
def check_candidate(a, d, n, s): """Part of the Miller-Rabin primality test in is_prime().""" if pow(a, d, n) == 1: return False for i in range(s): if pow(a, 2 ** i * d, n) == n - 1: return False return True
Part of the Miller-Rabin primality test in is_prime().
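`check_candidate` returns True exactly when the base *a* witnesses that *n* is composite. The enclosing `is_prime` is not shown here, so the wrapper below is a sketch of how such a driver could look (the helper is repeated so the block is self-contained; the `rounds` parameter is an assumption):

```python
import random

def check_candidate(a, d, n, s):
    """Return True when base a witnesses that n is composite."""
    if pow(a, d, n) == 1:
        return False
    for i in range(s):
        if pow(a, 2 ** i * d, n) == n - 1:
            return False
    return True

def is_prime(n, rounds=20):
    """Probabilistic Miller-Rabin primality test built on check_candidate."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    # n is declared probably prime if no random base witnesses compositeness
    return not any(check_candidate(random.randrange(2, n - 1), d, n, s)
                   for _ in range(rounds))
```

Each failed round halves the error probability at least fourfold, so 20 rounds gives an error bound below 4⁻²⁰ for composite inputs.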
def index(self, key, start=None, stop=None): """ Return the smallest *k* such that `itemsview[k] == key` and `start <= k < stop`. Raises `ValueError` if *key* is not present. *stop* defaults to the end of the set. *start* defaults to the beginning. Negative indexes are supported, as for slice indices. """ # pylint: disable=arguments-differ temp, value = key pos = self._list.index(temp, start, stop) if value == self._dict[temp]: return pos else: raise ValueError('{0!r} is not in dict'.format(key))
Return the smallest *k* such that `itemsview[k] == key` and `start <= k < stop`. Raises `ValueError` if *key* is not present. *stop* defaults to the end of the set. *start* defaults to the beginning. Negative indexes are supported, as for slice indices.
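The lookup above is a two-step pattern: find the key's position in the backing list, then verify the stored value matches before returning. With plain builtins standing in for the sorted containers (`keys` playing the role of `self._list` and `mapping` of `self._dict` — hypothetical names), the pattern is:

```python
def index_item(keys, mapping, key_value, start=0, stop=None):
    """Sketch of the items-view index: positional search, then value check."""
    key, value = key_value
    if stop is None:
        stop = len(keys)
    pos = keys.index(key, start, stop)  # raises ValueError if key absent
    if mapping[key] != value:
        # key exists but with a different value: the pair is not in the view
        raise ValueError('{0!r} is not in dict'.format(key_value))
    return pos
```

The value check is what distinguishes "key present" from "key-value pair present": a pair with the right key but the wrong value must still raise.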
def keep_alive(self): ''' Keep current transaction alive, updates self.expires Args: None Return: bool: True if the transaction was kept alive, False if it no longer exists ''' # keep transaction alive txn_response = self.api.http_request('POST','%sfcr:tx' % self.root, data=None, headers=None) # if 204, transaction kept alive if txn_response.status_code == 204: logger.debug("continuing transaction: %s" % self.root) # update status and timer self.active = True self.expires = txn_response.headers['Expires'] return True # if 410, transaction does not exist elif txn_response.status_code == 410: logger.debug("transaction does not exist: %s" % self.root) self.active = False return False else: raise Exception('HTTP %s, could not continue transaction' % txn_response.status_code)
def _set_openflow_controller(self, v, load=False): """ Setter method for openflow_controller, mapped from YANG variable /openflow_controller (list) If this variable is read-only (config: false) in the source YANG file, then _set_openflow_controller is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_openflow_controller() directly. YANG Description: OpenFlow controller configuration """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("controller_name",openflow_controller.openflow_controller, yang_name="openflow-controller", rest_name="openflow-controller", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='controller-name', extensions={u'tailf-common': {u'info': u'OpenFlow controller configuration', u'cli-no-key-completion': None, u'sort-priority': u'66', u'cli-suppress-list-no': None, u'cli-suppress-key-abbreviation': None, u'cli-no-match-completion': None, u'callpoint': u'OpenFlowGlobalController'}}), is_container='list', yang_name="openflow-controller", rest_name="openflow-controller", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'OpenFlow controller configuration', u'cli-no-key-completion': None, u'sort-priority': u'66', u'cli-suppress-list-no': None, u'cli-suppress-key-abbreviation': None, u'cli-no-match-completion': None, u'callpoint': u'OpenFlowGlobalController'}}, namespace='urn:brocade.com:mgmt:brocade-openflow', defining_module='brocade-openflow', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """openflow_controller must be of a type compatible with list""", 'defined-type': "list", 'generated-type': """YANGDynClass(base=YANGListType("controller_name",openflow_controller.openflow_controller, yang_name="openflow-controller", rest_name="openflow-controller", parent=self, 
is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='controller-name', extensions={u'tailf-common': {u'info': u'OpenFlow controller configuration', u'cli-no-key-completion': None, u'sort-priority': u'66', u'cli-suppress-list-no': None, u'cli-suppress-key-abbreviation': None, u'cli-no-match-completion': None, u'callpoint': u'OpenFlowGlobalController'}}), is_container='list', yang_name="openflow-controller", rest_name="openflow-controller", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'OpenFlow controller configuration', u'cli-no-key-completion': None, u'sort-priority': u'66', u'cli-suppress-list-no': None, u'cli-suppress-key-abbreviation': None, u'cli-no-match-completion': None, u'callpoint': u'OpenFlowGlobalController'}}, namespace='urn:brocade.com:mgmt:brocade-openflow', defining_module='brocade-openflow', yang_type='list', is_config=True)""", }) self.__openflow_controller = t if hasattr(self, '_set'): self._set()
def notify(self, value): """Add a new observation to the metric""" with self.lock: #TODO: this could slow down slow-rate incoming updates # since the number of ticks depends on the actual time # passed since the latest notification. Consider using # a real timer to tick the EWMA. self.tick() for avg in (self.m1, self.m5, self.m15, self.day): avg.update(value) self.count += value
def step_amp(self): """ Change the amplitude according to the change rate and drift target. Returns: None """ difference = self.drift_target - self._raw_value if abs(difference) < self.change_rate: self.value = self.drift_target else: delta = self.change_rate * numpy.sign(difference) self.value = self._raw_value + delta
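The clamped stepping in `step_amp` — move toward the drift target by at most `change_rate`, snapping to the target once within range — can be sketched as a standalone function. The name `step_toward` is hypothetical, and `math.copysign` stands in for the original's `numpy.sign` to keep the sketch dependency-free.

```python
import math

def step_toward(value, target, change_rate):
    """One drift step: move value toward target by at most change_rate."""
    difference = target - value
    if abs(difference) < change_rate:
        # close enough: snap to the target rather than overshoot
        return target
    return value + math.copysign(change_rate, difference)
```

Repeated application converges to the target in a bounded number of steps, which is why the snap-to-target branch matters: without it the value would oscillate around the target forever.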
def expand_variables(self): """ Expand variables in the task code. Only variables that use the $[<variable name>] format are expanded. Variables using the $<variable name> and ${<variable name>} formats are expanded by the shell (in the cases where bash is the interpreter). """ self.environment["INPUTS"] = " ".join(self.inputs) self.environment["OUTPUTS"] = " ".join(self.outputs) for n, input_file in enumerate(self.inputs): self.environment["INPUT{}".format(n + 1)] = input_file for n, output_file in enumerate(self.outputs): self.environment["OUTPUT{}".format(n + 1)] = output_file for n, line in enumerate(self.code): match = self.__variable_pattern.findall(line) if len(match) > 0: for item in match: value = self.environment.get(item) if value is not None: self.code[n] = self.code[n].replace("$[" + item + "]", value)
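The `$[NAME]` substitution can be illustrated in isolation. This is a minimal sketch: `VARIABLE_PATTERN` is an assumed definition of the class's private `__variable_pattern` (the original does not show it), and `expand` is a hypothetical free-function version of the loop.

```python
import re

# Assumed pattern: matches $[NAME] and captures NAME
VARIABLE_PATTERN = re.compile(r"\$\[(\w+)\]")

def expand(code_lines, environment):
    """Replace each $[NAME] with environment[NAME]; leave unknowns as-is."""
    expanded = []
    for line in code_lines:
        for name in VARIABLE_PATTERN.findall(line):
            value = environment.get(name)
            if value is not None:
                line = line.replace("$[" + name + "]", value)
        expanded.append(line)
    return expanded
```

Unresolved variables are deliberately left untouched, mirroring the original's `if value is not None` guard, so a later shell expansion pass or an explicit error check can still see them.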
def get_file_from_iso(self, local_path, **kwargs): # type: (str, Any) -> None ''' A method to fetch a single file from the ISO and write it out to a local file. Parameters: local_path - The local file to write to. blocksize - The number of bytes in each transfer. iso_path - The absolute ISO9660 path to lookup on the ISO (exclusive with rr_path, joliet_path, and udf_path). rr_path - The absolute Rock Ridge path to lookup on the ISO (exclusive with iso_path, joliet_path, and udf_path). joliet_path - The absolute Joliet path to lookup on the ISO (exclusive with iso_path, rr_path, and udf_path). udf_path - The absolute UDF path to lookup on the ISO (exclusive with iso_path, rr_path, and joliet_path). Returns: Nothing. ''' if not self._initialized: raise pycdlibexception.PyCdlibInvalidInput('This object is not yet initialized; call either open() or new() to create an ISO') blocksize = 8192 joliet_path = None iso_path = None rr_path = None udf_path = None num_paths = 0 for key in kwargs: if key == 'blocksize': blocksize = kwargs[key] elif key == 'iso_path' and kwargs[key] is not None: iso_path = utils.normpath(kwargs[key]) num_paths += 1 elif key == 'rr_path' and kwargs[key] is not None: rr_path = utils.normpath(kwargs[key]) num_paths += 1 elif key == 'joliet_path' and kwargs[key] is not None: joliet_path = utils.normpath(kwargs[key]) num_paths += 1 elif key == 'udf_path' and kwargs[key] is not None: udf_path = utils.normpath(kwargs[key]) num_paths += 1 else: raise pycdlibexception.PyCdlibInvalidInput('Unknown keyword %s' % (key)) if num_paths != 1: raise pycdlibexception.PyCdlibInvalidInput("Exactly one of 'iso_path', 'rr_path', 'joliet_path', or 'udf_path' must be passed") with open(local_path, 'wb') as fp: if udf_path is not None: self._udf_get_file_from_iso_fp(fp, blocksize, udf_path) else: self._get_file_from_iso_fp(fp, blocksize, iso_path, rr_path, joliet_path)
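The kwargs handling in `get_file_from_iso` enforces that exactly one of four mutually exclusive path keywords is supplied. A simplified, hypothetical sketch of that validation pattern (omitting `blocksize` and the `normpath` calls, and using `ValueError` in place of the library's exception type):

```python
def pick_one_path(**kwargs):
    """Return the single (keyword, value) pair among the allowed path kwargs."""
    allowed = ('iso_path', 'rr_path', 'joliet_path', 'udf_path')
    unknown = [k for k in kwargs if k not in allowed]
    if unknown:
        raise ValueError('Unknown keyword %s' % unknown[0])
    # count only keywords actually given a non-None value
    given = [(k, v) for k, v in kwargs.items() if v is not None]
    if len(given) != 1:
        raise ValueError("Exactly one of %s must be passed" % (allowed,))
    return given[0]
```

Counting only non-None values (rather than merely present keys) lets callers pass `rr_path=None` explicitly without tripping the exclusivity check, matching the original's `kwargs[key] is not None` tests.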
def datetime_is_iso(date_str): """Attempts to parse a date formatted in ISO 8601 format""" try: if len(date_str) > 10: isodate.parse_datetime(date_str) else: isodate.parse_date(date_str) return True, [] except Exception: # Any parsing error qualifies as not ISO format return False, ['Datetime provided is not in a valid ISO 8601 format']
def calcontime(data, inds=None): """ Given indices of good times, calculate total time per scan with indices. """ if not inds: inds = range(len(data['time'])) logger.info('No indices provided. Assuming all are valid.') scans = set([data['scan'][i] for i in inds]) total = 0. for scan in scans: time = [data['time'][i] for i in inds if data['scan'][i] == scan] total += max(time) - min(time) return total
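The per-scan on-time computed by `calcontime` is simply the timestamp span (max minus min) of the selected indices within each scan, summed over scans. A self-contained sketch with hypothetical names (`total_on_time`, plain lists instead of the `data` dict):

```python
def total_on_time(times, scans, inds=None):
    """Sum, over each scan, the span between its earliest and latest time."""
    if not inds:
        inds = range(len(times))
    total = 0.0
    for scan in set(scans[i] for i in inds):
        scan_times = [times[i] for i in inds if scans[i] == scan]
        total += max(scan_times) - min(scan_times)
    return total
```

Note this measures elapsed span per scan, not integration time: a scan with a gap in the middle still contributes its full max-minus-min duration.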
def _set_bundle_message(self, v, load=False): """ Setter method for bundle_message, mapped from YANG variable /mpls_config/router/mpls/mpls_cmds_holder/mpls_interface/rsvp/interface_refresh_reduction/bundle_message (container) If this variable is read-only (config: false) in the source YANG file, then _set_bundle_message is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_bundle_message() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=bundle_message.bundle_message, is_container='container', presence=True, yang_name="bundle-message", rest_name="bundle-message", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Refresh Reduction bundle messaging feature', u'alt-name': u'bundle-message'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='container', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """bundle_message must be of a type compatible with container""", 'defined-type': "container", 'generated-type': """YANGDynClass(base=bundle_message.bundle_message, is_container='container', presence=True, yang_name="bundle-message", rest_name="bundle-message", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Refresh Reduction bundle messaging feature', u'alt-name': u'bundle-message'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='container', is_config=True)""", }) self.__bundle_message = t if hasattr(self, '_set'): self._set()
def warm_night_frequency(tasmin, thresh='22 degC', freq='YS'): r"""Frequency of extreme warm nights Return the number of days with tasmin > thresh per period Parameters ---------- tasmin : xarray.DataArray Minimum daily temperature [℃] or [K] thresh : str Threshold temperature on which to base evaluation [℃] or [K]. Default : '22 degC' freq : str, optional Resampling frequency Returns ------- xarray.DataArray The number of days with tasmin > thresh per period """ thresh = utils.convert_units_to(thresh, tasmin, ) events = (tasmin > thresh) * 1 return events.resample(time=freq).sum(dim='time')
def store_hash_configuration(self, lshash): """ Stores hash configuration """ self.mongo_object.insert_one( {'hash_conf_name': lshash.hash_name+'_conf', 'hash_configuration': pickle.dumps(lshash.get_config()) } )
def switch(template, version): """ Switch a project's template to a different template. """ temple.update.update(new_template=template, new_version=version)
def compose_projects_json(projects, data): """ Compose projects.json with all data sources :param projects: projects.json :param data: eclipse JSON :return: projects.json with all data sources """ projects = compose_git(projects, data) projects = compose_mailing_lists(projects, data) projects = compose_bugzilla(projects, data) projects = compose_github(projects, data) projects = compose_gerrit(projects) projects = compose_mbox(projects) return projects
def sink(self, name, filter_=None, destination=None): """Creates a sink bound to the current client. :type name: str :param name: the name of the sink to be constructed. :type filter_: str :param filter_: (optional) the advanced logs filter expression defining the entries exported by the sink. If not passed, the instance should already exist, to be refreshed via :meth:`Sink.reload`. :type destination: str :param destination: destination URI for the entries exported by the sink. If not passed, the instance should already exist, to be refreshed via :meth:`Sink.reload`. :rtype: :class:`google.cloud.logging.sink.Sink` :returns: Sink created with the current client. """ return Sink(name, filter_, destination, client=self)
def bind(self, database): """Associate the pool with a database. :type database: :class:`~google.cloud.spanner_v1.database.Database` :param database: database used by the pool: used to create sessions when needed. """ self._database = database while not self._sessions.full(): session = self._new_session() session.create() self._sessions.put(session)
def unlock(self): """ Unlock the table(s) """ cursor = connection.cursor() cursor.execute("UNLOCK TABLES") logger.debug('Unlocked tables') row = cursor.fetchone() return row
Unlock the table(s)
Below is the instruction that describes the task: ### Input: Unlock the table(s) ### Response: def unlock(self): """ Unlock the table(s) """ cursor = connection.cursor() cursor.execute("UNLOCK TABLES") logger.debug('Unlocked tables') row = cursor.fetchone() return row
def dump_file(self, value, relativePath, description=None, dump=None, pull=None, replace=False, raiseError=True, ntrials=3): """ Dump a file using its value to the system and creates its attribute in the Repository with utc timestamp. :Parameters: #. value (object): The value of a file to dump and add to the repository. It is any python object or file. #. relativePath (str): The relative to the repository path to where to dump the file. #. description (None, string): Any description about the file. #. dump (None, string): The dumping method. If None it will be set automatically to pickle and therefore the object must be pickleable. If a string is given, it can be a keyword ('json','pickle','dill') or a string compileable code to dump the data. The string code must include all the necessary imports and a '$FILE_PATH' that replaces the absolute file path when the dumping will be performed.\n e.g. "import numpy as np; np.savetxt(fname='$FILE_PATH', X=value, fmt='%.6e')" #. pull (None, string): The pulling method. If None it will be set automatically to pickle and therefore the object must be pickleable. If a string is given, it can be a keyword ('json','pickle','dill') or a string compileable code to pull the data. The string code must include all the necessary imports, a '$FILE_PATH' that replaces the absolute file path when the dumping will be performed and finally a PULLED_DATA variable.\n e.g "import numpy as np; PULLED_DATA=np.loadtxt(fname='$FILE_PATH')" #. replace (boolean): Whether to replace any existing file. #. raiseError (boolean): Whether to raise encountered error instead of returning failure. #. ntrials (int): After aquiring all locks, ntrials is the maximum number of trials allowed before failing. In rare cases, when multiple processes are accessing the same repository components, different processes can alter repository components between successive lock releases of some other process. 
Bigger number of trials lowers the likelyhood of failure due to multiple processes same time alteration. :Returns: #. success (boolean): Whether renaming the directory was successful. #. message (None, string): Some explanatory message or error reason why directory was not dumped. """ # check arguments assert isinstance(raiseError, bool), "raiseError must be boolean" assert isinstance(replace, bool), "replace must be boolean" assert isinstance(ntrials, int), "ntrials must be integer" assert ntrials>0, "ntrials must be >0" if description is None: description = '' assert isinstance(description, basestring), "description must be None or a string" # convert dump and pull methods to strings if pull is None and dump is not None: if dump.startswith('pickle') or dump.startswith('dill') or dump.startswith('numpy') or dump =='json': pull = dump dump = get_dump_method(dump, protocol=self._DEFAULT_PICKLE_PROTOCOL) pull = get_pull_method(pull) # check name and path relativePath = self.to_repo_relative_path(path=relativePath, split=False) savePath = os.path.join(self.__path,relativePath) fPath, fName = os.path.split(savePath) # check if name is allowed success, reason = self.is_name_allowed(savePath) if not success: assert not raiseError, reason return False, reason # ensure directory added try: success, reason = self.add_directory(fPath, raiseError=False, ntrials=ntrials) except Exception as err: reason = "Unable to add directory (%s)"%(str(err)) success = False if not success: assert not raiseError, reason return False, reason # lock repository LR = Locker(filePath=None, lockPass=str(uuid.uuid1()), lockPath=os.path.join(self.__path, self.__repoLock)) acquired, code = LR.acquire_lock() if not acquired: m = "code %s. Unable to aquire the repository lock. 
You may try again!"%(code,) assert raiseError, Exception(m) return False,m # lock file LF = Locker(filePath=None, lockPass=str(uuid.uuid1()), lockPath=os.path.join(fPath,self.__fileLock%fName)) acquired, code = LF.acquire_lock() if not acquired: LR.release_lock() error = "Code %s. Unable to aquire the lock when adding '%s'"%(code,relativePath) assert not raiseError, error return False, error # load repository info for _trial in range(ntrials): try: repo = self.__load_repository_pickle_file(os.path.join(self.__path, self.__repoFile)) self.__repo['walk_repo'] = repo['walk_repo'] except Exception as err: error = str(err) if self.DEBUG_PRINT_FAILED_TRIALS: print("Trial %i failed in Repository.%s (%s). Set Repository.DEBUG_PRINT_FAILED_TRIALS to False to mute"%(_trial, inspect.stack()[1][3], str(error))) else: error = None break if error is not None: LR.release_lock() LF.release_lock() assert not raiseError, Exception(error) return False, error # dump file for _trial in range(ntrials): error = None try: isRepoFile, fileOnDisk, infoOnDisk, classOnDisk = self.is_repository_file(relativePath) if isRepoFile: assert replace, "file is a registered repository file. 
set replace to True to replace" fileInfoPath = os.path.join(self.__path,os.path.dirname(relativePath),self.__fileInfo%fName) if isRepoFile and fileOnDisk: with open(fileInfoPath, 'rb') as fd: info = pickle.load(fd) assert info['repository_unique_name'] == self.__repo['repository_unique_name'], "it seems that file was created by another repository" info['last_update_utctime'] = time.time() else: info = {'repository_unique_name':self.__repo['repository_unique_name']} info['create_utctime'] = info['last_update_utctime'] = time.time() info['dump'] = dump info['pull'] = pull info['description'] = description # get parent directory list if file is new and not being replaced if not isRepoFile: dirList = self.__get_repository_directory(fPath) # dump file #exec( dump.replace("$FILE_PATH", str(savePath)) ) my_exec( dump.replace("$FILE_PATH", str(savePath)), locals=locals(), globals=globals(), description='dump' ) # update info with open(fileInfoPath, 'wb') as fd: pickle.dump( info,fd, protocol=self._DEFAULT_PICKLE_PROTOCOL) fd.flush() os.fsync(fd.fileno()) # update class file fileClassPath = os.path.join(self.__path,os.path.dirname(relativePath),self.__fileClass%fName) with open(fileClassPath, 'wb') as fd: if value is None: klass = None else: klass = value.__class__ pickle.dump(klass , fd, protocol=self._DEFAULT_PICKLE_PROTOCOL ) fd.flush() os.fsync(fd.fileno()) # add to repo if file is new and not being replaced if not isRepoFile: dirList.append(fName) except Exception as err: error = "unable to dump the file (%s)"%(str(err),) try: if 'pickle.dump(' in dump: mi = get_pickling_errors(value) if mi is not None: error += '\nmore info: %s'%str(mi) except: pass if self.DEBUG_PRINT_FAILED_TRIALS: print("Trial %i failed in Repository.%s (%s). 
Set Repository.DEBUG_PRINT_FAILED_TRIALS to False to mute"%(_trial, inspect.stack()[1][3], str(error))) else: error = None break # save repository if error is None: _, error = self.__save_repository_pickle_file(lockFirst=False, raiseError=False) # release locks LR.release_lock() LF.release_lock() assert not raiseError or error is None, "unable to dump file '%s' after %i trials (%s)"%(relativePath, ntrials, error,) return success, error
Dump a file using its value to the system and creates its attribute in the Repository with utc timestamp. :Parameters: #. value (object): The value of a file to dump and add to the repository. It is any python object or file. #. relativePath (str): The relative to the repository path to where to dump the file. #. description (None, string): Any description about the file. #. dump (None, string): The dumping method. If None it will be set automatically to pickle and therefore the object must be pickleable. If a string is given, it can be a keyword ('json','pickle','dill') or a string compileable code to dump the data. The string code must include all the necessary imports and a '$FILE_PATH' that replaces the absolute file path when the dumping will be performed.\n e.g. "import numpy as np; np.savetxt(fname='$FILE_PATH', X=value, fmt='%.6e')" #. pull (None, string): The pulling method. If None it will be set automatically to pickle and therefore the object must be pickleable. If a string is given, it can be a keyword ('json','pickle','dill') or a string compileable code to pull the data. The string code must include all the necessary imports, a '$FILE_PATH' that replaces the absolute file path when the dumping will be performed and finally a PULLED_DATA variable.\n e.g "import numpy as np; PULLED_DATA=np.loadtxt(fname='$FILE_PATH')" #. replace (boolean): Whether to replace any existing file. #. raiseError (boolean): Whether to raise encountered error instead of returning failure. #. ntrials (int): After aquiring all locks, ntrials is the maximum number of trials allowed before failing. In rare cases, when multiple processes are accessing the same repository components, different processes can alter repository components between successive lock releases of some other process. Bigger number of trials lowers the likelyhood of failure due to multiple processes same time alteration. :Returns: #. success (boolean): Whether renaming the directory was successful. #. 
message (None, string): Some explanatory message or error reason why directory was not dumped.
Below is the instruction that describes the task: ### Input: Dump a file using its value to the system and creates its attribute in the Repository with utc timestamp. :Parameters: #. value (object): The value of a file to dump and add to the repository. It is any python object or file. #. relativePath (str): The relative to the repository path to where to dump the file. #. description (None, string): Any description about the file. #. dump (None, string): The dumping method. If None it will be set automatically to pickle and therefore the object must be pickleable. If a string is given, it can be a keyword ('json','pickle','dill') or a string compileable code to dump the data. The string code must include all the necessary imports and a '$FILE_PATH' that replaces the absolute file path when the dumping will be performed.\n e.g. "import numpy as np; np.savetxt(fname='$FILE_PATH', X=value, fmt='%.6e')" #. pull (None, string): The pulling method. If None it will be set automatically to pickle and therefore the object must be pickleable. If a string is given, it can be a keyword ('json','pickle','dill') or a string compileable code to pull the data. The string code must include all the necessary imports, a '$FILE_PATH' that replaces the absolute file path when the dumping will be performed and finally a PULLED_DATA variable.\n e.g "import numpy as np; PULLED_DATA=np.loadtxt(fname='$FILE_PATH')" #. replace (boolean): Whether to replace any existing file. #. raiseError (boolean): Whether to raise encountered error instead of returning failure. #. ntrials (int): After aquiring all locks, ntrials is the maximum number of trials allowed before failing. In rare cases, when multiple processes are accessing the same repository components, different processes can alter repository components between successive lock releases of some other process. Bigger number of trials lowers the likelyhood of failure due to multiple processes same time alteration. :Returns: #.
success (boolean): Whether renaming the directory was successful. #. message (None, string): Some explanatory message or error reason why directory was not dumped. ### Response: def dump_file(self, value, relativePath, description=None, dump=None, pull=None, replace=False, raiseError=True, ntrials=3): """ Dump a file using its value to the system and creates its attribute in the Repository with utc timestamp. :Parameters: #. value (object): The value of a file to dump and add to the repository. It is any python object or file. #. relativePath (str): The relative to the repository path to where to dump the file. #. description (None, string): Any description about the file. #. dump (None, string): The dumping method. If None it will be set automatically to pickle and therefore the object must be pickleable. If a string is given, it can be a keyword ('json','pickle','dill') or a string compileable code to dump the data. The string code must include all the necessary imports and a '$FILE_PATH' that replaces the absolute file path when the dumping will be performed.\n e.g. "import numpy as np; np.savetxt(fname='$FILE_PATH', X=value, fmt='%.6e')" #. pull (None, string): The pulling method. If None it will be set automatically to pickle and therefore the object must be pickleable. If a string is given, it can be a keyword ('json','pickle','dill') or a string compileable code to pull the data. The string code must include all the necessary imports, a '$FILE_PATH' that replaces the absolute file path when the dumping will be performed and finally a PULLED_DATA variable.\n e.g "import numpy as np; PULLED_DATA=np.loadtxt(fname='$FILE_PATH')" #. replace (boolean): Whether to replace any existing file. #. raiseError (boolean): Whether to raise encountered error instead of returning failure. #. ntrials (int): After aquiring all locks, ntrials is the maximum number of trials allowed before failing. 
In rare cases, when multiple processes are accessing the same repository components, different processes can alter repository components between successive lock releases of some other process. Bigger number of trials lowers the likelyhood of failure due to multiple processes same time alteration. :Returns: #. success (boolean): Whether renaming the directory was successful. #. message (None, string): Some explanatory message or error reason why directory was not dumped. """ # check arguments assert isinstance(raiseError, bool), "raiseError must be boolean" assert isinstance(replace, bool), "replace must be boolean" assert isinstance(ntrials, int), "ntrials must be integer" assert ntrials>0, "ntrials must be >0" if description is None: description = '' assert isinstance(description, basestring), "description must be None or a string" # convert dump and pull methods to strings if pull is None and dump is not None: if dump.startswith('pickle') or dump.startswith('dill') or dump.startswith('numpy') or dump =='json': pull = dump dump = get_dump_method(dump, protocol=self._DEFAULT_PICKLE_PROTOCOL) pull = get_pull_method(pull) # check name and path relativePath = self.to_repo_relative_path(path=relativePath, split=False) savePath = os.path.join(self.__path,relativePath) fPath, fName = os.path.split(savePath) # check if name is allowed success, reason = self.is_name_allowed(savePath) if not success: assert not raiseError, reason return False, reason # ensure directory added try: success, reason = self.add_directory(fPath, raiseError=False, ntrials=ntrials) except Exception as err: reason = "Unable to add directory (%s)"%(str(err)) success = False if not success: assert not raiseError, reason return False, reason # lock repository LR = Locker(filePath=None, lockPass=str(uuid.uuid1()), lockPath=os.path.join(self.__path, self.__repoLock)) acquired, code = LR.acquire_lock() if not acquired: m = "code %s. Unable to aquire the repository lock. 
You may try again!"%(code,) assert raiseError, Exception(m) return False,m # lock file LF = Locker(filePath=None, lockPass=str(uuid.uuid1()), lockPath=os.path.join(fPath,self.__fileLock%fName)) acquired, code = LF.acquire_lock() if not acquired: LR.release_lock() error = "Code %s. Unable to aquire the lock when adding '%s'"%(code,relativePath) assert not raiseError, error return False, error # load repository info for _trial in range(ntrials): try: repo = self.__load_repository_pickle_file(os.path.join(self.__path, self.__repoFile)) self.__repo['walk_repo'] = repo['walk_repo'] except Exception as err: error = str(err) if self.DEBUG_PRINT_FAILED_TRIALS: print("Trial %i failed in Repository.%s (%s). Set Repository.DEBUG_PRINT_FAILED_TRIALS to False to mute"%(_trial, inspect.stack()[1][3], str(error))) else: error = None break if error is not None: LR.release_lock() LF.release_lock() assert not raiseError, Exception(error) return False, error # dump file for _trial in range(ntrials): error = None try: isRepoFile, fileOnDisk, infoOnDisk, classOnDisk = self.is_repository_file(relativePath) if isRepoFile: assert replace, "file is a registered repository file. 
set replace to True to replace" fileInfoPath = os.path.join(self.__path,os.path.dirname(relativePath),self.__fileInfo%fName) if isRepoFile and fileOnDisk: with open(fileInfoPath, 'rb') as fd: info = pickle.load(fd) assert info['repository_unique_name'] == self.__repo['repository_unique_name'], "it seems that file was created by another repository" info['last_update_utctime'] = time.time() else: info = {'repository_unique_name':self.__repo['repository_unique_name']} info['create_utctime'] = info['last_update_utctime'] = time.time() info['dump'] = dump info['pull'] = pull info['description'] = description # get parent directory list if file is new and not being replaced if not isRepoFile: dirList = self.__get_repository_directory(fPath) # dump file #exec( dump.replace("$FILE_PATH", str(savePath)) ) my_exec( dump.replace("$FILE_PATH", str(savePath)), locals=locals(), globals=globals(), description='dump' ) # update info with open(fileInfoPath, 'wb') as fd: pickle.dump( info,fd, protocol=self._DEFAULT_PICKLE_PROTOCOL) fd.flush() os.fsync(fd.fileno()) # update class file fileClassPath = os.path.join(self.__path,os.path.dirname(relativePath),self.__fileClass%fName) with open(fileClassPath, 'wb') as fd: if value is None: klass = None else: klass = value.__class__ pickle.dump(klass , fd, protocol=self._DEFAULT_PICKLE_PROTOCOL ) fd.flush() os.fsync(fd.fileno()) # add to repo if file is new and not being replaced if not isRepoFile: dirList.append(fName) except Exception as err: error = "unable to dump the file (%s)"%(str(err),) try: if 'pickle.dump(' in dump: mi = get_pickling_errors(value) if mi is not None: error += '\nmore info: %s'%str(mi) except: pass if self.DEBUG_PRINT_FAILED_TRIALS: print("Trial %i failed in Repository.%s (%s). 
Set Repository.DEBUG_PRINT_FAILED_TRIALS to False to mute"%(_trial, inspect.stack()[1][3], str(error))) else: error = None break # save repository if error is None: _, error = self.__save_repository_pickle_file(lockFirst=False, raiseError=False) # release locks LR.release_lock() LF.release_lock() assert not raiseError or error is None, "unable to dump file '%s' after %i trials (%s)"%(relativePath, ntrials, error,) return success, error
def stop(cls): """Change back the normal stdout after the end""" if any(cls.streams): sys.stdout = cls.streams.pop(-1) else: sys.stdout = sys.__stdout__
Change back the normal stdout after the end
Below is the instruction that describes the task: ### Input: Change back the normal stdout after the end ### Response: def stop(cls): """Change back the normal stdout after the end""" if any(cls.streams): sys.stdout = cls.streams.pop(-1) else: sys.stdout = sys.__stdout__
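The restore logic in the `stop` row above pairs with a matching push step. This standalone sketch (the `start` method and the `StdoutStack` class name are assumptions, not the original API) shows the save/restore round trip; it uses a plain truthiness check `if cls.streams:` rather than `any(cls.streams)`, which would misfire if a saved stream object happened to be falsy:

```python
import sys
from io import StringIO

class StdoutStack:
    streams = []  # previously active stdout objects, most recent last

    @classmethod
    def start(cls, stream):
        # Save the current stdout, then redirect printing to `stream`.
        cls.streams.append(sys.stdout)
        sys.stdout = stream

    @classmethod
    def stop(cls):
        # Restore the most recently saved stdout; fall back to the
        # interpreter's original stdout if nothing was saved.
        if cls.streams:
            sys.stdout = cls.streams.pop(-1)
        else:
            sys.stdout = sys.__stdout__

buf = StringIO()
StdoutStack.start(buf)
print("captured")
StdoutStack.stop()
captured = buf.getvalue()
```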
def connectdown(np, p, acc, outlet, wtsd=None, workingdir=None, mpiexedir=None, exedir=None, log_file=None, runtime_file=None, hostfile=None): """Reads an ad8 contributing area file, identifies the location of the largest ad8 value as the outlet of the largest watershed""" # If watershed is not specified, use acc to generate a mask layer. if wtsd is None or not os.path.isfile(wtsd): p, workingdir = TauDEM.check_infile_and_wp(p, workingdir) wtsd = workingdir + os.sep + 'wtsd_default.tif' RasterUtilClass.get_mask_from_raster(p, wtsd, True) fname = TauDEM.func_name('connectdown') return TauDEM.run(FileClass.get_executable_fullpath(fname, exedir), {'-p': p, '-ad8': acc, '-w': wtsd}, workingdir, None, {'-o': outlet}, {'mpipath': mpiexedir, 'hostfile': hostfile, 'n': np}, {'logfile': log_file, 'runtimefile': runtime_file})
Reads an ad8 contributing area file, identifies the location of the largest ad8 value as the outlet of the largest watershed
Below is the instruction that describes the task: ### Input: Reads an ad8 contributing area file, identifies the location of the largest ad8 value as the outlet of the largest watershed ### Response: def connectdown(np, p, acc, outlet, wtsd=None, workingdir=None, mpiexedir=None, exedir=None, log_file=None, runtime_file=None, hostfile=None): """Reads an ad8 contributing area file, identifies the location of the largest ad8 value as the outlet of the largest watershed""" # If watershed is not specified, use acc to generate a mask layer. if wtsd is None or not os.path.isfile(wtsd): p, workingdir = TauDEM.check_infile_and_wp(p, workingdir) wtsd = workingdir + os.sep + 'wtsd_default.tif' RasterUtilClass.get_mask_from_raster(p, wtsd, True) fname = TauDEM.func_name('connectdown') return TauDEM.run(FileClass.get_executable_fullpath(fname, exedir), {'-p': p, '-ad8': acc, '-w': wtsd}, workingdir, None, {'-o': outlet}, {'mpipath': mpiexedir, 'hostfile': hostfile, 'n': np}, {'logfile': log_file, 'runtimefile': runtime_file})
def query_nds2(cls, name, host=None, port=None, connection=None, type=None): """Query an NDS server for channel information Parameters ---------- name : `str` name of requested channel host : `str`, optional name of NDS2 server. port : `int`, optional port number for NDS2 connection connection : `nds2.connection` open connection to use for query type : `str`, `int` NDS2 channel type with which to restrict query Returns ------- channel : `Channel` channel with metadata retrieved from NDS2 server Raises ------ ValueError if multiple channels are found for a given name Notes ----- .. warning:: A `host` is required if an open `connection` is not given """ return ChannelList.query_nds2([name], host=host, port=port, connection=connection, type=type, unique=True)[0]
Query an NDS server for channel information Parameters ---------- name : `str` name of requested channel host : `str`, optional name of NDS2 server. port : `int`, optional port number for NDS2 connection connection : `nds2.connection` open connection to use for query type : `str`, `int` NDS2 channel type with which to restrict query Returns ------- channel : `Channel` channel with metadata retrieved from NDS2 server Raises ------ ValueError if multiple channels are found for a given name Notes ----- .. warning:: A `host` is required if an open `connection` is not given
Below is the instruction that describes the task: ### Input: Query an NDS server for channel information Parameters ---------- name : `str` name of requested channel host : `str`, optional name of NDS2 server. port : `int`, optional port number for NDS2 connection connection : `nds2.connection` open connection to use for query type : `str`, `int` NDS2 channel type with which to restrict query Returns ------- channel : `Channel` channel with metadata retrieved from NDS2 server Raises ------ ValueError if multiple channels are found for a given name Notes ----- .. warning:: A `host` is required if an open `connection` is not given ### Response: def query_nds2(cls, name, host=None, port=None, connection=None, type=None): """Query an NDS server for channel information Parameters ---------- name : `str` name of requested channel host : `str`, optional name of NDS2 server. port : `int`, optional port number for NDS2 connection connection : `nds2.connection` open connection to use for query type : `str`, `int` NDS2 channel type with which to restrict query Returns ------- channel : `Channel` channel with metadata retrieved from NDS2 server Raises ------ ValueError if multiple channels are found for a given name Notes ----- .. warning:: A `host` is required if an open `connection` is not given """ return ChannelList.query_nds2([name], host=host, port=port, connection=connection, type=type, unique=True)[0]
def shift(self, h=0, m=0, s=0, ms=0, frames=None, fps=None): """ Shift start and end times. See :meth:`SSAFile.shift()` for full description. """ delta = make_time(h=h, m=m, s=s, ms=ms, frames=frames, fps=fps) self.start += delta self.end += delta
Shift start and end times. See :meth:`SSAFile.shift()` for full description.
Below is the instruction that describes the task: ### Input: Shift start and end times. See :meth:`SSAFile.shift()` for full description. ### Response: def shift(self, h=0, m=0, s=0, ms=0, frames=None, fps=None): """ Shift start and end times. See :meth:`SSAFile.shift()` for full description. """ delta = make_time(h=h, m=m, s=s, ms=ms, frames=frames, fps=fps) self.start += delta self.end += delta
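The `shift` row above delegates to a `make_time` helper. A self-contained version of the same pattern is below; this `make_time` is a sketch of what such a helper plausibly does (clock components, or a frame count at a given fps, converted to integer milliseconds), not the library's verbatim implementation:

```python
def make_time(h=0, m=0, s=0, ms=0, frames=None, fps=None):
    # Convert a clock offset to milliseconds; if a frame count is given,
    # convert it using the supplied frame rate instead.
    if frames is not None:
        if fps is None:
            raise ValueError("fps is required when shifting by frames")
        return int(round(frames * 1000.0 / fps))
    return ((h * 60 + m) * 60 + s) * 1000 + ms

class Event:
    def __init__(self, start, end):
        self.start = start  # subtitle start time in milliseconds
        self.end = end      # subtitle end time in milliseconds

    def shift(self, h=0, m=0, s=0, ms=0, frames=None, fps=None):
        # Move both endpoints by the same delta, as in the method above.
        delta = make_time(h=h, m=m, s=s, ms=ms, frames=frames, fps=fps)
        self.start += delta
        self.end += delta

ev = Event(start=1000, end=2500)
ev.shift(s=1, ms=500)  # shift the event 1.5 seconds later
```

Shifting start and end by one shared delta keeps the event's duration unchanged.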
def _write_weight_histograms(self, iteration:int)->None: "Writes model weight histograms to Tensorboard." self.hist_writer.write(model=self.learn.model, iteration=iteration, tbwriter=self.tbwriter)
Writes model weight histograms to Tensorboard.
Below is the instruction that describes the task: ### Input: Writes model weight histograms to Tensorboard. ### Response: def _write_weight_histograms(self, iteration:int)->None: "Writes model weight histograms to Tensorboard." self.hist_writer.write(model=self.learn.model, iteration=iteration, tbwriter=self.tbwriter)
def exit_code_from_run_infos(run_infos: t.List[RunInfo]) -> int: """Generate a single exit code from a list of RunInfo objects. Takes a list of RunInfos and returns the exit code that is furthest away from 0. Args: run_infos (t.List[RunInfo]): [description] Returns: int: [description] """ assert run_infos is not None if not hasattr(run_infos, "__iter__"): return run_infos.retcode rcs = [ri.retcode for ri in run_infos] max_rc = max(rcs) min_rc = min(rcs) if max_rc == 0: return min_rc return max_rc
Generate a single exit code from a list of RunInfo objects. Takes a list of RunInfos and returns the exit code that is furthest away from 0. Args: run_infos (t.List[RunInfo]): [description] Returns: int: [description]
Below is the instruction that describes the task: ### Input: Generate a single exit code from a list of RunInfo objects. Takes a list of RunInfos and returns the exit code that is furthest away from 0. Args: run_infos (t.List[RunInfo]): [description] Returns: int: [description] ### Response: def exit_code_from_run_infos(run_infos: t.List[RunInfo]) -> int: """Generate a single exit code from a list of RunInfo objects. Takes a list of RunInfos and returns the exit code that is furthest away from 0. Args: run_infos (t.List[RunInfo]): [description] Returns: int: [description] """ assert run_infos is not None if not hasattr(run_infos, "__iter__"): return run_infos.retcode rcs = [ri.retcode for ri in run_infos] max_rc = max(rcs) min_rc = min(rcs) if max_rc == 0: return min_rc return max_rc
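The selection rule in `exit_code_from_run_infos` above can be restated over bare return codes. Note that it prefers a positive failure code and only reports a negative (signal-style) code when no run returned a positive one, so it is not literally "furthest from 0" in all cases. This sketch mirrors that logic without the RunInfo wrapper:

```python
def combined_exit_code(retcodes):
    # Report the largest positive failure code if any run failed with
    # one; otherwise report the most negative code (0 when all succeeded).
    max_rc = max(retcodes)
    min_rc = min(retcodes)
    return min_rc if max_rc == 0 else max_rc

all_ok = combined_exit_code([0, 0, 0])    # 0: every run succeeded
worst = combined_exit_code([0, 1, 2])     # 2: largest positive failure
killed = combined_exit_code([0, -9, 0])   # -9: only negative failures present
```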
def insert(self, table_name, rows, fields=None, delimiter=None, null='NULL', parse_dates=False, quotechar='"'): """ Load a text file into the specified :code:`table_name` or Insert Python :code:`list` rows into the specified :code:`table_name` :param str table_name: The name of the destination table :param list/str rows: A list of rows **or** the name of an input file. Each row must be a :code:`list` of field values. :param list fields: The names of the target fields, in the order that the data will be presented (defaults to :code:`None` for all columns in the table). :param str delimiter: The delimiter used by the input file (or :code:`None` to infer it from the header). :param str null: The string used to indicated nulled values in the file (defaults to :code:`'NULL'`). :param str quotechar: The character used to quote fields containing special characters, like the delimiter. :param bool parse_dates: If :code:`True`, attempts to coerce date fields into a standard format (defaults to :code:`False`). :raises `giraffez.errors.GiraffeEncodeError`: if the number of values in a row does not match the length of :code:`fields` :raises `giraffez.errors.GiraffeError`: if :code:`panic` is set and the insert statement caused an error. :return: A dictionary containing counts of applied rows and errors :rtype: :class:`dict` For most insertions, this will be faster and produce less strain on Teradata than using :class:`~giraffez.load.TeradataBulkLoad` (:class:`giraffez.BulkLoad <giraffez.load.TeradataBulkLoad>`). Requires that any input file be a properly delimited text file, with a header that corresponds to the target fields for insertion. Valid delimiters include '|', ',', and <tab> or a properly encoded JSON stream. 
""" if not isfile(rows): return self._insert(table_name, rows, fields, parse_dates) with Reader(rows, delimiter=delimiter, quotechar=quotechar) as f: preprocessor = null_handler(null) rows = (preprocessor(l) for l in f) if isinstance(f, CSVReader): self.options("delimiter", unescape_string(f.reader.dialect.delimiter), 1) self.options("quote char", f.reader.dialect.quotechar, 2) elif isinstance(f, JSONReader): self.options("encoding", "json", 1) return self._insert(table_name, rows, f.header, parse_dates)
Load a text file into the specified :code:`table_name` or Insert Python :code:`list` rows into the specified :code:`table_name` :param str table_name: The name of the destination table :param list/str rows: A list of rows **or** the name of an input file. Each row must be a :code:`list` of field values. :param list fields: The names of the target fields, in the order that the data will be presented (defaults to :code:`None` for all columns in the table). :param str delimiter: The delimiter used by the input file (or :code:`None` to infer it from the header). :param str null: The string used to indicated nulled values in the file (defaults to :code:`'NULL'`). :param str quotechar: The character used to quote fields containing special characters, like the delimiter. :param bool parse_dates: If :code:`True`, attempts to coerce date fields into a standard format (defaults to :code:`False`). :raises `giraffez.errors.GiraffeEncodeError`: if the number of values in a row does not match the length of :code:`fields` :raises `giraffez.errors.GiraffeError`: if :code:`panic` is set and the insert statement caused an error. :return: A dictionary containing counts of applied rows and errors :rtype: :class:`dict` For most insertions, this will be faster and produce less strain on Teradata than using :class:`~giraffez.load.TeradataBulkLoad` (:class:`giraffez.BulkLoad <giraffez.load.TeradataBulkLoad>`). Requires that any input file be a properly delimited text file, with a header that corresponds to the target fields for insertion. Valid delimiters include '|', ',', and <tab> or a properly encoded JSON stream.
Below is the instruction that describes the task: ### Input: Load a text file into the specified :code:`table_name` or Insert Python :code:`list` rows into the specified :code:`table_name` :param str table_name: The name of the destination table :param list/str rows: A list of rows **or** the name of an input file. Each row must be a :code:`list` of field values. :param list fields: The names of the target fields, in the order that the data will be presented (defaults to :code:`None` for all columns in the table). :param str delimiter: The delimiter used by the input file (or :code:`None` to infer it from the header). :param str null: The string used to indicated nulled values in the file (defaults to :code:`'NULL'`). :param str quotechar: The character used to quote fields containing special characters, like the delimiter. :param bool parse_dates: If :code:`True`, attempts to coerce date fields into a standard format (defaults to :code:`False`). :raises `giraffez.errors.GiraffeEncodeError`: if the number of values in a row does not match the length of :code:`fields` :raises `giraffez.errors.GiraffeError`: if :code:`panic` is set and the insert statement caused an error. :return: A dictionary containing counts of applied rows and errors :rtype: :class:`dict` For most insertions, this will be faster and produce less strain on Teradata than using :class:`~giraffez.load.TeradataBulkLoad` (:class:`giraffez.BulkLoad <giraffez.load.TeradataBulkLoad>`). Requires that any input file be a properly delimited text file, with a header that corresponds to the target fields for insertion. Valid delimiters include '|', ',', and <tab> or a properly encoded JSON stream.
### Response: def insert(self, table_name, rows, fields=None, delimiter=None, null='NULL', parse_dates=False, quotechar='"'): """ Load a text file into the specified :code:`table_name` or Insert Python :code:`list` rows into the specified :code:`table_name` :param str table_name: The name of the destination table :param list/str rows: A list of rows **or** the name of an input file. Each row must be a :code:`list` of field values. :param list fields: The names of the target fields, in the order that the data will be presented (defaults to :code:`None` for all columns in the table). :param str delimiter: The delimiter used by the input file (or :code:`None` to infer it from the header). :param str null: The string used to indicated nulled values in the file (defaults to :code:`'NULL'`). :param str quotechar: The character used to quote fields containing special characters, like the delimiter. :param bool parse_dates: If :code:`True`, attempts to coerce date fields into a standard format (defaults to :code:`False`). :raises `giraffez.errors.GiraffeEncodeError`: if the number of values in a row does not match the length of :code:`fields` :raises `giraffez.errors.GiraffeError`: if :code:`panic` is set and the insert statement caused an error. :return: A dictionary containing counts of applied rows and errors :rtype: :class:`dict` For most insertions, this will be faster and produce less strain on Teradata than using :class:`~giraffez.load.TeradataBulkLoad` (:class:`giraffez.BulkLoad <giraffez.load.TeradataBulkLoad>`). Requires that any input file be a properly delimited text file, with a header that corresponds to the target fields for insertion. Valid delimiters include '|', ',', and <tab> or a properly encoded JSON stream. 
""" if not isfile(rows): return self._insert(table_name, rows, fields, parse_dates) with Reader(rows, delimiter=delimiter, quotechar=quotechar) as f: preprocessor = null_handler(null) rows = (preprocessor(l) for l in f) if isinstance(f, CSVReader): self.options("delimiter", unescape_string(f.reader.dialect.delimiter), 1) self.options("quote char", f.reader.dialect.quotechar, 2) elif isinstance(f, JSONReader): self.options("encoding", "json", 1) return self._insert(table_name, rows, f.header, parse_dates)
def purge_metadata(self, force=False): """Instance-based version of ProcessMetadataManager.purge_metadata_by_name() that checks for process liveness before purging metadata. :param bool force: If True, skip process liveness check before purging metadata. :raises: `ProcessManager.MetadataError` when OSError is encountered on metadata dir removal. """ if not force and self.is_alive(): raise ProcessMetadataManager.MetadataError('cannot purge metadata for a running process!') super(ProcessManager, self).purge_metadata_by_name(self._name)
Instance-based version of ProcessMetadataManager.purge_metadata_by_name() that checks for process liveness before purging metadata. :param bool force: If True, skip process liveness check before purging metadata. :raises: `ProcessManager.MetadataError` when OSError is encountered on metadata dir removal.
Below is the instruction that describes the task: ### Input: Instance-based version of ProcessMetadataManager.purge_metadata_by_name() that checks for process liveness before purging metadata. :param bool force: If True, skip process liveness check before purging metadata. :raises: `ProcessManager.MetadataError` when OSError is encountered on metadata dir removal. ### Response: def purge_metadata(self, force=False): """Instance-based version of ProcessMetadataManager.purge_metadata_by_name() that checks for process liveness before purging metadata. :param bool force: If True, skip process liveness check before purging metadata. :raises: `ProcessManager.MetadataError` when OSError is encountered on metadata dir removal. """ if not force and self.is_alive(): raise ProcessMetadataManager.MetadataError('cannot purge metadata for a running process!') super(ProcessManager, self).purge_metadata_by_name(self._name)
def xstep(self): r"""Minimise Augmented Lagrangian with respect to :math:`\mathbf{x}`. """ ngsit = 0 gsrrs = np.inf while gsrrs > self.opt['GSTol'] and ngsit < self.opt['MaxGSIter']: self.X = self.GaussSeidelStep(self.S, self.X, self.cnst_AT(self.Y-self.U), self.rho, self.lcw, self.Wdf2) gsrrs = sl.rrs( self.rho*self.cnst_AT(self.cnst_A(self.X)) + self.Wdf2*self.X, self.Wdf2*self.S + self.rho*self.cnst_AT(self.Y - self.U)) ngsit += 1 self.xs = (ngsit, gsrrs)
r"""Minimise Augmented Lagrangian with respect to :math:`\mathbf{x}`.
Below is the instruction that describes the task: ### Input: r"""Minimise Augmented Lagrangian with respect to :math:`\mathbf{x}`. ### Response: def xstep(self): r"""Minimise Augmented Lagrangian with respect to :math:`\mathbf{x}`. """ ngsit = 0 gsrrs = np.inf while gsrrs > self.opt['GSTol'] and ngsit < self.opt['MaxGSIter']: self.X = self.GaussSeidelStep(self.S, self.X, self.cnst_AT(self.Y-self.U), self.rho, self.lcw, self.Wdf2) gsrrs = sl.rrs( self.rho*self.cnst_AT(self.cnst_A(self.X)) + self.Wdf2*self.X, self.Wdf2*self.S + self.rho*self.cnst_AT(self.Y - self.U)) ngsit += 1 self.xs = (ngsit, gsrrs)
def detect_lang(path): """Detect the language used in the given file.""" blob = FileBlob(path, os.getcwd()) if blob.is_text: print('Programming language of the file detected: {0}'.format(blob.language.name)) return blob.language.name else:#images, binary and what-have-you won't be pasted print('File not a text file. Exiting...') sys.exit()
Detect the language used in the given file.
Below is the instruction that describes the task: ### Input: Detect the language used in the given file. ### Response: def detect_lang(path): """Detect the language used in the given file.""" blob = FileBlob(path, os.getcwd()) if blob.is_text: print('Programming language of the file detected: {0}'.format(blob.language.name)) return blob.language.name else:#images, binary and what-have-you won't be pasted print('File not a text file. Exiting...') sys.exit()
def geodetic_to_ecef(latitude, longitude, altitude): """Convert WGS84 geodetic coordinates into ECEF Parameters ---------- latitude : float or array_like Geodetic latitude (degrees) longitude : float or array_like Geodetic longitude (degrees) altitude : float or array_like Geodetic Height (km) above WGS84 reference ellipsoid. Returns ------- x, y, z numpy arrays of x, y, z locations in km """ ellip = np.sqrt(1. - earth_b ** 2 / earth_a ** 2) r_n = earth_a / np.sqrt(1. - ellip ** 2 * np.sin(np.deg2rad(latitude)) ** 2) # colatitude = 90. - latitude x = (r_n + altitude) * np.cos(np.deg2rad(latitude)) * np.cos(np.deg2rad(longitude)) y = (r_n + altitude) * np.cos(np.deg2rad(latitude)) * np.sin(np.deg2rad(longitude)) z = (r_n * (1. - ellip ** 2) + altitude) * np.sin(np.deg2rad(latitude)) return x, y, z
Convert WGS84 geodetic coordinates into ECEF Parameters ---------- latitude : float or array_like Geodetic latitude (degrees) longitude : float or array_like Geodetic longitude (degrees) altitude : float or array_like Geodetic Height (km) above WGS84 reference ellipsoid. Returns ------- x, y, z numpy arrays of x, y, z locations in km
Below is the instruction that describes the task: ### Input: Convert WGS84 geodetic coordinates into ECEF Parameters ---------- latitude : float or array_like Geodetic latitude (degrees) longitude : float or array_like Geodetic longitude (degrees) altitude : float or array_like Geodetic Height (km) above WGS84 reference ellipsoid. Returns ------- x, y, z numpy arrays of x, y, z locations in km ### Response: def geodetic_to_ecef(latitude, longitude, altitude): """Convert WGS84 geodetic coordinates into ECEF Parameters ---------- latitude : float or array_like Geodetic latitude (degrees) longitude : float or array_like Geodetic longitude (degrees) altitude : float or array_like Geodetic Height (km) above WGS84 reference ellipsoid. Returns ------- x, y, z numpy arrays of x, y, z locations in km """ ellip = np.sqrt(1. - earth_b ** 2 / earth_a ** 2) r_n = earth_a / np.sqrt(1. - ellip ** 2 * np.sin(np.deg2rad(latitude)) ** 2) # colatitude = 90. - latitude x = (r_n + altitude) * np.cos(np.deg2rad(latitude)) * np.cos(np.deg2rad(longitude)) y = (r_n + altitude) * np.cos(np.deg2rad(latitude)) * np.sin(np.deg2rad(longitude)) z = (r_n * (1. - ellip ** 2) + altitude) * np.sin(np.deg2rad(latitude)) return x, y, z
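The geodetic-to-ECEF conversion above can be checked with a self-contained scalar sketch using only the standard-library `math` module; the WGS84 semi-axes `earth_a`/`earth_b` are not defined in the entry, so the usual values in km are assumed here:

```python
import math

# Assumed WGS84 semi-major/semi-minor axes in km (the source entry
# references earth_a/earth_b without defining them).
EARTH_A = 6378.137
EARTH_B = 6356.7523142

def geodetic_to_ecef(lat_deg, lon_deg, alt_km):
    """Scalar version of the conversion: geodetic lat/lon/height -> ECEF x, y, z (km)."""
    ellip = math.sqrt(1.0 - EARTH_B ** 2 / EARTH_A ** 2)  # first eccentricity
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude
    r_n = EARTH_A / math.sqrt(1.0 - ellip ** 2 * math.sin(lat) ** 2)
    x = (r_n + alt_km) * math.cos(lat) * math.cos(lon)
    y = (r_n + alt_km) * math.cos(lat) * math.sin(lon)
    z = (r_n * (1.0 - ellip ** 2) + alt_km) * math.sin(lat)
    return x, y, z

# Sanity checks: the equator maps to (a, 0, 0), the pole to (~0, 0, ~b).
print(geodetic_to_ecef(0.0, 0.0, 0.0))
print(geodetic_to_ecef(90.0, 0.0, 0.0))
```

At the pole, `r_n * (1 - e^2)` reduces algebraically to the semi-minor axis `b`, which is a quick way to convince yourself the formula is right.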
def clean_comment_body(body): """Returns given comment HTML as plaintext. Converts all HTML tags and entities within 4chan comments into human-readable text equivalents. """ body = _parser.unescape(body) body = re.sub(r'<a [^>]+>(.+?)</a>', r'\1', body) body = body.replace('<br>', '\n') body = re.sub(r'<.+?>', '', body) return body
Returns given comment HTML as plaintext. Converts all HTML tags and entities within 4chan comments into human-readable text equivalents.
Below is the instruction that describes the task: ### Input: Returns given comment HTML as plaintext. Converts all HTML tags and entities within 4chan comments into human-readable text equivalents. ### Response: def clean_comment_body(body): """Returns given comment HTML as plaintext. Converts all HTML tags and entities within 4chan comments into human-readable text equivalents. """ body = _parser.unescape(body) body = re.sub(r'<a [^>]+>(.+?)</a>', r'\1', body) body = body.replace('<br>', '\n') body = re.sub(r'<.+?>', '', body) return body
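The same unescape-then-strip pipeline can be reproduced with only the standard library; `html.unescape` stands in for the entry's `_parser.unescape`, which is assumed to be an HTML parser instance:

```python
import re
import html

def clean_comment_body(body):
    """Convert 4chan-style comment HTML into plain text."""
    body = html.unescape(body)                          # decode &amp;, &gt;, ...
    body = re.sub(r'<a [^>]+>(.+?)</a>', r'\1', body)   # keep link text only
    body = body.replace('<br>', '\n')                   # line breaks
    body = re.sub(r'<.+?>', '', body)                   # drop any remaining tags
    return body

print(clean_comment_body('<a href="#p123">&gt;&gt;123</a><br>Hello &amp; bye'))
```

Note the ordering: entities are decoded before tags are stripped, so an entity-encoded `&lt;b&gt;` would become a real tag and then be removed, matching the original function's behavior.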
async def activate_scene(self, scene_id: int): """Activate a scene :param scene_id: Scene id. :return: """ _scene = await self.get_scene(scene_id) await _scene.activate()
Activate a scene :param scene_id: Scene id. :return:
Below is the instruction that describes the task: ### Input: Activate a scene :param scene_id: Scene id. :return: ### Response: async def activate_scene(self, scene_id: int): """Activate a scene :param scene_id: Scene id. :return: """ _scene = await self.get_scene(scene_id) await _scene.activate()
def add_route(self, route): ''' Add a route object, but do not change the :data:`Route.app` attribute.''' self.routes.append(route) self.router.add(route.rule, route.method, route, name=route.name) if DEBUG: route.prepare()
Add a route object, but do not change the :data:`Route.app` attribute.
Below is the instruction that describes the task: ### Input: Add a route object, but do not change the :data:`Route.app` attribute. ### Response: def add_route(self, route): ''' Add a route object, but do not change the :data:`Route.app` attribute.''' self.routes.append(route) self.router.add(route.rule, route.method, route, name=route.name) if DEBUG: route.prepare()
def set_traindata(self, training_rdd, batch_size): """ Set new training dataset, for optimizer reuse :param training_rdd: the training dataset :param batch_size: training batch size :return: """ callBigDlFunc(self.bigdl_type, "setTrainData", self.value, training_rdd, batch_size)
Set new training dataset, for optimizer reuse :param training_rdd: the training dataset :param batch_size: training batch size :return:
Below is the instruction that describes the task: ### Input: Set new training dataset, for optimizer reuse :param training_rdd: the training dataset :param batch_size: training batch size :return: ### Response: def set_traindata(self, training_rdd, batch_size): """ Set new training dataset, for optimizer reuse :param training_rdd: the training dataset :param batch_size: training batch size :return: """ callBigDlFunc(self.bigdl_type, "setTrainData", self.value, training_rdd, batch_size)
def _deserialize_encrypted_data_keys(stream): # type: (IO) -> Set[EncryptedDataKey] """Deserialize some encrypted data keys from a stream. :param stream: Stream from which to read encrypted data keys :return: Loaded encrypted data keys :rtype: set of :class:`EncryptedDataKey` """ (encrypted_data_key_count,) = unpack_values(">H", stream) encrypted_data_keys = set([]) for _ in range(encrypted_data_key_count): (key_provider_length,) = unpack_values(">H", stream) (key_provider_identifier,) = unpack_values(">{}s".format(key_provider_length), stream) (key_provider_information_length,) = unpack_values(">H", stream) (key_provider_information,) = unpack_values(">{}s".format(key_provider_information_length), stream) (encrypted_data_key_length,) = unpack_values(">H", stream) encrypted_data_key = stream.read(encrypted_data_key_length) encrypted_data_keys.add( EncryptedDataKey( key_provider=MasterKeyInfo( provider_id=to_str(key_provider_identifier), key_info=key_provider_information ), encrypted_data_key=encrypted_data_key, ) ) return encrypted_data_keys
Deserialize some encrypted data keys from a stream. :param stream: Stream from which to read encrypted data keys :return: Loaded encrypted data keys :rtype: set of :class:`EncryptedDataKey`
Below is the instruction that describes the task: ### Input: Deserialize some encrypted data keys from a stream. :param stream: Stream from which to read encrypted data keys :return: Loaded encrypted data keys :rtype: set of :class:`EncryptedDataKey` ### Response: def _deserialize_encrypted_data_keys(stream): # type: (IO) -> Set[EncryptedDataKey] """Deserialize some encrypted data keys from a stream. :param stream: Stream from which to read encrypted data keys :return: Loaded encrypted data keys :rtype: set of :class:`EncryptedDataKey` """ (encrypted_data_key_count,) = unpack_values(">H", stream) encrypted_data_keys = set([]) for _ in range(encrypted_data_key_count): (key_provider_length,) = unpack_values(">H", stream) (key_provider_identifier,) = unpack_values(">{}s".format(key_provider_length), stream) (key_provider_information_length,) = unpack_values(">H", stream) (key_provider_information,) = unpack_values(">{}s".format(key_provider_information_length), stream) (encrypted_data_key_length,) = unpack_values(">H", stream) encrypted_data_key = stream.read(encrypted_data_key_length) encrypted_data_keys.add( EncryptedDataKey( key_provider=MasterKeyInfo( provider_id=to_str(key_provider_identifier), key_info=key_provider_information ), encrypted_data_key=encrypted_data_key, ) ) return encrypted_data_keys
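The wire format that `_deserialize_encrypted_data_keys` reads — a big-endian 2-byte (`>H`) count or length before each field — can be illustrated with a minimal stdlib sketch; `unpack_values` here is a hypothetical stand-in for the helper the entry imports:

```python
import io
import struct

def unpack_values(fmt, stream):
    """Read exactly struct.calcsize(fmt) bytes and unpack them (stand-in helper)."""
    return struct.unpack(fmt, stream.read(struct.calcsize(fmt)))

def read_length_prefixed(stream):
    """Read one >H length prefix, then that many payload bytes."""
    (length,) = unpack_values(">H", stream)
    return stream.read(length)

# Serialize two length-prefixed fields, then read them back in order.
wire = struct.pack(">H", 7) + b"aws-kms" + struct.pack(">H", 3) + b"key"
stream = io.BytesIO(wire)
print(read_length_prefixed(stream))  # b'aws-kms'
print(read_length_prefixed(stream))  # b'key'
```

The original function just repeats this prefix-then-payload step once per field of each encrypted data key.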
def visit_BoolOp(self, node): ''' Resulting node may alias to either operands: >>> from pythran import passmanager >>> pm = passmanager.PassManager('demo') >>> module = ast.parse('def foo(a, b): return a or b') >>> result = pm.gather(Aliases, module) >>> Aliases.dump(result, filter=ast.BoolOp) (a or b) => ['a', 'b'] Note that a literal does not create any alias >>> module = ast.parse('def foo(a, b): return a or 0') >>> result = pm.gather(Aliases, module) >>> Aliases.dump(result, filter=ast.BoolOp) (a or 0) => ['<unbound-value>', 'a'] ''' return self.add(node, set.union(*[self.visit(n) for n in node.values]))
Resulting node may alias to either operands: >>> from pythran import passmanager >>> pm = passmanager.PassManager('demo') >>> module = ast.parse('def foo(a, b): return a or b') >>> result = pm.gather(Aliases, module) >>> Aliases.dump(result, filter=ast.BoolOp) (a or b) => ['a', 'b'] Note that a literal does not create any alias >>> module = ast.parse('def foo(a, b): return a or 0') >>> result = pm.gather(Aliases, module) >>> Aliases.dump(result, filter=ast.BoolOp) (a or 0) => ['<unbound-value>', 'a']
Below is the instruction that describes the task: ### Input: Resulting node may alias to either operands: >>> from pythran import passmanager >>> pm = passmanager.PassManager('demo') >>> module = ast.parse('def foo(a, b): return a or b') >>> result = pm.gather(Aliases, module) >>> Aliases.dump(result, filter=ast.BoolOp) (a or b) => ['a', 'b'] Note that a literal does not create any alias >>> module = ast.parse('def foo(a, b): return a or 0') >>> result = pm.gather(Aliases, module) >>> Aliases.dump(result, filter=ast.BoolOp) (a or 0) => ['<unbound-value>', 'a'] ### Response: def visit_BoolOp(self, node): ''' Resulting node may alias to either operands: >>> from pythran import passmanager >>> pm = passmanager.PassManager('demo') >>> module = ast.parse('def foo(a, b): return a or b') >>> result = pm.gather(Aliases, module) >>> Aliases.dump(result, filter=ast.BoolOp) (a or b) => ['a', 'b'] Note that a literal does not create any alias >>> module = ast.parse('def foo(a, b): return a or 0') >>> result = pm.gather(Aliases, module) >>> Aliases.dump(result, filter=ast.BoolOp) (a or 0) => ['<unbound-value>', 'a'] ''' return self.add(node, set.union(*[self.visit(n) for n in node.values]))
def checkerboard(img_spec1=None, img_spec2=None, patch_size=10, view_set=(0, 1, 2), num_slices=(10,), num_rows=2, rescale_method='global', background_threshold=0.05, annot=None, padding=5, output_path=None, figsize=None, ): """ Checkerboard mixer. Parameters ---------- img_spec1 : str or nibabel image-like object MR image (or path to one) to be visualized img_spec2 : str or nibabel image-like object MR image (or path to one) to be visualized patch_size : int or list or (int, int) or None size of checker patch (either square or rectangular) If None, number of voxels/patch are chosen such that, there will be 7 patches through the width/height. view_set : iterable Integers specifying the dimensions to be visualized. Choices: one or more of (0, 1, 2) for a 3D image num_slices : int or iterable of size as view_set number of slices to be selected for each view Must be of the same length as view_set, each element specifying the number of slices for each dimension. If only one number is given, same number will be chosen for all dimensions. num_rows : int number of rows (top to bottom) per each of 3 dimensions rescale_method : bool or str or list or None Range to rescale the intensity values to Default: 'global', min and max values computed based on ranges from both images. If false or None, no rescaling is done (does not work yet). background_threshold : float or str A threshold value below which all the background voxels will be set to zero. Default : 0.05. Other option is a string specifying a percentile: '5%', '10%'. Specify None if you don't want any thresholding. annot : str Text to display to annotate the visualization padding : int number of voxels to pad around each panel. output_path : str path to save the generated collage to. figsize : list Size of figure in inches to be passed on to plt.figure() e.g. [12, 12] or [20, 20] Returns ------- fig : figure handle handle to the collage figure generated. 
""" img_one, img_two = _preprocess_images(img_spec1, img_spec2, rescale_method=rescale_method, bkground_thresh=background_threshold, padding=padding) display_params = dict(interpolation='none', aspect='auto', origin='lower', cmap='gray', vmin=0.0, vmax=1.0) mixer = partial(_checker_mixer, checker_size=patch_size) collage = Collage(view_set=view_set, num_slices=num_slices, num_rows=num_rows, figsize=figsize, display_params=display_params) collage.transform_and_attach((img_one, img_two), func=mixer) collage.save(output_path=output_path, annot=annot) return collage
Checkerboard mixer. Parameters ---------- img_spec1 : str or nibabel image-like object MR image (or path to one) to be visualized img_spec2 : str or nibabel image-like object MR image (or path to one) to be visualized patch_size : int or list or (int, int) or None size of checker patch (either square or rectangular) If None, number of voxels/patch are chosen such that, there will be 7 patches through the width/height. view_set : iterable Integers specifying the dimensions to be visualized. Choices: one or more of (0, 1, 2) for a 3D image num_slices : int or iterable of size as view_set number of slices to be selected for each view Must be of the same length as view_set, each element specifying the number of slices for each dimension. If only one number is given, same number will be chosen for all dimensions. num_rows : int number of rows (top to bottom) per each of 3 dimensions rescale_method : bool or str or list or None Range to rescale the intensity values to Default: 'global', min and max values computed based on ranges from both images. If false or None, no rescaling is done (does not work yet). background_threshold : float or str A threshold value below which all the background voxels will be set to zero. Default : 0.05. Other option is a string specifying a percentile: '5%', '10%'. Specify None if you don't want any thresholding. annot : str Text to display to annotate the visualization padding : int number of voxels to pad around each panel. output_path : str path to save the generated collage to. figsize : list Size of figure in inches to be passed on to plt.figure() e.g. [12, 12] or [20, 20] Returns ------- fig : figure handle handle to the collage figure generated.
Below is the instruction that describes the task: ### Input: Checkerboard mixer. Parameters ---------- img_spec1 : str or nibabel image-like object MR image (or path to one) to be visualized img_spec2 : str or nibabel image-like object MR image (or path to one) to be visualized patch_size : int or list or (int, int) or None size of checker patch (either square or rectangular) If None, number of voxels/patch are chosen such that, there will be 7 patches through the width/height. view_set : iterable Integers specifying the dimensions to be visualized. Choices: one or more of (0, 1, 2) for a 3D image num_slices : int or iterable of size as view_set number of slices to be selected for each view Must be of the same length as view_set, each element specifying the number of slices for each dimension. If only one number is given, same number will be chosen for all dimensions. num_rows : int number of rows (top to bottom) per each of 3 dimensions rescale_method : bool or str or list or None Range to rescale the intensity values to Default: 'global', min and max values computed based on ranges from both images. If false or None, no rescaling is done (does not work yet). background_threshold : float or str A threshold value below which all the background voxels will be set to zero. Default : 0.05. Other option is a string specifying a percentile: '5%', '10%'. Specify None if you don't want any thresholding. annot : str Text to display to annotate the visualization padding : int number of voxels to pad around each panel. output_path : str path to save the generated collage to. figsize : list Size of figure in inches to be passed on to plt.figure() e.g. [12, 12] or [20, 20] Returns ------- fig : figure handle handle to the collage figure generated. 
### Response: def checkerboard(img_spec1=None, img_spec2=None, patch_size=10, view_set=(0, 1, 2), num_slices=(10,), num_rows=2, rescale_method='global', background_threshold=0.05, annot=None, padding=5, output_path=None, figsize=None, ): """ Checkerboard mixer. Parameters ---------- img_spec1 : str or nibabel image-like object MR image (or path to one) to be visualized img_spec2 : str or nibabel image-like object MR image (or path to one) to be visualized patch_size : int or list or (int, int) or None size of checker patch (either square or rectangular) If None, number of voxels/patch are chosen such that, there will be 7 patches through the width/height. view_set : iterable Integers specifying the dimensions to be visualized. Choices: one or more of (0, 1, 2) for a 3D image num_slices : int or iterable of size as view_set number of slices to be selected for each view Must be of the same length as view_set, each element specifying the number of slices for each dimension. If only one number is given, same number will be chosen for all dimensions. num_rows : int number of rows (top to bottom) per each of 3 dimensions rescale_method : bool or str or list or None Range to rescale the intensity values to Default: 'global', min and max values computed based on ranges from both images. If false or None, no rescaling is done (does not work yet). background_threshold : float or str A threshold value below which all the background voxels will be set to zero. Default : 0.05. Other option is a string specifying a percentile: '5%', '10%'. Specify None if you don't want any thresholding. annot : str Text to display to annotate the visualization padding : int number of voxels to pad around each panel. output_path : str path to save the generated collage to. figsize : list Size of figure in inches to be passed on to plt.figure() e.g. [12, 12] or [20, 20] Returns ------- fig : figure handle handle to the collage figure generated. 
""" img_one, img_two = _preprocess_images(img_spec1, img_spec2, rescale_method=rescale_method, bkground_thresh=background_threshold, padding=padding) display_params = dict(interpolation='none', aspect='auto', origin='lower', cmap='gray', vmin=0.0, vmax=1.0) mixer = partial(_checker_mixer, checker_size=patch_size) collage = Collage(view_set=view_set, num_slices=num_slices, num_rows=num_rows, figsize=figsize, display_params=display_params) collage.transform_and_attach((img_one, img_two), func=mixer) collage.save(output_path=output_path, annot=annot) return collage
def index(request, template_name="tagging_ext/index.html", min_size=0,limit=10): """ min_size: Smallest size count accepted for a tag order_by: asc or desc by count limit: maximum number of tags to display TODO: convert the hand-written query to an ORM call. Right now I know this works with Sqlite3 and PostGreSQL. """ query = """ SELECT tag_item.tag_id as tag_id, COUNT(tag_item.tag_id) as counter FROM tagging_taggeditem as tag_item GROUP BY tag_id HAVING COUNT(tag_item.tag_id) > %s ORDER BY counter desc LIMIT %s """ cursor = connection.cursor() cursor.execute(query, [min_size, limit]) results = [] for row in cursor.fetchall(): try: tag=Tag.objects.get(id=row[0]) except ObjectDoesNotExist: continue if ' ' in tag.name: continue record = dict( tag=tag, count=row[1] ) results.append(record) dictionary = { 'tags':results } return render_to_response(template_name, dictionary, context_instance=RequestContext(request))
min_size: Smallest size count accepted for a tag order_by: asc or desc by count limit: maximum number of tags to display TODO: convert the hand-written query to an ORM call. Right now I know this works with Sqlite3 and PostGreSQL.
Below is the instruction that describes the task: ### Input: min_size: Smallest size count accepted for a tag order_by: asc or desc by count limit: maximum number of tags to display TODO: convert the hand-written query to an ORM call. Right now I know this works with Sqlite3 and PostGreSQL. ### Response: def index(request, template_name="tagging_ext/index.html", min_size=0,limit=10): """ min_size: Smallest size count accepted for a tag order_by: asc or desc by count limit: maximum number of tags to display TODO: convert the hand-written query to an ORM call. Right now I know this works with Sqlite3 and PostGreSQL. """ query = """ SELECT tag_item.tag_id as tag_id, COUNT(tag_item.tag_id) as counter FROM tagging_taggeditem as tag_item GROUP BY tag_id HAVING COUNT(tag_item.tag_id) > %s ORDER BY counter desc LIMIT %s """ cursor = connection.cursor() cursor.execute(query, [min_size, limit]) results = [] for row in cursor.fetchall(): try: tag=Tag.objects.get(id=row[0]) except ObjectDoesNotExist: continue if ' ' in tag.name: continue record = dict( tag=tag, count=row[1] ) results.append(record) dictionary = { 'tags':results } return render_to_response(template_name, dictionary, context_instance=RequestContext(request))
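The hand-written tag-count query follows a standard GROUP BY / HAVING / ORDER BY / LIMIT shape that can be tried against an in-memory SQLite table (note SQLite's `?` placeholders instead of the Django cursor's `%s`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tagging_taggeditem (tag_id INTEGER)")
conn.executemany("INSERT INTO tagging_taggeditem VALUES (?)",
                 [(1,), (1,), (1,), (2,), (2,), (3,)])

query = """
    SELECT tag_item.tag_id AS tag_id, COUNT(tag_item.tag_id) AS counter
    FROM tagging_taggeditem AS tag_item
    GROUP BY tag_id
    HAVING COUNT(tag_item.tag_id) > ?
    ORDER BY counter DESC
    LIMIT ?
"""
# min_size=1 drops tag 3 (count 1); limit=10 caps the result size.
rows = conn.execute(query, (1, 10)).fetchall()
print(rows)  # [(1, 3), (2, 2)]
```

This also shows why the parameters are passed separately to `execute` rather than interpolated into the SQL string: the driver quotes them safely.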
def episodes(self): """Return a flat episode iterator. :returns: Iterator :code:`((season_num, episode_num), Episode)` :rtype: iterator """ for sk, season in iteritems(self.seasons): # Yield each episode in season for ek, episode in iteritems(season.episodes): yield (sk, ek), episode
Return a flat episode iterator. :returns: Iterator :code:`((season_num, episode_num), Episode)` :rtype: iterator
Below is the instruction that describes the task: ### Input: Return a flat episode iterator. :returns: Iterator :code:`((season_num, episode_num), Episode)` :rtype: iterator ### Response: def episodes(self): """Return a flat episode iterator. :returns: Iterator :code:`((season_num, episode_num), Episode)` :rtype: iterator """ for sk, season in iteritems(self.seasons): # Yield each episode in season for ek, episode in iteritems(season.episodes): yield (sk, ek), episode
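The flat-iterator pattern above — yielding a composite `(season_num, episode_num)` key alongside each nested item — works on plain dicts too; this sketch uses dicts where the real class walks `Season` objects:

```python
def flat_episodes(seasons):
    """Yield ((season_num, episode_num), episode) over a nested mapping."""
    for sk, season in seasons.items():
        for ek, episode in season.items():
            yield (sk, ek), episode

seasons = {1: {1: "Pilot", 2: "Part Two"}, 2: {1: "Premiere"}}
print(list(flat_episodes(seasons)))
# [((1, 1), 'Pilot'), ((1, 2), 'Part Two'), ((2, 1), 'Premiere')]
```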
def get(cls, attachment_public_uuid, custom_headers=None): """ Get a specific attachment's metadata through its UUID. The Content-Type header of the response will describe the MIME type of the attachment file. :type api_context: context.ApiContext :type attachment_public_uuid: str :type custom_headers: dict[str, str]|None :rtype: BunqResponseAttachmentPublic """ if custom_headers is None: custom_headers = {} api_client = client.ApiClient(cls._get_api_context()) endpoint_url = cls._ENDPOINT_URL_READ.format(attachment_public_uuid) response_raw = api_client.get(endpoint_url, {}, custom_headers) return BunqResponseAttachmentPublic.cast_from_bunq_response( cls._from_json(response_raw, cls._OBJECT_TYPE_GET) )
Get a specific attachment's metadata through its UUID. The Content-Type header of the response will describe the MIME type of the attachment file. :type api_context: context.ApiContext :type attachment_public_uuid: str :type custom_headers: dict[str, str]|None :rtype: BunqResponseAttachmentPublic
Below is the instruction that describes the task: ### Input: Get a specific attachment's metadata through its UUID. The Content-Type header of the response will describe the MIME type of the attachment file. :type api_context: context.ApiContext :type attachment_public_uuid: str :type custom_headers: dict[str, str]|None :rtype: BunqResponseAttachmentPublic ### Response: def get(cls, attachment_public_uuid, custom_headers=None): """ Get a specific attachment's metadata through its UUID. The Content-Type header of the response will describe the MIME type of the attachment file. :type api_context: context.ApiContext :type attachment_public_uuid: str :type custom_headers: dict[str, str]|None :rtype: BunqResponseAttachmentPublic """ if custom_headers is None: custom_headers = {} api_client = client.ApiClient(cls._get_api_context()) endpoint_url = cls._ENDPOINT_URL_READ.format(attachment_public_uuid) response_raw = api_client.get(endpoint_url, {}, custom_headers) return BunqResponseAttachmentPublic.cast_from_bunq_response( cls._from_json(response_raw, cls._OBJECT_TYPE_GET) )
def _addupdate_hdxobject(self, hdxobjects, id_field, new_hdxobject): # type: (List[HDXObjectUpperBound], str, HDXObjectUpperBound) -> HDXObjectUpperBound """Helper function to add a new HDX object to a supplied list of HDX objects or update existing metadata if the object already exists in the list Args: hdxobjects (List[T <= HDXObject]): list of HDX objects to which to add new objects or update existing ones id_field (str): Field on which to match to determine if object already exists in list new_hdxobject (T <= HDXObject): The HDX object to be added/updated Returns: T <= HDXObject: The HDX object which was added or updated """ for hdxobject in hdxobjects: if hdxobject[id_field] == new_hdxobject[id_field]: merge_two_dictionaries(hdxobject, new_hdxobject) return hdxobject hdxobjects.append(new_hdxobject) return new_hdxobject
Helper function to add a new HDX object to a supplied list of HDX objects or update existing metadata if the object already exists in the list Args: hdxobjects (List[T <= HDXObject]): list of HDX objects to which to add new objects or update existing ones id_field (str): Field on which to match to determine if object already exists in list new_hdxobject (T <= HDXObject): The HDX object to be added/updated Returns: T <= HDXObject: The HDX object which was added or updated
Below is the instruction that describes the task: ### Input: Helper function to add a new HDX object to a supplied list of HDX objects or update existing metadata if the object already exists in the list Args: hdxobjects (List[T <= HDXObject]): list of HDX objects to which to add new objects or update existing ones id_field (str): Field on which to match to determine if object already exists in list new_hdxobject (T <= HDXObject): The HDX object to be added/updated Returns: T <= HDXObject: The HDX object which was added or updated ### Response: def _addupdate_hdxobject(self, hdxobjects, id_field, new_hdxobject): # type: (List[HDXObjectUpperBound], str, HDXObjectUpperBound) -> HDXObjectUpperBound """Helper function to add a new HDX object to a supplied list of HDX objects or update existing metadata if the object already exists in the list Args: hdxobjects (List[T <= HDXObject]): list of HDX objects to which to add new objects or update existing ones id_field (str): Field on which to match to determine if object already exists in list new_hdxobject (T <= HDXObject): The HDX object to be added/updated Returns: T <= HDXObject: The HDX object which was added or updated """ for hdxobject in hdxobjects: if hdxobject[id_field] == new_hdxobject[id_field]: merge_two_dictionaries(hdxobject, new_hdxobject) return hdxobject hdxobjects.append(new_hdxobject) return new_hdxobject
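Stripped of the HDX types, the add-or-merge logic reduces to a few lines over plain dicts; `merge_two_dictionaries` is assumed here to behave like an in-place `dict.update`:

```python
def addupdate_object(objects, id_field, new_object):
    """Merge new_object into the matching dict, or append it if no match."""
    for obj in objects:
        if obj[id_field] == new_object[id_field]:
            obj.update(new_object)  # stand-in for merge_two_dictionaries
            return obj
    objects.append(new_object)
    return new_object

resources = [{"name": "data.csv", "format": "CSV"}]
addupdate_object(resources, "name", {"name": "data.csv", "url": "http://example.com"})
addupdate_object(resources, "name", {"name": "extra.json"})
print(resources)
# [{'name': 'data.csv', 'format': 'CSV', 'url': 'http://example.com'}, {'name': 'extra.json'}]
```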
def make_repr(obj, params=None, keywords=None, data=None, name=None, reprs=None): """Generates a string of object initialization code style. It is useful for custom __repr__ methods:: class Example(object): def __init__(self, param, keyword=None): self.param = param self.keyword = keyword def __repr__(self): return make_repr(self, ['param'], ['keyword']) See the representation of example object:: >>> Example('hello', keyword='world') Example('hello', keyword='world') """ opts = [] if params is not None: opts.append(', '.join( _repr_attr(obj, attr, data, reprs) for attr in params)) if keywords is not None: opts.append(', '.join( '%s=%s' % (attr, _repr_attr(obj, attr, data, reprs)) for attr in keywords)) if name is None: name = class_name(obj) return '%s(%s)' % (name, ', '.join(opts))
Generates a string of object initialization code style. It is useful for custom __repr__ methods:: class Example(object): def __init__(self, param, keyword=None): self.param = param self.keyword = keyword def __repr__(self): return make_repr(self, ['param'], ['keyword']) See the representation of example object:: >>> Example('hello', keyword='world') Example('hello', keyword='world')
Below is the instruction that describes the task:
### Input:
Generates a string of object initialization code style.

    It is useful for custom __repr__ methods::

        class Example(object):

            def __init__(self, param, keyword=None):
                self.param = param
                self.keyword = keyword

            def __repr__(self):
                return make_repr(self, ['param'], ['keyword'])

    See the representation of example object::

        >>> Example('hello', keyword='world')
        Example('hello', keyword='world')
### Response:
def make_repr(obj, params=None, keywords=None, data=None, name=None, reprs=None):
    """Generates a string of object initialization code style.

    It is useful for custom __repr__ methods::

        class Example(object):

            def __init__(self, param, keyword=None):
                self.param = param
                self.keyword = keyword

            def __repr__(self):
                return make_repr(self, ['param'], ['keyword'])

    See the representation of example object::

        >>> Example('hello', keyword='world')
        Example('hello', keyword='world')

    """
    opts = []
    if params is not None:
        opts.append(', '.join(
            _repr_attr(obj, attr, data, reprs) for attr in params))
    if keywords is not None:
        opts.append(', '.join(
            '%s=%s' % (attr, _repr_attr(obj, attr, data, reprs))
            for attr in keywords))
    if name is None:
        name = class_name(obj)
    return '%s(%s)' % (name, ', '.join(opts))
def transmit_metrics(self):
        """
        Keep metrics updated about how long ago each filetype was successfully uploaded.
        Transmits at most once per ten seconds, regardless of how many threads are running.
        """
        global _last_stats_transmit_time # pylint: disable=global-statement
        with _STATS_LOCK: # pylint: disable=not-context-manager
            if time.monotonic() - _last_stats_transmit_time < 10.0:
                return

            for site in self.state:
                for filetype, prop in self.state[site]["upload"].items():
                    if prop["last_success"]:
                        self.metrics.gauge(
                            "pghoard.last_upload_age",
                            time.monotonic() - prop["last_success"],
                            tags={
                                "site": site,
                                "type": filetype,
                            }
                        )
            _last_stats_transmit_time = time.monotonic()
Keep metrics updated about how long ago each filetype was successfully uploaded.
        Transmits at most once per ten seconds, regardless of how many threads are running.
Below is the instruction that describes the task:
### Input:
Keep metrics updated about how long ago each filetype was successfully uploaded.
        Transmits at most once per ten seconds, regardless of how many threads are running.
### Response:
def transmit_metrics(self):
        """
        Keep metrics updated about how long ago each filetype was successfully uploaded.
        Transmits at most once per ten seconds, regardless of how many threads are running.
        """
        global _last_stats_transmit_time # pylint: disable=global-statement
        with _STATS_LOCK: # pylint: disable=not-context-manager
            if time.monotonic() - _last_stats_transmit_time < 10.0:
                return

            for site in self.state:
                for filetype, prop in self.state[site]["upload"].items():
                    if prop["last_success"]:
                        self.metrics.gauge(
                            "pghoard.last_upload_age",
                            time.monotonic() - prop["last_success"],
                            tags={
                                "site": site,
                                "type": filetype,
                            }
                        )
            _last_stats_transmit_time = time.monotonic()
def dens_floc(ConcAl, ConcClay, DIM_FRACTAL, DiamTarget, coag, material, Temp): """Calculate floc density as a function of size.""" WaterDensity = pc.density_water(Temp).magnitude return ((dens_floc_init(ConcAl, ConcClay, coag, material).magnitude - WaterDensity ) * (material.Diameter / DiamTarget)**(3 - DIM_FRACTAL) + WaterDensity )
Calculate floc density as a function of size.
Below is the instruction that describes the task:
### Input:
Calculate floc density as a function of size.
### Response:
def dens_floc(ConcAl, ConcClay, DIM_FRACTAL, DiamTarget, coag, material, Temp):
    """Calculate floc density as a function of size."""
    WaterDensity = pc.density_water(Temp).magnitude
    return ((dens_floc_init(ConcAl, ConcClay, coag, material).magnitude
             - WaterDensity
             )
            * (material.Diameter / DiamTarget)**(3 - DIM_FRACTAL)
            + WaterDensity
            )
def new(self, platform_id): # type: (int) -> None ''' A method to create a new El Torito Validation Entry. Parameters: platform_id - The platform ID to set for this validation entry. Returns: Nothing. ''' if self._initialized: raise pycdlibexception.PyCdlibInternalError('El Torito Validation Entry already initialized') self.platform_id = platform_id self.id_string = b'\x00' * 24 # FIXME: let the user set this self.checksum = 0 self.checksum = utils.swab_16bit(self._checksum(self._record()) - 1) self._initialized = True
A method to create a new El Torito Validation Entry. Parameters: platform_id - The platform ID to set for this validation entry. Returns: Nothing.
Below is the instruction that describes the task:
### Input:
A method to create a new El Torito Validation Entry.

        Parameters:
         platform_id - The platform ID to set for this validation entry.
        Returns:
         Nothing.
### Response:
def new(self, platform_id):
        # type: (int) -> None
        '''
        A method to create a new El Torito Validation Entry.

        Parameters:
         platform_id - The platform ID to set for this validation entry.
        Returns:
         Nothing.
        '''
        if self._initialized:
            raise pycdlibexception.PyCdlibInternalError('El Torito Validation Entry already initialized')

        self.platform_id = platform_id
        self.id_string = b'\x00' * 24  # FIXME: let the user set this
        self.checksum = 0
        self.checksum = utils.swab_16bit(self._checksum(self._record()) - 1)

        self._initialized = True
def compress_ranges_to_lists(self):
        '''
        Converts the internal dimension ranges on lists into lists of the
        restricted size. Thus all dimension rules are applied to all
        dimensions of the list wrapper and returned as a list (of lists).
        '''
        clist = []
        for elem in self:
            if isinstance(elem, FixedListSubset):
                clist.append(elem.compress_ranges_to_lists())
            else:
                clist.append(elem)
        return clist
Converts the internal dimension ranges on lists into lists of the
        restricted size. Thus all dimension rules are applied to all
        dimensions of the list wrapper and returned as a list (of lists).
Below is the instruction that describes the task:
### Input:
Converts the internal dimension ranges on lists into lists of the
        restricted size. Thus all dimension rules are applied to all
        dimensions of the list wrapper and returned as a list (of lists).
### Response:
def compress_ranges_to_lists(self):
        '''
        Converts the internal dimension ranges on lists into lists of the
        restricted size. Thus all dimension rules are applied to all
        dimensions of the list wrapper and returned as a list (of lists).
        '''
        clist = []
        for elem in self:
            if isinstance(elem, FixedListSubset):
                clist.append(elem.compress_ranges_to_lists())
            else:
                clist.append(elem)
        return clist
def info(name, m, p, b, w, **kwargs): """ Show information about cocaine runtime. Return json-like string with information about cocaine-runtime. If the name option is not specified, shows information about all applications. Flags can be specified for fine-grained control of the output verbosity. """ m = (m << 1) & 0b010 p = (p << 2) & 0b100 # Brief disables all further flags. if b: flags = 0b000 else: flags = m | p | 0b001 ctx = Context(**kwargs) ctx.execute_action('info', **{ 'node': ctx.repo.create_secure_service('node'), 'locator': ctx.locator, 'name': name, 'flags': flags, 'use_wildcard': w, 'timeout': ctx.timeout, })
Show information about cocaine runtime. Return json-like string with information about cocaine-runtime. If the name option is not specified, shows information about all applications. Flags can be specified for fine-grained control of the output verbosity.
Below is the instruction that describes the task:
### Input:
Show information about cocaine runtime.

    Return json-like string with information about cocaine-runtime.

    If the name option is not specified, shows information about all applications. Flags can be
    specified for fine-grained control of the output verbosity.
### Response:
def info(name, m, p, b, w, **kwargs):
    """
    Show information about cocaine runtime.

    Return json-like string with information about cocaine-runtime.

    If the name option is not specified, shows information about all applications. Flags can be
    specified for fine-grained control of the output verbosity.
    """
    m = (m << 1) & 0b010
    p = (p << 2) & 0b100

    # Brief disables all further flags.
    if b:
        flags = 0b000
    else:
        flags = m | p | 0b001

    ctx = Context(**kwargs)
    ctx.execute_action('info', **{
        'node': ctx.repo.create_secure_service('node'),
        'locator': ctx.locator,
        'name': name,
        'flags': flags,
        'use_wildcard': w,
        'timeout': ctx.timeout,
    })
def replace(self, year=None, month=None, day=None): """ Returns a new datetime.date or asn1crypto.util.extended_date object with the specified components replaced :return: A datetime.date or asn1crypto.util.extended_date object """ if year is None: year = self.year if month is None: month = self.month if day is None: day = self.day if year > 0: cls = date else: cls = extended_date return cls( year, month, day )
Returns a new datetime.date or asn1crypto.util.extended_date object with the specified components replaced :return: A datetime.date or asn1crypto.util.extended_date object
Below is the instruction that describes the task:
### Input:
Returns a new datetime.date or asn1crypto.util.extended_date
        object with the specified components replaced

        :return:
            A datetime.date or asn1crypto.util.extended_date object
### Response:
def replace(self, year=None, month=None, day=None):
        """
        Returns a new datetime.date or asn1crypto.util.extended_date
        object with the specified components replaced

        :return:
            A datetime.date or asn1crypto.util.extended_date object
        """

        if year is None:
            year = self.year
        if month is None:
            month = self.month
        if day is None:
            day = self.day

        if year > 0:
            cls = date
        else:
            cls = extended_date

        return cls(
            year,
            month,
            day
        )
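The interesting part of the row above is the class dispatch on `year > 0`. This stand-alone sketch shows the same idea; `ExtendedDate` is a toy stand-in for `asn1crypto.util.extended_date` (an assumption — the real class does much more, such as strftime support for year 0):

```python
from datetime import date

class ExtendedDate:
    """Toy stand-in for asn1crypto.util.extended_date: tolerates year 0."""
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

def replace(d, year=None, month=None, day=None):
    """Return a copy of `d` with components replaced; promote to
    datetime.date when the (possibly new) year is representable."""
    year = d.year if year is None else year
    month = d.month if month is None else month
    day = d.day if day is None else day
    cls = date if year > 0 else ExtendedDate
    return cls(year, month, day)

promoted = replace(ExtendedDate(0, 3, 1), year=2000)  # becomes a real date
kept = replace(ExtendedDate(0, 3, 1), day=2)          # stays extended
```

The point of the dispatch is that callers transparently get the richer standard-library type whenever the value fits its constraints.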
def ds2p(self):
        """Calculates the derivative of the two-proton separation energies:

        ds2p(Z,A) = s2p(Z,A) - s2p(Z+2,A)
        """
        idx = [(x[0] + 2, x[1]) for x in self.df.index]
        values = self.s2p.values - self.s2p.loc[idx].values
        return Table(df=pd.Series(values, index=self.df.index,
                                  name='ds2p' + '(' + self.name + ')'))
Calculates the derivative of the two-proton separation energies:

        ds2p(Z,A) = s2p(Z,A) - s2p(Z+2,A)
Below is the instruction that describes the task:
### Input:
Calculates the derivative of the two-proton separation energies:

        ds2p(Z,A) = s2p(Z,A) - s2p(Z+2,A)
### Response:
def ds2p(self):
        """Calculates the derivative of the two-proton separation energies:

        ds2p(Z,A) = s2p(Z,A) - s2p(Z+2,A)
        """
        idx = [(x[0] + 2, x[1]) for x in self.df.index]
        values = self.s2p.values - self.s2p.loc[idx].values
        return Table(df=pd.Series(values, index=self.df.index,
                                  name='ds2p' + '(' + self.name + ')'))
def _do_request( self, method, url, headers, data, target_object ): # pylint: disable=unused-argument """Low-level helper: perform the actual API request over HTTP. Allows batch context managers to override and defer a request. :type method: str :param method: The HTTP method to use in the request. :type url: str :param url: The URL to send the request to. :type headers: dict :param headers: A dictionary of HTTP headers to send with the request. :type data: str :param data: The data to send as the body of the request. :type target_object: object :param target_object: (Optional) Unused ``target_object`` here but may be used by a superclass. :rtype: :class:`requests.Response` :returns: The HTTP response. """ return self.http.request(url=url, method=method, headers=headers, data=data)
Low-level helper: perform the actual API request over HTTP. Allows batch context managers to override and defer a request. :type method: str :param method: The HTTP method to use in the request. :type url: str :param url: The URL to send the request to. :type headers: dict :param headers: A dictionary of HTTP headers to send with the request. :type data: str :param data: The data to send as the body of the request. :type target_object: object :param target_object: (Optional) Unused ``target_object`` here but may be used by a superclass. :rtype: :class:`requests.Response` :returns: The HTTP response.
Below is the instruction that describes the task:
### Input:
Low-level helper:  perform the actual API request over HTTP.

        Allows batch context managers to override and defer a request.

        :type method: str
        :param method: The HTTP method to use in the request.

        :type url: str
        :param url: The URL to send the request to.

        :type headers: dict
        :param headers: A dictionary of HTTP headers to send with the request.

        :type data: str
        :param data: The data to send as the body of the request.

        :type target_object: object
        :param target_object:
            (Optional) Unused ``target_object`` here but may be used by a
            superclass.

        :rtype: :class:`requests.Response`
        :returns: The HTTP response.
### Response:
def _do_request(
        self, method, url, headers, data, target_object
    ):  # pylint: disable=unused-argument
        """Low-level helper:  perform the actual API request over HTTP.

        Allows batch context managers to override and defer a request.

        :type method: str
        :param method: The HTTP method to use in the request.

        :type url: str
        :param url: The URL to send the request to.

        :type headers: dict
        :param headers: A dictionary of HTTP headers to send with the request.

        :type data: str
        :param data: The data to send as the body of the request.

        :type target_object: object
        :param target_object:
            (Optional) Unused ``target_object`` here but may be used by a
            superclass.

        :rtype: :class:`requests.Response`
        :returns: The HTTP response.
        """
        return self.http.request(url=url, method=method, headers=headers, data=data)
def count(self, name=None, **attrs): r"""Number of descendants matching criteria. :param Union[None,str] name: name of LaTeX expression :param attrs: LaTeX expression attributes, such as item text. :return: number of matching expressions :rtype: int >>> from TexSoup import TexSoup >>> soup = TexSoup(r''' ... \section{Hey} ... \textit{Silly} ... \textit{Willy}''') >>> soup.count('section') 1 >>> soup.count('textit') 2 """ return len(list(self.find_all(name, **attrs)))
r"""Number of descendants matching criteria. :param Union[None,str] name: name of LaTeX expression :param attrs: LaTeX expression attributes, such as item text. :return: number of matching expressions :rtype: int >>> from TexSoup import TexSoup >>> soup = TexSoup(r''' ... \section{Hey} ... \textit{Silly} ... \textit{Willy}''') >>> soup.count('section') 1 >>> soup.count('textit') 2
Below is the instruction that describes the task:
### Input:
r"""Number of descendants matching criteria.

        :param Union[None,str] name: name of LaTeX expression
        :param attrs: LaTeX expression attributes, such as item text.
        :return: number of matching expressions
        :rtype: int

        >>> from TexSoup import TexSoup
        >>> soup = TexSoup(r'''
        ... \section{Hey}
        ... \textit{Silly}
        ... \textit{Willy}''')
        >>> soup.count('section')
        1
        >>> soup.count('textit')
        2
### Response:
def count(self, name=None, **attrs):
        r"""Number of descendants matching criteria.

        :param Union[None,str] name: name of LaTeX expression
        :param attrs: LaTeX expression attributes, such as item text.
        :return: number of matching expressions
        :rtype: int

        >>> from TexSoup import TexSoup
        >>> soup = TexSoup(r'''
        ... \section{Hey}
        ... \textit{Silly}
        ... \textit{Willy}''')
        >>> soup.count('section')
        1
        >>> soup.count('textit')
        2
        """
        return len(list(self.find_all(name, **attrs)))
def decorator(cls, candidate, *exp_args, **exp_kwargs): ''' Decorate a control function in order to conduct an experiment when called. :param callable candidate: your candidate function :param iterable exp_args: positional arguments passed to :class:`Experiment` :param dict exp_kwargs: keyword arguments passed to :class:`Experiment` Usage:: candidate_func = lambda: True @Experiment.decorator(candidate_func) def control_func(): return True ''' def wrapper(control): @wraps(control) def inner(*args, **kwargs): experiment = cls(*exp_args, **exp_kwargs) experiment.control(control, args=args, kwargs=kwargs) experiment.candidate(candidate, args=args, kwargs=kwargs) return experiment.conduct() return inner return wrapper
Decorate a control function in order to conduct an experiment when called. :param callable candidate: your candidate function :param iterable exp_args: positional arguments passed to :class:`Experiment` :param dict exp_kwargs: keyword arguments passed to :class:`Experiment` Usage:: candidate_func = lambda: True @Experiment.decorator(candidate_func) def control_func(): return True
Below is the instruction that describes the task:
### Input:
Decorate a control function in order to conduct an experiment when called.

        :param callable candidate: your candidate function
        :param iterable exp_args: positional arguments passed to :class:`Experiment`
        :param dict exp_kwargs: keyword arguments passed to :class:`Experiment`

        Usage::

            candidate_func = lambda: True

            @Experiment.decorator(candidate_func)
            def control_func():
                return True
### Response:
def decorator(cls, candidate, *exp_args, **exp_kwargs):
        '''
        Decorate a control function in order to conduct an experiment when called.

        :param callable candidate: your candidate function
        :param iterable exp_args: positional arguments passed to :class:`Experiment`
        :param dict exp_kwargs: keyword arguments passed to :class:`Experiment`

        Usage::

            candidate_func = lambda: True

            @Experiment.decorator(candidate_func)
            def control_func():
                return True
        '''
        def wrapper(control):
            @wraps(control)
            def inner(*args, **kwargs):
                experiment = cls(*exp_args, **exp_kwargs)
                experiment.control(control, args=args, kwargs=kwargs)
                experiment.candidate(candidate, args=args, kwargs=kwargs)
                return experiment.conduct()
            return inner
        return wrapper
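To see how the decorator in this row wires the control and candidate together, here is a self-contained sketch with a toy `Experiment` class (an assumption — a real experiment library would also record timings and compare the two results rather than just discard the candidate's):

```python
from functools import wraps

class Experiment:
    """Toy experiment: runs the candidate for observation, returns the control's result."""
    def __init__(self):
        self._control = self._candidate = None

    def control(self, fn, args=(), kwargs=None):
        self._control = (fn, args, kwargs or {})

    def candidate(self, fn, args=(), kwargs=None):
        self._candidate = (fn, args, kwargs or {})

    def conduct(self):
        fn, a, k = self._candidate
        fn(*a, **k)  # exercise the candidate; its result is discarded here
        fn, a, k = self._control
        return fn(*a, **k)  # the caller always sees the control's result

    @classmethod
    def decorator(cls, candidate, *exp_args, **exp_kwargs):
        def wrapper(control):
            @wraps(control)
            def inner(*args, **kwargs):
                experiment = cls(*exp_args, **exp_kwargs)
                experiment.control(control, args=args, kwargs=kwargs)
                experiment.candidate(candidate, args=args, kwargs=kwargs)
                return experiment.conduct()
            return inner
        return wrapper

@Experiment.decorator(lambda x: x * 3)  # candidate, deliberately different
def double(x):                          # control: the trusted implementation
    return x + x
```

Calling `double(3)` runs both functions but returns the control's `6`, and `@wraps` preserves the control's name and docstring on the wrapper.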
def send_command(self, *args, **kwargs): """Palo Alto requires an extra delay""" kwargs["delay_factor"] = kwargs.get("delay_factor", 2.5) return super(PaloAltoPanosBase, self).send_command(*args, **kwargs)
Palo Alto requires an extra delay
Below is the instruction that describes the task:
### Input:
Palo Alto requires an extra delay
### Response:
def send_command(self, *args, **kwargs):
        """Palo Alto requires an extra delay"""
        kwargs["delay_factor"] = kwargs.get("delay_factor", 2.5)
        return super(PaloAltoPanosBase, self).send_command(*args, **kwargs)
def edit(self, id): """ Edit a pool. """ c.pool = Pool.get(int(id)) c.prefix_list = Prefix.list({ 'pool_id': c.pool.id }) c.prefix = '' # save changes to NIPAP if request.method == 'POST': c.pool.name = request.params['name'] c.pool.description = request.params['description'] c.pool.default_type = request.params['default_type'] if request.params['ipv4_default_prefix_length'].strip() == '': c.pool.ipv4_default_prefix_length = None else: c.pool.ipv4_default_prefix_length = request.params['ipv4_default_prefix_length'] if request.params['ipv6_default_prefix_length'].strip() == '': c.pool.ipv6_default_prefix_length = None else: c.pool.ipv6_default_prefix_length = request.params['ipv6_default_prefix_length'] c.pool.save() redirect(url(controller = 'pool', action = 'list')) c.search_opt_parent = 'all' c.search_opt_child = 'none' return render("/pool_edit.html")
Edit a pool.
Below is the instruction that describes the task:
### Input:
Edit a pool.
### Response:
def edit(self, id):
        """ Edit a pool.
        """

        c.pool = Pool.get(int(id))
        c.prefix_list = Prefix.list({ 'pool_id': c.pool.id })
        c.prefix = ''

        # save changes to NIPAP
        if request.method == 'POST':
            c.pool.name = request.params['name']
            c.pool.description = request.params['description']
            c.pool.default_type = request.params['default_type']
            if request.params['ipv4_default_prefix_length'].strip() == '':
                c.pool.ipv4_default_prefix_length = None
            else:
                c.pool.ipv4_default_prefix_length = request.params['ipv4_default_prefix_length']
            if request.params['ipv6_default_prefix_length'].strip() == '':
                c.pool.ipv6_default_prefix_length = None
            else:
                c.pool.ipv6_default_prefix_length = request.params['ipv6_default_prefix_length']

            c.pool.save()
            redirect(url(controller = 'pool', action = 'list'))

        c.search_opt_parent = 'all'
        c.search_opt_child = 'none'

        return render("/pool_edit.html")
def add_attachment(self, issue_id_or_key, temp_attachment_id, public=True, comment=None):
        """
        Adds a temporary attachment that was created using the attach_temporary_file function to a customer request
        :param issue_id_or_key: str
        :param temp_attachment_id: str, ID from the result of the attach_temporary_file function
        :param public: bool (default is True)
        :param comment: str (default is None)
        :return:
        """
        log.warning('Adding attachment')
        data = {
            'temporaryAttachmentIds': [temp_attachment_id],
            'public': public,
            'additionalComment': {'body': comment}
        }
        url = 'rest/servicedeskapi/request/{}/attachment'.format(issue_id_or_key)

        return self.post(url, headers=self.experimental_headers, data=data)
Adds a temporary attachment that was created using the attach_temporary_file function to a customer request
        :param issue_id_or_key: str
        :param temp_attachment_id: str, ID from the result of the attach_temporary_file function
        :param public: bool (default is True)
        :param comment: str (default is None)
        :return:
Below is the instruction that describes the task:
### Input:
Adds a temporary attachment that was created using the attach_temporary_file function to a customer request
        :param issue_id_or_key: str
        :param temp_attachment_id: str, ID from the result of the attach_temporary_file function
        :param public: bool (default is True)
        :param comment: str (default is None)
        :return:
### Response:
def add_attachment(self, issue_id_or_key, temp_attachment_id, public=True, comment=None):
        """
        Adds a temporary attachment that was created using the attach_temporary_file function to a customer request
        :param issue_id_or_key: str
        :param temp_attachment_id: str, ID from the result of the attach_temporary_file function
        :param public: bool (default is True)
        :param comment: str (default is None)
        :return:
        """
        log.warning('Adding attachment')
        data = {
            'temporaryAttachmentIds': [temp_attachment_id],
            'public': public,
            'additionalComment': {'body': comment}
        }
        url = 'rest/servicedeskapi/request/{}/attachment'.format(issue_id_or_key)

        return self.post(url, headers=self.experimental_headers, data=data)
def combine(self, expert_out, multiply_by_gates=True): """Sum together the expert output, weighted by the gates. The slice corresponding to a particular batch element `b` is computed as the sum over all experts `i` of the expert output, weighted by the corresponding gate values. If `multiply_by_gates` is set to False, the gate values are ignored. Args: expert_out: a list of `num_experts` `Tensor`s, each with shape `[expert_batch_size_i, <extra_output_dims>]`. multiply_by_gates: a boolean Returns: a `Tensor` with shape `[batch_size, <extra_output_dims>]`. """ # see comments on convert_gradient_to_tensor stitched = common_layers.convert_gradient_to_tensor( tf.concat(expert_out, 0)) if multiply_by_gates: stitched *= tf.expand_dims(self._nonzero_gates, 1) combined = tf.unsorted_segment_sum(stitched, self._batch_index, tf.shape(self._gates)[0]) return combined
Sum together the expert output, weighted by the gates. The slice corresponding to a particular batch element `b` is computed as the sum over all experts `i` of the expert output, weighted by the corresponding gate values. If `multiply_by_gates` is set to False, the gate values are ignored. Args: expert_out: a list of `num_experts` `Tensor`s, each with shape `[expert_batch_size_i, <extra_output_dims>]`. multiply_by_gates: a boolean Returns: a `Tensor` with shape `[batch_size, <extra_output_dims>]`.
Below is the instruction that describes the task:
### Input:
Sum together the expert output, weighted by the gates.

    The slice corresponding to a particular batch element `b` is computed
    as the sum over all experts `i` of the expert output, weighted by the
    corresponding gate values.  If `multiply_by_gates` is set to False, the
    gate values are ignored.

    Args:
      expert_out: a list of `num_experts` `Tensor`s, each with shape
        `[expert_batch_size_i, <extra_output_dims>]`.
      multiply_by_gates: a boolean

    Returns:
      a `Tensor` with shape `[batch_size, <extra_output_dims>]`.
### Response:
def combine(self, expert_out, multiply_by_gates=True):
    """Sum together the expert output, weighted by the gates.

    The slice corresponding to a particular batch element `b` is computed
    as the sum over all experts `i` of the expert output, weighted by the
    corresponding gate values.  If `multiply_by_gates` is set to False, the
    gate values are ignored.

    Args:
      expert_out: a list of `num_experts` `Tensor`s, each with shape
        `[expert_batch_size_i, <extra_output_dims>]`.
      multiply_by_gates: a boolean

    Returns:
      a `Tensor` with shape `[batch_size, <extra_output_dims>]`.
    """
    # see comments on convert_gradient_to_tensor
    stitched = common_layers.convert_gradient_to_tensor(
        tf.concat(expert_out, 0))
    if multiply_by_gates:
      stitched *= tf.expand_dims(self._nonzero_gates, 1)
    combined = tf.unsorted_segment_sum(stitched, self._batch_index,
                                       tf.shape(self._gates)[0])
    return combined
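A NumPy analogue of this combine step may make the data flow concrete: concatenate the per-expert outputs, weight each dispatched row by its gate, then scatter-add rows back into their batch slots. The variable names and shapes here are assumptions mirroring the TF code, and `tf.unsorted_segment_sum` is emulated with `np.add.at`:

```python
import numpy as np

def combine(expert_out, batch_index, nonzero_gates, batch_size,
            multiply_by_gates=True):
    """Recombine per-expert outputs into one [batch_size, d] array."""
    # Stack every expert's rows into one [total_dispatched, d] array.
    stitched = np.concatenate(expert_out, axis=0)
    if multiply_by_gates:
        # Weight each dispatched row by its gate value.
        stitched = stitched * nonzero_gates[:, None]
    # Unsorted segment sum: rows with the same batch_index are added together.
    combined = np.zeros((batch_size, stitched.shape[1]))
    np.add.at(combined, batch_index, stitched)
    return combined

# Two experts, batch of 2; batch element 0 was dispatched to both experts.
expert_out = [np.array([[1.0, 1.0]]),              # expert 0 output
              np.array([[2.0, 2.0], [3.0, 3.0]])]  # expert 1 output
batch_index = np.array([0, 0, 1])    # batch row each dispatched row belongs to
gates = np.array([0.25, 0.75, 1.0])  # gate value for each dispatched row
out = combine(expert_out, batch_index, gates, batch_size=2)
```

Batch row 0 ends up as `0.25 * [1, 1] + 0.75 * [2, 2] = [1.75, 1.75]`, showing why a scatter-add (rather than fancy-indexed assignment, which would drop duplicates) is required.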
def update_one(cls, filter, update, upsert=False):
        """ Updates a document that passes the filter with the update value
            Will upsert a new document if upsert=True and no document matches the filter
        """
        return cls.collection.update_one(filter, update, upsert).raw_result
Updates a document that passes the filter with the update value
            Will upsert a new document if upsert=True and no document matches the filter
Below is the instruction that describes the task:
### Input:
Updates a document that passes the filter with the update value
            Will upsert a new document if upsert=True and no document matches the filter
### Response:
def update_one(cls, filter, update, upsert=False):
        """ Updates a document that passes the filter with the update value
            Will upsert a new document if upsert=True and no document matches the filter
        """
        return cls.collection.update_one(filter, update, upsert).raw_result
def eval_policy(eval_positions):
    """Evaluate all positions with all models and save the policy heatmaps as CSVs

    CSV name is "heatmap-<position_name>-<model-index>.csv"
    CSV format is:
        model number, value network output, policy network outputs

    position_name is taken from the SGF file
    Policy network outputs (19x19) are saved in flat order (see coord.from_flat)
    """
    model_paths = oneoff_utils.get_model_paths(fsdb.models_dir())

    idx_start = FLAGS.idx_start
    eval_every = FLAGS.eval_every

    print("Evaluating models {}-{}, eval_every={}".format(
        idx_start, len(model_paths), eval_every))

    player = None
    for i, idx in enumerate(tqdm(range(idx_start, len(model_paths), eval_every))):
        if player and i % 20 == 0:
            player.network.sess.close()
            tf.reset_default_graph()
            player = None

        if not player:
            player = oneoff_utils.load_player(model_paths[idx])
        else:
            oneoff_utils.restore_params(model_paths[idx], player)

        pos_names, positions = zip(*eval_positions)
        # This should be batched at some point.
        eval_probs, eval_values = player.network.run_many(positions)

        for pos_name, probs, value in zip(pos_names, eval_probs, eval_values):
            save_file = os.path.join(
                FLAGS.data_dir, "heatmap-{}-{}.csv".format(pos_name, idx))
            with open(save_file, "w") as data:
                data.write("{}, {}, {}\n".format(
                    idx, value, ",".join(map(str, probs))))
Evaluate all positions with all models and save the policy heatmaps as CSVs

    CSV name is "heatmap-<position_name>-<model-index>.csv"
    CSV format is:
        model number, value network output, policy network outputs

    position_name is taken from the SGF file
    Policy network outputs (19x19) are saved in flat order (see coord.from_flat)
Below is the instruction that describes the task:
### Input:
Evaluate all positions with all models and save the policy heatmaps as CSVs

    CSV name is "heatmap-<position_name>-<model-index>.csv"
    CSV format is:
        model number, value network output, policy network outputs

    position_name is taken from the SGF file
    Policy network outputs (19x19) are saved in flat order (see coord.from_flat)
### Response:
def eval_policy(eval_positions):
    """Evaluate all positions with all models and save the policy heatmaps as CSVs

    CSV name is "heatmap-<position_name>-<model-index>.csv"
    CSV format is:
        model number, value network output, policy network outputs

    position_name is taken from the SGF file
    Policy network outputs (19x19) are saved in flat order (see coord.from_flat)
    """
    model_paths = oneoff_utils.get_model_paths(fsdb.models_dir())

    idx_start = FLAGS.idx_start
    eval_every = FLAGS.eval_every

    print("Evaluating models {}-{}, eval_every={}".format(
        idx_start, len(model_paths), eval_every))

    player = None
    for i, idx in enumerate(tqdm(range(idx_start, len(model_paths), eval_every))):
        if player and i % 20 == 0:
            player.network.sess.close()
            tf.reset_default_graph()
            player = None

        if not player:
            player = oneoff_utils.load_player(model_paths[idx])
        else:
            oneoff_utils.restore_params(model_paths[idx], player)

        pos_names, positions = zip(*eval_positions)
        # This should be batched at some point.
        eval_probs, eval_values = player.network.run_many(positions)

        for pos_name, probs, value in zip(pos_names, eval_probs, eval_values):
            save_file = os.path.join(
                FLAGS.data_dir, "heatmap-{}-{}.csv".format(pos_name, idx))
            with open(save_file, "w") as data:
                data.write("{}, {}, {}\n".format(
                    idx, value, ",".join(map(str, probs))))
def setup_consumers(self):
        """Iterate through each consumer in the configuration and kick off
        the minimal number of processes, setting up the runtime data as well.

        """
        if not self.consumer_cfg:
            LOGGER.warning('No consumers are configured')
        for name in self.consumer_cfg.keys():
            self.consumers[name] = self.new_consumer(
                self.consumer_cfg[name], name)
            self.start_processes(name, self.consumers[name].qty)
Iterate through each consumer in the configuration and kick off
        the minimal number of processes, setting up the runtime data as well.
Below is the instruction that describes the task:
### Input:
Iterate through each consumer in the configuration and kick off
        the minimal number of processes, setting up the runtime data as well.
### Response:
def setup_consumers(self):
        """Iterate through each consumer in the configuration and kick off
        the minimal number of processes, setting up the runtime data as well.

        """
        if not self.consumer_cfg:
            LOGGER.warning('No consumers are configured')
        for name in self.consumer_cfg.keys():
            self.consumers[name] = self.new_consumer(
                self.consumer_cfg[name], name)
            self.start_processes(name, self.consumers[name].qty)
def get_enumeration(rq, v, endpoint, metadata={}, auth=None): """ Returns a list of enumerated values for variable 'v' in query 'rq' """ # glogger.debug("Metadata before processing enums: {}".format(metadata)) # We only fire the enum filling queries if indicated by the query metadata if 'enumerate' not in metadata: return None enumDict = _getDictWithKey(v, metadata['enumerate']) if enumDict: return enumDict[v] if v in metadata['enumerate']: return get_enumeration_sparql(rq, v, endpoint, auth) return None
Returns a list of enumerated values for variable 'v' in query 'rq'
Below is the instruction that describes the task:
### Input:
Returns a list of enumerated values for variable 'v' in query 'rq'
### Response:
def get_enumeration(rq, v, endpoint, metadata={}, auth=None):
    """
    Returns a list of enumerated values for variable 'v' in query 'rq'
    """
    # glogger.debug("Metadata before processing enums: {}".format(metadata))
    # We only fire the enum filling queries if indicated by the query metadata
    if 'enumerate' not in metadata:
        return None

    enumDict = _getDictWithKey(v, metadata['enumerate'])
    if enumDict:
        return enumDict[v]

    if v in metadata['enumerate']:
        return get_enumeration_sparql(rq, v, endpoint, auth)

    return None
def upload(cls, file_obj, store=None): """Uploads a file and returns ``File`` instance. Args: - file_obj: file object to upload to - store (Optional[bool]): Should the file be automatically stored upon upload. Defaults to None. - False - do not store file - True - store file (can result in error if autostore is disabled for project) - None - use project settings Returns: ``File`` instance """ if store is None: store = 'auto' elif store: store = '1' else: store = '0' data = { 'UPLOADCARE_STORE': store, } files = uploading_request('POST', 'base/', data=data, files={'file': file_obj}) file_ = cls(files['file']) return file_
Uploads a file and returns ``File`` instance. Args: - file_obj: file object to upload to - store (Optional[bool]): Should the file be automatically stored upon upload. Defaults to None. - False - do not store file - True - store file (can result in error if autostore is disabled for project) - None - use project settings Returns: ``File`` instance
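The three-way `store` mapping in `upload` (None → `'auto'`, True → `'1'`, False → `'0'`) is the only branching logic in the method, and can be isolated for testing. The function name here is a hypothetical extraction, not part of the library's API:

```python
def store_flag(store):
    """Map the `store` argument to the wire value sent as UPLOADCARE_STORE."""
    if store is None:
        return 'auto'        # defer to the project's autostore setting
    return '1' if store else '0'


print(store_flag(None), store_flag(True), store_flag(False))
```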
def line(self, *args): """ Called one at a time for each dataset. args are of the form:: <data set n line thickness>, <length of line segment>, <length of blank segment> APIPARAM: chls """ self.lines.append(','.join('%.1f' % x for x in map(float, args))) return self
Called one at a time for each dataset. args are of the form:: <data set n line thickness>, <length of line segment>, <length of blank segment> APIPARAM: chls
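The string-building step in `line` — float-coerce each argument and format it to one decimal place, comma-joined — can be sketched standalone. The function name is hypothetical; only the formatting expression comes from the record:

```python
def line_spec(*args):
    """Format line-style args the way `Chart.line` builds one `chls` entry."""
    # Coerce every argument to float, render with one decimal place, comma-join
    return ','.join('%.1f' % x for x in map(float, args))


print(line_spec(3, 6, 3))      # thickness 3, 6px dash, 3px gap
print(line_spec(1.25, 4, 2))   # note: '%.1f' rounds 1.25 to one decimal
```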