def get_albums(self, limit=None):
    """
    Return a list of the user's albums.

    Secret and hidden albums are only returned if this is the
    logged-in user.
    """
    url = (self._imgur._base_url +
           "/3/account/{0}/albums/{1}".format(self.name, '{}'))
    resp = self._imgur._send_request(url, limit=limit)
    return [Album(alb, self._imgur, False) for alb in resp]
def make_line_segments(x, y, ispath=True):
    """
    Return an (n x 2 x 2) array of n line segments

    Parameters
    ----------
    x : array-like
        x points
    y : array-like
        y points
    ispath : bool
        If True, the points represent a path from one point to the
        next until the last. If False, then each pair of successive
        (even-odd pair) points yields a line.
    """
    if ispath:
        x = interleave(x[:-1], x[1:])
        y = interleave(y[:-1], y[1:])
    elif len(x) % 2:
        raise PlotnineError("Expects an even number of points")

    n = len(x) // 2
    segments = np.reshape(list(zip(x, y)), [n, 2, 2])
    return segments
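The helpers `interleave` and `PlotnineError` come from plotnine's internals and are not shown above. The sketch below supplies an assumed minimal `interleave` (and a plain `ValueError` in place of `PlotnineError`) so the path mode can be run end to end:

```python
import numpy as np

def interleave(a, b):
    # Alternate elements of a and b: [a0, b0, a1, b1, ...]
    # (assumed helper; plotnine defines its own version)
    out = np.empty(len(a) + len(b), dtype=np.asarray(a).dtype)
    out[0::2] = a
    out[1::2] = b
    return out

def make_line_segments(x, y, ispath=True):
    # Path mode duplicates interior points so consecutive points share a segment
    if ispath:
        x = interleave(x[:-1], x[1:])
        y = interleave(y[:-1], y[1:])
    elif len(x) % 2:
        raise ValueError("Expects an even number of points")
    n = len(x) // 2
    return np.reshape(list(zip(x, y)), [n, 2, 2])

segments = make_line_segments(np.array([0, 1, 2]), np.array([0, 1, 0]))
# A 3-point path yields 2 segments: (0,0)->(1,1) and (1,1)->(2,0)
```

Note how a path of `n` points always produces `n - 1` segments, while segment mode (`ispath=False`) consumes points two at a time.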
def calc_geo_branches_in_polygon(mv_grid, polygon, mode, proj):
    """ Calculate geographical branches in polygon.

    For a given `mv_grid` all branches (edges in the graph of the grid) are
    tested if they are in the given `polygon`. You can choose different modes
    and projections for this operation.

    Parameters
    ----------
    mv_grid : MVGridDing0
        MV Grid object. Edges contained in `mv_grid.graph_edges()` are taken
        for the test.
    polygon : :shapely:`Shapely Polygon object<polygons>`
        Polygon that contains edges.
    mode : str
        Choose between 'intersects' or 'contains'.
    proj : int
        EPSG code to specify projection

    Returns
    -------
    :any:`list` of :any:`BranchDing0` objects
        List of branches
    """
    branches = []
    polygon_shp = transform(proj, polygon)
    for branch in mv_grid.graph_edges():
        nodes = branch['adj_nodes']
        branch_shp = transform(proj, LineString([nodes[0].geo_data,
                                                 nodes[1].geo_data]))

        # check if branch intersects with polygon if mode == 'intersects'
        if mode == 'intersects':
            if polygon_shp.intersects(branch_shp):
                branches.append(branch)
        # check if polygon contains branch if mode == 'contains'
        elif mode == 'contains':
            if polygon_shp.contains(branch_shp):
                branches.append(branch)
        else:
            raise ValueError('Mode is invalid!')
    return branches
def download(supported_tags, date_array, tag, sat_id,
             ftp_site='cdaweb.gsfc.nasa.gov',
             data_path=None, user=None, password=None,
             fake_daily_files_from_monthly=False):
    """Routine to download NASA CDAWeb CDF data.

    This routine is intended to be used by pysat instrument modules
    supporting a particular NASA CDAWeb dataset.

    Parameters
    -----------
    supported_tags : dict
        dict of dicts. Keys are supported tag names for download. Value is
        a dict with 'dir', 'remote_fname', 'local_fname'. Intended to be
        pre-set with functools.partial then assigned to new instrument code.
    date_array : array_like
        Array of datetimes to download data for. Provided by pysat.
    tag : (str or NoneType)
        tag or None (default=None)
    sat_id : (str or NoneType)
        satellite id or None (default=None)
    data_path : (string or NoneType)
        Path to data directory. If None is specified, the value previously
        set in Instrument.files.data_path is used. (default=None)
    user : (string or NoneType)
        Username to be passed along to resource with relevant data.
        (default=None)
    password : (string or NoneType)
        User password to be passed along to resource with relevant data.
        (default=None)
    fake_daily_files_from_monthly : bool
        Some CDAWeb instrument data files are stored by month. This flag,
        when true, accommodates this reality with user feedback on a
        monthly time frame.

    Returns
    --------
    Void : (NoneType)
        Downloads data to disk.

    Examples
    --------
    ::

        # download support added to cnofs_vefi.py using code below
        rn = '{year:4d}/cnofs_vefi_bfield_1sec_{year:4d}{month:02d}{day:02d}_v05.cdf'
        ln = 'cnofs_vefi_bfield_1sec_{year:4d}{month:02d}{day:02d}_v05.cdf'
        dc_b_tag = {'dir': '/pub/data/cnofs/vefi/bfield_1sec',
                    'remote_fname': rn,
                    'local_fname': ln}
        supported_tags = {'dc_b': dc_b_tag}

        download = functools.partial(nasa_cdaweb_methods.download,
                                     supported_tags=supported_tags)

    """
    import os
    import sys
    import ftplib

    # connect to CDAWeb default port
    ftp = ftplib.FTP(ftp_site)
    # user anonymous, passwd anonymous@
    ftp.login()

    try:
        ftp_dict = supported_tags[tag]
    except KeyError:
        raise ValueError('Tag name unknown.')

    # path to relevant file on CDAWeb
    ftp.cwd(ftp_dict['dir'])

    # naming scheme for files on the CDAWeb server
    remote_fname = ftp_dict['remote_fname']
    # naming scheme for local files, should be closely related
    # to CDAWeb scheme, though directory structures may be reduced
    # if desired
    local_fname = ftp_dict['local_fname']

    for date in date_array:
        # format files for specific dates and download location
        formatted_remote_fname = remote_fname.format(year=date.year,
                                                     month=date.month,
                                                     day=date.day)
        formatted_local_fname = local_fname.format(year=date.year,
                                                   month=date.month,
                                                   day=date.day)
        saved_local_fname = os.path.join(data_path, formatted_local_fname)

        # perform download
        try:
            print('Attempting to download file for ' + date.strftime('%x'))
            sys.stdout.flush()
            ftp.retrbinary('RETR ' + formatted_remote_fname,
                           open(saved_local_fname, 'wb').write)
            print('Finished.')
        except ftplib.error_perm as exception:
            # a 550 reply code means the file is not available
            if str(exception.args[0]).split(" ", 1)[0] != '550':
                raise
            else:
                os.remove(saved_local_fname)
                print('File not available for ' + date.strftime('%x'))
    ftp.close()
def weighted_random_choice(items):
    """
    Returns a weighted random choice from a list of items.

    :param items: A list of tuples (object, weight)
    :return: A random object, whose likelihood is proportional to its
        weight.
    """
    l = list(items)
    r = random.random() * sum(i[1] for i in l)
    for x, p in l:
        if p > r:
            return x
        r -= p
    return None
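A quick seeded check of the selection logic (the sampling loop is reproduced verbatim; only the test harness around it is new). With weights 1:9 the heavier item should dominate:

```python
import random
from collections import Counter

def weighted_random_choice(items):
    # items: iterable of (object, weight) tuples
    l = list(items)
    r = random.random() * sum(i[1] for i in l)
    for x, p in l:
        # r is uniform on [0, total); walk the cumulative weights
        if p > r:
            return x
        r -= p
    return None

random.seed(0)
counts = Counter(weighted_random_choice([('a', 1), ('b', 9)])
                 for _ in range(1000))
# 'b' should be drawn roughly nine times as often as 'a'
```

An empty input returns `None`, since the loop never runs.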
def merge(revision, branch_label, message, list_revisions=''):
    """ Merge two revisions together, create new revision file """
    alembic_command.merge(
        config=get_config(),
        revisions=list_revisions,
        message=message,
        branch_label=branch_label,
        rev_id=revision
    )
def get_dependants(cls, dist):
    """Yield dependant user packages for a given package name."""
    for package in cls.installed_distributions:
        for requirement_package in package.requires():
            requirement_name = requirement_package.project_name
            # perform case-insensitive matching
            if requirement_name.lower() == dist.lower():
                yield package
def _cleanup_resourceprovider(self):
    """
    Calls cleanup for ResourceProvider of this run.

    :return: Nothing
    """
    # Disable too broad exception warning
    # pylint: disable=W0703
    self.resourceprovider = ResourceProvider(self.args)
    try:
        self.resourceprovider.cleanup()
        self.logger.info("Cleanup done.")
    except Exception as error:
        self.logger.error("Cleanup failed! %s", error)
def quit_all(editor, force=False):
    """
    Quit all.
    """
    quit(editor, all_=True, force=force)
def get_ps(self, field_name, wait_pstate=True):
    """Get property from PersonaState

    `See full list of available fields_names
    <https://github.com/ValvePython/steam/blob/fa8a5127e9bb23185483930da0b6ae85e93055a7/protobufs/steammessages_clientserver_friends.proto#L125-L153>`_
    """
    if not wait_pstate or self._pstate_ready.wait(timeout=5):
        if self._pstate is None and wait_pstate:
            self._steam.request_persona_state([self.steam_id])
            self._pstate_ready.wait(timeout=5)

        return getattr(self._pstate, field_name)

    return None
def all(cls, path=''):
    """Return all occurrences of the item."""
    url = urljoin(cls._meta.base_url, path)
    pq_items = cls._get_items(url=url, **cls._meta._pyquery_kwargs)
    return [cls(item=i) for i in pq_items.items()]
def remove_artf_evts(times, annot, chan=None, min_dur=0.1):
    """Correct times to remove events marked 'Artefact'.

    Parameters
    ----------
    times : list of tuple of float
        the start and end times of each segment
    annot : instance of Annotations
        the annotation file containing events and epochs
    chan : str, optional
        full name of channel on which artefacts were marked. Channel format
        is 'chan_name (group_name)'. If None, artefacts from any channel
        will be removed.
    min_dur : float
        resulting segments, after concatenation, are rejected if shorter
        than this duration

    Returns
    -------
    list of tuple of float
        the new start and end times of each segment, with artefact periods
        taken out
    """
    new_times = times
    beg = times[0][0]
    end = times[-1][-1]
    chan = (chan, '') if chan else None  # '' is for channel-global artefacts
    artefact = annot.get_events(name='Artefact', time=(beg, end),
                                chan=chan, qual='Good')

    if artefact:
        new_times = []
        for seg in times:
            reject = False
            new_seg = True
            while new_seg is not False:
                if type(new_seg) is tuple:
                    seg = new_seg
                end = seg[1]
                for artf in artefact:
                    if artf['start'] <= seg[0] and seg[1] <= artf['end']:
                        reject = True
                        new_seg = False
                        break
                    a_starts_in_s = seg[0] <= artf['start'] <= seg[1]
                    a_ends_in_s = seg[0] <= artf['end'] <= seg[1]
                    if a_ends_in_s and not a_starts_in_s:
                        seg = artf['end'], seg[1]
                    elif a_starts_in_s:
                        seg = seg[0], artf['start']
                        if a_ends_in_s:
                            new_seg = artf['end'], end
                        else:
                            new_seg = False
                        break
                    new_seg = False
            if reject is False and seg[1] - seg[0] >= min_dur:
                new_times.append(seg)

    return new_times
def get_transport(name):
    '''
    Return the transport class.
    '''
    try:
        log.debug('Using %s as transport', name)
        return TRANSPORT_LOOKUP[name]
    except KeyError:
        msg = 'Transport {} is not available. Are the dependencies installed?'.format(name)
        log.error(msg, exc_info=True)
        raise InvalidTransportException(msg)
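`TRANSPORT_LOOKUP`, `log` and `InvalidTransportException` live elsewhere in the surrounding module; a runnable sketch with placeholder versions of them shows the registry-lookup pattern:

```python
import logging

log = logging.getLogger(__name__)

class InvalidTransportException(Exception):
    pass

# placeholder registry; the real module maps names to transport classes
TRANSPORT_LOOKUP = {'dummy': object}

def get_transport(name):
    try:
        log.debug('Using %s as transport', name)
        return TRANSPORT_LOOKUP[name]
    except KeyError:
        msg = 'Transport {} is not available. Are the dependencies installed?'.format(name)
        log.error(msg, exc_info=True)
        raise InvalidTransportException(msg)
```

Unknown names surface as a domain-specific exception rather than a bare `KeyError`, which keeps the caller's error handling independent of the registry's data structure.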
def log(self, *args):
    """Log a log message. Used for debugging recurring events."""
    if _canShortcutLogging(self.logCategory, LOG):
        return
    logObject(self.logObjectName(), self.logCategory,
              *self.logFunction(*args))
def split_path(path, ref=None):
    """
    Split a path into its components.

    Parameters
    ----------
    path : str
        absolute or relative path with respect to `ref`
    ref : str or None
        reference path if `path` is relative

    Returns
    -------
    list : str
        components of the path
    """
    path = abspath(path, ref)
    return path.strip(os.path.sep).split(os.path.sep)
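The two-argument `abspath` is a helper from the same library, not `os.path.abspath`; the sketch below assumes a minimal implementation of it to illustrate the behaviour on POSIX paths:

```python
import os

def abspath(path, ref=None):
    # Assumed helper: resolve `path` against reference directory `ref`
    # (the original library supplies its own two-argument abspath)
    if ref is not None and not os.path.isabs(path):
        path = os.path.join(ref, path)
    return os.path.normpath(os.path.abspath(path))

def split_path(path, ref=None):
    # Strip the leading/trailing separators, then split on the separator
    path = abspath(path, ref)
    return path.strip(os.path.sep).split(os.path.sep)

parts = split_path('/usr/local/bin')
# On POSIX: ['usr', 'local', 'bin']
```

Relative inputs resolve through `ref` first, so `split_path('bin', ref='/usr/local')` yields the same components.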
def explainParam(self, param):
    """
    Explains a single param and returns its name, doc, and optional
    default value and user-supplied value in a string.
    """
    param = self._resolveParam(param)
    values = []
    if self.isDefined(param):
        if param in self._defaultParamMap:
            values.append("default: %s" % self._defaultParamMap[param])
        if param in self._paramMap:
            values.append("current: %s" % self._paramMap[param])
    else:
        values.append("undefined")
    valueStr = "(" + ", ".join(values) + ")"
    return "%s: %s %s" % (param.name, param.doc, valueStr)
def topological_sort(dependencies, start_nodes):
    """
    Perform a topological sort on the dependency graph `dependencies`,
    starting from list `start_nodes`.
    """
    retval = []

    def edges(node):
        return dependencies[node][1]

    def in_degree(node):
        return dependencies[node][0]

    def remove_incoming(node):
        dependencies[node][0] = in_degree(node) - 1

    while start_nodes:
        node = start_nodes.pop()
        retval.append(node)
        for child in edges(node):
            remove_incoming(child)
            if not in_degree(child):
                start_nodes.append(child)

    leftover_nodes = [node for node in list(dependencies.keys())
                      if in_degree(node) > 0]
    if leftover_nodes:
        raise CyclicDependency(leftover_nodes)
    else:
        return retval
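The expected input shape is easy to miss: `dependencies[node]` must be a mutable `[in_degree, children]` pair, since the sort decrements in-degrees in place. A small worked example (with an assumed `CyclicDependency` exception class, which the surrounding module defines elsewhere):

```python
class CyclicDependency(Exception):
    pass

def topological_sort(dependencies, start_nodes):
    # Kahn's algorithm: repeatedly emit a zero-in-degree node and
    # decrement the in-degree of its children.
    retval = []
    while start_nodes:
        node = start_nodes.pop()
        retval.append(node)
        for child in dependencies[node][1]:
            dependencies[child][0] -= 1
            if not dependencies[child][0]:
                start_nodes.append(child)
    leftover = [n for n in dependencies if dependencies[n][0] > 0]
    if leftover:
        raise CyclicDependency(leftover)
    return retval

# node -> [in_degree, list_of_children]
deps = {
    'a': [0, ['b', 'c']],
    'b': [1, ['d']],
    'c': [1, ['d']],
    'd': [2, []],
}
order = topological_sort(deps, ['a'])
# 'a' comes first and 'd' last; the order of 'b'/'c' depends on pop() order
```

A cycle leaves every node on it with a positive in-degree, so it is reported via `CyclicDependency` after the queue drains.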
def __compose(self): """ Compose the message, pulling together body, attachments etc """ msg = MIMEMultipart() msg['Subject'] = self.config['shutit.core.alerting.emailer.subject'] msg['To'] = self.config['shutit.core.alerting.emailer.mailto'] msg['From'] = self.config['shutit.core.alerting.emailer.mailfrom'] # add the module's maintainer as a CC if configured if self.config['shutit.core.alerting.emailer.mailto_maintainer']: msg['Cc'] = self.config['shutit.core.alerting.emailer.maintainer'] if self.config['shutit.core.alerting.emailer.signature'] != '': signature = '\n\n' + self.config['shutit.core.alerting.emailer.signature'] else: signature = self.config['shutit.core.alerting.emailer.signature'] body = MIMEText('\n'.join(self.lines) + signature) msg.attach(body) for attach in self.attaches: msg.attach(attach) return msg
Compose the message, pulling together body, attachments etc
Below is the instruction that describes the task: ### Input: Compose the message, pulling together body, attachments etc ### Response: def __compose(self): """ Compose the message, pulling together body, attachments etc """ msg = MIMEMultipart() msg['Subject'] = self.config['shutit.core.alerting.emailer.subject'] msg['To'] = self.config['shutit.core.alerting.emailer.mailto'] msg['From'] = self.config['shutit.core.alerting.emailer.mailfrom'] # add the module's maintainer as a CC if configured if self.config['shutit.core.alerting.emailer.mailto_maintainer']: msg['Cc'] = self.config['shutit.core.alerting.emailer.maintainer'] if self.config['shutit.core.alerting.emailer.signature'] != '': signature = '\n\n' + self.config['shutit.core.alerting.emailer.signature'] else: signature = self.config['shutit.core.alerting.emailer.signature'] body = MIMEText('\n'.join(self.lines) + signature) msg.attach(body) for attach in self.attaches: msg.attach(attach) return msg
def _expand_options(cls, options, backend=None): """ Validates and expands a dictionaries of options indexed by type[.group][.label] keys into separate style, plot, norm and output options. opts._expand_options({'Image': dict(cmap='viridis', show_title=False)}) returns {'Image': {'plot': dict(show_title=False), 'style': dict(cmap='viridis')}} """ current_backend = Store.current_backend try: backend_options = Store.options(backend=backend or current_backend) except KeyError as e: raise Exception('The %s backend is not loaded. Please load the backend using hv.extension.' % str(e)) expanded = {} if isinstance(options, list): options = merge_options_to_dict(options) for objspec, options in options.items(): objtype = objspec.split('.')[0] if objtype not in backend_options: raise ValueError('%s type not found, could not apply options.' % objtype) obj_options = backend_options[objtype] expanded[objspec] = {g: {} for g in obj_options.groups} for opt, value in options.items(): found = False valid_options = [] for g, group_opts in sorted(obj_options.groups.items()): if opt in group_opts.allowed_keywords: expanded[objspec][g][opt] = value found = True break valid_options += group_opts.allowed_keywords if found: continue cls._options_error(opt, objtype, backend, valid_options) return expanded
Validates and expands a dictionaries of options indexed by type[.group][.label] keys into separate style, plot, norm and output options. opts._expand_options({'Image': dict(cmap='viridis', show_title=False)}) returns {'Image': {'plot': dict(show_title=False), 'style': dict(cmap='viridis')}}
Below is the instruction that describes the task: ### Input: Validates and expands a dictionaries of options indexed by type[.group][.label] keys into separate style, plot, norm and output options. opts._expand_options({'Image': dict(cmap='viridis', show_title=False)}) returns {'Image': {'plot': dict(show_title=False), 'style': dict(cmap='viridis')}} ### Response: def _expand_options(cls, options, backend=None): """ Validates and expands a dictionaries of options indexed by type[.group][.label] keys into separate style, plot, norm and output options. opts._expand_options({'Image': dict(cmap='viridis', show_title=False)}) returns {'Image': {'plot': dict(show_title=False), 'style': dict(cmap='viridis')}} """ current_backend = Store.current_backend try: backend_options = Store.options(backend=backend or current_backend) except KeyError as e: raise Exception('The %s backend is not loaded. Please load the backend using hv.extension.' % str(e)) expanded = {} if isinstance(options, list): options = merge_options_to_dict(options) for objspec, options in options.items(): objtype = objspec.split('.')[0] if objtype not in backend_options: raise ValueError('%s type not found, could not apply options.' % objtype) obj_options = backend_options[objtype] expanded[objspec] = {g: {} for g in obj_options.groups} for opt, value in options.items(): found = False valid_options = [] for g, group_opts in sorted(obj_options.groups.items()): if opt in group_opts.allowed_keywords: expanded[objspec][g][opt] = value found = True break valid_options += group_opts.allowed_keywords if found: continue cls._options_error(opt, objtype, backend, valid_options) return expanded
def indent(self, space=4): '''Return an indented Newick string, just like ``nw_indent`` in Newick Utilities Args: ``space`` (``int``): The number of spaces a tab should equal Returns: ``str``: An indented Newick string ''' if not isinstance(space,int): raise TypeError("space must be an int") if space < 0: raise ValueError("space must be a non-negative integer") space = ' '*space; o = []; l = 0 for c in self.newick(): if c == '(': o.append('(\n'); l += 1; o.append(space*l) elif c == ')': o.append('\n'); l -= 1; o.append(space*l); o.append(')') elif c == ',': o.append(',\n'); o.append(space*l) else: o.append(c) return ''.join(o)
Return an indented Newick string, just like ``nw_indent`` in Newick Utilities Args: ``space`` (``int``): The number of spaces a tab should equal Returns: ``str``: An indented Newick string
Below is the instruction that describes the task: ### Input: Return an indented Newick string, just like ``nw_indent`` in Newick Utilities Args: ``space`` (``int``): The number of spaces a tab should equal Returns: ``str``: An indented Newick string ### Response: def indent(self, space=4): '''Return an indented Newick string, just like ``nw_indent`` in Newick Utilities Args: ``space`` (``int``): The number of spaces a tab should equal Returns: ``str``: An indented Newick string ''' if not isinstance(space,int): raise TypeError("space must be an int") if space < 0: raise ValueError("space must be a non-negative integer") space = ' '*space; o = []; l = 0 for c in self.newick(): if c == '(': o.append('(\n'); l += 1; o.append(space*l) elif c == ')': o.append('\n'); l -= 1; o.append(space*l); o.append(')') elif c == ',': o.append(',\n'); o.append(space*l) else: o.append(c) return ''.join(o)
def _process_name_or_alias_filter_directive(filter_operation_info, location, context, parameters): """Return a Filter basic block that checks for a match against an Entity's name or alias. Args: filter_operation_info: FilterOperationInfo object, containing the directive and field info of the field where the filter is to be applied. location: Location where this filter is used. context: dict, various per-compilation data (e.g. declared tags, whether the current block is optional, etc.). May be mutated in-place in this function! parameters: list of 1 element, containing the value to check the name or alias against; if the parameter is optional and missing, the check will return True Returns: a Filter basic block that performs the check against the name or alias """ filtered_field_type = filter_operation_info.field_type if isinstance(filtered_field_type, GraphQLUnionType): raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to union type ' u'{}'.format(filtered_field_type)) current_type_fields = filtered_field_type.fields name_field = current_type_fields.get('name', None) alias_field = current_type_fields.get('alias', None) if not name_field or not alias_field: raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to type {} because it lacks a ' u'"name" or "alias" field.'.format(filtered_field_type)) name_field_type = strip_non_null_from_type(name_field.type) alias_field_type = strip_non_null_from_type(alias_field.type) if not isinstance(name_field_type, GraphQLScalarType): raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to type {} because its "name" ' u'field is not a scalar.'.format(filtered_field_type)) if not isinstance(alias_field_type, GraphQLList): raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to type {} because its ' u'"alias" field is not a list.'.format(filtered_field_type)) alias_field_inner_type = strip_non_null_from_type(alias_field_type.of_type) if alias_field_inner_type != name_field_type: raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to type {} because the ' u'"name" field and the inner type of the "alias" field ' u'do not match: {} vs {}'.format(filtered_field_type, name_field_type, alias_field_inner_type)) argument_inferred_type = name_field_type argument_expression, non_existence_expression = _represent_argument( location, context, parameters[0], argument_inferred_type) check_against_name = expressions.BinaryComposition( u'=', expressions.LocalField('name'), argument_expression) check_against_alias = expressions.BinaryComposition( u'contains', expressions.LocalField('alias'), argument_expression) filter_predicate = expressions.BinaryComposition( u'||', check_against_name, check_against_alias) if non_existence_expression is not None: # The argument comes from an optional block and might not exist, # in which case the filter expression should evaluate to True. filter_predicate = expressions.BinaryComposition( u'||', non_existence_expression, filter_predicate) return blocks.Filter(filter_predicate)
Return a Filter basic block that checks for a match against an Entity's name or alias. Args: filter_operation_info: FilterOperationInfo object, containing the directive and field info of the field where the filter is to be applied. location: Location where this filter is used. context: dict, various per-compilation data (e.g. declared tags, whether the current block is optional, etc.). May be mutated in-place in this function! parameters: list of 1 element, containing the value to check the name or alias against; if the parameter is optional and missing, the check will return True Returns: a Filter basic block that performs the check against the name or alias
Below is the instruction that describes the task: ### Input: Return a Filter basic block that checks for a match against an Entity's name or alias. Args: filter_operation_info: FilterOperationInfo object, containing the directive and field info of the field where the filter is to be applied. location: Location where this filter is used. context: dict, various per-compilation data (e.g. declared tags, whether the current block is optional, etc.). May be mutated in-place in this function! parameters: list of 1 element, containing the value to check the name or alias against; if the parameter is optional and missing, the check will return True Returns: a Filter basic block that performs the check against the name or alias ### Response: def _process_name_or_alias_filter_directive(filter_operation_info, location, context, parameters): """Return a Filter basic block that checks for a match against an Entity's name or alias. Args: filter_operation_info: FilterOperationInfo object, containing the directive and field info of the field where the filter is to be applied. location: Location where this filter is used. context: dict, various per-compilation data (e.g. declared tags, whether the current block is optional, etc.). May be mutated in-place in this function! parameters: list of 1 element, containing the value to check the name or alias against; if the parameter is optional and missing, the check will return True Returns: a Filter basic block that performs the check against the name or alias """ filtered_field_type = filter_operation_info.field_type if isinstance(filtered_field_type, GraphQLUnionType): raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to union type ' u'{}'.format(filtered_field_type)) current_type_fields = filtered_field_type.fields name_field = current_type_fields.get('name', None) alias_field = current_type_fields.get('alias', None) if not name_field or not alias_field: raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to type {} because it lacks a ' u'"name" or "alias" field.'.format(filtered_field_type)) name_field_type = strip_non_null_from_type(name_field.type) alias_field_type = strip_non_null_from_type(alias_field.type) if not isinstance(name_field_type, GraphQLScalarType): raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to type {} because its "name" ' u'field is not a scalar.'.format(filtered_field_type)) if not isinstance(alias_field_type, GraphQLList): raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to type {} because its ' u'"alias" field is not a list.'.format(filtered_field_type)) alias_field_inner_type = strip_non_null_from_type(alias_field_type.of_type) if alias_field_inner_type != name_field_type: raise GraphQLCompilationError(u'Cannot apply "name_or_alias" to type {} because the ' u'"name" field and the inner type of the "alias" field ' u'do not match: {} vs {}'.format(filtered_field_type, name_field_type, alias_field_inner_type)) argument_inferred_type = name_field_type argument_expression, non_existence_expression = _represent_argument( location, context, parameters[0], argument_inferred_type) check_against_name = expressions.BinaryComposition( u'=', expressions.LocalField('name'), argument_expression) check_against_alias = expressions.BinaryComposition( u'contains', expressions.LocalField('alias'), argument_expression) filter_predicate = expressions.BinaryComposition( u'||', check_against_name, check_against_alias) if non_existence_expression is not None: # The argument comes from an optional block and might not exist, # in which case the filter expression should evaluate to True. filter_predicate = expressions.BinaryComposition( u'||', non_existence_expression, filter_predicate) return blocks.Filter(filter_predicate)
def get_content(identifier, default=None): ''' Returns the DynamicContent instance for the given identifier. If no object is found, a new one will be created. :param identifier: String representing the unique identifier of a ``DynamicContent`` object. :param default: String that should be used in case that no matching ``DynamicContent`` object exists. ''' if default is None: default = '' try: return models.DynamicContent.objects.get(identifier=identifier) except models.DynamicContent.DoesNotExist: return models.DynamicContent.objects.create( identifier=identifier, content=default)
Returns the DynamicContent instance for the given identifier. If no object is found, a new one will be created. :param identifier: String representing the unique identifier of a ``DynamicContent`` object. :param default: String that should be used in case that no matching ``DynamicContent`` object exists.
Below is the instruction that describes the task: ### Input: Returns the DynamicContent instance for the given identifier. If no object is found, a new one will be created. :param identifier: String representing the unique identifier of a ``DynamicContent`` object. :param default: String that should be used in case that no matching ``DynamicContent`` object exists. ### Response: def get_content(identifier, default=None): ''' Returns the DynamicContent instance for the given identifier. If no object is found, a new one will be created. :param identifier: String representing the unique identifier of a ``DynamicContent`` object. :param default: String that should be used in case that no matching ``DynamicContent`` object exists. ''' if default is None: default = '' try: return models.DynamicContent.objects.get(identifier=identifier) except models.DynamicContent.DoesNotExist: return models.DynamicContent.objects.create( identifier=identifier, content=default)
def validate(source): """Validate a Wagon archive. Return True if succeeds, False otherwise. It also prints a list of all validation errors. This will test that some of the metadata is solid, that the required wheels are present within the archives and that the package is installable. Note that if the metadata file is corrupted, validation of the required wheels will be corrupted as well, since validation checks that the required wheels exist vs. the list of wheels supplied in the `wheels` key. """ _assert_virtualenv_is_installed() logger.info('Validating %s', source) processed_source = get_source(source) metadata = _get_metadata(processed_source) wheels_path = os.path.join(processed_source, DEFAULT_WHEELS_PATH) validation_errors = [] logger.debug('Verifying that all required files exist...') for wheel in metadata['wheels']: if not os.path.isfile(os.path.join(wheels_path, wheel)): validation_errors.append( '{0} is missing from the archive'.format(wheel)) logger.debug('Testing package installation...') tmpenv = _make_virtualenv() try: install(source=processed_source, venv=tmpenv) if not _check_installed(metadata['package_name'], tmpenv): validation_errors.append( '{0} failed to install (Reason unknown)'.format( metadata['package_name'])) finally: shutil.rmtree(tmpenv) if validation_errors: logger.info('Validation failed!') for error in validation_errors: logger.info(error) logger.info('Source can be found at: %s', processed_source) else: logger.info('Validation Passed!') if processed_source != source: shutil.rmtree(processed_source) return validation_errors
Validate a Wagon archive. Return True if succeeds, False otherwise. It also prints a list of all validation errors. This will test that some of the metadata is solid, that the required wheels are present within the archives and that the package is installable. Note that if the metadata file is corrupted, validation of the required wheels will be corrupted as well, since validation checks that the required wheels exist vs. the list of wheels supplied in the `wheels` key.
Below is the instruction that describes the task: ### Input: Validate a Wagon archive. Return True if succeeds, False otherwise. It also prints a list of all validation errors. This will test that some of the metadata is solid, that the required wheels are present within the archives and that the package is installable. Note that if the metadata file is corrupted, validation of the required wheels will be corrupted as well, since validation checks that the required wheels exist vs. the list of wheels supplied in the `wheels` key. ### Response: def validate(source): """Validate a Wagon archive. Return True if succeeds, False otherwise. It also prints a list of all validation errors. This will test that some of the metadata is solid, that the required wheels are present within the archives and that the package is installable. Note that if the metadata file is corrupted, validation of the required wheels will be corrupted as well, since validation checks that the required wheels exist vs. the list of wheels supplied in the `wheels` key. """ _assert_virtualenv_is_installed() logger.info('Validating %s', source) processed_source = get_source(source) metadata = _get_metadata(processed_source) wheels_path = os.path.join(processed_source, DEFAULT_WHEELS_PATH) validation_errors = [] logger.debug('Verifying that all required files exist...') for wheel in metadata['wheels']: if not os.path.isfile(os.path.join(wheels_path, wheel)): validation_errors.append( '{0} is missing from the archive'.format(wheel)) logger.debug('Testing package installation...') tmpenv = _make_virtualenv() try: install(source=processed_source, venv=tmpenv) if not _check_installed(metadata['package_name'], tmpenv): validation_errors.append( '{0} failed to install (Reason unknown)'.format( metadata['package_name'])) finally: shutil.rmtree(tmpenv) if validation_errors: logger.info('Validation failed!') for error in validation_errors: logger.info(error) logger.info('Source can be found at: %s', processed_source) else: logger.info('Validation Passed!') if processed_source != source: shutil.rmtree(processed_source) return validation_errors
def print_statements(self): """Print all extracted INDRA Statements.""" logger.info('--- Direct INDRA statements ----------') for i, stmt in enumerate(self.statements): logger.info("%s: %s" % (i, stmt)) logger.info('--- Indirect INDRA statements ----------') for i, stmt in enumerate(self.indirect_stmts): logger.info("%s: %s" % (i, stmt))
Print all extracted INDRA Statements.
Below is the instruction that describes the task: ### Input: Print all extracted INDRA Statements. ### Response: def print_statements(self): """Print all extracted INDRA Statements.""" logger.info('--- Direct INDRA statements ----------') for i, stmt in enumerate(self.statements): logger.info("%s: %s" % (i, stmt)) logger.info('--- Indirect INDRA statements ----------') for i, stmt in enumerate(self.indirect_stmts): logger.info("%s: %s" % (i, stmt))
def content(): """Helper method that returns just the content. This method was added so that the text could be reused in the dock_help module. .. versionadded:: 4.1.0 :returns: A message object without brand element. :rtype: safe.messaging.message.Message """ message = m.Message() paragraph = m.Paragraph( m.Image( 'file:///%s/img/screenshots/' 'field-mapping-tool-screenshot.png' % resources_path()), style_class='text-center' ) message.add(paragraph) paragraph = m.Paragraph(tr( 'This tool allows you to define field mappings to use for demographic ' 'breakdowns of your analysis results. You can activate the ' 'tool on the InaSAFE toolbar:'), m.Image( 'file:///%s/img/icons/' 'show-mapping-tool.svg' % resources_path(), **SMALL_ICON_STYLE), ) message.add(paragraph) message.add(field_mapping_help_content()) return message
Helper method that returns just the content. This method was added so that the text could be reused in the dock_help module. .. versionadded:: 4.1.0 :returns: A message object without brand element. :rtype: safe.messaging.message.Message
Below is the instruction that describes the task: ### Input: Helper method that returns just the content. This method was added so that the text could be reused in the dock_help module. .. versionadded:: 4.1.0 :returns: A message object without brand element. :rtype: safe.messaging.message.Message ### Response: def content(): """Helper method that returns just the content. This method was added so that the text could be reused in the dock_help module. .. versionadded:: 4.1.0 :returns: A message object without brand element. :rtype: safe.messaging.message.Message """ message = m.Message() paragraph = m.Paragraph( m.Image( 'file:///%s/img/screenshots/' 'field-mapping-tool-screenshot.png' % resources_path()), style_class='text-center' ) message.add(paragraph) paragraph = m.Paragraph(tr( 'This tool allows you to define field mappings to use for demographic ' 'breakdowns of your analysis results. You can activate the ' 'tool on the InaSAFE toolbar:'), m.Image( 'file:///%s/img/icons/' 'show-mapping-tool.svg' % resources_path(), **SMALL_ICON_STYLE), ) message.add(paragraph) message.add(field_mapping_help_content()) return message
def iterative_overlap_assembly( variant_sequences, min_overlap_size=MIN_VARIANT_SEQUENCE_ASSEMBLY_OVERLAP_SIZE): """ Assembles longer sequences from reads centered on a variant by between merging all pairs of overlapping sequences and collapsing shorter sequences onto every longer sequence which contains them. Returns a list of variant sequences, sorted by decreasing read support. """ if len(variant_sequences) <= 1: # if we don't have at least two sequences to start with then # skip the whole mess below return variant_sequences # reduce the number of inputs to the merge algorithm by first collapsing # shorter sequences onto the longer sequences which contain them n_before_collapse = len(variant_sequences) variant_sequences = collapse_substrings(variant_sequences) n_after_collapse = len(variant_sequences) logger.info( "Collapsed %d -> %d sequences", n_before_collapse, n_after_collapse) merged_variant_sequences = greedy_merge(variant_sequences, min_overlap_size) return list(sorted( merged_variant_sequences, key=lambda seq: -len(seq.reads)))
Assembles longer sequences from reads centered on a variant by between merging all pairs of overlapping sequences and collapsing shorter sequences onto every longer sequence which contains them. Returns a list of variant sequences, sorted by decreasing read support.
Below is the instruction that describes the task: ### Input: Assembles longer sequences from reads centered on a variant by between merging all pairs of overlapping sequences and collapsing shorter sequences onto every longer sequence which contains them. Returns a list of variant sequences, sorted by decreasing read support. ### Response: def iterative_overlap_assembly( variant_sequences, min_overlap_size=MIN_VARIANT_SEQUENCE_ASSEMBLY_OVERLAP_SIZE): """ Assembles longer sequences from reads centered on a variant by between merging all pairs of overlapping sequences and collapsing shorter sequences onto every longer sequence which contains them. Returns a list of variant sequences, sorted by decreasing read support. """ if len(variant_sequences) <= 1: # if we don't have at least two sequences to start with then # skip the whole mess below return variant_sequences # reduce the number of inputs to the merge algorithm by first collapsing # shorter sequences onto the longer sequences which contain them n_before_collapse = len(variant_sequences) variant_sequences = collapse_substrings(variant_sequences) n_after_collapse = len(variant_sequences) logger.info( "Collapsed %d -> %d sequences", n_before_collapse, n_after_collapse) merged_variant_sequences = greedy_merge(variant_sequences, min_overlap_size) return list(sorted( merged_variant_sequences, key=lambda seq: -len(seq.reads)))
def show(filename=None, *args, **kwargs): """ Show the current figure. Parameters ---------- filename : :obj:`str` filename to save the image to, for auto-saving """ if filename is None: plt.show(*args, **kwargs) else: plt.savefig(filename, *args, **kwargs)
Show the current figure. Parameters ---------- filename : :obj:`str` filename to save the image to, for auto-saving
Below is the instruction that describes the task: ### Input: Show the current figure. Parameters ---------- filename : :obj:`str` filename to save the image to, for auto-saving ### Response: def show(filename=None, *args, **kwargs): """ Show the current figure. Parameters ---------- filename : :obj:`str` filename to save the image to, for auto-saving """ if filename is None: plt.show(*args, **kwargs) else: plt.savefig(filename, *args, **kwargs)
def Get(self, interface_name, property_name): '''Standard D-Bus API for getting a property value''' self.log('Get %s.%s' % (interface_name, property_name)) if not interface_name: interface_name = self.interface try: return self.GetAll(interface_name)[property_name] except KeyError: raise dbus.exceptions.DBusException( 'no such property ' + property_name, name=self.interface + '.UnknownProperty')
Standard D-Bus API for getting a property value
Below is the instruction that describes the task: ### Input: Standard D-Bus API for getting a property value ### Response: def Get(self, interface_name, property_name): '''Standard D-Bus API for getting a property value''' self.log('Get %s.%s' % (interface_name, property_name)) if not interface_name: interface_name = self.interface try: return self.GetAll(interface_name)[property_name] except KeyError: raise dbus.exceptions.DBusException( 'no such property ' + property_name, name=self.interface + '.UnknownProperty')
def get_host_for_command(self, command, args): """Returns the host this command should be executed against.""" return self.get_host_for_key(self.get_key(command, args))
Returns the host this command should be executed against.
Below is the instruction that describes the task: ### Input: Returns the host this command should be executed against. ### Response: def get_host_for_command(self, command, args): """Returns the host this command should be executed against.""" return self.get_host_for_key(self.get_key(command, args))
def _save_customization(self, widgets): """ Save the complete customization to the activity. :param widgets: The complete set of widgets to be customized """ if len(widgets) > 0: # Get the current customization and only replace the 'ext' part of it customization = self.activity._json_data.get('customization', dict()) if customization: customization['ext'] = dict(widgets=widgets) else: customization = dict(ext=dict(widgets=widgets)) # Empty the customization if if the widgets list is empty else: customization = None # perform validation if customization: validate(customization, widgetconfig_json_schema) # Save to the activity and store the saved activity to self response = self._client._request("PUT", self._client._build_url("activity", activity_id=str(self.activity.id)), json=dict(customization=customization)) if response.status_code != requests.codes.ok: # pragma: no cover raise APIError("Could not save customization ({})".format(response)) else: # refresh the activity json self.activity = self._client.activity(pk=self.activity.id)
Save the complete customization to the activity. :param widgets: The complete set of widgets to be customized
Below is the instruction that describes the task: ### Input: Save the complete customization to the activity. :param widgets: The complete set of widgets to be customized ### Response: def _save_customization(self, widgets): """ Save the complete customization to the activity. :param widgets: The complete set of widgets to be customized """ if len(widgets) > 0: # Get the current customization and only replace the 'ext' part of it customization = self.activity._json_data.get('customization', dict()) if customization: customization['ext'] = dict(widgets=widgets) else: customization = dict(ext=dict(widgets=widgets)) # Empty the customization if if the widgets list is empty else: customization = None # perform validation if customization: validate(customization, widgetconfig_json_schema) # Save to the activity and store the saved activity to self response = self._client._request("PUT", self._client._build_url("activity", activity_id=str(self.activity.id)), json=dict(customization=customization)) if response.status_code != requests.codes.ok: # pragma: no cover raise APIError("Could not save customization ({})".format(response)) else: # refresh the activity json self.activity = self._client.activity(pk=self.activity.id)
def get_event_consumer(config, success_channel, error_channel, metrics, **kwargs): """Get a GPSEventConsumer client. A factory function that validates configuration, creates schema validator and parser clients, creates an auth and a pubsub client, and returns an event consumer (:interface:`gordon.interfaces. IRunnable` and :interface:`gordon.interfaces.IMessageHandler`) provider. Args: config (dict): Google Cloud Pub/Sub-related configuration. success_channel (asyncio.Queue): Queue to place a successfully consumed message to be further handled by the ``gordon`` core system. error_channel (asyncio.Queue): Queue to place a message met with errors to be further handled by the ``gordon`` core system. metrics (obj): :interface:`IMetricRelay` implementation. kwargs (dict): Additional keyword arguments to pass to the event consumer. Returns: A :class:`GPSEventConsumer` instance. """ builder = event_consumer.GPSEventConsumerBuilder( config, success_channel, error_channel, metrics, **kwargs) return builder.build_event_consumer()
Get a GPSEventConsumer client. A factory function that validates configuration, creates schema validator and parser clients, creates an auth and a pubsub client, and returns an event consumer (:interface:`gordon.interfaces. IRunnable` and :interface:`gordon.interfaces.IMessageHandler`) provider. Args: config (dict): Google Cloud Pub/Sub-related configuration. success_channel (asyncio.Queue): Queue to place a successfully consumed message to be further handled by the ``gordon`` core system. error_channel (asyncio.Queue): Queue to place a message met with errors to be further handled by the ``gordon`` core system. metrics (obj): :interface:`IMetricRelay` implementation. kwargs (dict): Additional keyword arguments to pass to the event consumer. Returns: A :class:`GPSEventConsumer` instance.
Below is the instruction that describes the task:
### Input:
Get a GPSEventConsumer client.

A factory function that validates configuration, creates schema
validator and parser clients, creates an auth and a pubsub client,
and returns an event consumer (:interface:`gordon.interfaces.
IRunnable` and :interface:`gordon.interfaces.IMessageHandler`)
provider.

Args:
    config (dict): Google Cloud Pub/Sub-related configuration.
    success_channel (asyncio.Queue): Queue to place a successfully
        consumed message to be further handled by the ``gordon``
        core system.
    error_channel (asyncio.Queue): Queue to place a message met
        with errors to be further handled by the ``gordon`` core
        system.
    metrics (obj): :interface:`IMetricRelay` implementation.
    kwargs (dict): Additional keyword arguments to pass to the
        event consumer.
Returns:
    A :class:`GPSEventConsumer` instance.
### Response:
def get_event_consumer(config, success_channel, error_channel, metrics, **kwargs):
    """Get a GPSEventConsumer client.

    A factory function that validates configuration, creates schema
    validator and parser clients, creates an auth and a pubsub client,
    and returns an event consumer (:interface:`gordon.interfaces.
    IRunnable` and :interface:`gordon.interfaces.IMessageHandler`)
    provider.

    Args:
        config (dict): Google Cloud Pub/Sub-related configuration.
        success_channel (asyncio.Queue): Queue to place a successfully
            consumed message to be further handled by the ``gordon``
            core system.
        error_channel (asyncio.Queue): Queue to place a message met
            with errors to be further handled by the ``gordon`` core
            system.
        metrics (obj): :interface:`IMetricRelay` implementation.
        kwargs (dict): Additional keyword arguments to pass to the
            event consumer.
    Returns:
        A :class:`GPSEventConsumer` instance.
    """
    builder = event_consumer.GPSEventConsumerBuilder(
        config, success_channel, error_channel, metrics, **kwargs)
    return builder.build_event_consumer()
def solve_discrete_lyapunov(A, B, max_it=50, method="doubling"): r""" Computes the solution to the discrete lyapunov equation .. math:: AXA' - X + B = 0 :math:`X` is computed by using a doubling algorithm. In particular, we iterate to convergence on :math:`X_j` with the following recursions for :math:`j = 1, 2, \dots` starting from :math:`X_0 = B`, :math:`a_0 = A`: .. math:: a_j = a_{j-1} a_{j-1} .. math:: X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}' Parameters ---------- A : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity B : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity max_it : scalar(int), optional(default=50) The maximum number of iterations method : string, optional(default="doubling") Describes the solution method to use. If it is "doubling" then uses the doubling algorithm to solve, if it is "bartels-stewart" then it uses scipy's implementation of the Bartels-Stewart approach. Returns ------- gamma1: array_like(float, ndim=2) Represents the value :math:`X` """ if method == "doubling": A, B = list(map(np.atleast_2d, [A, B])) alpha0 = A gamma0 = B diff = 5 n_its = 1 while diff > 1e-15: alpha1 = alpha0.dot(alpha0) gamma1 = gamma0 + np.dot(alpha0.dot(gamma0), alpha0.conjugate().T) diff = np.max(np.abs(gamma1 - gamma0)) alpha0 = alpha1 gamma0 = gamma1 n_its += 1 if n_its > max_it: msg = "Exceeded maximum iterations {}, check input matrics" raise ValueError(msg.format(n_its)) elif method == "bartels-stewart": gamma1 = sp_solve_discrete_lyapunov(A, B) else: msg = "Check your method input. Should be doubling or bartels-stewart" raise ValueError(msg) return gamma1
r""" Computes the solution to the discrete lyapunov equation .. math:: AXA' - X + B = 0 :math:`X` is computed by using a doubling algorithm. In particular, we iterate to convergence on :math:`X_j` with the following recursions for :math:`j = 1, 2, \dots` starting from :math:`X_0 = B`, :math:`a_0 = A`: .. math:: a_j = a_{j-1} a_{j-1} .. math:: X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}' Parameters ---------- A : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity B : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity max_it : scalar(int), optional(default=50) The maximum number of iterations method : string, optional(default="doubling") Describes the solution method to use. If it is "doubling" then uses the doubling algorithm to solve, if it is "bartels-stewart" then it uses scipy's implementation of the Bartels-Stewart approach. Returns ------- gamma1: array_like(float, ndim=2) Represents the value :math:`X`
Below is the the instruction that describes the task: ### Input: r""" Computes the solution to the discrete lyapunov equation .. math:: AXA' - X + B = 0 :math:`X` is computed by using a doubling algorithm. In particular, we iterate to convergence on :math:`X_j` with the following recursions for :math:`j = 1, 2, \dots` starting from :math:`X_0 = B`, :math:`a_0 = A`: .. math:: a_j = a_{j-1} a_{j-1} .. math:: X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}' Parameters ---------- A : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity B : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity max_it : scalar(int), optional(default=50) The maximum number of iterations method : string, optional(default="doubling") Describes the solution method to use. If it is "doubling" then uses the doubling algorithm to solve, if it is "bartels-stewart" then it uses scipy's implementation of the Bartels-Stewart approach. Returns ------- gamma1: array_like(float, ndim=2) Represents the value :math:`X` ### Response: def solve_discrete_lyapunov(A, B, max_it=50, method="doubling"): r""" Computes the solution to the discrete lyapunov equation .. math:: AXA' - X + B = 0 :math:`X` is computed by using a doubling algorithm. In particular, we iterate to convergence on :math:`X_j` with the following recursions for :math:`j = 1, 2, \dots` starting from :math:`X_0 = B`, :math:`a_0 = A`: .. math:: a_j = a_{j-1} a_{j-1} .. math:: X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}' Parameters ---------- A : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity B : array_like(float, ndim=2) An n x n matrix as described above. 
We assume in order for convergence that the eigenvalues of A have moduli bounded by unity max_it : scalar(int), optional(default=50) The maximum number of iterations method : string, optional(default="doubling") Describes the solution method to use. If it is "doubling" then uses the doubling algorithm to solve, if it is "bartels-stewart" then it uses scipy's implementation of the Bartels-Stewart approach. Returns ------- gamma1: array_like(float, ndim=2) Represents the value :math:`X` """ if method == "doubling": A, B = list(map(np.atleast_2d, [A, B])) alpha0 = A gamma0 = B diff = 5 n_its = 1 while diff > 1e-15: alpha1 = alpha0.dot(alpha0) gamma1 = gamma0 + np.dot(alpha0.dot(gamma0), alpha0.conjugate().T) diff = np.max(np.abs(gamma1 - gamma0)) alpha0 = alpha1 gamma0 = gamma1 n_its += 1 if n_its > max_it: msg = "Exceeded maximum iterations {}, check input matrics" raise ValueError(msg.format(n_its)) elif method == "bartels-stewart": gamma1 = sp_solve_discrete_lyapunov(A, B) else: msg = "Check your method input. Should be doubling or bartels-stewart" raise ValueError(msg) return gamma1
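The doubling recursion in the `solve_discrete_lyapunov` record is easy to check numerically. Below is a minimal standalone sketch (plain NumPy, no quantecon dependency; the function name is illustrative), verified against the defining equation :math:`AXA' - X + B = 0`:

```python
import numpy as np

# Minimal sketch of the doubling iteration described above.
def doubling_lyapunov(A, B, tol=1e-15, max_it=50):
    alpha = np.atleast_2d(np.asarray(A, dtype=float))
    gamma = np.atleast_2d(np.asarray(B, dtype=float))
    for _ in range(max_it):
        # X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}'
        gamma_next = gamma + alpha @ gamma @ alpha.conj().T
        # a_j = a_{j-1} a_{j-1}
        alpha = alpha @ alpha
        if np.max(np.abs(gamma_next - gamma)) < tol:
            return gamma_next
        gamma = gamma_next
    raise ValueError("doubling iteration did not converge")

A = np.array([[0.5, 0.1], [0.0, 0.3]])  # eigenvalues inside the unit circle
B = np.eye(2)
X = doubling_lyapunov(A, B)
# X should satisfy A X A' - X + B = 0 up to floating point
print(np.max(np.abs(A @ X @ A.T - X + B)))
```

Since `alpha` is squared at every step, its norm decays super-geometrically whenever the eigenvalues of `A` lie inside the unit circle, which is why only a handful of iterations are needed.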
def update_changes(changes, newtext, change):
    "decide whether to compact the newest change into the old last; return new change list. assumes changes is safe to mutate. note: newtext MUST be the result of applying change to changes, and is only passed to save doing the computation again."
    # the criteria for a new version are:
    # 1. mode change (modes are adding to end, deleting from end, internal edits)
    # 2. length changed by more than 256 chars (why power of 2? why not)
    # 3. time delta > COMPACTION_TIME_THRESH
    if not changes:
        return [change]
    # todo(awinter): needs test case
    if change.utc - changes[-1].utc > COMPACTION_TIME_THRESH:
        changes.append(change)
        return changes
    base = reduce(apply_change, changes[:-1], '')
    final = apply_change(base, changes[-1])
    prev_mode = detect_change_mode(base, changes[-1])
    cur_mode = detect_change_mode(final, change)
    if prev_mode == cur_mode and abs(len(newtext) - len(final)) < COMPACTION_LEN_THRESH:
        changes[-1] = mkchange(base, newtext, change.version, change.utc)
    else:
        changes.append(change)
    return changes
decide whether to compact the newest change into the old last; return new change list. assumes changes is safe to mutate.\ note: newtext MUST be the result of applying change to changes, and is only passed to save doing the computation again.
Below is the instruction that describes the task:
### Input:
decide whether to compact the newest change into the old last; return new change list. assumes changes is safe to mutate.
note: newtext MUST be the result of applying change to changes, and is only passed to save doing the computation again.
### Response:
def update_changes(changes, newtext, change):
    "decide whether to compact the newest change into the old last; return new change list. assumes changes is safe to mutate. note: newtext MUST be the result of applying change to changes, and is only passed to save doing the computation again."
    # the criteria for a new version are:
    # 1. mode change (modes are adding to end, deleting from end, internal edits)
    # 2. length changed by more than 256 chars (why power of 2? why not)
    # 3. time delta > COMPACTION_TIME_THRESH
    if not changes:
        return [change]
    # todo(awinter): needs test case
    if change.utc - changes[-1].utc > COMPACTION_TIME_THRESH:
        changes.append(change)
        return changes
    base = reduce(apply_change, changes[:-1], '')
    final = apply_change(base, changes[-1])
    prev_mode = detect_change_mode(base, changes[-1])
    cur_mode = detect_change_mode(final, change)
    if prev_mode == cur_mode and abs(len(newtext) - len(final)) < COMPACTION_LEN_THRESH:
        changes[-1] = mkchange(base, newtext, change.version, change.utc)
    else:
        changes.append(change)
    return changes
def V_vertical_conical(D, a, h): r'''Calculates volume of a vertical tank with a convex conical bottom, according to [1]_. No provision for the top of the tank is made here. .. math:: V_f = \frac{\pi}{4}\left(\frac{Dh}{a}\right)^2\left(\frac{h}{3}\right),\; h < a .. math:: V_f = \frac{\pi D^2}{4}\left(h - \frac{2a}{3}\right),\; h\ge a Parameters ---------- D : float Diameter of the main cylindrical section, [m] a : float Distance the cone head extends under the main cylinder, [m] h : float Height, as measured up to where the fluid ends, [m] Returns ------- V : float Volume [m^3] Examples -------- Matching example from [1]_, with inputs in inches and volume in gallons. >>> V_vertical_conical(132., 33., 24)/231. 250.67461381371024 References ---------- .. [1] Jones, D. "Calculating Tank Volume." Text. Accessed December 22, 2015. http://www.webcalc.com.br/blog/Tank_Volume.PDF''' if h < a: Vf = pi/4*(D*h/a)**2*(h/3.) else: Vf = pi*D**2/4*(h - 2*a/3.) return Vf
r'''Calculates volume of a vertical tank with a convex conical bottom, according to [1]_. No provision for the top of the tank is made here. .. math:: V_f = \frac{\pi}{4}\left(\frac{Dh}{a}\right)^2\left(\frac{h}{3}\right),\; h < a .. math:: V_f = \frac{\pi D^2}{4}\left(h - \frac{2a}{3}\right),\; h\ge a Parameters ---------- D : float Diameter of the main cylindrical section, [m] a : float Distance the cone head extends under the main cylinder, [m] h : float Height, as measured up to where the fluid ends, [m] Returns ------- V : float Volume [m^3] Examples -------- Matching example from [1]_, with inputs in inches and volume in gallons. >>> V_vertical_conical(132., 33., 24)/231. 250.67461381371024 References ---------- .. [1] Jones, D. "Calculating Tank Volume." Text. Accessed December 22, 2015. http://www.webcalc.com.br/blog/Tank_Volume.PDF
Below is the instruction that describes the task:
### Input:
r'''Calculates volume of a vertical tank with a convex conical bottom,
according to [1]_. No provision for the top of the tank is made here.

.. math::
    V_f = \frac{\pi}{4}\left(\frac{Dh}{a}\right)^2\left(\frac{h}{3}\right),\; h < a

.. math::
    V_f = \frac{\pi D^2}{4}\left(h - \frac{2a}{3}\right),\; h\ge a

Parameters
----------
D : float
    Diameter of the main cylindrical section, [m]
a : float
    Distance the cone head extends under the main cylinder, [m]
h : float
    Height, as measured up to where the fluid ends, [m]

Returns
-------
V : float
    Volume [m^3]

Examples
--------
Matching example from [1]_, with inputs in inches and volume in gallons.

>>> V_vertical_conical(132., 33., 24)/231.
250.67461381371024

References
----------
.. [1] Jones, D. "Calculating Tank Volume." Text. Accessed December 22, 2015.
   http://www.webcalc.com.br/blog/Tank_Volume.PDF
### Response:
def V_vertical_conical(D, a, h):
    r'''Calculates volume of a vertical tank with a convex conical bottom,
    according to [1]_. No provision for the top of the tank is made here.

    .. math::
        V_f = \frac{\pi}{4}\left(\frac{Dh}{a}\right)^2\left(\frac{h}{3}\right),\; h < a

    .. math::
        V_f = \frac{\pi D^2}{4}\left(h - \frac{2a}{3}\right),\; h\ge a

    Parameters
    ----------
    D : float
        Diameter of the main cylindrical section, [m]
    a : float
        Distance the cone head extends under the main cylinder, [m]
    h : float
        Height, as measured up to where the fluid ends, [m]

    Returns
    -------
    V : float
        Volume [m^3]

    Examples
    --------
    Matching example from [1]_, with inputs in inches and volume in gallons.

    >>> V_vertical_conical(132., 33., 24)/231.
    250.67461381371024

    References
    ----------
    .. [1] Jones, D. "Calculating Tank Volume." Text. Accessed December 22, 2015.
       http://www.webcalc.com.br/blog/Tank_Volume.PDF'''
    if h < a:
        Vf = pi/4*(D*h/a)**2*(h/3.)
    else:
        Vf = pi*D**2/4*(h - 2*a/3.)
    return Vf
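The piecewise formula in the `V_vertical_conical` record can be checked directly against its own docstring example. A standalone restatement (the function name is lowercased here only to avoid clashing with the original):

```python
from math import pi

# The piecewise tank-volume formula from the record above:
#   h <  a:  V = pi/4 * (D*h/a)^2 * (h/3)     (fluid still inside the cone)
#   h >= a:  V = pi*D^2/4 * (h - 2a/3)        (cone full, cylinder filling)
def v_vertical_conical(D, a, h):
    if h < a:
        return pi / 4 * (D * h / a) ** 2 * (h / 3)
    return pi * D ** 2 / 4 * (h - 2 * a / 3)

# Docstring example: inputs in inches, divided by 231 in^3/gal for gallons
v_gal = v_vertical_conical(132.0, 33.0, 24.0) / 231.0
print(v_gal)  # 250.67461381371024, matching the docstring
```

The second branch can be sanity-checked too: with `D = 2, a = 1, h = 1` the tank holds exactly `pi/3` cubic units.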
def getSiblings(self, retracted=False): """ Return the list of duplicate analyses that share the same Request and are included in the same Worksheet as the current analysis. The current duplicate is excluded from the list. :param retracted: If false, retracted/rejected siblings are dismissed :type retracted: bool :return: list of siblings for this analysis :rtype: list of IAnalysis """ worksheet = self.getWorksheet() requestuid = self.getRequestUID() if not requestuid or not worksheet: return [] siblings = [] retracted_states = [STATE_RETRACTED, STATE_REJECTED] analyses = worksheet.getAnalyses() for analysis in analyses: if analysis.UID() == self.UID(): # Exclude me from the list continue if not IRequestAnalysis.providedBy(analysis): # Exclude analyses that do not have an analysis request # associated continue if analysis.getRequestUID() != requestuid: # Exclude those analyses that does not belong to the same # analysis request I belong to continue if retracted is False and in_state(analysis, retracted_states): # Exclude retracted analyses continue siblings.append(analysis) return siblings
Return the list of duplicate analyses that share the same Request and are included in the same Worksheet as the current analysis. The current duplicate is excluded from the list. :param retracted: If false, retracted/rejected siblings are dismissed :type retracted: bool :return: list of siblings for this analysis :rtype: list of IAnalysis
Below is the instruction that describes the task:
### Input:
Return the list of duplicate analyses that share the same Request and
are included in the same Worksheet as the current analysis. The current
duplicate is excluded from the list.

:param retracted: If false, retracted/rejected siblings are dismissed
:type retracted: bool
:return: list of siblings for this analysis
:rtype: list of IAnalysis
### Response:
def getSiblings(self, retracted=False):
    """
    Return the list of duplicate analyses that share the same Request and
    are included in the same Worksheet as the current analysis. The current
    duplicate is excluded from the list.

    :param retracted: If false, retracted/rejected siblings are dismissed
    :type retracted: bool
    :return: list of siblings for this analysis
    :rtype: list of IAnalysis
    """
    worksheet = self.getWorksheet()
    requestuid = self.getRequestUID()
    if not requestuid or not worksheet:
        return []

    siblings = []
    retracted_states = [STATE_RETRACTED, STATE_REJECTED]
    analyses = worksheet.getAnalyses()
    for analysis in analyses:
        if analysis.UID() == self.UID():
            # Exclude me from the list
            continue
        if not IRequestAnalysis.providedBy(analysis):
            # Exclude analyses that do not have an analysis request
            # associated
            continue
        if analysis.getRequestUID() != requestuid:
            # Exclude those analyses that do not belong to the same
            # analysis request I belong to
            continue
        if retracted is False and in_state(analysis, retracted_states):
            # Exclude retracted analyses
            continue
        siblings.append(analysis)

    return siblings
def _build_folder_tree(top_abspath, followsymlinks, file_filter): """ Build a tree of LocalFolder with children based on a path. :param top_abspath: str path to a directory to walk :param followsymlinks: bool should we follow symlinks when walking :param file_filter: FileFilter: include method returns True if we should include a file/folder :return: the top node of the tree LocalFolder """ path_to_content = {} child_to_parent = {} ignore_file_patterns = IgnoreFilePatterns(file_filter) ignore_file_patterns.load_directory(top_abspath, followsymlinks) for dir_name, child_dirs, child_files in os.walk(top_abspath, followlinks=followsymlinks): abspath = os.path.abspath(dir_name) folder = LocalFolder(abspath) path_to_content[abspath] = folder # If we have a parent add us to it. parent_path = child_to_parent.get(abspath) if parent_path: path_to_content[parent_path].add_child(folder) remove_child_dirs = [] for child_dir in child_dirs: # Record dir_name as the parent of child_dir so we can call add_child when get to it. abs_child_path = os.path.abspath(os.path.join(dir_name, child_dir)) if ignore_file_patterns.include(abs_child_path, is_file=False): child_to_parent[abs_child_path] = abspath else: remove_child_dirs.append(child_dir) for remove_child_dir in remove_child_dirs: child_dirs.remove(remove_child_dir) for child_filename in child_files: abs_child_filename = os.path.join(dir_name, child_filename) if ignore_file_patterns.include(abs_child_filename, is_file=True): folder.add_child(LocalFile(abs_child_filename)) return path_to_content.get(top_abspath)
Build a tree of LocalFolder with children based on a path. :param top_abspath: str path to a directory to walk :param followsymlinks: bool should we follow symlinks when walking :param file_filter: FileFilter: include method returns True if we should include a file/folder :return: the top node of the tree LocalFolder
Below is the instruction that describes the task:
### Input:
Build a tree of LocalFolder with children based on a path.

:param top_abspath: str path to a directory to walk
:param followsymlinks: bool should we follow symlinks when walking
:param file_filter: FileFilter: include method returns True if we should include a file/folder
:return: the top node of the tree LocalFolder
### Response:
def _build_folder_tree(top_abspath, followsymlinks, file_filter):
    """
    Build a tree of LocalFolder with children based on a path.

    :param top_abspath: str path to a directory to walk
    :param followsymlinks: bool should we follow symlinks when walking
    :param file_filter: FileFilter: include method returns True if we should include a file/folder
    :return: the top node of the tree LocalFolder
    """
    path_to_content = {}
    child_to_parent = {}
    ignore_file_patterns = IgnoreFilePatterns(file_filter)
    ignore_file_patterns.load_directory(top_abspath, followsymlinks)
    for dir_name, child_dirs, child_files in os.walk(top_abspath, followlinks=followsymlinks):
        abspath = os.path.abspath(dir_name)
        folder = LocalFolder(abspath)
        path_to_content[abspath] = folder
        # If we have a parent add us to it.
        parent_path = child_to_parent.get(abspath)
        if parent_path:
            path_to_content[parent_path].add_child(folder)
        remove_child_dirs = []
        for child_dir in child_dirs:
            # Record dir_name as the parent of child_dir so we can call add_child when we get to it.
            abs_child_path = os.path.abspath(os.path.join(dir_name, child_dir))
            if ignore_file_patterns.include(abs_child_path, is_file=False):
                child_to_parent[abs_child_path] = abspath
            else:
                remove_child_dirs.append(child_dir)
        for remove_child_dir in remove_child_dirs:
            child_dirs.remove(remove_child_dir)
        for child_filename in child_files:
            abs_child_filename = os.path.join(dir_name, child_filename)
            if ignore_file_patterns.include(abs_child_filename, is_file=True):
                folder.add_child(LocalFile(abs_child_filename))
    return path_to_content.get(top_abspath)
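The `_build_folder_tree` record relies on a walk-and-link pattern: record each child's parent while visiting a directory, then attach the child node to that parent when `os.walk` reaches it. A pared-down runnable sketch of just that pattern, without the symlink and ignore-pattern handling (plain dicts stand in for `LocalFolder`/`LocalFile`):

```python
import os
import tempfile

# Simplified walk-and-link: nodes maps abspath -> node dict,
# parent_of maps a child dir's abspath -> its parent's abspath.
def build_tree(top):
    top = os.path.abspath(top)
    nodes = {}
    parent_of = {}
    for dir_name, child_dirs, child_files in os.walk(top):
        abspath = os.path.abspath(dir_name)
        node = {"path": abspath, "children": []}
        nodes[abspath] = node
        # If we recorded a parent for this directory, attach to it.
        if abspath in parent_of:
            nodes[parent_of[abspath]]["children"].append(node)
        # Record ourselves as the parent of each child dir for later.
        for d in child_dirs:
            parent_of[os.path.abspath(os.path.join(dir_name, d))] = abspath
        for f in child_files:
            node["children"].append({"path": os.path.join(abspath, f), "children": []})
    return nodes.get(top)

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "sub"))
    open(os.path.join(tmp, "sub", "a.txt"), "w").close()
    tree = build_tree(tmp)
    print([c["path"] for c in tree["children"]])  # one entry, the 'sub' dir
```

The two-pass bookkeeping works because `os.walk` (top-down) always visits a parent directory before its children, so the parent node already exists when a child is reached.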
def generate(self,rprs=None, mass=None, radius=None, n=2e4, fp_specific=0.01, u1=None, u2=None, starmodel=None, Teff=None, logg=None, rbin_width=0.3, MAfn=None, lhoodcachefile=None): """Generates Population All arguments defined in ``__init__``. """ n = int(n) if starmodel is None: if type(mass) is type((1,)): mass = dists.Gaussian_Distribution(*mass) if isinstance(mass, dists.Distribution): mdist = mass mass = mdist.rvs(1e5) if type(radius) is type((1,)): radius = dists.Gaussian_Distribution(*radius) if isinstance(radius, dists.Distribution): rdist = radius radius = rdist.rvs(1e5) else: samples = starmodel.random_samples(1e5) mass = samples['mass_0_0'].values radius = samples['radius_0_0'].values Teff = samples['Teff_0_0'].mean() logg = samples['logg_0_0'].mean() logging.debug('star mass: {}'.format(mass)) logging.debug('star radius: {}'.format(radius)) logging.debug('Teff: {}'.format(Teff)) logging.debug('logg: {}'.format(logg)) if u1 is None or u2 is None: if Teff is None or logg is None: logging.warning('Teff, logg not provided; using solar limb darkening') u1 = 0.394; u2=0.296 else: u1,u2 = ldcoeffs(Teff, logg) #use point estimate of rprs to construct planets in radius bin #rp = self.rprs*np.median(radius) #rbin_min = (1-rbin_width)*rp #rbin_max = (1+rbin_width)*rp rprs_bin_min = (1-rbin_width)*self.rprs rprs_bin_max = (1+rbin_width)*self.rprs radius_p = radius * (np.random.random(int(1e5))*(rprs_bin_max - rprs_bin_min) + rprs_bin_min) mass_p = (radius_p*RSUN/REARTH)**2.06 * MEARTH/MSUN #hokey, but doesn't matter logging.debug('planet radius: {}'.format(radius_p)) stars = pd.DataFrame() #df_orbpop = pd.DataFrame() #for orbit population tot_prob = None; tot_dprob = None; prob_norm = None n_adapt = n while len(stars) < n: n_adapt = int(n_adapt) inds = np.random.randint(len(mass), size=n_adapt) #calculate eclipses. 
ecl_inds, df, (prob,dprob) = calculate_eclipses(mass[inds], mass_p[inds], radius[inds], radius_p[inds], 15, np.inf, #arbitrary u11s=u1, u21s=u2, band=self.band, period=self.period, calc_mininc=True, return_indices=True, MAfn=MAfn) df['mass_A'] = mass[inds][ecl_inds] df['mass_B'] = mass_p[inds][ecl_inds] df['radius_A'] = radius[inds][ecl_inds] df['radius_B'] = radius_p[inds][ecl_inds] df['u1'] = u1 * np.ones_like(df['mass_A']) df['u2'] = u2 * np.ones_like(df['mass_A']) df['P'] = self.period * np.ones_like(df['mass_A']) ok = (df['dpri']>0) & (df['T14_pri'] > 0) stars = pd.concat((stars, df[ok])) logging.info('{} Transiting planet systems generated (target {})'.format(len(stars),n)) logging.debug('{} nans in stars[dpri]'.format(np.isnan(stars['dpri']).sum())) if tot_prob is None: prob_norm = (1/dprob**2) tot_prob = prob tot_dprob = dprob else: prob_norm = (1/tot_dprob**2 + 1/dprob**2) tot_prob = (tot_prob/tot_dprob**2 + prob/dprob**2)/prob_norm tot_dprob = 1/np.sqrt(prob_norm) n_adapt = min(int(1.2*(n-len(stars)) * n_adapt//len(df)), 5e4) n_adapt = max(n_adapt, 100) stars = stars.reset_index() stars.drop('index', axis=1, inplace=True) stars = stars.iloc[:n] stars['mass_1'] = stars['mass_A'] stars['radius_1'] = stars['radius_A'] stars['mass_2'] = stars['mass_B'] stars['radius_2'] = stars['radius_B'] #make OrbitPopulation? #finish below. if fp_specific is None: rp = stars['radius_2'].mean() * RSUN/REARTH fp_specific = fp_fressin(rp) priorfactors = {'fp_specific':fp_specific} self._starmodel = starmodel EclipsePopulation.__init__(self, stars=stars, period=self.period, cadence=self.cadence, model=self.model, priorfactors=priorfactors, prob=tot_prob, lhoodcachefile=lhoodcachefile)
Generates Population All arguments defined in ``__init__``.
Below is the the instruction that describes the task: ### Input: Generates Population All arguments defined in ``__init__``. ### Response: def generate(self,rprs=None, mass=None, radius=None, n=2e4, fp_specific=0.01, u1=None, u2=None, starmodel=None, Teff=None, logg=None, rbin_width=0.3, MAfn=None, lhoodcachefile=None): """Generates Population All arguments defined in ``__init__``. """ n = int(n) if starmodel is None: if type(mass) is type((1,)): mass = dists.Gaussian_Distribution(*mass) if isinstance(mass, dists.Distribution): mdist = mass mass = mdist.rvs(1e5) if type(radius) is type((1,)): radius = dists.Gaussian_Distribution(*radius) if isinstance(radius, dists.Distribution): rdist = radius radius = rdist.rvs(1e5) else: samples = starmodel.random_samples(1e5) mass = samples['mass_0_0'].values radius = samples['radius_0_0'].values Teff = samples['Teff_0_0'].mean() logg = samples['logg_0_0'].mean() logging.debug('star mass: {}'.format(mass)) logging.debug('star radius: {}'.format(radius)) logging.debug('Teff: {}'.format(Teff)) logging.debug('logg: {}'.format(logg)) if u1 is None or u2 is None: if Teff is None or logg is None: logging.warning('Teff, logg not provided; using solar limb darkening') u1 = 0.394; u2=0.296 else: u1,u2 = ldcoeffs(Teff, logg) #use point estimate of rprs to construct planets in radius bin #rp = self.rprs*np.median(radius) #rbin_min = (1-rbin_width)*rp #rbin_max = (1+rbin_width)*rp rprs_bin_min = (1-rbin_width)*self.rprs rprs_bin_max = (1+rbin_width)*self.rprs radius_p = radius * (np.random.random(int(1e5))*(rprs_bin_max - rprs_bin_min) + rprs_bin_min) mass_p = (radius_p*RSUN/REARTH)**2.06 * MEARTH/MSUN #hokey, but doesn't matter logging.debug('planet radius: {}'.format(radius_p)) stars = pd.DataFrame() #df_orbpop = pd.DataFrame() #for orbit population tot_prob = None; tot_dprob = None; prob_norm = None n_adapt = n while len(stars) < n: n_adapt = int(n_adapt) inds = np.random.randint(len(mass), size=n_adapt) #calculate eclipses. 
ecl_inds, df, (prob,dprob) = calculate_eclipses(mass[inds], mass_p[inds], radius[inds], radius_p[inds], 15, np.inf, #arbitrary u11s=u1, u21s=u2, band=self.band, period=self.period, calc_mininc=True, return_indices=True, MAfn=MAfn) df['mass_A'] = mass[inds][ecl_inds] df['mass_B'] = mass_p[inds][ecl_inds] df['radius_A'] = radius[inds][ecl_inds] df['radius_B'] = radius_p[inds][ecl_inds] df['u1'] = u1 * np.ones_like(df['mass_A']) df['u2'] = u2 * np.ones_like(df['mass_A']) df['P'] = self.period * np.ones_like(df['mass_A']) ok = (df['dpri']>0) & (df['T14_pri'] > 0) stars = pd.concat((stars, df[ok])) logging.info('{} Transiting planet systems generated (target {})'.format(len(stars),n)) logging.debug('{} nans in stars[dpri]'.format(np.isnan(stars['dpri']).sum())) if tot_prob is None: prob_norm = (1/dprob**2) tot_prob = prob tot_dprob = dprob else: prob_norm = (1/tot_dprob**2 + 1/dprob**2) tot_prob = (tot_prob/tot_dprob**2 + prob/dprob**2)/prob_norm tot_dprob = 1/np.sqrt(prob_norm) n_adapt = min(int(1.2*(n-len(stars)) * n_adapt//len(df)), 5e4) n_adapt = max(n_adapt, 100) stars = stars.reset_index() stars.drop('index', axis=1, inplace=True) stars = stars.iloc[:n] stars['mass_1'] = stars['mass_A'] stars['radius_1'] = stars['radius_A'] stars['mass_2'] = stars['mass_B'] stars['radius_2'] = stars['radius_B'] #make OrbitPopulation? #finish below. if fp_specific is None: rp = stars['radius_2'].mean() * RSUN/REARTH fp_specific = fp_fressin(rp) priorfactors = {'fp_specific':fp_specific} self._starmodel = starmodel EclipsePopulation.__init__(self, stars=stars, period=self.period, cadence=self.cadence, model=self.model, priorfactors=priorfactors, prob=tot_prob, lhoodcachefile=lhoodcachefile)
def add_file_group(self, fileGrp):
    """
    Add a new ``mets:fileGrp``.

    Arguments:
        fileGrp (string): ``USE`` attribute of the new filegroup.
    """
    el_fileSec = self._tree.getroot().find('mets:fileSec', NS)
    if el_fileSec is None:
        el_fileSec = ET.SubElement(self._tree.getroot(), TAG_METS_FILESEC)
    el_fileGrp = el_fileSec.find('mets:fileGrp[@USE="%s"]' % fileGrp, NS)
    if el_fileGrp is None:
        el_fileGrp = ET.SubElement(el_fileSec, TAG_METS_FILEGRP)
        el_fileGrp.set('USE', fileGrp)
    return el_fileGrp
Add a new ``mets:fileGrp``. Arguments: fileGrp (string): ``USE`` attribute of the new filegroup.
Below is the instruction that describes the task:
### Input:
Add a new ``mets:fileGrp``.

Arguments:
    fileGrp (string): ``USE`` attribute of the new filegroup.
### Response:
def add_file_group(self, fileGrp):
    """
    Add a new ``mets:fileGrp``.

    Arguments:
        fileGrp (string): ``USE`` attribute of the new filegroup.
    """
    el_fileSec = self._tree.getroot().find('mets:fileSec', NS)
    if el_fileSec is None:
        el_fileSec = ET.SubElement(self._tree.getroot(), TAG_METS_FILESEC)
    el_fileGrp = el_fileSec.find('mets:fileGrp[@USE="%s"]' % fileGrp, NS)
    if el_fileGrp is None:
        el_fileGrp = ET.SubElement(el_fileSec, TAG_METS_FILEGRP)
        el_fileGrp.set('USE', fileGrp)
    return el_fileGrp
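The `add_file_group` record is a find-or-create pattern over an XML tree: look the element up first, and only create it when missing, so repeated calls with the same `USE` are idempotent. A standalone sketch with `xml.etree.ElementTree` (the namespace URI and tag constants below mirror what the record's `NS`/`TAG_METS_*` names are assumed to hold; the sample `USE` value is illustrative):

```python
import xml.etree.ElementTree as ET

# Assumed values for the NS dict and tag constants used in the record.
NS = {"mets": "http://www.loc.gov/METS/"}
TAG_METS_FILESEC = "{http://www.loc.gov/METS/}fileSec"
TAG_METS_FILEGRP = "{http://www.loc.gov/METS/}fileGrp"

def add_file_group(tree, use):
    root = tree.getroot()
    # find-or-create the fileSec container
    el_filesec = root.find("mets:fileSec", NS)
    if el_filesec is None:
        el_filesec = ET.SubElement(root, TAG_METS_FILESEC)
    # find-or-create the fileGrp with this USE attribute
    el_filegrp = el_filesec.find('mets:fileGrp[@USE="%s"]' % use, NS)
    if el_filegrp is None:
        el_filegrp = ET.SubElement(el_filesec, TAG_METS_FILEGRP)
        el_filegrp.set("USE", use)
    return el_filegrp

tree = ET.ElementTree(ET.Element("{http://www.loc.gov/METS/}mets"))
g1 = add_file_group(tree, "OCR-D-IMG")
g2 = add_file_group(tree, "OCR-D-IMG")  # second call returns the same element
print(g1 is g2)  # True
```

Note that `ElementTree`'s limited XPath support is enough here: a namespaced tag plus an `[@USE="…"]` attribute predicate.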
def set_request_args(self, args):
    """ Set the Limit parameter into the request args """
    if self.scan_limit is not None:
        args['Limit'] = self.scan_limit
    elif self.item_limit is not None:
        args['Limit'] = max(self.item_limit, self.min_scan_limit)
    else:
        args.pop('Limit', None)
Set the Limit parameter into the request args
Below is the instruction that describes the task:
### Input:
Set the Limit parameter into the request args
### Response:
def set_request_args(self, args):
    """ Set the Limit parameter into the request args """
    if self.scan_limit is not None:
        args['Limit'] = self.scan_limit
    elif self.item_limit is not None:
        args['Limit'] = max(self.item_limit, self.min_scan_limit)
    else:
        args.pop('Limit', None)
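The three branches in the `set_request_args` record (explicit scan limit wins, otherwise an item limit floored at the minimum scan size, otherwise drop `Limit`) are easiest to see with a tiny harness. The class below is a stand-in for the real paginator object; only the attribute names the method assumes are mirrored:

```python
# Stand-in object exposing the attributes set_request_args relies on.
class Limiter:
    def __init__(self, scan_limit=None, item_limit=None, min_scan_limit=20):
        self.scan_limit = scan_limit
        self.item_limit = item_limit
        self.min_scan_limit = min_scan_limit

    def set_request_args(self, args):
        if self.scan_limit is not None:
            args['Limit'] = self.scan_limit
        elif self.item_limit is not None:
            args['Limit'] = max(self.item_limit, self.min_scan_limit)
        else:
            args.pop('Limit', None)

args1 = {'Limit': 5}
Limiter().set_request_args(args1)
print(args1)  # {} -- no limits configured, so any stale 'Limit' is dropped

args2 = {}
Limiter(item_limit=3).set_request_args(args2)
print(args2)  # {'Limit': 20} -- item_limit is floored at min_scan_limit

args3 = {}
Limiter(scan_limit=7, item_limit=3).set_request_args(args3)
print(args3)  # {'Limit': 7} -- an explicit scan_limit takes precedence
```

Flooring `item_limit` at `min_scan_limit` avoids issuing many tiny scan pages when the caller only wants a few items back.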
def symbols(names, **args): """ Transform strings into instances of :class:`Symbol` class. :func:`symbols` function returns a sequence of symbols with names taken from ``names`` argument, which can be a comma or whitespace delimited string, or a sequence of strings:: >>> from symengine import symbols >>> x, y, z = symbols('x,y,z') >>> a, b, c = symbols('a b c') The type of output is dependent on the properties of input arguments:: >>> symbols('x') x >>> symbols('x,') (x,) >>> symbols('x,y') (x, y) >>> symbols(('a', 'b', 'c')) (a, b, c) >>> symbols(['a', 'b', 'c']) [a, b, c] >>> symbols(set(['a', 'b', 'c'])) set([a, b, c]) If an iterable container is needed for a single symbol, set the ``seq`` argument to ``True`` or terminate the symbol name with a comma:: >>> symbols('x', seq=True) (x,) To reduce typing, range syntax is supported to create indexed symbols. Ranges are indicated by a colon and the type of range is determined by the character to the right of the colon. If the character is a digit then all contiguous digits to the left are taken as the nonnegative starting value (or 0 if there is no digit left of the colon) and all contiguous digits to the right are taken as 1 greater than the ending value:: >>> symbols('x:10') (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9) >>> symbols('x5:10') (x5, x6, x7, x8, x9) >>> symbols('x5(:2)') (x50, x51) >>> symbols('x5:10,y:5') (x5, x6, x7, x8, x9, y0, y1, y2, y3, y4) >>> symbols(('x5:10', 'y:5')) ((x5, x6, x7, x8, x9), (y0, y1, y2, y3, y4)) If the character to the right of the colon is a letter, then the single letter to the left (or 'a' if there is none) is taken as the start and all characters in the lexicographic range *through* the letter to the right are used as the range:: >>> symbols('x:z') (x, y, z) >>> symbols('x:c') # null range () >>> symbols('x(:c)') (xa, xb, xc) >>> symbols(':c') (a, b, c) >>> symbols('a:d, x:z') (a, b, c, d, x, y, z) >>> symbols(('a:d', 'x:z')) ((a, b, c, d), (x, y, z)) Multiple ranges are 
supported; contiguous numerical ranges should be separated by parentheses to disambiguate the ending number of one range from the starting number of the next:: >>> symbols('x:2(1:3)') (x01, x02, x11, x12) >>> symbols(':3:2') # parsing is from left to right (00, 01, 10, 11, 20, 21) Only one pair of parentheses surrounding ranges are removed, so to include parentheses around ranges, double them. And to include spaces, commas, or colons, escape them with a backslash:: >>> symbols('x((a:b))') (x(a), x(b)) >>> symbols('x(:1\,:2)') # or 'x((:1)\,(:2))' (x(0,0), x(0,1)) """ result = [] if isinstance(names, string_types): marker = 0 literals = ['\,', '\:', '\ '] for i in range(len(literals)): lit = literals.pop(0) if lit in names: while chr(marker) in names: marker += 1 lit_char = chr(marker) marker += 1 names = names.replace(lit, lit_char) literals.append((lit_char, lit[1:])) def literal(s): if literals: for c, l in literals: s = s.replace(c, l) return s names = names.strip() as_seq = names.endswith(',') if as_seq: names = names[:-1].rstrip() if not names: raise ValueError('no symbols given') # split on commas names = [n.strip() for n in names.split(',')] if not all(n for n in names): raise ValueError('missing symbol between commas') # split on spaces for i in range(len(names) - 1, -1, -1): names[i: i + 1] = names[i].split() cls = args.pop('cls', Symbol) seq = args.pop('seq', as_seq) for name in names: if not name: raise ValueError('missing symbol') if ':' not in name: symbol = cls(literal(name), **args) result.append(symbol) continue split = _range.split(name) # remove 1 layer of bounding parentheses around ranges for i in range(len(split) - 1): if i and ':' in split[i] and split[i] != ':' and \ split[i - 1].endswith('(') and \ split[i + 1].startswith(')'): split[i - 1] = split[i - 1][:-1] split[i + 1] = split[i + 1][1:] for i, s in enumerate(split): if ':' in s: if s[-1].endswith(':'): raise ValueError('missing end range') a, b = s.split(':') if b[-1] in string.digits: 
a = 0 if not a else int(a) b = int(b) split[i] = [str(c) for c in range(a, b)] else: a = a or 'a' split[i] = [string.ascii_letters[c] for c in range( string.ascii_letters.index(a), string.ascii_letters.index(b) + 1)] # inclusive if not split[i]: break else: split[i] = [s] else: seq = True if len(split) == 1: names = split[0] else: names = [''.join(s) for s in cartes(*split)] if literals: result.extend([cls(literal(s), **args) for s in names]) else: result.extend([cls(s, **args) for s in names]) if not seq and len(result) <= 1: if not result: return () return result[0] return tuple(result) else: for name in names: result.append(symbols(name, **args)) return type(names)(result)
Transform strings into instances of :class:`Symbol` class. :func:`symbols` function returns a sequence of symbols with names taken from ``names`` argument, which can be a comma or whitespace delimited string, or a sequence of strings:: >>> from symengine import symbols >>> x, y, z = symbols('x,y,z') >>> a, b, c = symbols('a b c') The type of output is dependent on the properties of input arguments:: >>> symbols('x') x >>> symbols('x,') (x,) >>> symbols('x,y') (x, y) >>> symbols(('a', 'b', 'c')) (a, b, c) >>> symbols(['a', 'b', 'c']) [a, b, c] >>> symbols(set(['a', 'b', 'c'])) set([a, b, c]) If an iterable container is needed for a single symbol, set the ``seq`` argument to ``True`` or terminate the symbol name with a comma:: >>> symbols('x', seq=True) (x,) To reduce typing, range syntax is supported to create indexed symbols. Ranges are indicated by a colon and the type of range is determined by the character to the right of the colon. If the character is a digit then all contiguous digits to the left are taken as the nonnegative starting value (or 0 if there is no digit left of the colon) and all contiguous digits to the right are taken as 1 greater than the ending value:: >>> symbols('x:10') (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9) >>> symbols('x5:10') (x5, x6, x7, x8, x9) >>> symbols('x5(:2)') (x50, x51) >>> symbols('x5:10,y:5') (x5, x6, x7, x8, x9, y0, y1, y2, y3, y4) >>> symbols(('x5:10', 'y:5')) ((x5, x6, x7, x8, x9), (y0, y1, y2, y3, y4)) If the character to the right of the colon is a letter, then the single letter to the left (or 'a' if there is none) is taken as the start and all characters in the lexicographic range *through* the letter to the right are used as the range:: >>> symbols('x:z') (x, y, z) >>> symbols('x:c') # null range () >>> symbols('x(:c)') (xa, xb, xc) >>> symbols(':c') (a, b, c) >>> symbols('a:d, x:z') (a, b, c, d, x, y, z) >>> symbols(('a:d', 'x:z')) ((a, b, c, d), (x, y, z)) Multiple ranges are supported; contiguous numerical ranges 
should be separated by parentheses to disambiguate the ending number of one range from the starting number of the next:: >>> symbols('x:2(1:3)') (x01, x02, x11, x12) >>> symbols(':3:2') # parsing is from left to right (00, 01, 10, 11, 20, 21) Only one pair of parentheses surrounding ranges are removed, so to include parentheses around ranges, double them. And to include spaces, commas, or colons, escape them with a backslash:: >>> symbols('x((a:b))') (x(a), x(b)) >>> symbols('x(:1\,:2)') # or 'x((:1)\,(:2))' (x(0,0), x(0,1))
Below is the instruction that describes the task: ### Input: Transform strings into instances of :class:`Symbol` class. :func:`symbols` function returns a sequence of symbols with names taken from ``names`` argument, which can be a comma or whitespace delimited string, or a sequence of strings:: >>> from symengine import symbols >>> x, y, z = symbols('x,y,z') >>> a, b, c = symbols('a b c') The type of output is dependent on the properties of input arguments:: >>> symbols('x') x >>> symbols('x,') (x,) >>> symbols('x,y') (x, y) >>> symbols(('a', 'b', 'c')) (a, b, c) >>> symbols(['a', 'b', 'c']) [a, b, c] >>> symbols(set(['a', 'b', 'c'])) set([a, b, c]) If an iterable container is needed for a single symbol, set the ``seq`` argument to ``True`` or terminate the symbol name with a comma:: >>> symbols('x', seq=True) (x,) To reduce typing, range syntax is supported to create indexed symbols. Ranges are indicated by a colon and the type of range is determined by the character to the right of the colon.
If the character is a digit then all contiguous digits to the left are taken as the nonnegative starting value (or 0 if there is no digit left of the colon) and all contiguous digits to the right are taken as 1 greater than the ending value:: >>> symbols('x:10') (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9) >>> symbols('x5:10') (x5, x6, x7, x8, x9) >>> symbols('x5(:2)') (x50, x51) >>> symbols('x5:10,y:5') (x5, x6, x7, x8, x9, y0, y1, y2, y3, y4) >>> symbols(('x5:10', 'y:5')) ((x5, x6, x7, x8, x9), (y0, y1, y2, y3, y4)) If the character to the right of the colon is a letter, then the single letter to the left (or 'a' if there is none) is taken as the start and all characters in the lexicographic range *through* the letter to the right are used as the range:: >>> symbols('x:z') (x, y, z) >>> symbols('x:c') # null range () >>> symbols('x(:c)') (xa, xb, xc) >>> symbols(':c') (a, b, c) >>> symbols('a:d, x:z') (a, b, c, d, x, y, z) >>> symbols(('a:d', 'x:z')) ((a, b, c, d), (x, y, z)) Multiple ranges are supported; contiguous numerical ranges should be separated by parentheses to disambiguate the ending number of one range from the starting number of the next:: >>> symbols('x:2(1:3)') (x01, x02, x11, x12) >>> symbols(':3:2') # parsing is from left to right (00, 01, 10, 11, 20, 21) Only one pair of parentheses surrounding ranges are removed, so to include parentheses around ranges, double them. And to include spaces, commas, or colons, escape them with a backslash:: >>> symbols('x((a:b))') (x(a), x(b)) >>> symbols('x(:1\,:2)') # or 'x((:1)\,(:2))' (x(0,0), x(0,1)) ### Response: def symbols(names, **args): """ Transform strings into instances of :class:`Symbol` class. 
:func:`symbols` function returns a sequence of symbols with names taken from ``names`` argument, which can be a comma or whitespace delimited string, or a sequence of strings:: >>> from symengine import symbols >>> x, y, z = symbols('x,y,z') >>> a, b, c = symbols('a b c') The type of output is dependent on the properties of input arguments:: >>> symbols('x') x >>> symbols('x,') (x,) >>> symbols('x,y') (x, y) >>> symbols(('a', 'b', 'c')) (a, b, c) >>> symbols(['a', 'b', 'c']) [a, b, c] >>> symbols(set(['a', 'b', 'c'])) set([a, b, c]) If an iterable container is needed for a single symbol, set the ``seq`` argument to ``True`` or terminate the symbol name with a comma:: >>> symbols('x', seq=True) (x,) To reduce typing, range syntax is supported to create indexed symbols. Ranges are indicated by a colon and the type of range is determined by the character to the right of the colon. If the character is a digit then all contiguous digits to the left are taken as the nonnegative starting value (or 0 if there is no digit left of the colon) and all contiguous digits to the right are taken as 1 greater than the ending value:: >>> symbols('x:10') (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9) >>> symbols('x5:10') (x5, x6, x7, x8, x9) >>> symbols('x5(:2)') (x50, x51) >>> symbols('x5:10,y:5') (x5, x6, x7, x8, x9, y0, y1, y2, y3, y4) >>> symbols(('x5:10', 'y:5')) ((x5, x6, x7, x8, x9), (y0, y1, y2, y3, y4)) If the character to the right of the colon is a letter, then the single letter to the left (or 'a' if there is none) is taken as the start and all characters in the lexicographic range *through* the letter to the right are used as the range:: >>> symbols('x:z') (x, y, z) >>> symbols('x:c') # null range () >>> symbols('x(:c)') (xa, xb, xc) >>> symbols(':c') (a, b, c) >>> symbols('a:d, x:z') (a, b, c, d, x, y, z) >>> symbols(('a:d', 'x:z')) ((a, b, c, d), (x, y, z)) Multiple ranges are supported; contiguous numerical ranges should be separated by parentheses to disambiguate the 
ending number of one range from the starting number of the next:: >>> symbols('x:2(1:3)') (x01, x02, x11, x12) >>> symbols(':3:2') # parsing is from left to right (00, 01, 10, 11, 20, 21) Only one pair of parentheses surrounding ranges are removed, so to include parentheses around ranges, double them. And to include spaces, commas, or colons, escape them with a backslash:: >>> symbols('x((a:b))') (x(a), x(b)) >>> symbols('x(:1\,:2)') # or 'x((:1)\,(:2))' (x(0,0), x(0,1)) """ result = [] if isinstance(names, string_types): marker = 0 literals = ['\,', '\:', '\ '] for i in range(len(literals)): lit = literals.pop(0) if lit in names: while chr(marker) in names: marker += 1 lit_char = chr(marker) marker += 1 names = names.replace(lit, lit_char) literals.append((lit_char, lit[1:])) def literal(s): if literals: for c, l in literals: s = s.replace(c, l) return s names = names.strip() as_seq = names.endswith(',') if as_seq: names = names[:-1].rstrip() if not names: raise ValueError('no symbols given') # split on commas names = [n.strip() for n in names.split(',')] if not all(n for n in names): raise ValueError('missing symbol between commas') # split on spaces for i in range(len(names) - 1, -1, -1): names[i: i + 1] = names[i].split() cls = args.pop('cls', Symbol) seq = args.pop('seq', as_seq) for name in names: if not name: raise ValueError('missing symbol') if ':' not in name: symbol = cls(literal(name), **args) result.append(symbol) continue split = _range.split(name) # remove 1 layer of bounding parentheses around ranges for i in range(len(split) - 1): if i and ':' in split[i] and split[i] != ':' and \ split[i - 1].endswith('(') and \ split[i + 1].startswith(')'): split[i - 1] = split[i - 1][:-1] split[i + 1] = split[i + 1][1:] for i, s in enumerate(split): if ':' in s: if s[-1].endswith(':'): raise ValueError('missing end range') a, b = s.split(':') if b[-1] in string.digits: a = 0 if not a else int(a) b = int(b) split[i] = [str(c) for c in range(a, b)] else: a = a or 
'a' split[i] = [string.ascii_letters[c] for c in range( string.ascii_letters.index(a), string.ascii_letters.index(b) + 1)] # inclusive if not split[i]: break else: split[i] = [s] else: seq = True if len(split) == 1: names = split[0] else: names = [''.join(s) for s in cartes(*split)] if literals: result.extend([cls(literal(s), **args) for s in names]) else: result.extend([cls(s, **args) for s in names]) if not seq and len(result) <= 1: if not result: return () return result[0] return tuple(result) else: for name in names: result.append(symbols(name, **args)) return type(names)(result)
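The range expansion that `symbols()` performs (e.g. `'x5:10'` → x5…x9, `':c'` → a, b, c) can be sketched standalone, without symengine. The regex and helper below are simplified stand-ins: they reproduce only the core numeric/alphabetic expansion, and deliberately skip the parenthesis-stripping and backslash-escape handling of the full function:

```python
import re
import string
from itertools import product

# Capturing group so re.split() keeps the matched range pieces in the output.
_range = re.compile(r'([0-9]*:[0-9]+|[a-zA-Z]?:[a-zA-Z])')

def expand_names(name):
    """Expand one name containing symbols()-style ranges into plain strings.

    Simplified sketch: no parenthesis handling, no escapes, one name only.
    """
    parts = []
    for piece in _range.split(name):
        if ':' in piece:
            a, b = piece.split(':')
            if b[-1] in string.digits:
                # numeric range: start (default 0) up to, not including, end
                start = int(a) if a else 0
                parts.append([str(i) for i in range(start, int(b))])
            else:
                # alphabetic range: inclusive of the end letter
                a = a or 'a'
                letters = string.ascii_letters
                parts.append(list(letters[letters.index(a):letters.index(b) + 1]))
        else:
            parts.append([piece])
    # Cartesian product over all fixed/expanded pieces, joined back together.
    return [''.join(combo) for combo in product(*parts)]

print(expand_names('x5:10'))  # -> ['x5', 'x6', 'x7', 'x8', 'x9']
print(expand_names('x:c'))    # -> [] (null range, as in the docstring)
```

An empty factor (a null range such as `'x:c'`) makes the whole product empty, which matches the docstring's `symbols('x:c')` → `()` behavior.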
def parse_motifs(motifs): """Parse motifs in a variety of formats to return a list of motifs. Parameters ---------- motifs : list or str Filename of motif, list of motifs or single Motif instance. Returns ------- motifs : list List of Motif instances. """ if isinstance(motifs, six.string_types): with open(motifs) as f: if motifs.endswith("pwm") or motifs.endswith("pfm"): motifs = read_motifs(f, fmt="pwm") elif motifs.endswith("transfac"): motifs = read_motifs(f, fmt="transfac") else: motifs = read_motifs(f) elif isinstance(motifs, Motif): motifs = [motifs] else: if not isinstance(list(motifs)[0], Motif): raise ValueError("Not a list of motifs") return list(motifs)
Parse motifs in a variety of formats to return a list of motifs. Parameters ---------- motifs : list or str Filename of motif, list of motifs or single Motif instance. Returns ------- motifs : list List of Motif instances.
Below is the instruction that describes the task: ### Input: Parse motifs in a variety of formats to return a list of motifs. Parameters ---------- motifs : list or str Filename of motif, list of motifs or single Motif instance. Returns ------- motifs : list List of Motif instances. ### Response: def parse_motifs(motifs): """Parse motifs in a variety of formats to return a list of motifs. Parameters ---------- motifs : list or str Filename of motif, list of motifs or single Motif instance. Returns ------- motifs : list List of Motif instances. """ if isinstance(motifs, six.string_types): with open(motifs) as f: if motifs.endswith("pwm") or motifs.endswith("pfm"): motifs = read_motifs(f, fmt="pwm") elif motifs.endswith("transfac"): motifs = read_motifs(f, fmt="transfac") else: motifs = read_motifs(f) elif isinstance(motifs, Motif): motifs = [motifs] else: if not isinstance(list(motifs)[0], Motif): raise ValueError("Not a list of motifs") return list(motifs)
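The dispatch in `parse_motifs` — a string is treated as a filename, a single instance is wrapped in a list, anything else is validated as an iterable of instances — is a reusable normalization pattern. A generic sketch follows; `normalize_to_list`, `item_type` and `from_path` are hypothetical stand-ins for `Motif` and the `read_motifs` branch:

```python
def normalize_to_list(value, item_type, from_path):
    """Normalize a filename, a single instance, or an iterable to a list.

    Mirrors the dispatch in parse_motifs: str -> load from file,
    instance -> wrap in a list, iterable -> validate element type.
    item_type and from_path are hypothetical stand-ins.
    """
    if isinstance(value, str):
        return from_path(value)       # filename: delegate to the loader
    if isinstance(value, item_type):
        return [value]                # single instance: wrap in a list
    items = list(value)
    # Like the original, only the first element is type-checked.
    if items and not isinstance(items[0], item_type):
        raise ValueError("Not a list of %s" % item_type.__name__)
    return items
```

Checking only the first element is cheap but permissive; a stricter variant would validate every element.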
def _set_anycast_rp_ip(self, v, load=False): """ Setter method for anycast_rp_ip, mapped from YANG variable /routing_system/router/hide_pim_holder/pim/anycast_rp_ip (list) If this variable is read-only (config: false) in the source YANG file, then _set_anycast_rp_ip is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_anycast_rp_ip() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("anycast_rp_ip_addr",anycast_rp_ip.anycast_rp_ip, yang_name="anycast-rp-ip", rest_name="anycast-rp-ip", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='anycast-rp-ip-addr', extensions={u'tailf-common': {u'info': u'Set Anycast RP address and peer address', u'cli-suppress-mode': None, u'hidden': u'full', u'callpoint': u'PimAnycastRpIpCfgCallpoint', u'cli-suppress-list-no': None}}), is_container='list', yang_name="anycast-rp-ip", rest_name="anycast-rp-ip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set Anycast RP address and peer address', u'cli-suppress-mode': None, u'hidden': u'full', u'callpoint': u'PimAnycastRpIpCfgCallpoint', u'cli-suppress-list-no': None}}, namespace='urn:brocade.com:mgmt:brocade-pim', defining_module='brocade-pim', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """anycast_rp_ip must be of a type compatible with list""", 'defined-type': "list", 'generated-type': """YANGDynClass(base=YANGListType("anycast_rp_ip_addr",anycast_rp_ip.anycast_rp_ip, yang_name="anycast-rp-ip", rest_name="anycast-rp-ip", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='anycast-rp-ip-addr', extensions={u'tailf-common': {u'info': u'Set Anycast RP address and peer address', u'cli-suppress-mode': None, u'hidden': u'full', u'callpoint': 
u'PimAnycastRpIpCfgCallpoint', u'cli-suppress-list-no': None}}), is_container='list', yang_name="anycast-rp-ip", rest_name="anycast-rp-ip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set Anycast RP address and peer address', u'cli-suppress-mode': None, u'hidden': u'full', u'callpoint': u'PimAnycastRpIpCfgCallpoint', u'cli-suppress-list-no': None}}, namespace='urn:brocade.com:mgmt:brocade-pim', defining_module='brocade-pim', yang_type='list', is_config=True)""", }) self.__anycast_rp_ip = t if hasattr(self, '_set'): self._set()
Setter method for anycast_rp_ip, mapped from YANG variable /routing_system/router/hide_pim_holder/pim/anycast_rp_ip (list) If this variable is read-only (config: false) in the source YANG file, then _set_anycast_rp_ip is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_anycast_rp_ip() directly.
Below is the instruction that describes the task: ### Input: Setter method for anycast_rp_ip, mapped from YANG variable /routing_system/router/hide_pim_holder/pim/anycast_rp_ip (list) If this variable is read-only (config: false) in the source YANG file, then _set_anycast_rp_ip is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_anycast_rp_ip() directly. ### Response: def _set_anycast_rp_ip(self, v, load=False): """ Setter method for anycast_rp_ip, mapped from YANG variable /routing_system/router/hide_pim_holder/pim/anycast_rp_ip (list) If this variable is read-only (config: false) in the source YANG file, then _set_anycast_rp_ip is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_anycast_rp_ip() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("anycast_rp_ip_addr",anycast_rp_ip.anycast_rp_ip, yang_name="anycast-rp-ip", rest_name="anycast-rp-ip", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='anycast-rp-ip-addr', extensions={u'tailf-common': {u'info': u'Set Anycast RP address and peer address', u'cli-suppress-mode': None, u'hidden': u'full', u'callpoint': u'PimAnycastRpIpCfgCallpoint', u'cli-suppress-list-no': None}}), is_container='list', yang_name="anycast-rp-ip", rest_name="anycast-rp-ip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set Anycast RP address and peer address', u'cli-suppress-mode': None, u'hidden': u'full', u'callpoint': u'PimAnycastRpIpCfgCallpoint', u'cli-suppress-list-no': None}}, namespace='urn:brocade.com:mgmt:brocade-pim', defining_module='brocade-pim', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """anycast_rp_ip must be of a type compatible with list""", 'defined-type':
"list", 'generated-type': """YANGDynClass(base=YANGListType("anycast_rp_ip_addr",anycast_rp_ip.anycast_rp_ip, yang_name="anycast-rp-ip", rest_name="anycast-rp-ip", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='anycast-rp-ip-addr', extensions={u'tailf-common': {u'info': u'Set Anycast RP address and peer address', u'cli-suppress-mode': None, u'hidden': u'full', u'callpoint': u'PimAnycastRpIpCfgCallpoint', u'cli-suppress-list-no': None}}), is_container='list', yang_name="anycast-rp-ip", rest_name="anycast-rp-ip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set Anycast RP address and peer address', u'cli-suppress-mode': None, u'hidden': u'full', u'callpoint': u'PimAnycastRpIpCfgCallpoint', u'cli-suppress-list-no': None}}, namespace='urn:brocade.com:mgmt:brocade-pim', defining_module='brocade-pim', yang_type='list', is_config=True)""", }) self.__anycast_rp_ip = t if hasattr(self, '_set'): self._set()
def get_level_values(self, level): """ Return vector of label values for requested level, equal to the length of the index. Parameters ---------- level : int or str ``level`` is either the integer position of the level in the MultiIndex, or the name of the level. Returns ------- values : Index Values is a level of this MultiIndex converted to a single :class:`Index` (or subclass thereof). Examples --------- Create a MultiIndex: >>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def'))) >>> mi.names = ['level_1', 'level_2'] Get level values by supplying level as either integer or name: >>> mi.get_level_values(0) Index(['a', 'b', 'c'], dtype='object', name='level_1') >>> mi.get_level_values('level_2') Index(['d', 'e', 'f'], dtype='object', name='level_2') """ level = self._get_level_number(level) values = self._get_level_values(level) return values
Return vector of label values for requested level, equal to the length of the index. Parameters ---------- level : int or str ``level`` is either the integer position of the level in the MultiIndex, or the name of the level. Returns ------- values : Index Values is a level of this MultiIndex converted to a single :class:`Index` (or subclass thereof). Examples --------- Create a MultiIndex: >>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def'))) >>> mi.names = ['level_1', 'level_2'] Get level values by supplying level as either integer or name: >>> mi.get_level_values(0) Index(['a', 'b', 'c'], dtype='object', name='level_1') >>> mi.get_level_values('level_2') Index(['d', 'e', 'f'], dtype='object', name='level_2')
Below is the instruction that describes the task: ### Input: Return vector of label values for requested level, equal to the length of the index. Parameters ---------- level : int or str ``level`` is either the integer position of the level in the MultiIndex, or the name of the level. Returns ------- values : Index Values is a level of this MultiIndex converted to a single :class:`Index` (or subclass thereof). Examples --------- Create a MultiIndex: >>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def'))) >>> mi.names = ['level_1', 'level_2'] Get level values by supplying level as either integer or name: >>> mi.get_level_values(0) Index(['a', 'b', 'c'], dtype='object', name='level_1') >>> mi.get_level_values('level_2') Index(['d', 'e', 'f'], dtype='object', name='level_2') ### Response: def get_level_values(self, level): """ Return vector of label values for requested level, equal to the length of the index. Parameters ---------- level : int or str ``level`` is either the integer position of the level in the MultiIndex, or the name of the level. Returns ------- values : Index Values is a level of this MultiIndex converted to a single :class:`Index` (or subclass thereof). Examples --------- Create a MultiIndex: >>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def'))) >>> mi.names = ['level_1', 'level_2'] Get level values by supplying level as either integer or name: >>> mi.get_level_values(0) Index(['a', 'b', 'c'], dtype='object', name='level_1') >>> mi.get_level_values('level_2') Index(['d', 'e', 'f'], dtype='object', name='level_2') """ level = self._get_level_number(level) values = self._get_level_values(level) return values
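Stripped of pandas internals, `get_level_values` amounts to resolving a level name to a position and then projecting that component out of every index tuple. A stdlib-only sketch; the tuple-of-tuples representation is an assumption for illustration, not pandas' actual storage:

```python
def get_level_values(index_tuples, level, names=None):
    """Return the values at one level of a tuple-based multi-index.

    Simplified stand-in for pandas' MultiIndex.get_level_values:
    level is an integer position or, when names is given, a level name.
    """
    if names is not None and not isinstance(level, int):
        level = names.index(level)  # resolve name -> positional level
    return [t[level] for t in index_tuples]

mi = list(zip('abc', 'def'))  # [('a', 'd'), ('b', 'e'), ('c', 'f')]
print(get_level_values(mi, 0))                                       # -> ['a', 'b', 'c']
print(get_level_values(mi, 'level_2', names=['level_1', 'level_2'])) # -> ['d', 'e', 'f']
```

The result has one entry per index tuple, matching the docstring's guarantee that the output length equals the length of the index.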
def _run_frame(self, executor, limit=False, iteration=0): """ Run single frame of the bot :param source_or_code: path to code to run, or actual code. :param limit: Time a frame should take to run (float - seconds) """ # # Gets a bit complex here... # # Nodebox (which we are trying to be compatible with) supports two # kinds of bot 'dynamic' which has a 'draw' function and non dynamic # which doesn't have one. # # Dynamic bots: # # First run: # run body and 'setup' if it exists, then 'draw' # # Later runs: # run 'draw' # # Non Dynamic bots: # # Just have a 'body' and run once... # # UNLESS... a 'var' is changed, then run it again. # # # Livecoding: # # Code can be 'known_good' or 'tenous' (when it has been edited). # # If code is tenous and an exception occurs, attempt to roll # everything back. # # Livecoding and vars # # If vars are added / removed or renamed then attempt to update # the GUI start_time = time() if iteration != 0 and self._speed != 0: self._canvas.reset_canvas() self._set_dynamic_vars() if iteration == 0: # First frame executor.run() # run setup and draw # (assume user hasn't live edited already) executor.ns['setup']() executor.ns['draw']() self._canvas.flush(self._frame) else: # Subsequent frames if self._dynamic: if self._speed != 0: # speed 0 is paused, so do nothing with executor.run_context() as (known_good, source, ns): # Code in main block may redefine 'draw' if not known_good: executor.reload_functions() with VarListener.batch(self._vars, self._oldvars, ns): self._oldvars.clear() # Re-run the function body - ideally this would only # happen if the body had actually changed # - Or perhaps if the line included a variable declaration exec source in ns ns['draw']() self._canvas.flush(self._frame) else: # Non "dynamic" bots # # TODO - This part is overly complex, before live-coding it # was just exec source in ns ... have to see if it # can be simplified again. 
# with executor.run_context() as (known_good, source, ns): if not known_good: executor.reload_functions() with VarListener.batch(self._vars, self._oldvars, ns): self._oldvars.clear() # Re-run the function body - ideally this would only # happen if the body had actually changed # - Or perhaps if the line included a variable declaration exec source in ns else: exec source in ns self._canvas.flush(self._frame) if limit: self._frame_limit(start_time) # Can set speed to go backwards using the shell if you really want # or pause by setting speed == 0 if self._speed > 0: self._frame += 1 elif self._speed < 0: self._frame -= 1
Run single frame of the bot :param source_or_code: path to code to run, or actual code. :param limit: Time a frame should take to run (float - seconds)
Below is the instruction that describes the task: ### Input: Run single frame of the bot :param source_or_code: path to code to run, or actual code. :param limit: Time a frame should take to run (float - seconds) ### Response: def _run_frame(self, executor, limit=False, iteration=0): """ Run single frame of the bot :param source_or_code: path to code to run, or actual code. :param limit: Time a frame should take to run (float - seconds) """ # # Gets a bit complex here... # # Nodebox (which we are trying to be compatible with) supports two # kinds of bot 'dynamic' which has a 'draw' function and non dynamic # which doesn't have one. # # Dynamic bots: # # First run: # run body and 'setup' if it exists, then 'draw' # # Later runs: # run 'draw' # # Non Dynamic bots: # # Just have a 'body' and run once... # # UNLESS... a 'var' is changed, then run it again. # # # Livecoding: # # Code can be 'known_good' or 'tenous' (when it has been edited). # # If code is tenous and an exception occurs, attempt to roll # everything back.
# # Livecoding and vars # # If vars are added / removed or renamed then attempt to update # the GUI start_time = time() if iteration != 0 and self._speed != 0: self._canvas.reset_canvas() self._set_dynamic_vars() if iteration == 0: # First frame executor.run() # run setup and draw # (assume user hasn't live edited already) executor.ns['setup']() executor.ns['draw']() self._canvas.flush(self._frame) else: # Subsequent frames if self._dynamic: if self._speed != 0: # speed 0 is paused, so do nothing with executor.run_context() as (known_good, source, ns): # Code in main block may redefine 'draw' if not known_good: executor.reload_functions() with VarListener.batch(self._vars, self._oldvars, ns): self._oldvars.clear() # Re-run the function body - ideally this would only # happen if the body had actually changed # - Or perhaps if the line included a variable declaration exec source in ns ns['draw']() self._canvas.flush(self._frame) else: # Non "dynamic" bots # # TODO - This part is overly complex, before live-coding it # was just exec source in ns ... have to see if it # can be simplified again. # with executor.run_context() as (known_good, source, ns): if not known_good: executor.reload_functions() with VarListener.batch(self._vars, self._oldvars, ns): self._oldvars.clear() # Re-run the function body - ideally this would only # happen if the body had actually changed # - Or perhaps if the line included a variable declaration exec source in ns else: exec source in ns self._canvas.flush(self._frame) if limit: self._frame_limit(start_time) # Can set speed to go backwards using the shell if you really want # or pause by setting speed == 0 if self._speed > 0: self._frame += 1 elif self._speed < 0: self._frame -= 1
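The closing logic of `_run_frame` — advance the counter by the sign of `speed`, and optionally sleep off the rest of the frame's time budget — can be isolated as two small helpers. `frame_limit` mirrors what the snippet's `_frame_limit` presumably does; its body here is an assumption:

```python
import time

def advance_frame(frame, speed):
    """Advance the frame counter by the sign of speed.

    speed > 0 plays forward, speed < 0 plays backward, speed == 0 pauses,
    the same convention the _run_frame snippet ends with.
    """
    if speed > 0:
        return frame + 1
    elif speed < 0:
        return frame - 1
    return frame

def frame_limit(start_time, fps):
    """Sleep off whatever remains of this frame's time budget (1/fps s).

    Hypothetical stand-in for the snippet's _frame_limit(start_time).
    """
    remaining = (1.0 / fps) - (time.time() - start_time)
    if remaining > 0:
        time.sleep(remaining)
```

Keeping the counter signed means a shell can rewind the animation simply by setting a negative speed, exactly as the original comment describes.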
def get_page_meta(page, language): """ Retrieves all the meta information for the page in the given language :param page: a Page instance :param lang: a language code :return: Meta instance :type: object """ from django.core.cache import cache from meta.views import Meta from .models import PageMeta, TitleMeta try: meta_key = get_cache_key(page, language) except AttributeError: return None gplus_server = 'https://plus.google.com' meta = cache.get(meta_key) if not meta: meta = Meta() title = page.get_title_obj(language) meta.extra_custom_props = [] meta.title = page.get_page_title(language) if not meta.title: meta.title = page.get_title(language) if title.meta_description: meta.description = title.meta_description.strip() try: titlemeta = title.titlemeta if titlemeta.description: meta.description = titlemeta.description.strip() if titlemeta.keywords: meta.keywords = titlemeta.keywords.strip().split(',') meta.locale = titlemeta.locale meta.og_description = titlemeta.og_description.strip() if not meta.og_description: meta.og_description = meta.description meta.twitter_description = titlemeta.twitter_description.strip() if not meta.twitter_description: meta.twitter_description = meta.description meta.gplus_description = titlemeta.gplus_description.strip() if not meta.gplus_description: meta.gplus_description = meta.description if titlemeta.image: meta.image = title.titlemeta.image.canonical_url or title.titlemeta.image.url for item in titlemeta.extra.all(): attribute = item.attribute if not attribute: attribute = item.DEFAULT_ATTRIBUTE meta.extra_custom_props.append((attribute, item.name, item.value)) except (TitleMeta.DoesNotExist, AttributeError): # Skipping title-level metas if meta.description: meta.og_description = meta.description meta.twitter_description = meta.description meta.gplus_description = meta.description defaults = { 'object_type': meta_settings.FB_TYPE, 'og_type': meta_settings.FB_TYPE, 'og_app_id': meta_settings.FB_APPID, 'fb_pages': 
meta_settings.FB_PAGES, 'og_profile_id': meta_settings.FB_PROFILE_ID, 'og_publisher': meta_settings.FB_PUBLISHER, 'og_author_url': meta_settings.FB_AUTHOR_URL, 'twitter_type': meta_settings.TWITTER_TYPE, 'twitter_site': meta_settings.TWITTER_SITE, 'twitter_author': meta_settings.TWITTER_AUTHOR, 'gplus_type': meta_settings.GPLUS_TYPE, 'gplus_author': meta_settings.GPLUS_AUTHOR, } try: pagemeta = page.pagemeta meta.object_type = pagemeta.og_type meta.og_type = pagemeta.og_type meta.og_app_id = pagemeta.og_app_id meta.fb_pages = pagemeta.fb_pages meta.og_profile_id = pagemeta.og_author_fbid meta.twitter_type = pagemeta.twitter_type meta.twitter_site = pagemeta.twitter_site meta.twitter_author = pagemeta.twitter_author meta.gplus_type = pagemeta.gplus_type meta.gplus_author = pagemeta.gplus_author if meta.og_type == 'article': meta.og_publisher = pagemeta.og_publisher meta.og_author_url = pagemeta.og_author_url try: from djangocms_page_tags.utils import get_title_tags, get_page_tags tags = list(get_title_tags(page, language)) tags += list(get_page_tags(page)) meta.tag = ','.join([tag.name for tag in tags]) except ImportError: # djangocms-page-tags not available pass if not meta.image and pagemeta.image: meta.image = pagemeta.image.canonical_url or pagemeta.image.url for item in pagemeta.extra.all(): attribute = item.attribute if not attribute: attribute = item.DEFAULT_ATTRIBUTE meta.extra_custom_props.append((attribute, item.name, item.value)) except PageMeta.DoesNotExist: pass if meta.gplus_author and not meta.gplus_author.startswith('http'): if not meta.gplus_author.startswith('/'): meta.gplus_author = '{0}/{1}'.format(gplus_server, meta.gplus_author) else: meta.gplus_author = '{0}{1}'.format(gplus_server, meta.gplus_author) if page.publication_date: meta.published_time = page.publication_date.isoformat() if page.changed_date: meta.modified_time = page.changed_date.isoformat() if page.publication_end_date: meta.expiration_time = page.publication_end_date.isoformat() 
for attr, val in defaults.items(): if not getattr(meta, attr, '') and val: setattr(meta, attr, val) meta.url = page.get_absolute_url(language) return meta
Retrieves all the meta information for the page in the given language

:param page: a Page instance
:param language: a language code
:return: Meta instance
:type: object
Below is the the instruction that describes the task: ### Input: Retrieves all the meta information for the page in the given language :param page: a Page instance :param lang: a language code :return: Meta instance :type: object ### Response: def get_page_meta(page, language): """ Retrieves all the meta information for the page in the given language :param page: a Page instance :param lang: a language code :return: Meta instance :type: object """ from django.core.cache import cache from meta.views import Meta from .models import PageMeta, TitleMeta try: meta_key = get_cache_key(page, language) except AttributeError: return None gplus_server = 'https://plus.google.com' meta = cache.get(meta_key) if not meta: meta = Meta() title = page.get_title_obj(language) meta.extra_custom_props = [] meta.title = page.get_page_title(language) if not meta.title: meta.title = page.get_title(language) if title.meta_description: meta.description = title.meta_description.strip() try: titlemeta = title.titlemeta if titlemeta.description: meta.description = titlemeta.description.strip() if titlemeta.keywords: meta.keywords = titlemeta.keywords.strip().split(',') meta.locale = titlemeta.locale meta.og_description = titlemeta.og_description.strip() if not meta.og_description: meta.og_description = meta.description meta.twitter_description = titlemeta.twitter_description.strip() if not meta.twitter_description: meta.twitter_description = meta.description meta.gplus_description = titlemeta.gplus_description.strip() if not meta.gplus_description: meta.gplus_description = meta.description if titlemeta.image: meta.image = title.titlemeta.image.canonical_url or title.titlemeta.image.url for item in titlemeta.extra.all(): attribute = item.attribute if not attribute: attribute = item.DEFAULT_ATTRIBUTE meta.extra_custom_props.append((attribute, item.name, item.value)) except (TitleMeta.DoesNotExist, AttributeError): # Skipping title-level metas if meta.description: meta.og_description = 
meta.description meta.twitter_description = meta.description meta.gplus_description = meta.description defaults = { 'object_type': meta_settings.FB_TYPE, 'og_type': meta_settings.FB_TYPE, 'og_app_id': meta_settings.FB_APPID, 'fb_pages': meta_settings.FB_PAGES, 'og_profile_id': meta_settings.FB_PROFILE_ID, 'og_publisher': meta_settings.FB_PUBLISHER, 'og_author_url': meta_settings.FB_AUTHOR_URL, 'twitter_type': meta_settings.TWITTER_TYPE, 'twitter_site': meta_settings.TWITTER_SITE, 'twitter_author': meta_settings.TWITTER_AUTHOR, 'gplus_type': meta_settings.GPLUS_TYPE, 'gplus_author': meta_settings.GPLUS_AUTHOR, } try: pagemeta = page.pagemeta meta.object_type = pagemeta.og_type meta.og_type = pagemeta.og_type meta.og_app_id = pagemeta.og_app_id meta.fb_pages = pagemeta.fb_pages meta.og_profile_id = pagemeta.og_author_fbid meta.twitter_type = pagemeta.twitter_type meta.twitter_site = pagemeta.twitter_site meta.twitter_author = pagemeta.twitter_author meta.gplus_type = pagemeta.gplus_type meta.gplus_author = pagemeta.gplus_author if meta.og_type == 'article': meta.og_publisher = pagemeta.og_publisher meta.og_author_url = pagemeta.og_author_url try: from djangocms_page_tags.utils import get_title_tags, get_page_tags tags = list(get_title_tags(page, language)) tags += list(get_page_tags(page)) meta.tag = ','.join([tag.name for tag in tags]) except ImportError: # djangocms-page-tags not available pass if not meta.image and pagemeta.image: meta.image = pagemeta.image.canonical_url or pagemeta.image.url for item in pagemeta.extra.all(): attribute = item.attribute if not attribute: attribute = item.DEFAULT_ATTRIBUTE meta.extra_custom_props.append((attribute, item.name, item.value)) except PageMeta.DoesNotExist: pass if meta.gplus_author and not meta.gplus_author.startswith('http'): if not meta.gplus_author.startswith('/'): meta.gplus_author = '{0}/{1}'.format(gplus_server, meta.gplus_author) else: meta.gplus_author = '{0}{1}'.format(gplus_server, meta.gplus_author) if 
page.publication_date: meta.published_time = page.publication_date.isoformat() if page.changed_date: meta.modified_time = page.changed_date.isoformat() if page.publication_end_date: meta.expiration_time = page.publication_end_date.isoformat() for attr, val in defaults.items(): if not getattr(meta, attr, '') and val: setattr(meta, attr, val) meta.url = page.get_absolute_url(language) return meta
def digestInSilico(proteinSequence, cleavageRule='[KR]', missedCleavage=0, removeNtermM=True, minLength=5, maxLength=55): """Returns a list of peptide sequences and cleavage information derived from an in silico digestion of a polypeptide. :param proteinSequence: amino acid sequence of the poly peptide to be digested :param cleavageRule: cleavage rule expressed in a regular expression, see :attr:`maspy.constants.expasy_rules` :param missedCleavage: number of allowed missed cleavage sites :param removeNtermM: booo, True to consider also peptides with the N-terminal methionine of the protein removed :param minLength: int, only yield peptides with length >= minLength :param maxLength: int, only yield peptides with length <= maxLength :returns: a list of resulting peptide enries. Protein positions start with ``1`` and end with ``len(proteinSequence``. :: [(peptide amino acid sequence, {'startPos': int, 'endPos': int, 'missedCleavage': int} ), ... ] .. note:: This is a regex example for specifying N-terminal cleavage at lysine sites ``\\w(?=[K])`` """ passFilter = lambda startPos, endPos: (endPos - startPos >= minLength and endPos - startPos <= maxLength ) _regexCleave = re.finditer(cleavageRule, proteinSequence) cleavagePosList = set(itertools.chain(map(lambda x: x.end(), _regexCleave))) cleavagePosList.add(len(proteinSequence)) cleavagePosList = sorted(list(cleavagePosList)) #Add end of protein as cleavage site if protein doesn't end with specififed #cleavage positions numCleavageSites = len(cleavagePosList) if missedCleavage >= numCleavageSites: missedCleavage = numCleavageSites -1 digestionresults = list() #Generate protein n-terminal peptides after methionine removal if removeNtermM and proteinSequence[0] == 'M': for cleavagePos in range(0, missedCleavage+1): startPos = 1 endPos = cleavagePosList[cleavagePos] if passFilter(startPos, endPos): sequence = proteinSequence[startPos:endPos] info = dict() info['startPos'] = startPos+1 info['endPos'] = endPos 
info['missedCleavage'] = cleavagePos digestionresults.append((sequence, info)) #Generate protein n-terminal peptides if cleavagePosList[0] != 0: for cleavagePos in range(0, missedCleavage+1): startPos = 0 endPos = cleavagePosList[cleavagePos] if passFilter(startPos, endPos): sequence = proteinSequence[startPos:endPos] info = dict() info['startPos'] = startPos+1 info['endPos'] = endPos info['missedCleavage'] = cleavagePos digestionresults.append((sequence, info)) #Generate all remaining peptides, including the c-terminal peptides lastCleavagePos = 0 while lastCleavagePos < numCleavageSites: for missedCleavage in range(0, missedCleavage+1): nextCleavagePos = lastCleavagePos + missedCleavage + 1 if nextCleavagePos < numCleavageSites: startPos = cleavagePosList[lastCleavagePos] endPos = cleavagePosList[nextCleavagePos] if passFilter(startPos, endPos): sequence = proteinSequence[startPos:endPos] info = dict() info['startPos'] = startPos+1 info['endPos'] = endPos info['missedCleavage'] = missedCleavage digestionresults.append((sequence, info)) lastCleavagePos += 1 return digestionresults
Returns a list of peptide sequences and cleavage information derived from
an in silico digestion of a polypeptide.

:param proteinSequence: amino acid sequence of the polypeptide to be
    digested
:param cleavageRule: cleavage rule expressed in a regular expression, see
    :attr:`maspy.constants.expasy_rules`
:param missedCleavage: number of allowed missed cleavage sites
:param removeNtermM: bool, True to also consider peptides with the
    N-terminal methionine of the protein removed
:param minLength: int, only yield peptides with length >= minLength
:param maxLength: int, only yield peptides with length <= maxLength

:returns: a list of resulting peptide entries. Protein positions start
    with ``1`` and end with ``len(proteinSequence)``. ::

        [(peptide amino acid sequence,
          {'startPos': int, 'endPos': int, 'missedCleavage': int}
          ), ...
         ]

.. note::
    This is a regex example for specifying N-terminal cleavage at lysine
    sites ``\\w(?=[K])``
Below is the the instruction that describes the task: ### Input: Returns a list of peptide sequences and cleavage information derived from an in silico digestion of a polypeptide. :param proteinSequence: amino acid sequence of the poly peptide to be digested :param cleavageRule: cleavage rule expressed in a regular expression, see :attr:`maspy.constants.expasy_rules` :param missedCleavage: number of allowed missed cleavage sites :param removeNtermM: booo, True to consider also peptides with the N-terminal methionine of the protein removed :param minLength: int, only yield peptides with length >= minLength :param maxLength: int, only yield peptides with length <= maxLength :returns: a list of resulting peptide enries. Protein positions start with ``1`` and end with ``len(proteinSequence``. :: [(peptide amino acid sequence, {'startPos': int, 'endPos': int, 'missedCleavage': int} ), ... ] .. note:: This is a regex example for specifying N-terminal cleavage at lysine sites ``\\w(?=[K])`` ### Response: def digestInSilico(proteinSequence, cleavageRule='[KR]', missedCleavage=0, removeNtermM=True, minLength=5, maxLength=55): """Returns a list of peptide sequences and cleavage information derived from an in silico digestion of a polypeptide. :param proteinSequence: amino acid sequence of the poly peptide to be digested :param cleavageRule: cleavage rule expressed in a regular expression, see :attr:`maspy.constants.expasy_rules` :param missedCleavage: number of allowed missed cleavage sites :param removeNtermM: booo, True to consider also peptides with the N-terminal methionine of the protein removed :param minLength: int, only yield peptides with length >= minLength :param maxLength: int, only yield peptides with length <= maxLength :returns: a list of resulting peptide enries. Protein positions start with ``1`` and end with ``len(proteinSequence``. :: [(peptide amino acid sequence, {'startPos': int, 'endPos': int, 'missedCleavage': int} ), ... ] .. 
note:: This is a regex example for specifying N-terminal cleavage at lysine sites ``\\w(?=[K])`` """ passFilter = lambda startPos, endPos: (endPos - startPos >= minLength and endPos - startPos <= maxLength ) _regexCleave = re.finditer(cleavageRule, proteinSequence) cleavagePosList = set(itertools.chain(map(lambda x: x.end(), _regexCleave))) cleavagePosList.add(len(proteinSequence)) cleavagePosList = sorted(list(cleavagePosList)) #Add end of protein as cleavage site if protein doesn't end with specififed #cleavage positions numCleavageSites = len(cleavagePosList) if missedCleavage >= numCleavageSites: missedCleavage = numCleavageSites -1 digestionresults = list() #Generate protein n-terminal peptides after methionine removal if removeNtermM and proteinSequence[0] == 'M': for cleavagePos in range(0, missedCleavage+1): startPos = 1 endPos = cleavagePosList[cleavagePos] if passFilter(startPos, endPos): sequence = proteinSequence[startPos:endPos] info = dict() info['startPos'] = startPos+1 info['endPos'] = endPos info['missedCleavage'] = cleavagePos digestionresults.append((sequence, info)) #Generate protein n-terminal peptides if cleavagePosList[0] != 0: for cleavagePos in range(0, missedCleavage+1): startPos = 0 endPos = cleavagePosList[cleavagePos] if passFilter(startPos, endPos): sequence = proteinSequence[startPos:endPos] info = dict() info['startPos'] = startPos+1 info['endPos'] = endPos info['missedCleavage'] = cleavagePos digestionresults.append((sequence, info)) #Generate all remaining peptides, including the c-terminal peptides lastCleavagePos = 0 while lastCleavagePos < numCleavageSites: for missedCleavage in range(0, missedCleavage+1): nextCleavagePos = lastCleavagePos + missedCleavage + 1 if nextCleavagePos < numCleavageSites: startPos = cleavagePosList[lastCleavagePos] endPos = cleavagePosList[nextCleavagePos] if passFilter(startPos, endPos): sequence = proteinSequence[startPos:endPos] info = dict() info['startPos'] = startPos+1 info['endPos'] = endPos 
info['missedCleavage'] = missedCleavage digestionresults.append((sequence, info)) lastCleavagePos += 1 return digestionresults
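The digestion above hinges on turning the cleavage-rule regex into a sorted set of cut positions before any peptides are generated. A minimal, self-contained sketch of that one step (the helper name `cleavage_positions` is illustrative, not part of maspy):

```python
import re

def cleavage_positions(sequence, cleavage_rule='[KR]'):
    # The end position of each regex match is a candidate cut point; the
    # protein's C-terminus is always added, as in digestInSilico.
    positions = {m.end() for m in re.finditer(cleavage_rule, sequence)}
    positions.add(len(sequence))
    return sorted(positions)

# Trypsin-like rule '[KR]': cuts after K (position 2) and after the
# C-terminal R (position 6).
print(cleavage_positions('MKTAYR'))  # [2, 6]
```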
def set_proxy(self, instance, state):
    """ Change class """
    if state in self.state_proxy:
        state_proxy = self.state_proxy[state]
        try:
            app_label, model_name = state_proxy.split(".")
        except ValueError:
            # If we can't split, assume a model in current app
            app_label = instance._meta.app_label
            model_name = state_proxy
        model = get_model(app_label, model_name)
        if model is None:
            raise ValueError('No model found {0}'.format(state_proxy))
        instance.__class__ = model
Change class
Below is the instruction that describes the task:
### Input:
Change class
### Response:
def set_proxy(self, instance, state):
    """ Change class """
    if state in self.state_proxy:
        state_proxy = self.state_proxy[state]
        try:
            app_label, model_name = state_proxy.split(".")
        except ValueError:
            # If we can't split, assume a model in current app
            app_label = instance._meta.app_label
            model_name = state_proxy
        model = get_model(app_label, model_name)
        if model is None:
            raise ValueError('No model found {0}'.format(state_proxy))
        instance.__class__ = model
def add_builtin_parameters(parameters):
    """Add built-in parameters to a dictionary of parameters

    Parameters
    ----------
    parameters : dict
        Dictionary of parameters provided by the user
    """
    with_builtin_parameters = {
        "pm": {
            "run_uuid": str(uuid4()),
            "current_datetime_local": datetime.now(),
            "current_datetime_utc": datetime.utcnow(),
        }
    }

    if parameters is not None:
        with_builtin_parameters.update(parameters)

    return with_builtin_parameters
Add built-in parameters to a dictionary of parameters

Parameters
----------
parameters : dict
    Dictionary of parameters provided by the user
Below is the the instruction that describes the task: ### Input: Add built-in parameters to a dictionary of parameters Parameters ---------- parameters : dict Dictionary of parameters provided by the user ### Response: def add_builtin_parameters(parameters): """Add built-in parameters to a dictionary of parameters Parameters ---------- parameters : dict Dictionary of parameters provided by the user """ with_builtin_parameters = { "pm": { "run_uuid": str(uuid4()), "current_datetime_local": datetime.now(), "current_datetime_utc": datetime.utcnow(), } } if parameters is not None: with_builtin_parameters.update(parameters) return with_builtin_parameters
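Apart from two stdlib imports, the function above is self-contained, so it can be exercised directly. One consequence of using dict.update worth noting: a user key literally named "pm" would shadow the built-in namespace, since user values take precedence.

```python
from uuid import uuid4
from datetime import datetime

def add_builtin_parameters(parameters):
    """Merge user parameters over a reserved "pm" namespace of run metadata."""
    with_builtin_parameters = {
        "pm": {
            "run_uuid": str(uuid4()),
            "current_datetime_local": datetime.now(),
            "current_datetime_utc": datetime.utcnow(),
        }
    }
    if parameters is not None:
        with_builtin_parameters.update(parameters)
    return with_builtin_parameters

params = add_builtin_parameters({"alpha": 0.05})
print(sorted(params))  # ['alpha', 'pm']
```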
def create_dir(path):
    """ Creates a directory. Warns, if the directory can't be accessed.
    Passes, if the directory already exists.

    modified from http://stackoverflow.com/a/600612

    Parameters
    ----------
    path : str
        path to the directory to be created
    """
    import sys
    import errno
    try:
        os.makedirs(path)
    except OSError as exc:  # Python >2.5
        if exc.errno == errno.EEXIST:
            if os.path.isdir(path):
                pass
            else:
                # if something exists at the path, but it's not a dir
                raise
        elif exc.errno == errno.EACCES:
            sys.stderr.write("Cannot create [{0}]! Check Permissions".format(path))
            raise
        else:
            raise
Creates a directory. Warns, if the directory can't be accessed.
Passes, if the directory already exists.

modified from http://stackoverflow.com/a/600612

Parameters
----------
path : str
    path to the directory to be created
Below is the the instruction that describes the task: ### Input: Creates a directory. Warns, if the directory can't be accessed. Passes, if the directory already exists. modified from http://stackoverflow.com/a/600612 Parameters ---------- path : str path to the directory to be created ### Response: def create_dir(path): """ Creates a directory. Warns, if the directory can't be accessed. Passes, if the directory already exists. modified from http://stackoverflow.com/a/600612 Parameters ---------- path : str path to the directory to be created """ import sys import errno try: os.makedirs(path) except OSError as exc: # Python >2.5 if exc.errno == errno.EEXIST: if os.path.isdir(path): pass else: # if something exists at the path, but it's not a dir raise elif exc.errno == errno.EACCES: sys.stderr.write("Cannot create [{0}]! Check Permissions".format(path)) raise else: raise
def static_sum(values, limit_n=1000):
    """Example of static sum routine."""
    if len(values) < limit_n:
        return sum(values)
    else:
        half = len(values) // 2
        return add(
            static_sum(values[:half], limit_n),
            static_sum(values[half:], limit_n))
Example of static sum routine.
Below is the instruction that describes the task:
### Input:
Example of static sum routine.
### Response:
def static_sum(values, limit_n=1000):
    """Example of static sum routine."""
    if len(values) < limit_n:
        return sum(values)
    else:
        half = len(values) // 2
        return add(
            static_sum(values[:half], limit_n),
            static_sum(values[half:], limit_n))
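The routine splits recursively until each slice falls below limit_n, then combines the two halves with an add helper. With operator.add standing in for that helper (an assumption — the original add is not shown in this record), it runs as:

```python
from operator import add

def static_sum(values, limit_n=1000):
    # Below the threshold, sum directly; otherwise split in half and
    # combine the two recursive partial sums with add().
    if len(values) < limit_n:
        return sum(values)
    half = len(values) // 2
    return add(static_sum(values[:half], limit_n),
               static_sum(values[half:], limit_n))

print(static_sum(list(range(10)), limit_n=4))  # 45
```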
def generate_img_id(profile, ext=None, label=None, tmp=False):
    """ Generates img_id. """
    if ext and not ext.startswith('.'):
        ext = '.' + ext
    if label:
        label = re.sub(r'[^a-z0-9_\-]', '', label, flags=re.I)
        label = re.sub(r'_+', '_', label)
        label = label[:60]
    return '{profile}:{tmp}{dtstr}_{rand}{label}{ext}'.format(
        profile=profile,
        tmp=(dju_settings.DJU_IMG_UPLOAD_TMP_PREFIX if tmp else ''),
        dtstr=datetime_to_dtstr(),
        rand=get_random_string(4, 'abcdefghijklmnopqrstuvwxyz0123456789'),
        label=(('_' + label) if label else ''),
        ext=(ext or ''),
    )
Generates img_id.
Below is the the instruction that describes the task: ### Input: Generates img_id. ### Response: def generate_img_id(profile, ext=None, label=None, tmp=False): """ Generates img_id. """ if ext and not ext.startswith('.'): ext = '.' + ext if label: label = re.sub(r'[^a-z0-9_\-]', '', label, flags=re.I) label = re.sub(r'_+', '_', label) label = label[:60] return '{profile}:{tmp}{dtstr}_{rand}{label}{ext}'.format( profile=profile, tmp=(dju_settings.DJU_IMG_UPLOAD_TMP_PREFIX if tmp else ''), dtstr=datetime_to_dtstr(), rand=get_random_string(4, 'abcdefghijklmnopqrstuvwxyz0123456789'), label=(('_' + label) if label else ''), ext=(ext or ''), )
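The label handling in generate_img_id is plain re work and can be exercised on its own (`sanitize_label` is an illustrative name for the three inline steps, not a function in the original module):

```python
import re

def sanitize_label(label):
    # Keep only [a-z0-9_-] (case-insensitive), collapse underscore runs,
    # then truncate to 60 characters -- the same three steps as above.
    label = re.sub(r'[^a-z0-9_\-]', '', label, flags=re.I)
    label = re.sub(r'_+', '_', label)
    return label[:60]

print(sanitize_label('My Photo (1)!'))   # MyPhoto1
print(sanitize_label('a__b___c'))        # a_b_c
```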
def to_array(self):
    """
    Serializes this PassportElementErrorReverseSide to a dictionary.

    :return: dictionary representation of this object.
    :rtype: dict
    """
    array = super(PassportElementErrorReverseSide, self).to_array()
    array['source'] = u(self.source)  # py2: type unicode, py3: type str
    array['type'] = u(self.type)  # py2: type unicode, py3: type str
    array['file_hash'] = u(self.file_hash)  # py2: type unicode, py3: type str
    array['message'] = u(self.message)  # py2: type unicode, py3: type str
    return array
Serializes this PassportElementErrorReverseSide to a dictionary.

:return: dictionary representation of this object.
:rtype: dict
Below is the the instruction that describes the task: ### Input: Serializes this PassportElementErrorReverseSide to a dictionary. :return: dictionary representation of this object. :rtype: dict ### Response: def to_array(self): """ Serializes this PassportElementErrorReverseSide to a dictionary. :return: dictionary representation of this object. :rtype: dict """ array = super(PassportElementErrorReverseSide, self).to_array() array['source'] = u(self.source) # py2: type unicode, py3: type str array['type'] = u(self.type) # py2: type unicode, py3: type str array['file_hash'] = u(self.file_hash) # py2: type unicode, py3: type str array['message'] = u(self.message) # py2: type unicode, py3: type str return array
def cached(fn, size=32):
    ''' this decorator creates a type safe lru_cache around the decorated
    function. Unlike functools.lru_cache, this will not crash when
    unhashable arguments are passed to the function'''
    assert callable(fn)
    assert isinstance(size, int)
    return overload(fn)(lru_cache(size, typed=True)(fn))
this decorator creates a type safe lru_cache around the decorated function. Unlike functools.lru_cache, this will not crash when unhashable arguments are passed to the function
Below is the instruction that describes the task:
### Input:
this decorator creates a type safe lru_cache around the decorated function. Unlike functools.lru_cache, this will not crash when unhashable arguments are passed to the function
### Response:
def cached(fn, size=32):
    ''' this decorator creates a type safe lru_cache around the decorated
    function. Unlike functools.lru_cache, this will not crash when
    unhashable arguments are passed to the function'''
    assert callable(fn)
    assert isinstance(size, int)
    return overload(fn)(lru_cache(size, typed=True)(fn))
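The typed=True flag passed to lru_cache above is what makes the cache "type safe": int and float arguments get separate cache slots even when they compare equal. A standalone illustration (without the overload wrapper, whose definition is not shown in this record):

```python
from functools import lru_cache

@lru_cache(32, typed=True)
def double(x):
    # With typed=True, double(1) and double(1.0) are cached separately
    # even though 1 == 1.0.
    return x * 2

double(1)
double(1.0)   # miss: the float argument is a distinct cache key
double(1)     # hit
print(double.cache_info().hits, double.cache_info().currsize)  # 1 2
```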
def queue(p_queue, host=None):
    '''Construct a path to the queue dir for a queue'''
    if host is not None:
        return _path(_c.FSQ_QUEUE, root=_path(host, root=hosts(p_queue)))
    return _path(p_queue, _c.FSQ_QUEUE)
Construct a path to the queue dir for a queue
Below is the instruction that describes the task:
### Input:
Construct a path to the queue dir for a queue
### Response:
def queue(p_queue, host=None):
    '''Construct a path to the queue dir for a queue'''
    if host is not None:
        return _path(_c.FSQ_QUEUE, root=_path(host, root=hosts(p_queue)))
    return _path(p_queue, _c.FSQ_QUEUE)
def iter_otus(nexson, nexson_version=None): """generator over all otus in all otus group elements. yields a tuple of 3 items: otus group ID, otu ID, the otu obj """ if nexson_version is None: nexson_version = detect_nexson_version(nexson) if not _is_by_id_hbf(nexson_version): convert_nexson_format(nexson, BY_ID_HONEY_BADGERFISH) # TODO shouldn't modify... nex = get_nexml_el(nexson) otus_group_by_id = nex['otusById'] group_order = nex.get('^ot:otusElementOrder', []) if len(group_order) < len(otus_group_by_id): group_order = list(otus_group_by_id.keys()) group_order.sort() for otus_group_id in group_order: otus_group = otus_group_by_id[otus_group_id] otu_by_id = otus_group['otuById'] ti_order = list(otu_by_id.keys()) for otu_id in ti_order: otu = otu_by_id[otu_id] yield otus_group_id, otu_id, otu
generator over all otus in all otus group elements. yields a tuple of 3 items: otus group ID, otu ID, the otu obj
Below is the the instruction that describes the task: ### Input: generator over all otus in all otus group elements. yields a tuple of 3 items: otus group ID, otu ID, the otu obj ### Response: def iter_otus(nexson, nexson_version=None): """generator over all otus in all otus group elements. yields a tuple of 3 items: otus group ID, otu ID, the otu obj """ if nexson_version is None: nexson_version = detect_nexson_version(nexson) if not _is_by_id_hbf(nexson_version): convert_nexson_format(nexson, BY_ID_HONEY_BADGERFISH) # TODO shouldn't modify... nex = get_nexml_el(nexson) otus_group_by_id = nex['otusById'] group_order = nex.get('^ot:otusElementOrder', []) if len(group_order) < len(otus_group_by_id): group_order = list(otus_group_by_id.keys()) group_order.sort() for otus_group_id in group_order: otus_group = otus_group_by_id[otus_group_id] otu_by_id = otus_group['otuById'] ti_order = list(otu_by_id.keys()) for otu_id in ti_order: otu = otu_by_id[otu_id] yield otus_group_id, otu_id, otu
def subquery(self, name=None):
    """ The recipe's query as a subquery suitable for use in joins or
    other queries. """
    query = self.query()
    return query.subquery(name=name)
The recipe's query as a subquery suitable for use in joins or other queries.
Below is the instruction that describes the task:
### Input:
The recipe's query as a subquery suitable for use in joins or other queries.
### Response:
def subquery(self, name=None):
    """ The recipe's query as a subquery suitable for use in joins or
    other queries. """
    query = self.query()
    return query.subquery(name=name)
def apply(self, mapping):
    """Apply a mapping of name-value-pairs to a template."""
    mapping = {name: self.str(value, tolerant=self.tolerant)
               for name, value in mapping.items()
               if value is not None or self.tolerant}
    if self.tolerant:
        return self.template.safe_substitute(mapping)
    return self.template.substitute(mapping)
Apply a mapping of name-value-pairs to a template.
Below is the instruction that describes the task:
### Input:
Apply a mapping of name-value-pairs to a template.
### Response:
def apply(self, mapping):
    """Apply a mapping of name-value-pairs to a template."""
    mapping = {name: self.str(value, tolerant=self.tolerant)
               for name, value in mapping.items()
               if value is not None or self.tolerant}
    if self.tolerant:
        return self.template.safe_substitute(mapping)
    return self.template.substitute(mapping)
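The tolerant branch maps onto Python's string.Template API (consistent with the safe_substitute/substitute calls in the method, assuming self.template is such a Template): safe_substitute leaves unknown placeholders in place, while substitute raises KeyError when a name is missing.

```python
from string import Template

tpl = Template('Hello, $name! You have $count messages.')

# Tolerant path: missing names survive as literal placeholders.
print(tpl.safe_substitute({'name': 'Ada'}))
# Strict path: every placeholder must be supplied.
print(tpl.substitute({'name': 'Ada', 'count': 3}))
```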
def fromJSON(value):
    """loads the GP object from a JSON string
    """
    j = json.loads(value)
    v = GPLong()
    if "defaultValue" in j:
        v.value = j['defaultValue']
    else:
        v.value = j['value']
    if 'paramName' in j:
        v.paramName = j['paramName']
    elif 'name' in j:
        v.paramName = j['name']
    return v
loads the GP object from a JSON string
Below is the the instruction that describes the task: ### Input: loads the GP object from a JSON string ### Response: def fromJSON(value): """loads the GP object from a JSON string """ j = json.loads(value) v = GPLong() if "defaultValue" in j: v.value = j['defaultValue'] else: v.value = j['value'] if 'paramName' in j: v.paramName = j['paramName'] elif 'name' in j: v.paramName = j['name'] return v
def mapPartitions(self, f, preservesPartitioning=False):
    """
    .. note:: Experimental

    Returns a new RDD by applying a function to each partition of the
    wrapped RDD, where tasks are launched together in a barrier stage.
    The interface is the same as :func:`RDD.mapPartitions`.
    Please see the API doc there.

    .. versionadded:: 2.4.0
    """
    def func(s, iterator):
        return f(iterator)
    return PipelinedRDD(self.rdd, func, preservesPartitioning, isFromBarrier=True)
.. note:: Experimental

Returns a new RDD by applying a function to each partition of the wrapped RDD,
where tasks are launched together in a barrier stage.
The interface is the same as :func:`RDD.mapPartitions`. Please see the API doc
there.

.. versionadded:: 2.4.0
Below is the the instruction that describes the task: ### Input: .. note:: Experimental Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage. The interface is the same as :func:`RDD.mapPartitions`. Please see the API doc there. .. versionadded:: 2.4.0 ### Response: def mapPartitions(self, f, preservesPartitioning=False): """ .. note:: Experimental Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage. The interface is the same as :func:`RDD.mapPartitions`. Please see the API doc there. .. versionadded:: 2.4.0 """ def func(s, iterator): return f(iterator) return PipelinedRDD(self.rdd, func, preservesPartitioning, isFromBarrier=True)
def plot_blandaltman(x, y, agreement=1.96, confidence=.95, figsize=(5, 4), dpi=100, ax=None): """ Generate a Bland-Altman plot to compare two sets of measurements. Parameters ---------- x, y : np.array or list First and second measurements. agreement : float Multiple of the standard deviation to plot limit of agreement bounds. The defaults is 1.96. confidence : float If not ``None``, plot the specified percentage confidence interval on the mean and limits of agreement. figsize : tuple Figsize in inches dpi : int Resolution of the figure in dots per inches. ax : matplotlib axes Axis on which to draw the plot Returns ------- ax : Matplotlib Axes instance Returns the Axes object with the plot for further tweaking. Notes ----- Bland-Altman plots are extensively used to evaluate the agreement among two different instruments or two measurements techniques. Bland-Altman plots allow identification of any systematic difference between the measurements (i.e., fixed bias) or possible outliers. The mean difference is the estimated bias, and the SD of the differences measures the random fluctuations around this mean. If the mean value of the difference differs significantly from 0 on the basis of a 1-sample t-test, this indicates the presence of fixed bias. If there is a consistent bias, it can be adjusted for by subtracting the mean difference from the new method. It is common to compute 95% limits of agreement for each comparison (average difference ± 1.96 standard deviation of the difference), which tells us how far apart measurements by 2 methods were more likely to be for most individuals. If the differences within mean ± 1.96 SD are not clinically important, the two methods may be used interchangeably. The 95% limits of agreement can be unreliable estimates of the population parameters especially for small sample sizes so, when comparing methods or assessing repeatability, it is important to calculate confidence intervals for 95% limits of agreement. 
The code is an adaptation of the Python package PyCompare by Jake TM Pearce. All credits goes to the original author. The present implementation is a simplified version; please refer to the original package for more advanced functionalities. References ---------- .. [1] Bland, J. M., & Altman, D. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. The lancet, 327(8476), 307-310. .. [2] https://github.com/jaketmp/pyCompare .. [3] https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot Examples -------- Bland-Altman plot .. plot:: >>> import numpy as np >>> import pingouin as pg >>> np.random.seed(123) >>> mean, cov = [10, 11], [[1, 0.8], [0.8, 1]] >>> x, y = np.random.multivariate_normal(mean, cov, 30).T >>> ax = pg.plot_blandaltman(x, y) """ # Safety check x = np.asarray(x) y = np.asarray(y) assert x.ndim == 1 and y.ndim == 1 assert x.size == y.size n = x.size mean = np.vstack((x, y)).mean(0) diff = x - y md = diff.mean() sd = diff.std(axis=0, ddof=1) # Confidence intervals if confidence is not None: assert 0 < confidence < 1 ci = dict() ci['mean'] = stats.norm.interval(confidence, loc=md, scale=sd / np.sqrt(n)) seLoA = ((1 / n) + (agreement**2 / (2 * (n - 1)))) * (sd**2) loARange = np.sqrt(seLoA) * stats.t.ppf((1 - confidence) / 2, n - 1) ci['upperLoA'] = ((md + agreement * sd) + loARange, (md + agreement * sd) - loARange) ci['lowerLoA'] = ((md - agreement * sd) + loARange, (md - agreement * sd) - loARange) # Start the plot if ax is None: fig, ax = plt.subplots(1, 1, figsize=figsize, dpi=dpi) # Plot the mean diff, limits of agreement and scatter ax.axhline(md, color='#6495ED', linestyle='--') ax.axhline(md + agreement * sd, color='coral', linestyle='--') ax.axhline(md - agreement * sd, color='coral', linestyle='--') ax.scatter(mean, diff, alpha=0.5) loa_range = (md + (agreement * sd)) - (md - agreement * sd) offset = (loa_range / 100.0) * 1.5 trans = transforms.blended_transform_factory(ax.transAxes, ax.transData) 
ax.text(0.98, md + offset, 'Mean', ha="right", va="bottom", transform=trans) ax.text(0.98, md - offset, '%.2f' % md, ha="right", va="top", transform=trans) ax.text(0.98, md + (agreement * sd) + offset, '+%.2f SD' % agreement, ha="right", va="bottom", transform=trans) ax.text(0.98, md + (agreement * sd) - offset, '%.2f' % (md + agreement * sd), ha="right", va="top", transform=trans) ax.text(0.98, md - (agreement * sd) - offset, '-%.2f SD' % agreement, ha="right", va="top", transform=trans) ax.text(0.98, md - (agreement * sd) + offset, '%.2f' % (md - agreement * sd), ha="right", va="bottom", transform=trans) if confidence is not None: ax.axhspan(ci['mean'][0], ci['mean'][1], facecolor='#6495ED', alpha=0.2) ax.axhspan(ci['upperLoA'][0], ci['upperLoA'][1], facecolor='coral', alpha=0.2) ax.axhspan(ci['lowerLoA'][0], ci['lowerLoA'][1], facecolor='coral', alpha=0.2) # Labels and title ax.set_ylabel('Difference between methods') ax.set_xlabel('Mean of methods') ax.set_title('Bland-Altman plot') # Despine and trim sns.despine(trim=True, ax=ax) return ax
Generate a Bland-Altman plot to compare two sets of measurements. Parameters ---------- x, y : np.array or list First and second measurements. agreement : float Multiple of the standard deviation to plot limit of agreement bounds. The defaults is 1.96. confidence : float If not ``None``, plot the specified percentage confidence interval on the mean and limits of agreement. figsize : tuple Figsize in inches dpi : int Resolution of the figure in dots per inches. ax : matplotlib axes Axis on which to draw the plot Returns ------- ax : Matplotlib Axes instance Returns the Axes object with the plot for further tweaking. Notes ----- Bland-Altman plots are extensively used to evaluate the agreement among two different instruments or two measurements techniques. Bland-Altman plots allow identification of any systematic difference between the measurements (i.e., fixed bias) or possible outliers. The mean difference is the estimated bias, and the SD of the differences measures the random fluctuations around this mean. If the mean value of the difference differs significantly from 0 on the basis of a 1-sample t-test, this indicates the presence of fixed bias. If there is a consistent bias, it can be adjusted for by subtracting the mean difference from the new method. It is common to compute 95% limits of agreement for each comparison (average difference ± 1.96 standard deviation of the difference), which tells us how far apart measurements by 2 methods were more likely to be for most individuals. If the differences within mean ± 1.96 SD are not clinically important, the two methods may be used interchangeably. The 95% limits of agreement can be unreliable estimates of the population parameters especially for small sample sizes so, when comparing methods or assessing repeatability, it is important to calculate confidence intervals for 95% limits of agreement. The code is an adaptation of the Python package PyCompare by Jake TM Pearce. All credits goes to the original author. 
The present implementation is a simplified version; please refer to the original package for more advanced functionalities. References ---------- .. [1] Bland, J. M., & Altman, D. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. The lancet, 327(8476), 307-310. .. [2] https://github.com/jaketmp/pyCompare .. [3] https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot Examples -------- Bland-Altman plot .. plot:: >>> import numpy as np >>> import pingouin as pg >>> np.random.seed(123) >>> mean, cov = [10, 11], [[1, 0.8], [0.8, 1]] >>> x, y = np.random.multivariate_normal(mean, cov, 30).T >>> ax = pg.plot_blandaltman(x, y)
Below is the the instruction that describes the task: ### Input: Generate a Bland-Altman plot to compare two sets of measurements. Parameters ---------- x, y : np.array or list First and second measurements. agreement : float Multiple of the standard deviation to plot limit of agreement bounds. The defaults is 1.96. confidence : float If not ``None``, plot the specified percentage confidence interval on the mean and limits of agreement. figsize : tuple Figsize in inches dpi : int Resolution of the figure in dots per inches. ax : matplotlib axes Axis on which to draw the plot Returns ------- ax : Matplotlib Axes instance Returns the Axes object with the plot for further tweaking. Notes ----- Bland-Altman plots are extensively used to evaluate the agreement among two different instruments or two measurements techniques. Bland-Altman plots allow identification of any systematic difference between the measurements (i.e., fixed bias) or possible outliers. The mean difference is the estimated bias, and the SD of the differences measures the random fluctuations around this mean. If the mean value of the difference differs significantly from 0 on the basis of a 1-sample t-test, this indicates the presence of fixed bias. If there is a consistent bias, it can be adjusted for by subtracting the mean difference from the new method. It is common to compute 95% limits of agreement for each comparison (average difference ± 1.96 standard deviation of the difference), which tells us how far apart measurements by 2 methods were more likely to be for most individuals. If the differences within mean ± 1.96 SD are not clinically important, the two methods may be used interchangeably. The 95% limits of agreement can be unreliable estimates of the population parameters especially for small sample sizes so, when comparing methods or assessing repeatability, it is important to calculate confidence intervals for 95% limits of agreement. 
The code is an adaptation of the Python package PyCompare by Jake TM Pearce. All credits goes to the original author. The present implementation is a simplified version; please refer to the original package for more advanced functionalities. References ---------- .. [1] Bland, J. M., & Altman, D. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. The lancet, 327(8476), 307-310. .. [2] https://github.com/jaketmp/pyCompare .. [3] https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot Examples -------- Bland-Altman plot .. plot:: >>> import numpy as np >>> import pingouin as pg >>> np.random.seed(123) >>> mean, cov = [10, 11], [[1, 0.8], [0.8, 1]] >>> x, y = np.random.multivariate_normal(mean, cov, 30).T >>> ax = pg.plot_blandaltman(x, y) ### Response: def plot_blandaltman(x, y, agreement=1.96, confidence=.95, figsize=(5, 4), dpi=100, ax=None): """ Generate a Bland-Altman plot to compare two sets of measurements. Parameters ---------- x, y : np.array or list First and second measurements. agreement : float Multiple of the standard deviation to plot limit of agreement bounds. The defaults is 1.96. confidence : float If not ``None``, plot the specified percentage confidence interval on the mean and limits of agreement. figsize : tuple Figsize in inches dpi : int Resolution of the figure in dots per inches. ax : matplotlib axes Axis on which to draw the plot Returns ------- ax : Matplotlib Axes instance Returns the Axes object with the plot for further tweaking. Notes ----- Bland-Altman plots are extensively used to evaluate the agreement among two different instruments or two measurements techniques. Bland-Altman plots allow identification of any systematic difference between the measurements (i.e., fixed bias) or possible outliers. The mean difference is the estimated bias, and the SD of the differences measures the random fluctuations around this mean. 
If the mean value of the difference differs significantly from 0 on the basis of a 1-sample t-test, this indicates the presence of fixed bias. If there is a consistent bias, it can be adjusted for by subtracting the mean difference from the new method. It is common to compute 95% limits of agreement for each comparison (average difference ± 1.96 standard deviation of the difference), which tells us how far apart measurements by 2 methods were more likely to be for most individuals. If the differences within mean ± 1.96 SD are not clinically important, the two methods may be used interchangeably. The 95% limits of agreement can be unreliable estimates of the population parameters especially for small sample sizes so, when comparing methods or assessing repeatability, it is important to calculate confidence intervals for 95% limits of agreement. The code is an adaptation of the Python package PyCompare by Jake TM Pearce. All credits goes to the original author. The present implementation is a simplified version; please refer to the original package for more advanced functionalities. References ---------- .. [1] Bland, J. M., & Altman, D. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. The lancet, 327(8476), 307-310. .. [2] https://github.com/jaketmp/pyCompare .. [3] https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot Examples -------- Bland-Altman plot .. 
plot:: >>> import numpy as np >>> import pingouin as pg >>> np.random.seed(123) >>> mean, cov = [10, 11], [[1, 0.8], [0.8, 1]] >>> x, y = np.random.multivariate_normal(mean, cov, 30).T >>> ax = pg.plot_blandaltman(x, y) """ # Safety check x = np.asarray(x) y = np.asarray(y) assert x.ndim == 1 and y.ndim == 1 assert x.size == y.size n = x.size mean = np.vstack((x, y)).mean(0) diff = x - y md = diff.mean() sd = diff.std(axis=0, ddof=1) # Confidence intervals if confidence is not None: assert 0 < confidence < 1 ci = dict() ci['mean'] = stats.norm.interval(confidence, loc=md, scale=sd / np.sqrt(n)) seLoA = ((1 / n) + (agreement**2 / (2 * (n - 1)))) * (sd**2) loARange = np.sqrt(seLoA) * stats.t.ppf((1 - confidence) / 2, n - 1) ci['upperLoA'] = ((md + agreement * sd) + loARange, (md + agreement * sd) - loARange) ci['lowerLoA'] = ((md - agreement * sd) + loARange, (md - agreement * sd) - loARange) # Start the plot if ax is None: fig, ax = plt.subplots(1, 1, figsize=figsize, dpi=dpi) # Plot the mean diff, limits of agreement and scatter ax.axhline(md, color='#6495ED', linestyle='--') ax.axhline(md + agreement * sd, color='coral', linestyle='--') ax.axhline(md - agreement * sd, color='coral', linestyle='--') ax.scatter(mean, diff, alpha=0.5) loa_range = (md + (agreement * sd)) - (md - agreement * sd) offset = (loa_range / 100.0) * 1.5 trans = transforms.blended_transform_factory(ax.transAxes, ax.transData) ax.text(0.98, md + offset, 'Mean', ha="right", va="bottom", transform=trans) ax.text(0.98, md - offset, '%.2f' % md, ha="right", va="top", transform=trans) ax.text(0.98, md + (agreement * sd) + offset, '+%.2f SD' % agreement, ha="right", va="bottom", transform=trans) ax.text(0.98, md + (agreement * sd) - offset, '%.2f' % (md + agreement * sd), ha="right", va="top", transform=trans) ax.text(0.98, md - (agreement * sd) - offset, '-%.2f SD' % agreement, ha="right", va="top", transform=trans) ax.text(0.98, md - (agreement * sd) + offset, '%.2f' % (md - agreement * sd), 
ha="right", va="bottom", transform=trans) if confidence is not None: ax.axhspan(ci['mean'][0], ci['mean'][1], facecolor='#6495ED', alpha=0.2) ax.axhspan(ci['upperLoA'][0], ci['upperLoA'][1], facecolor='coral', alpha=0.2) ax.axhspan(ci['lowerLoA'][0], ci['lowerLoA'][1], facecolor='coral', alpha=0.2) # Labels and title ax.set_ylabel('Difference between methods') ax.set_xlabel('Mean of methods') ax.set_title('Bland-Altman plot') # Despine and trim sns.despine(trim=True, ax=ax) return ax
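The statistics the plot draws (mean difference and limits of agreement) can be checked without matplotlib; a minimal stdlib-only sketch (the function name and the sample data below are made up for illustration):

```python
import math

def bland_altman_stats(x, y, agreement=1.96):
    """Return (mean difference, lower LoA, upper LoA) for paired measurements."""
    n = len(x)
    diffs = [a - b for a, b in zip(x, y)]
    md = sum(diffs) / n
    # sample standard deviation (ddof=1), matching diff.std(ddof=1) above
    sd = math.sqrt(sum((d - md) ** 2 for d in diffs) / (n - 1))
    return md, md - agreement * sd, md + agreement * sd

md, lo, hi = bland_altman_stats([10.0, 11.0, 12.0, 13.0],
                                [10.5, 10.5, 12.5, 12.5])
```

These three numbers are exactly the horizontal dashed lines the function draws with `ax.axhline`.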
def extend(self, protocol: Union[Iterable[Dict], 'Pipeline']) -> 'Pipeline':
    """Add another pipeline to the end of the current pipeline.

    :param protocol: An iterable of dictionaries (or another Pipeline)
    :return: This pipeline for fluid query building

    Example:
    >>> p1 = Pipeline.from_functions(['enrich_protein_and_rna_origins'])
    >>> p2 = Pipeline.from_functions(['remove_pathologies'])
    >>> p1.extend(p2)
    """
    for data in protocol:
        name, args, kwargs = _get_protocol_tuple(data)
        self.append(name, *args, **kwargs)
    return self

Add another pipeline to the end of the current pipeline.

:param protocol: An iterable of dictionaries (or another Pipeline)
:return: This pipeline for fluid query building

Example:
>>> p1 = Pipeline.from_functions(['enrich_protein_and_rna_origins'])
>>> p2 = Pipeline.from_functions(['remove_pathologies'])
>>> p1.extend(p2)

Below is the instruction that describes the task:
### Input:
Add another pipeline to the end of the current pipeline.

:param protocol: An iterable of dictionaries (or another Pipeline)
:return: This pipeline for fluid query building

Example:
>>> p1 = Pipeline.from_functions(['enrich_protein_and_rna_origins'])
>>> p2 = Pipeline.from_functions(['remove_pathologies'])
>>> p1.extend(p2)
### Response:
def extend(self, protocol: Union[Iterable[Dict], 'Pipeline']) -> 'Pipeline':
    """Add another pipeline to the end of the current pipeline.

    :param protocol: An iterable of dictionaries (or another Pipeline)
    :return: This pipeline for fluid query building

    Example:
    >>> p1 = Pipeline.from_functions(['enrich_protein_and_rna_origins'])
    >>> p2 = Pipeline.from_functions(['remove_pathologies'])
    >>> p1.extend(p2)
    """
    for data in protocol:
        name, args, kwargs = _get_protocol_tuple(data)
        self.append(name, *args, **kwargs)
    return self
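The `extend` method above consumes a protocol of step dictionaries and re-appends each step. A toy stand-in (class name and dict keys here are invented, not the real Pipeline API) shows the append-per-entry pattern:

```python
class MiniPipeline:
    """Toy stand-in for the Pipeline above: stores (name, args, kwargs) steps."""

    def __init__(self):
        self.protocol = []

    def append(self, name, *args, **kwargs):
        self.protocol.append((name, args, kwargs))
        return self

    def __iter__(self):
        # yield dicts shaped like the protocol entries extend() consumes
        for name, args, kwargs in self.protocol:
            yield {'function': name, 'args': list(args), 'kwargs': kwargs}

    def extend(self, other):
        for data in other:
            self.append(data['function'], *data['args'], **data['kwargs'])
        return self

p1 = MiniPipeline().append('enrich')
p2 = MiniPipeline().append('remove', 'pathologies', strict=True)
p1.extend(p2)
```

Returning `self` from both `append` and `extend` is what makes the fluid, chainable query building in the docstring possible.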
def _select_row_by_column_value(tree_view, list_store, column, value):
    """Helper method to select a tree view row

    :param Gtk.TreeView tree_view: Tree view whose row is to be selected
    :param Gtk.ListStore list_store: List store of the tree view
    :param int column: Column in which the value is searched
    :param value: Value to search for
    :returns: Row of list store that has been selected
    :rtype: int
    """
    for row_num, iter_elem in enumerate(list_store):
        if iter_elem[column] == value:
            tree_view.set_cursor(row_num)
            return row_num

Helper method to select a tree view row

:param Gtk.TreeView tree_view: Tree view whose row is to be selected
:param Gtk.ListStore list_store: List store of the tree view
:param int column: Column in which the value is searched
:param value: Value to search for
:returns: Row of list store that has been selected
:rtype: int

Below is the instruction that describes the task:
### Input:
Helper method to select a tree view row

:param Gtk.TreeView tree_view: Tree view whose row is to be selected
:param Gtk.ListStore list_store: List store of the tree view
:param int column: Column in which the value is searched
:param value: Value to search for
:returns: Row of list store that has been selected
:rtype: int
### Response:
def _select_row_by_column_value(tree_view, list_store, column, value):
    """Helper method to select a tree view row

    :param Gtk.TreeView tree_view: Tree view whose row is to be selected
    :param Gtk.ListStore list_store: List store of the tree view
    :param int column: Column in which the value is searched
    :param value: Value to search for
    :returns: Row of list store that has been selected
    :rtype: int
    """
    for row_num, iter_elem in enumerate(list_store):
        if iter_elem[column] == value:
            tree_view.set_cursor(row_num)
            return row_num
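The row-search loop is independent of Gtk; a pure-Python analogue (hypothetical helper, with the tree-view cursor call dropped) isolates the first-match-wins behaviour:

```python
def select_row_by_column_value(rows, column, value):
    """Pure-Python analogue of the Gtk helper above: return the index of the
    first row whose `column`-th cell equals `value`, or None if no row matches."""
    for row_num, row in enumerate(rows):
        if row[column] == value:
            return row_num
    return None

rows = [('a', 1), ('b', 2), ('c', 2)]
idx = select_row_by_column_value(rows, 1, 2)
```

As in the original, only the first matching row wins; unlike the original, a miss returns an explicit `None` rather than falling off the end of the function.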
def _getRole(self, matchedVars):
    """
    :param matchedVars:
    :return: NULL or the role's integer value
    """
    role = matchedVars.get(ROLE)
    if role is not None and role.strip() == '':
        role = NULL
    else:
        valid = Authoriser.isValidRoleName(role)
        if valid:
            role = Authoriser.getRoleFromName(role)
        else:
            self.print("Invalid role. Valid roles are: {}".
                       format(", ".join(map(lambda r: r.name, Roles))),
                       Token.Error)
            return False
    return role

:param matchedVars:
:return: NULL or the role's integer value

Below is the instruction that describes the task:
### Input:
:param matchedVars:
:return: NULL or the role's integer value
### Response:
def _getRole(self, matchedVars):
    """
    :param matchedVars:
    :return: NULL or the role's integer value
    """
    role = matchedVars.get(ROLE)
    if role is not None and role.strip() == '':
        role = NULL
    else:
        valid = Authoriser.isValidRoleName(role)
        if valid:
            role = Authoriser.getRoleFromName(role)
        else:
            self.print("Invalid role. Valid roles are: {}".
                       format(", ".join(map(lambda r: r.name, Roles))),
                       Token.Error)
            return False
    return role
def get_language(self):
    """\
    Returns the language used by the article or the configuration language
    """
    # we don't want to force the target language
    # so we use the article.meta_lang
    if self.config.use_meta_language:
        if self.article.meta_lang:
            return self.article.meta_lang[:2]
    return self.config.target_language

\
Returns the language used by the article or the configuration language

Below is the instruction that describes the task:
### Input:
\
Returns the language used by the article or the configuration language
### Response:
def get_language(self):
    """\
    Returns the language used by the article or the configuration language
    """
    # we don't want to force the target language
    # so we use the article.meta_lang
    if self.config.use_meta_language:
        if self.article.meta_lang:
            return self.article.meta_lang[:2]
    return self.config.target_language
def buildRootname(filename, ext=None): """ Build a new rootname for an existing file and given extension. Any user supplied extensions to use for searching for file need to be provided as a list of extensions. Examples -------- :: >>> rootname = buildRootname(filename, ext=['_dth.fits']) # doctest: +SKIP """ if filename in ['' ,' ', None]: return None fpath, fname = os.path.split(filename) if ext is not None and '_' in ext[0]: froot = os.path.splitext(fname)[0].split('_')[0] else: froot = fname if fpath in ['', ' ', None]: fpath = os.curdir # Get complete list of filenames from current directory flist = os.listdir(fpath) #First, assume given filename is complete and verify # it exists... rootname = None for name in flist: if name == froot: rootname = froot break elif name == froot + '.fits': rootname = froot + '.fits' break # If we have an incomplete filename, try building a default # name and seeing if it exists... # # Set up default list of suffix/extensions to add to rootname _extlist = [] for extn in EXTLIST: _extlist.append(extn) if rootname is None: # Add any user-specified extension to list of extensions... if ext is not None: for i in ext: _extlist.insert(0,i) # loop over all extensions looking for a filename that matches... for extn in _extlist: # Start by looking for filename with exactly # the same case a provided in ASN table... rname = froot + extn for name in flist: if rname == name: rootname = name break if rootname is None: # Try looking for all lower-case filename # instead of a mixed-case filename as required # by the pipeline. rname = froot.lower() + extn for name in flist: if rname == name: rootname = name break if rootname is not None: break # If we still haven't found the file, see if we have the # info to build one... if rootname is None and ext is not None: # Check to see if we have a full filename to start with... 
_indx = froot.find('.') if _indx > 0: rootname = froot[:_indx] + ext[0] else: rootname = froot + ext[0] if fpath not in ['.', '', ' ', None]: rootname = os.path.join(fpath, rootname) # It will be up to the calling routine to verify # that a valid rootname, rather than 'None', was returned. return rootname
Build a new rootname for an existing file and given extension.

Any user supplied extensions to use for searching for file need to be
provided as a list of extensions.

Examples
--------

::

    >>> rootname = buildRootname(filename, ext=['_dth.fits'])  # doctest: +SKIP
Below is the the instruction that describes the task: ### Input: Build a new rootname for an existing file and given extension. Any user supplied extensions to use for searching for file need to be provided as a list of extensions. Examples -------- :: >>> rootname = buildRootname(filename, ext=['_dth.fits']) # doctest: +SKIP ### Response: def buildRootname(filename, ext=None): """ Build a new rootname for an existing file and given extension. Any user supplied extensions to use for searching for file need to be provided as a list of extensions. Examples -------- :: >>> rootname = buildRootname(filename, ext=['_dth.fits']) # doctest: +SKIP """ if filename in ['' ,' ', None]: return None fpath, fname = os.path.split(filename) if ext is not None and '_' in ext[0]: froot = os.path.splitext(fname)[0].split('_')[0] else: froot = fname if fpath in ['', ' ', None]: fpath = os.curdir # Get complete list of filenames from current directory flist = os.listdir(fpath) #First, assume given filename is complete and verify # it exists... rootname = None for name in flist: if name == froot: rootname = froot break elif name == froot + '.fits': rootname = froot + '.fits' break # If we have an incomplete filename, try building a default # name and seeing if it exists... # # Set up default list of suffix/extensions to add to rootname _extlist = [] for extn in EXTLIST: _extlist.append(extn) if rootname is None: # Add any user-specified extension to list of extensions... if ext is not None: for i in ext: _extlist.insert(0,i) # loop over all extensions looking for a filename that matches... for extn in _extlist: # Start by looking for filename with exactly # the same case a provided in ASN table... rname = froot + extn for name in flist: if rname == name: rootname = name break if rootname is None: # Try looking for all lower-case filename # instead of a mixed-case filename as required # by the pipeline. 
rname = froot.lower() + extn for name in flist: if rname == name: rootname = name break if rootname is not None: break # If we still haven't found the file, see if we have the # info to build one... if rootname is None and ext is not None: # Check to see if we have a full filename to start with... _indx = froot.find('.') if _indx > 0: rootname = froot[:_indx] + ext[0] else: rootname = froot + ext[0] if fpath not in ['.', '', ' ', None]: rootname = os.path.join(fpath, rootname) # It will be up to the calling routine to verify # that a valid rootname, rather than 'None', was returned. return rootname
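The heart of `buildRootname` is the extension-search loop, including the all-lower-case fallback. It can be exercised against an in-memory directory listing; a simplified sketch (function name and sample filenames are made up):

```python
def find_rootname(froot, flist, extensions):
    """Simplified sketch of the search loop above: try each extension against
    a directory listing, preferring an exact-case match and falling back to
    an all-lower-case match, as the pipeline-style filenames require."""
    for extn in extensions:
        # exact case first, as provided in the ASN table
        if froot + extn in flist:
            return froot + extn
        # then the lower-case variant
        if froot.lower() + extn in flist:
            return froot.lower() + extn
    return None

flist = ['j94f05bgq_dth.fits', 'other.txt']
name = find_rootname('J94F05BGQ', flist, ['_dth.fits'])
```

The real function additionally prepends a default `EXTLIST` of suffixes and checks the bare filename first; this sketch covers only the per-extension matching step.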
def display(self, format="png"): """ Return an object that can be used to display this sequence. This is used for IPython Notebook. :param format: "png" or "svg" """ from sebastian.core.transforms import lilypond seq = HSeq(self) | lilypond() lily_output = write_lilypond.lily_format(seq) if not lily_output.strip(): #In the case of empty lily outputs, return self to get a textual display return self if format == "png": suffix = ".preview.png" args = ["lilypond", "--png", "-dno-print-pages", "-dpreview"] elif format == "svg": suffix = ".preview.svg" args = ["lilypond", "-dbackend=svg", "-dno-print-pages", "-dpreview"] f = tempfile.NamedTemporaryFile(suffix=suffix) basename = f.name[:-len(suffix)] args.extend(["-o" + basename, "-"]) #Pass shell=True so that if your $PATH contains ~ it will #get expanded. This also changes the way the arguments get #passed in. To work correctly, pass them as a string p = sp.Popen(" ".join(args), stdin=sp.PIPE, shell=True) stdout, stderr = p.communicate("{ %s }" % lily_output) if p.returncode != 0: # there was an error #raise IOError("Lilypond execution failed: %s%s" % (stdout, stderr)) return None if not ipython: return f.read() if format == "png": return Image(data=f.read(), filename=f.name, format="png") else: return SVG(data=f.read(), filename=f.name)
Return an object that can be used to display this sequence.

This is used for IPython Notebook.

:param format: "png" or "svg"
Below is the the instruction that describes the task: ### Input: Return an object that can be used to display this sequence. This is used for IPython Notebook. :param format: "png" or "svg" ### Response: def display(self, format="png"): """ Return an object that can be used to display this sequence. This is used for IPython Notebook. :param format: "png" or "svg" """ from sebastian.core.transforms import lilypond seq = HSeq(self) | lilypond() lily_output = write_lilypond.lily_format(seq) if not lily_output.strip(): #In the case of empty lily outputs, return self to get a textual display return self if format == "png": suffix = ".preview.png" args = ["lilypond", "--png", "-dno-print-pages", "-dpreview"] elif format == "svg": suffix = ".preview.svg" args = ["lilypond", "-dbackend=svg", "-dno-print-pages", "-dpreview"] f = tempfile.NamedTemporaryFile(suffix=suffix) basename = f.name[:-len(suffix)] args.extend(["-o" + basename, "-"]) #Pass shell=True so that if your $PATH contains ~ it will #get expanded. This also changes the way the arguments get #passed in. To work correctly, pass them as a string p = sp.Popen(" ".join(args), stdin=sp.PIPE, shell=True) stdout, stderr = p.communicate("{ %s }" % lily_output) if p.returncode != 0: # there was an error #raise IOError("Lilypond execution failed: %s%s" % (stdout, stderr)) return None if not ipython: return f.read() if format == "png": return Image(data=f.read(), filename=f.name, format="png") else: return SVG(data=f.read(), filename=f.name)
def playerid_reverse_lookup(player_ids, key_type=None):
    """Retrieve a table of player information given a list of player ids

    :param player_ids: list of player ids
    :type player_ids: list
    :param key_type: name of the key type being looked up (one of "mlbam",
        "retro", "bbref", or "fangraphs")
    :type key_type: str
    :rtype: :class:`pandas.core.frame.DataFrame`
    """
    key_types = ('mlbam', 'retro', 'bbref', 'fangraphs', )

    if not key_type:
        key_type = key_types[0]  # default is "mlbam" if key_type not provided
    elif key_type not in key_types:
        raise ValueError(
            '[Key Type: {}] Invalid; Key Type must be one of "{}"'.format(key_type, '", "'.join(key_types))
        )

    table = get_lookup_table()
    key = 'key_{}'.format(key_type)

    results = table[table[key].isin(player_ids)]
    results = results.reset_index().drop('index', 1)
    return results

Retrieve a table of player information given a list of player ids

:param player_ids: list of player ids
:type player_ids: list
:param key_type: name of the key type being looked up (one of "mlbam",
    "retro", "bbref", or "fangraphs")
:type key_type: str
:rtype: :class:`pandas.core.frame.DataFrame`

Below is the instruction that describes the task:
### Input:
Retrieve a table of player information given a list of player ids

:param player_ids: list of player ids
:type player_ids: list
:param key_type: name of the key type being looked up (one of "mlbam",
    "retro", "bbref", or "fangraphs")
:type key_type: str
:rtype: :class:`pandas.core.frame.DataFrame`
### Response:
def playerid_reverse_lookup(player_ids, key_type=None):
    """Retrieve a table of player information given a list of player ids

    :param player_ids: list of player ids
    :type player_ids: list
    :param key_type: name of the key type being looked up (one of "mlbam",
        "retro", "bbref", or "fangraphs")
    :type key_type: str
    :rtype: :class:`pandas.core.frame.DataFrame`
    """
    key_types = ('mlbam', 'retro', 'bbref', 'fangraphs', )

    if not key_type:
        key_type = key_types[0]  # default is "mlbam" if key_type not provided
    elif key_type not in key_types:
        raise ValueError(
            '[Key Type: {}] Invalid; Key Type must be one of "{}"'.format(key_type, '", "'.join(key_types))
        )

    table = get_lookup_table()
    key = 'key_{}'.format(key_type)

    results = table[table[key].isin(player_ids)]
    results = results.reset_index().drop('index', 1)
    return results
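The same isin-style filtering can be sketched without pandas, using a list of dicts as a stand-in lookup table (the rows and key values below are fabricated for illustration):

```python
def reverse_lookup(table, player_ids, key_type='mlbam'):
    """Sketch of the lookup above with a plain list of dicts instead of a
    pandas DataFrame: validate the key type, then keep matching rows."""
    key_types = ('mlbam', 'retro', 'bbref', 'fangraphs')
    if key_type not in key_types:
        raise ValueError('Key Type must be one of %s' % (key_types,))
    key = 'key_%s' % key_type
    wanted = set(player_ids)  # set membership mirrors DataFrame.isin
    return [row for row in table if row[key] in wanted]

table = [
    {'name': 'Player A', 'key_mlbam': 1, 'key_retro': 'a001'},
    {'name': 'Player B', 'key_mlbam': 2, 'key_retro': 'b001'},
]
hits = reverse_lookup(table, [2])
```

Converting `player_ids` to a set up front keeps the membership test O(1) per row, which is also what `isin` does internally for a list of ids.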
def acquire(self, _filter=None, default=None):
    """
    acquire(_filter=None, default=None)

    Claims a resource from the pool for manual use. Resources are created
    as needed when all members of the pool are claimed or the pool is empty.
    Most of the time you will want to use :meth:`transaction`.

    :param _filter: a filter that can be used to select a member of the pool
    :type _filter: callable
    :param default: a value that will be used instead of calling
        :meth:`create_resource` if a new resource needs to be created
    :rtype: Resource
    """
    if not _filter:
        def _filter(obj):
            return True
    elif not callable(_filter):
        raise TypeError("_filter is not a callable")

    resource = None
    with self.lock:
        for e in self.resources:
            if not e.claimed and _filter(e.object):
                resource = e
                break
        if resource is None:
            if default is not None:
                resource = Resource(default, self)
            else:
                resource = Resource(self.create_resource(), self)
            self.resources.append(resource)
        resource.claimed = True
    return resource

acquire(_filter=None, default=None)

Claims a resource from the pool for manual use. Resources are created
as needed when all members of the pool are claimed or the pool is empty.
Most of the time you will want to use :meth:`transaction`.

:param _filter: a filter that can be used to select a member of the pool
:type _filter: callable
:param default: a value that will be used instead of calling
    :meth:`create_resource` if a new resource needs to be created
:rtype: Resource

Below is the instruction that describes the task:
### Input:
acquire(_filter=None, default=None)

Claims a resource from the pool for manual use. Resources are created
as needed when all members of the pool are claimed or the pool is empty.
Most of the time you will want to use :meth:`transaction`.

:param _filter: a filter that can be used to select a member of the pool
:type _filter: callable
:param default: a value that will be used instead of calling
    :meth:`create_resource` if a new resource needs to be created
:rtype: Resource
### Response:
def acquire(self, _filter=None, default=None):
    """
    acquire(_filter=None, default=None)

    Claims a resource from the pool for manual use. Resources are created
    as needed when all members of the pool are claimed or the pool is empty.
    Most of the time you will want to use :meth:`transaction`.

    :param _filter: a filter that can be used to select a member of the pool
    :type _filter: callable
    :param default: a value that will be used instead of calling
        :meth:`create_resource` if a new resource needs to be created
    :rtype: Resource
    """
    if not _filter:
        def _filter(obj):
            return True
    elif not callable(_filter):
        raise TypeError("_filter is not a callable")

    resource = None
    with self.lock:
        for e in self.resources:
            if not e.claimed and _filter(e.object):
                resource = e
                break
        if resource is None:
            if default is not None:
                resource = Resource(default, self)
            else:
                resource = Resource(self.create_resource(), self)
            self.resources.append(resource)
        resource.claimed = True
    return resource
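The claim-or-create logic under the lock reduces to a few lines; a minimal thread-safe sketch (class and method names invented, with no `Resource` wrapper) shows the pattern of scanning for an unclaimed entry before creating a new one:

```python
import threading

class MiniPool:
    """Minimal sketch of the claim-or-create logic above."""

    def __init__(self, factory):
        self.factory = factory
        self.lock = threading.Lock()
        self.resources = []  # list of [claimed, object] pairs

    def acquire(self, _filter=lambda obj: True):
        with self.lock:
            # reuse an unclaimed resource that passes the filter, if any
            for entry in self.resources:
                if not entry[0] and _filter(entry[1]):
                    entry[0] = True
                    return entry[1]
            # otherwise grow the pool
            entry = [True, self.factory()]
            self.resources.append(entry)
            return entry[1]

    def release(self, obj):
        with self.lock:
            for entry in self.resources:
                if entry[1] is obj:
                    entry[0] = False

pool = MiniPool(factory=list)
a = pool.acquire()
pool.release(a)
b = pool.acquire()  # reuses the released resource instead of creating one
```

Doing both the scan and the append under one lock is what makes the pool safe: two threads can never claim the same entry or race to create duplicates for the same slot.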
def read_files(filenames, with_name=False):
    """Read many files."""
    text = [read_file(filename) for filename in filenames]
    if with_name:
        return dict(zip(filenames, text))
    return text

Read many files.

Below is the instruction that describes the task:
### Input:
Read many files.
### Response:
def read_files(filenames, with_name=False):
    """Read many files."""
    text = [read_file(filename) for filename in filenames]
    if with_name:
        return dict(zip(filenames, text))
    return text
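`read_files` relies on a `read_file` sibling that is not shown; a self-contained run with a plausible `read_file` and throwaway files demonstrates both return shapes:

```python
import os
import tempfile

def read_file(path):
    # plausible stand-in for the unshown sibling helper
    with open(path) as handle:
        return handle.read()

def read_files(filenames, with_name=False):
    """Read many files."""
    text = [read_file(filename) for filename in filenames]
    if with_name:
        return dict(zip(filenames, text))
    return text

with tempfile.TemporaryDirectory() as tmp:
    paths = []
    for i, content in enumerate(['alpha', 'beta']):
        path = os.path.join(tmp, 'f%d.txt' % i)
        with open(path, 'w') as handle:
            handle.write(content)
        paths.append(path)
    as_list = read_files(paths)               # contents in input order
    as_dict = read_files(paths, with_name=True)  # filename -> contents
```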
def coord_from_area(self, x, y, lat, lon, width, ground_width):
    '''return (lat,lon) for a pixel in an area image'''
    pixel_width = ground_width / float(width)
    dx = x * pixel_width
    dy = y * pixel_width
    return mp_util.gps_offset(lat, lon, dx, -dy)

return (lat,lon) for a pixel in an area image

Below is the instruction that describes the task:
### Input:
return (lat,lon) for a pixel in an area image
### Response:
def coord_from_area(self, x, y, lat, lon, width, ground_width):
    '''return (lat,lon) for a pixel in an area image'''
    pixel_width = ground_width / float(width)
    dx = x * pixel_width
    dy = y * pixel_width
    return mp_util.gps_offset(lat, lon, dx, -dy)
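`mp_util.gps_offset` converts a metre offset into a lat/lon shift. A flat-earth stand-in (adequate only for small offsets; the real MAVProxy implementation may differ) makes the pixel arithmetic runnable, with `self` dropped:

```python
import math

EARTH_RADIUS = 6371000.0  # metres, spherical approximation

def gps_offset(lat, lon, east, north):
    """Flat-earth stand-in for mp_util.gps_offset: shift a position by
    metres east/north. Only a reasonable approximation for small offsets."""
    dlat = north / EARTH_RADIUS
    dlon = east / (EARTH_RADIUS * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

def coord_from_area(x, y, lat, lon, width, ground_width):
    """return (lat,lon) for a pixel in an area image"""
    pixel_width = ground_width / float(width)  # metres per pixel
    # y grows downward in image coordinates, hence the negated north offset
    return gps_offset(lat, lon, x * pixel_width, -y * pixel_width)

# pixel (0, 0) maps back to the image origin
origin = coord_from_area(0, 0, 51.5, -0.1, 1024, 5000.0)
lat2, lon2 = coord_from_area(512, 0, 51.5, -0.1, 1024, 5000.0)
```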
def convert_units_to_base_units(units):
    """Convert a set of units into a set of "base" units.

    Returns a 2-tuple of `factor, new_units`.
    """
    total_factor = 1
    new_units = []
    for unit in units:
        if unit not in BASE_UNIT_CONVERSIONS:
            continue
        factor, new_unit = BASE_UNIT_CONVERSIONS[unit]
        total_factor *= factor
        new_units.append(new_unit)
    new_units.sort()
    return total_factor, tuple(new_units)

Convert a set of units into a set of "base" units.

Returns a 2-tuple of `factor, new_units`.

Below is the instruction that describes the task:
### Input:
Convert a set of units into a set of "base" units.

Returns a 2-tuple of `factor, new_units`.
### Response:
def convert_units_to_base_units(units):
    """Convert a set of units into a set of "base" units.

    Returns a 2-tuple of `factor, new_units`.
    """
    total_factor = 1
    new_units = []
    for unit in units:
        if unit not in BASE_UNIT_CONVERSIONS:
            continue
        factor, new_unit = BASE_UNIT_CONVERSIONS[unit]
        total_factor *= factor
        new_units.append(new_unit)
    new_units.sort()
    return total_factor, tuple(new_units)
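Because `BASE_UNIT_CONVERSIONS` is defined elsewhere, a runnable sketch needs a hypothetical table (the entries below are invented). Note the quirk that unknown units are silently dropped rather than passed through:

```python
# Hypothetical conversion table: unit -> (multiplicative factor, base unit).
BASE_UNIT_CONVERSIONS = {
    'km': (1000, 'm'),
    'h': (3600, 's'),
}

def convert_units_to_base_units(units):
    """Fold every convertible unit into its base unit, accumulating the
    combined scale factor; unconvertible units are skipped entirely."""
    total_factor = 1
    new_units = []
    for unit in units:
        if unit not in BASE_UNIT_CONVERSIONS:
            continue  # silently dropped, not passed through
        factor, new_unit = BASE_UNIT_CONVERSIONS[unit]
        total_factor *= factor
        new_units.append(new_unit)
    new_units.sort()
    return total_factor, tuple(new_units)

factor, units = convert_units_to_base_units(['km', 'h'])
```

Sorting the base units gives a canonical tuple, so two unit sets that reduce to the same base units compare equal regardless of input order.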
def write_24bit_uint(self, n):
    """
    Writes a 24 bit unsigned integer to the stream.

    @since: 0.4
    @param n: 24 bit unsigned integer
    @type n: C{int}
    @raise TypeError: Unexpected type for int C{n}.
    @raise OverflowError: Not in range.
    """
    if type(n) not in python.int_types:
        raise TypeError('expected an int (got:%r)' % (type(n),))

    if not 0 <= n <= 0xffffff:
        raise OverflowError("n is out of range")

    order = None

    if not self._is_big_endian():
        order = [0, 8, 16]
    else:
        order = [16, 8, 0]

    for x in order:
        self.write_uchar((n >> x) & 0xff)

Writes a 24 bit unsigned integer to the stream.

@since: 0.4
@param n: 24 bit unsigned integer
@type n: C{int}
@raise TypeError: Unexpected type for int C{n}.
@raise OverflowError: Not in range.

Below is the instruction that describes the task:
### Input:
Writes a 24 bit unsigned integer to the stream.

@since: 0.4
@param n: 24 bit unsigned integer
@type n: C{int}
@raise TypeError: Unexpected type for int C{n}.
@raise OverflowError: Not in range.
### Response:
def write_24bit_uint(self, n):
    """
    Writes a 24 bit unsigned integer to the stream.

    @since: 0.4
    @param n: 24 bit unsigned integer
    @type n: C{int}
    @raise TypeError: Unexpected type for int C{n}.
    @raise OverflowError: Not in range.
    """
    if type(n) not in python.int_types:
        raise TypeError('expected an int (got:%r)' % (type(n),))

    if not 0 <= n <= 0xffffff:
        raise OverflowError("n is out of range")

    order = None

    if not self._is_big_endian():
        order = [0, 8, 16]
    else:
        order = [16, 8, 0]

    for x in order:
        self.write_uchar((n >> x) & 0xff)
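The shift-order trick generalises to a standalone packer; a sketch without the stream object (function name invented) shows that reversing the shift list is all that distinguishes the two byte orders:

```python
def pack_24bit_uint(n, big_endian=True):
    """Standalone version of the byte loop above: pack n into three bytes,
    most-significant byte first for big-endian, last for little-endian."""
    if not 0 <= n <= 0xffffff:
        raise OverflowError("n is out of range")
    shifts = [16, 8, 0] if big_endian else [0, 8, 16]
    return bytes((n >> s) & 0xff for s in shifts)

be = pack_24bit_uint(0x123456)
le = pack_24bit_uint(0x123456, big_endian=False)
```

The standard library's `struct` module has no 3-byte format code, which is why streams like this one hand-roll the shifting.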
def create_instruction( self, parent, instruction, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Creates an instruction for how data should be labeled. Example: >>> from google.cloud import datalabeling_v1beta1 >>> >>> client = datalabeling_v1beta1.DataLabelingServiceClient() >>> >>> parent = client.project_path('[PROJECT]') >>> >>> # TODO: Initialize `instruction`: >>> instruction = {} >>> >>> response = client.create_instruction(parent, instruction) >>> >>> def callback(operation_future): ... # Handle result. ... result = operation_future.result() >>> >>> response.add_done_callback(callback) >>> >>> # Handle metadata. >>> metadata = response.metadata() Args: parent (str): Required. Instruction resource parent, format: projects/{project\_id} instruction (Union[dict, ~google.cloud.datalabeling_v1beta1.types.Instruction]): Required. Instruction of how to perform the labeling task. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.datalabeling_v1beta1.types.Instruction` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.datalabeling_v1beta1.types._OperationFuture` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """ # Wrap the transport method to add retry and timeout logic. 
if "create_instruction" not in self._inner_api_calls: self._inner_api_calls[ "create_instruction" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.create_instruction, default_retry=self._method_configs["CreateInstruction"].retry, default_timeout=self._method_configs["CreateInstruction"].timeout, client_info=self._client_info, ) request = data_labeling_service_pb2.CreateInstructionRequest( parent=parent, instruction=instruction ) if metadata is None: metadata = [] metadata = list(metadata) try: routing_header = [("parent", parent)] except AttributeError: pass else: routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( routing_header ) metadata.append(routing_metadata) operation = self._inner_api_calls["create_instruction"]( request, retry=retry, timeout=timeout, metadata=metadata ) return google.api_core.operation.from_gapic( operation, self.transport._operations_client, instruction_pb2.Instruction, metadata_type=proto_operations_pb2.CreateInstructionMetadata, )
Creates an instruction for how data should be labeled. Example: >>> from google.cloud import datalabeling_v1beta1 >>> >>> client = datalabeling_v1beta1.DataLabelingServiceClient() >>> >>> parent = client.project_path('[PROJECT]') >>> >>> # TODO: Initialize `instruction`: >>> instruction = {} >>> >>> response = client.create_instruction(parent, instruction) >>> >>> def callback(operation_future): ... # Handle result. ... result = operation_future.result() >>> >>> response.add_done_callback(callback) >>> >>> # Handle metadata. >>> metadata = response.metadata() Args: parent (str): Required. Instruction resource parent, format: projects/{project\_id} instruction (Union[dict, ~google.cloud.datalabeling_v1beta1.types.Instruction]): Required. Instruction of how to perform the labeling task. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.datalabeling_v1beta1.types.Instruction` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.datalabeling_v1beta1.types._OperationFuture` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid.
Below is the instruction that describes the task: ### Input: Creates an instruction for how data should be labeled. Example: >>> from google.cloud import datalabeling_v1beta1 >>> >>> client = datalabeling_v1beta1.DataLabelingServiceClient() >>> >>> parent = client.project_path('[PROJECT]') >>> >>> # TODO: Initialize `instruction`: >>> instruction = {} >>> >>> response = client.create_instruction(parent, instruction) >>> >>> def callback(operation_future): ... # Handle result. ... result = operation_future.result() >>> >>> response.add_done_callback(callback) >>> >>> # Handle metadata. >>> metadata = response.metadata() Args: parent (str): Required. Instruction resource parent, format: projects/{project\_id} instruction (Union[dict, ~google.cloud.datalabeling_v1beta1.types.Instruction]): Required. Instruction of how to perform the labeling task. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.datalabeling_v1beta1.types.Instruction` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.datalabeling_v1beta1.types._OperationFuture` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. ### Response: def create_instruction( self, parent, instruction, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Creates an instruction for how data should be labeled. 
Example: >>> from google.cloud import datalabeling_v1beta1 >>> >>> client = datalabeling_v1beta1.DataLabelingServiceClient() >>> >>> parent = client.project_path('[PROJECT]') >>> >>> # TODO: Initialize `instruction`: >>> instruction = {} >>> >>> response = client.create_instruction(parent, instruction) >>> >>> def callback(operation_future): ... # Handle result. ... result = operation_future.result() >>> >>> response.add_done_callback(callback) >>> >>> # Handle metadata. >>> metadata = response.metadata() Args: parent (str): Required. Instruction resource parent, format: projects/{project\_id} instruction (Union[dict, ~google.cloud.datalabeling_v1beta1.types.Instruction]): Required. Instruction of how to perform the labeling task. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.datalabeling_v1beta1.types.Instruction` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.datalabeling_v1beta1.types._OperationFuture` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """ # Wrap the transport method to add retry and timeout logic. 
if "create_instruction" not in self._inner_api_calls: self._inner_api_calls[ "create_instruction" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.create_instruction, default_retry=self._method_configs["CreateInstruction"].retry, default_timeout=self._method_configs["CreateInstruction"].timeout, client_info=self._client_info, ) request = data_labeling_service_pb2.CreateInstructionRequest( parent=parent, instruction=instruction ) if metadata is None: metadata = [] metadata = list(metadata) try: routing_header = [("parent", parent)] except AttributeError: pass else: routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( routing_header ) metadata.append(routing_metadata) operation = self._inner_api_calls["create_instruction"]( request, retry=retry, timeout=timeout, metadata=metadata ) return google.api_core.operation.from_gapic( operation, self.transport._operations_client, instruction_pb2.Instruction, metadata_type=proto_operations_pb2.CreateInstructionMetadata, )
def get_uniform_comparator(comparator): """ convert comparator alias to uniform name """ if comparator in ["eq", "equals", "==", "is"]: return "equals" elif comparator in ["lt", "less_than"]: return "less_than" elif comparator in ["le", "less_than_or_equals"]: return "less_than_or_equals" elif comparator in ["gt", "greater_than"]: return "greater_than" elif comparator in ["ge", "greater_than_or_equals"]: return "greater_than_or_equals" elif comparator in ["ne", "not_equals"]: return "not_equals" elif comparator in ["str_eq", "string_equals"]: return "string_equals" elif comparator in ["len_eq", "length_equals", "count_eq"]: return "length_equals" elif comparator in ["len_gt", "count_gt", "length_greater_than", "count_greater_than"]: return "length_greater_than" elif comparator in ["len_ge", "count_ge", "length_greater_than_or_equals", \ "count_greater_than_or_equals"]: return "length_greater_than_or_equals" elif comparator in ["len_lt", "count_lt", "length_less_than", "count_less_than"]: return "length_less_than" elif comparator in ["len_le", "count_le", "length_less_than_or_equals", \ "count_less_than_or_equals"]: return "length_less_than_or_equals" else: return comparator
convert comparator alias to uniform name
Below is the instruction that describes the task: ### Input: convert comparator alias to uniform name ### Response: def get_uniform_comparator(comparator): """ convert comparator alias to uniform name """ if comparator in ["eq", "equals", "==", "is"]: return "equals" elif comparator in ["lt", "less_than"]: return "less_than" elif comparator in ["le", "less_than_or_equals"]: return "less_than_or_equals" elif comparator in ["gt", "greater_than"]: return "greater_than" elif comparator in ["ge", "greater_than_or_equals"]: return "greater_than_or_equals" elif comparator in ["ne", "not_equals"]: return "not_equals" elif comparator in ["str_eq", "string_equals"]: return "string_equals" elif comparator in ["len_eq", "length_equals", "count_eq"]: return "length_equals" elif comparator in ["len_gt", "count_gt", "length_greater_than", "count_greater_than"]: return "length_greater_than" elif comparator in ["len_ge", "count_ge", "length_greater_than_or_equals", \ "count_greater_than_or_equals"]: return "length_greater_than_or_equals" elif comparator in ["len_lt", "count_lt", "length_less_than", "count_less_than"]: return "length_less_than" elif comparator in ["len_le", "count_le", "length_less_than_or_equals", \ "count_less_than_or_equals"]: return "length_less_than_or_equals" else: return comparator
def detach_from_all(self, bIgnoreExceptions = False): """ Detaches from all processes currently being debugged. @note: To better handle last debugging event, call L{stop} instead. @type bIgnoreExceptions: bool @param bIgnoreExceptions: C{True} to ignore any exceptions that may be raised when detaching. @raise WindowsError: Raises an exception on error, unless C{bIgnoreExceptions} is C{True}. """ for pid in self.get_debugee_pids(): self.detach(pid, bIgnoreExceptions = bIgnoreExceptions)
Detaches from all processes currently being debugged. @note: To better handle last debugging event, call L{stop} instead. @type bIgnoreExceptions: bool @param bIgnoreExceptions: C{True} to ignore any exceptions that may be raised when detaching. @raise WindowsError: Raises an exception on error, unless C{bIgnoreExceptions} is C{True}.
Below is the instruction that describes the task: ### Input: Detaches from all processes currently being debugged. @note: To better handle last debugging event, call L{stop} instead. @type bIgnoreExceptions: bool @param bIgnoreExceptions: C{True} to ignore any exceptions that may be raised when detaching. @raise WindowsError: Raises an exception on error, unless C{bIgnoreExceptions} is C{True}. ### Response: def detach_from_all(self, bIgnoreExceptions = False): """ Detaches from all processes currently being debugged. @note: To better handle last debugging event, call L{stop} instead. @type bIgnoreExceptions: bool @param bIgnoreExceptions: C{True} to ignore any exceptions that may be raised when detaching. @raise WindowsError: Raises an exception on error, unless C{bIgnoreExceptions} is C{True}. """ for pid in self.get_debugee_pids(): self.detach(pid, bIgnoreExceptions = bIgnoreExceptions)
def confidence_interval_hazard_(self): """ The confidence interval of the hazard. """ return self._compute_confidence_bounds_of_transform(self._hazard, self.alpha, self._ci_labels)
The confidence interval of the hazard.
Below is the instruction that describes the task: ### Input: The confidence interval of the hazard. ### Response: def confidence_interval_hazard_(self): """ The confidence interval of the hazard. """ return self._compute_confidence_bounds_of_transform(self._hazard, self.alpha, self._ci_labels)
def save(self, *args, **kwargs): ''' Just add "s" if no plural name given. ''' if not self.pluralName: self.pluralName = self.name + 's' super(self.__class__, self).save(*args, **kwargs)
Just add "s" if no plural name given.
Below is the instruction that describes the task: ### Input: Just add "s" if no plural name given. ### Response: def save(self, *args, **kwargs): ''' Just add "s" if no plural name given. ''' if not self.pluralName: self.pluralName = self.name + 's' super(self.__class__, self).save(*args, **kwargs)
def reinit_configurations(self, request): """ Re-initialize configuration for resource if it has been changed. This method should be called if resource consumption strategy was changed. """ now = timezone.now() # Step 1. Collect all resources with changed configuration. changed_resources = [] for resource_model in CostTrackingRegister.registered_resources: for resource in resource_model.objects.all(): try: pe = models.PriceEstimate.objects.get(scope=resource, month=now.month, year=now.year) except models.PriceEstimate.DoesNotExist: changed_resources.append(resource) else: new_configuration = CostTrackingRegister.get_configuration(resource) if new_configuration != pe.consumption_details.configuration: changed_resources.append(resource) # Step 2. Re-init configuration and recalculate estimate for changed resources. for resource in changed_resources: models.PriceEstimate.update_resource_estimate(resource, CostTrackingRegister.get_configuration(resource)) message = _('Configuration was reinitialized for %(count)s resources') % {'count': len(changed_resources)} self.message_user(request, message) return redirect(reverse('admin:cost_tracking_defaultpricelistitem_changelist'))
Re-initialize configuration for resource if it has been changed. This method should be called if resource consumption strategy was changed.
Below is the instruction that describes the task: ### Input: Re-initialize configuration for resource if it has been changed. This method should be called if resource consumption strategy was changed. ### Response: def reinit_configurations(self, request): """ Re-initialize configuration for resource if it has been changed. This method should be called if resource consumption strategy was changed. """ now = timezone.now() # Step 1. Collect all resources with changed configuration. changed_resources = [] for resource_model in CostTrackingRegister.registered_resources: for resource in resource_model.objects.all(): try: pe = models.PriceEstimate.objects.get(scope=resource, month=now.month, year=now.year) except models.PriceEstimate.DoesNotExist: changed_resources.append(resource) else: new_configuration = CostTrackingRegister.get_configuration(resource) if new_configuration != pe.consumption_details.configuration: changed_resources.append(resource) # Step 2. Re-init configuration and recalculate estimate for changed resources. for resource in changed_resources: models.PriceEstimate.update_resource_estimate(resource, CostTrackingRegister.get_configuration(resource)) message = _('Configuration was reinitialized for %(count)s resources') % {'count': len(changed_resources)} self.message_user(request, message) return redirect(reverse('admin:cost_tracking_defaultpricelistitem_changelist'))
def add_project_levels(cls, project): """ Add project sub levels extra items """ eitem_path = '' eitem_project_levels = {} if project is not None: subprojects = project.split('.') for i in range(0, len(subprojects)): if i > 0: eitem_path += "." eitem_path += subprojects[i] eitem_project_levels['project_' + str(i + 1)] = eitem_path return eitem_project_levels
Add project sub levels extra items
Below is the instruction that describes the task: ### Input: Add project sub levels extra items ### Response: def add_project_levels(cls, project): """ Add project sub levels extra items """ eitem_path = '' eitem_project_levels = {} if project is not None: subprojects = project.split('.') for i in range(0, len(subprojects)): if i > 0: eitem_path += "." eitem_path += subprojects[i] eitem_project_levels['project_' + str(i + 1)] = eitem_path return eitem_project_levels
def _read_points(self, vlrs): """ private function to handle reading of the points record parts of the las file. the header is needed for the point format and number of points the vlrs are need to get the potential laszip vlr as well as the extra bytes vlr """ try: extra_dims = vlrs.get("ExtraBytesVlr")[0].type_of_extra_dims() except IndexError: extra_dims = None point_format = PointFormat(self.header.point_format_id, extra_dims=extra_dims) if self.header.are_points_compressed: laszip_vlr = vlrs.pop(vlrs.index("LasZipVlr")) points = self._read_compressed_points_data(laszip_vlr, point_format) else: points = record.PackedPointRecord.from_stream( self.stream, point_format, self.header.point_count ) return points
private function to handle reading of the points record parts of the las file. the header is needed for the point format and number of points the vlrs are need to get the potential laszip vlr as well as the extra bytes vlr
Below is the instruction that describes the task: ### Input: private function to handle reading of the points record parts of the las file. the header is needed for the point format and number of points the vlrs are need to get the potential laszip vlr as well as the extra bytes vlr ### Response: def _read_points(self, vlrs): """ private function to handle reading of the points record parts of the las file. the header is needed for the point format and number of points the vlrs are need to get the potential laszip vlr as well as the extra bytes vlr """ try: extra_dims = vlrs.get("ExtraBytesVlr")[0].type_of_extra_dims() except IndexError: extra_dims = None point_format = PointFormat(self.header.point_format_id, extra_dims=extra_dims) if self.header.are_points_compressed: laszip_vlr = vlrs.pop(vlrs.index("LasZipVlr")) points = self._read_compressed_points_data(laszip_vlr, point_format) else: points = record.PackedPointRecord.from_stream( self.stream, point_format, self.header.point_count ) return points
def find_readlength(args): # type: (Namespace) -> int """Estimate length of reads based on 10000 first.""" try: bed_file = args.treatment[0] except AttributeError: bed_file = args.infiles[0] filereader = "cat " if bed_file.endswith(".gz") and search("linux", platform, IGNORECASE): filereader = "zcat " elif bed_file.endswith(".gz") and search("darwin", platform, IGNORECASE): filereader = "gzcat " elif bed_file.endswith(".bz2"): filereader = "bzgrep " command = filereader + "{} | head -10000".format(bed_file) output = check_output(command, shell=True) df = pd.read_table( BytesIO(output), header=None, usecols=[1, 2], sep="\t", names=["Start", "End"]) readlengths = df.End - df.Start mean_readlength = readlengths.mean() median_readlength = readlengths.median() max_readlength = readlengths.max() min_readlength = readlengths.min() logging.info(( "Used first 10000 reads of {} to estimate a median read length of {}\n" "Mean readlength: {}, max readlength: {}, min readlength: {}.").format( bed_file, median_readlength, mean_readlength, max_readlength, min_readlength)) return median_readlength
Estimate length of reads based on 10000 first.
Below is the instruction that describes the task: ### Input: Estimate length of reads based on 10000 first. ### Response: def find_readlength(args): # type: (Namespace) -> int """Estimate length of reads based on 10000 first.""" try: bed_file = args.treatment[0] except AttributeError: bed_file = args.infiles[0] filereader = "cat " if bed_file.endswith(".gz") and search("linux", platform, IGNORECASE): filereader = "zcat " elif bed_file.endswith(".gz") and search("darwin", platform, IGNORECASE): filereader = "gzcat " elif bed_file.endswith(".bz2"): filereader = "bzgrep " command = filereader + "{} | head -10000".format(bed_file) output = check_output(command, shell=True) df = pd.read_table( BytesIO(output), header=None, usecols=[1, 2], sep="\t", names=["Start", "End"]) readlengths = df.End - df.Start mean_readlength = readlengths.mean() median_readlength = readlengths.median() max_readlength = readlengths.max() min_readlength = readlengths.min() logging.info(( "Used first 10000 reads of {} to estimate a median read length of {}\n" "Mean readlength: {}, max readlength: {}, min readlength: {}.").format( bed_file, median_readlength, mean_readlength, max_readlength, min_readlength)) return median_readlength
def _score_clusters(self, X, y=None): """ Determines the "scores" of the cluster, the metric that determines the size of the cluster visualized on the visualization. """ stype = self.scoring.lower() # scoring method name if stype == "membership": return np.bincount(self.estimator.labels_) raise YellowbrickValueError("unknown scoring method '{}'".format(stype))
Determines the "scores" of the cluster, the metric that determines the size of the cluster visualized on the visualization.
Below is the instruction that describes the task: ### Input: Determines the "scores" of the cluster, the metric that determines the size of the cluster visualized on the visualization. ### Response: def _score_clusters(self, X, y=None): """ Determines the "scores" of the cluster, the metric that determines the size of the cluster visualized on the visualization. """ stype = self.scoring.lower() # scoring method name if stype == "membership": return np.bincount(self.estimator.labels_) raise YellowbrickValueError("unknown scoring method '{}'".format(stype))
def create_seq(self, ): """Create a sequence and store it in the self.sequence :returns: None :rtype: None :raises: None """ name = self.name_le.text() desc = self.desc_pte.toPlainText() try: seq = djadapter.models.Sequence(name=name, project=self._project, description=desc) seq.save() self.sequence = seq self.accept() except: log.exception("Could not create new sequence")
Create a sequence and store it in the self.sequence :returns: None :rtype: None :raises: None
Below is the instruction that describes the task: ### Input: Create a sequence and store it in the self.sequence :returns: None :rtype: None :raises: None ### Response: def create_seq(self, ): """Create a sequence and store it in the self.sequence :returns: None :rtype: None :raises: None """ name = self.name_le.text() desc = self.desc_pte.toPlainText() try: seq = djadapter.models.Sequence(name=name, project=self._project, description=desc) seq.save() self.sequence = seq self.accept() except: log.exception("Could not create new sequence")
def values(self): """Return data in `self` as a numpy array. If all columns are the same dtype, the resulting array will have this dtype. If there are >1 dtypes in columns, then the resulting array will have dtype `object`. """ dtypes = [col.dtype for col in self.columns] if len(set(dtypes)) > 1: dtype = object else: dtype = None return np.array(self.columns, dtype=dtype).T
Return data in `self` as a numpy array. If all columns are the same dtype, the resulting array will have this dtype. If there are >1 dtypes in columns, then the resulting array will have dtype `object`.
Below is the instruction that describes the task: ### Input: Return data in `self` as a numpy array. If all columns are the same dtype, the resulting array will have this dtype. If there are >1 dtypes in columns, then the resulting array will have dtype `object`. ### Response: def values(self): """Return data in `self` as a numpy array. If all columns are the same dtype, the resulting array will have this dtype. If there are >1 dtypes in columns, then the resulting array will have dtype `object`. """ dtypes = [col.dtype for col in self.columns] if len(set(dtypes)) > 1: dtype = object else: dtype = None return np.array(self.columns, dtype=dtype).T
def download(self, output="", outputFile="", silent=True): """ Downloads the image of the comic onto your computer. Arguments: output: the output directory where comics will be downloaded to. The default argument for 'output is the empty string; if the empty string is passed, it defaults to a "Downloads" directory in your home folder (this directory will be created if it does not exist). outputFile: the filename that will be written. If the empty string is passed, outputFile will default to a string of the form xkcd-(comic number)-(image filename), so for example, xkcd-1691-optimization.png. silent: boolean, defaults to True. If set to False, an error will be printed to standard output should the provided integer argument not be valid. Returns the path to the downloaded file, or an empty string in the event of failure.""" image = urllib.urlopen(self.imageLink).read() #Process optional input to work out where the dowload will go and what it'll be called if output != "": output = os.path.abspath(os.path.expanduser(output)) if output == "" or not os.path.exists(output): output = os.path.expanduser(os.path.join("~", "Downloads")) # Create ~/Downloads if it doesn't exist, since this is the default path. if not os.path.exists(output): os.mkdir(output) if outputFile == "": outputFile = "xkcd-" + str(self.number) + "-" + self.imageName output = os.path.join(output, outputFile) try: download = open(output, 'wb') except: if not silent: print("Unable to make file " + output) return "" download.write(image) download.close() return output
Downloads the image of the comic onto your computer. Arguments: output: the output directory where comics will be downloaded to. The default argument for 'output is the empty string; if the empty string is passed, it defaults to a "Downloads" directory in your home folder (this directory will be created if it does not exist). outputFile: the filename that will be written. If the empty string is passed, outputFile will default to a string of the form xkcd-(comic number)-(image filename), so for example, xkcd-1691-optimization.png. silent: boolean, defaults to True. If set to False, an error will be printed to standard output should the provided integer argument not be valid. Returns the path to the downloaded file, or an empty string in the event of failure.
Below is the instruction that describes the task: ### Input: Downloads the image of the comic onto your computer. Arguments: output: the output directory where comics will be downloaded to. The default argument for 'output is the empty string; if the empty string is passed, it defaults to a "Downloads" directory in your home folder (this directory will be created if it does not exist). outputFile: the filename that will be written. If the empty string is passed, outputFile will default to a string of the form xkcd-(comic number)-(image filename), so for example, xkcd-1691-optimization.png. silent: boolean, defaults to True. If set to False, an error will be printed to standard output should the provided integer argument not be valid. Returns the path to the downloaded file, or an empty string in the event of failure. ### Response: def download(self, output="", outputFile="", silent=True): """ Downloads the image of the comic onto your computer. Arguments: output: the output directory where comics will be downloaded to. The default argument for 'output is the empty string; if the empty string is passed, it defaults to a "Downloads" directory in your home folder (this directory will be created if it does not exist). outputFile: the filename that will be written. If the empty string is passed, outputFile will default to a string of the form xkcd-(comic number)-(image filename), so for example, xkcd-1691-optimization.png. silent: boolean, defaults to True. If set to False, an error will be printed to standard output should the provided integer argument not be valid. 
Returns the path to the downloaded file, or an empty string in the event of failure.""" image = urllib.urlopen(self.imageLink).read() #Process optional input to work out where the dowload will go and what it'll be called if output != "": output = os.path.abspath(os.path.expanduser(output)) if output == "" or not os.path.exists(output): output = os.path.expanduser(os.path.join("~", "Downloads")) # Create ~/Downloads if it doesn't exist, since this is the default path. if not os.path.exists(output): os.mkdir(output) if outputFile == "": outputFile = "xkcd-" + str(self.number) + "-" + self.imageName output = os.path.join(output, outputFile) try: download = open(output, 'wb') except: if not silent: print("Unable to make file " + output) return "" download.write(image) download.close() return output
def get_freq_tuples(my_list, print_total_threshold): """ Turn a list of errors into frequency-sorted tuples thresholded by a certain total number """ d = {} for token in my_list: d.setdefault(token, 0) d[token] += 1 return sorted(d.items(), key=operator.itemgetter(1), reverse=True)[:print_total_threshold]
Turn a list of errors into frequency-sorted tuples thresholded by a certain total number
Below is the instruction that describes the task: ### Input: Turn a list of errors into frequency-sorted tuples thresholded by a certain total number ### Response: def get_freq_tuples(my_list, print_total_threshold): """ Turn a list of errors into frequency-sorted tuples thresholded by a certain total number """ d = {} for token in my_list: d.setdefault(token, 0) d[token] += 1 return sorted(d.items(), key=operator.itemgetter(1), reverse=True)[:print_total_threshold]
def touchstone_data(obj): r""" Validate if an object is an :ref:`TouchstoneData` pseudo-type object. :param obj: Object :type obj: any :raises: RuntimeError (Argument \`*[argument_name]*\` is not valid). The token \*[argument_name]\* is replaced by the name of the argument the contract is attached to :rtype: None """ if (not isinstance(obj, dict)) or ( isinstance(obj, dict) and (sorted(obj.keys()) != sorted(["points", "freq", "pars"])) ): raise ValueError(pexdoc.pcontracts.get_exdesc()) if not (isinstance(obj["points"], int) and (obj["points"] > 0)): raise ValueError(pexdoc.pcontracts.get_exdesc()) if _check_increasing_real_numpy_vector(obj["freq"]): raise ValueError(pexdoc.pcontracts.get_exdesc()) if not isinstance(obj["pars"], np.ndarray): raise ValueError(pexdoc.pcontracts.get_exdesc()) vdata = ["int", "float", "complex"] if not any([obj["pars"].dtype.name.startswith(item) for item in vdata]): raise ValueError(pexdoc.pcontracts.get_exdesc()) if obj["freq"].size != obj["points"]: raise ValueError(pexdoc.pcontracts.get_exdesc()) nports = int(math.sqrt(obj["pars"].size / obj["freq"].size)) if obj["points"] * (nports ** 2) != obj["pars"].size: raise ValueError(pexdoc.pcontracts.get_exdesc())
r""" Validate if an object is an :ref:`TouchstoneData` pseudo-type object. :param obj: Object :type obj: any :raises: RuntimeError (Argument \`*[argument_name]*\` is not valid). The token \*[argument_name]\* is replaced by the name of the argument the contract is attached to :rtype: None
def init(textCNN, vocab, model_mode, context, lr):
    """Initialize parameters."""
    textCNN.initialize(mx.init.Xavier(), ctx=context, force_reinit=True)
    if model_mode != 'rand':
        textCNN.embedding.weight.set_data(vocab.embedding.idx_to_vec)
    if model_mode == 'multichannel':
        textCNN.embedding_extend.weight.set_data(vocab.embedding.idx_to_vec)
    if model_mode == 'static' or model_mode == 'multichannel':
        # Parameters of textCNN.embedding are not updated during training.
        textCNN.embedding.collect_params().setattr('grad_req', 'null')
    trainer = gluon.Trainer(textCNN.collect_params(), 'adam',
                            {'learning_rate': lr})
    return textCNN, trainer
Initialize parameters.
def validate_subnet(s):
    """Validate a dotted-quad ip address including a netmask.

    The string is considered a valid dotted-quad address with netmask if it
    consists of one to four octets (0-255) separated by periods (.) followed
    by a forward slash (/) and a subnet bitmask which is expressed in
    dotted-quad format.

    >>> validate_subnet('127.0.0.1/255.255.255.255')
    True
    >>> validate_subnet('127.0/255.0.0.0')
    True
    >>> validate_subnet('127.0/255')
    True
    >>> validate_subnet('127.0.0.256/255.255.255.255')
    False
    >>> validate_subnet('127.0.0.1/255.255.255.256')
    False
    >>> validate_subnet('127.0.0.0')
    False
    >>> validate_subnet(None)
    Traceback (most recent call last):
        ...
    TypeError: expected string or unicode

    :param s: String to validate as a dotted-quad ip address with netmask.
    :type s: str
    :returns: ``True`` if a valid dotted-quad ip address with netmask,
        ``False`` otherwise.
    :raises: TypeError
    """
    if isinstance(s, basestring):
        if '/' in s:
            start, mask = s.split('/', 2)
            return validate_ip(start) and validate_netmask(mask)
        else:
            return False
    raise TypeError("expected string or unicode")
Validate a dotted-quad ip address including a netmask. The string is considered a valid dotted-quad address with netmask if it consists of one to four octets (0-255) separated by periods (.) followed by a forward slash (/) and a subnet bitmask which is expressed in dotted-quad format. >>> validate_subnet('127.0.0.1/255.255.255.255') True >>> validate_subnet('127.0/255.0.0.0') True >>> validate_subnet('127.0/255') True >>> validate_subnet('127.0.0.256/255.255.255.255') False >>> validate_subnet('127.0.0.1/255.255.255.256') False >>> validate_subnet('127.0.0.0') False >>> validate_subnet(None) Traceback (most recent call last): ... TypeError: expected string or unicode :param s: String to validate as a dotted-quad ip address with netmask. :type s: str :returns: ``True`` if a valid dotted-quad ip address with netmask, ``False`` otherwise. :raises: TypeError
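`validate_subnet` delegates to `validate_ip` and `validate_netmask`, which are defined elsewhere in the module; a minimal self-contained Python 3 sketch of the same split-and-check flow (the simplified `_valid_quad` check below stands in for both helpers and does not verify that the mask is a contiguous netmask):

```python
def _valid_quad(s):
    """True if s is one to four dot-separated octets in 0-255 (simplified stand-in)."""
    parts = s.split(".")
    if not 1 <= len(parts) <= 4:
        return False
    return all(p.isdigit() and 0 <= int(p) <= 255 for p in parts)

def validate_subnet(s):
    """Sketch: dotted-quad address, a forward slash, then a dotted-quad mask."""
    if not isinstance(s, str):
        raise TypeError("expected string or unicode")
    if "/" not in s:
        return False
    start, mask = s.split("/", 1)
    return _valid_quad(start) and _valid_quad(mask)

print(validate_subnet("127.0.0.1/255.255.255.255"))  # → True
print(validate_subnet("127.0.0.0"))                  # → False
```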
def get_osdb_hash(self):
    """
    Get the hash of this local videofile

    :return: hash as string
    """
    if self._osdb_hash is None:
        self._osdb_hash = self._calculate_osdb_hash()
    return self._osdb_hash
Get the hash of this local videofile :return: hash as string
def get_selection(self):
    """
    Read text from the X selection

    Usage: C{clipboard.get_selection()}

    @return: text contents of the mouse selection
    @rtype: C{str}
    @raise Exception: if no text was found in the selection
    """
    Gdk.threads_enter()
    text = self.selection.wait_for_text()
    Gdk.threads_leave()
    if text is not None:
        return text
    else:
        raise Exception("No text found in X selection")
Read text from the X selection Usage: C{clipboard.get_selection()} @return: text contents of the mouse selection @rtype: C{str} @raise Exception: if no text was found in the selection
def new_program(self, _id, series, title, subtitle, description, mpaaRating,
                starRating, runTime, year, showType, colorCode,
                originalAirDate, syndicatedEpisodeNumber, advisories):
    """Callback run for each new program entry"""
    if self.__v_program:
        # [Program: EP007501780030, EP00750178, Doctor Who, The Shakespeare
        #  Code, Witches cast a spell on the Doctor and Martha., None, None,
        #  None, None, Series, None, 2007-04-07, None, []]
        print("[Program: %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, "
              "%s, %s]" % (_id, series, title, subtitle, description,
                           mpaaRating, starRating, runTime, year, showType,
                           colorCode, originalAirDate,
                           syndicatedEpisodeNumber, advisories))
Callback run for each new program entry
def get_default_config(self):
    """
    Returns the default collector settings
    """
    config = super(DiskUsageCollector, self).get_default_config()
    config.update({
        'path': 'iostat',
        'devices': ('PhysicalDrive[0-9]+$'
                    + '|md[0-9]+$'
                    + '|sd[a-z]+[0-9]*$'
                    + '|x?vd[a-z]+[0-9]*$'
                    + '|disk[0-9]+$'
                    + '|dm\-[0-9]+$'),
        'sector_size': 512,
        'send_zero': False,
    })
    return config
Returns the default collector settings
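The `devices` value above is a concatenated regular expression alternation; it can be sanity-checked on its own (raw-string form used here for the `dm\-` branch):

```python
import re

# Same alternation as the collector's default 'devices' setting.
DEVICES = ('PhysicalDrive[0-9]+$'
           '|md[0-9]+$'
           '|sd[a-z]+[0-9]*$'
           '|x?vd[a-z]+[0-9]*$'
           '|disk[0-9]+$'
           r'|dm\-[0-9]+$')

pat = re.compile(DEVICES)
print(bool(pat.match("sda1")))   # → True
print(bool(pat.match("dm-0")))   # → True
print(bool(pat.match("loop0")))  # → False
```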
def taf(wxdata: TafData, units: Units) -> TafTrans:
    """
    Translate the results of taf.parse

    Keys: Forecast, Min-Temp, Max-Temp

    Forecast keys: Wind, Visibility, Clouds, Altimeter, Wind-Shear,
    Turbulance, Icing, Other
    """
    translations = {'forecast': []}  # type: ignore
    for line in wxdata.forecast:
        trans = shared(line, units)  # type: ignore
        trans['wind'] = wind(line.wind_direction, line.wind_speed,
                             line.wind_gust, unit=units.wind_speed)
        trans['wind_shear'] = wind_shear(line.wind_shear, units.altitude,
                                         units.wind_speed)
        trans['turbulance'] = turb_ice(line.turbulance, units.altitude)
        trans['icing'] = turb_ice(line.icing, units.altitude)
        # Remove false 'Sky Clear' if line type is 'BECMG'
        if line.type == 'BECMG' and trans['clouds'] == 'Sky clear':
            trans['clouds'] = None  # type: ignore
        translations['forecast'].append(TafLineTrans(**trans))  # type: ignore
    translations['min_temp'] = min_max_temp(wxdata.min_temp, units.temperature)  # type: ignore
    translations['max_temp'] = min_max_temp(wxdata.max_temp, units.temperature)  # type: ignore
    translations['remarks'] = remarks.translate(wxdata.remarks)
    return TafTrans(**translations)
Translate the results of taf.parse Keys: Forecast, Min-Temp, Max-Temp Forecast keys: Wind, Visibility, Clouds, Altimeter, Wind-Shear, Turbulance, Icing, Other
def import_from_txt(
    filename_or_fobj, encoding="utf-8", frame_style=FRAME_SENTINEL,
    *args, **kwargs
):
    """Return a rows.Table created from imported TXT file."""
    # TODO: (maybe) enable parsing of non-fixed-width-columns with the old
    # algorithm - that would just split columns at the vertical separator
    # character for the frame. (if doing so, include an optional parameter)
    # Also, this fixes an outstanding unreported issue: trying to parse
    # tables whose field values included a pipe char - "|" - would silently
    # yield bad results.
    source = Source.from_file(filename_or_fobj, mode="rb", plugin_name="txt",
                              encoding=encoding)
    raw_contents = source.fobj.read().decode(encoding).rstrip("\n")
    if frame_style is FRAME_SENTINEL:
        frame_style = _guess_frame_style(raw_contents)
    else:
        frame_style = _parse_frame_style(frame_style)
    contents = raw_contents.splitlines()
    del raw_contents
    if frame_style != "None":
        contents = contents[1:-1]
        del contents[1]
    else:
        # The table was possibly generated from another source. Check if the
        # line we reserve as a separator is really empty.
        if not contents[1].strip():
            del contents[1]
    col_positions = _parse_col_positions(frame_style, contents[0])
    table_rows = [
        [
            row[start + 1 : end].strip()
            for start, end in zip(col_positions, col_positions[1:])
        ]
        for row in contents
    ]
    meta = {
        "imported_from": "txt",
        "source": source,
        "frame_style": frame_style,
    }
    return create_table(table_rows, meta=meta, *args, **kwargs)
Return a rows.Table created from imported TXT file.
def vdotg(v1, v2, ndim):
    """
    Compute the dot product of two double precision vectors of
    arbitrary dimension.

    http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/vdotg_c.html

    :param v1: First vector in the dot product.
    :type v1: list[ndim]
    :param v2: Second vector in the dot product.
    :type v2: list[ndim]
    :param ndim: Dimension of v1 and v2.
    :type ndim: int
    :return: dot product of v1 and v2.
    :rtype: float
    """
    v1 = stypes.toDoubleVector(v1)
    v2 = stypes.toDoubleVector(v2)
    ndim = ctypes.c_int(ndim)
    return libspice.vdotg_c(v1, v2, ndim)
Compute the dot product of two double precision vectors of arbitrary dimension. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/vdotg_c.html :param v1: First vector in the dot product. :type v1: list[ndim] :param v2: Second vector in the dot product. :type v2: list[ndim] :param ndim: Dimension of v1 and v2. :type ndim: int :return: dot product of v1 and v2. :rtype: float
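Numerically, `vdotg` is just the sum of pairwise products; a pure-Python equivalent (without the CSPICE binding) makes that explicit:

```python
def vdotg(v1, v2, ndim):
    """Pure-Python equivalent of the CSPICE call: sum of pairwise products."""
    return sum(v1[i] * v2[i] for i in range(ndim))

print(vdotg([1.0, 2.0, 3.0], [4.0, -5.0, 6.0], 3))  # → 12.0
```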
def skewer(self, input_fastq1, output_prefix, output_fastq1, log, cpus,
           adapters, input_fastq2=None, output_fastq2=None):
    """
    Create commands with which to run skewer.

    :param str input_fastq1: Path to input (read 1) FASTQ file
    :param str output_prefix: Prefix for output FASTQ file names
    :param str output_fastq1: Path to (read 1) output FASTQ file
    :param str log: Path to file to which to write logging information
    :param int | str cpus: Number of processing cores to allow
    :param str adapters: Path to file with sequencing adapters
    :param str input_fastq2: Path to read 2 input FASTQ file
    :param str output_fastq2: Path to read 2 output FASTQ file
    :return list[str]: Sequence of commands to run to trim reads with
        skewer and rename files as desired.
    """
    pe = input_fastq2 is not None
    mode = "pe" if pe else "any"
    cmds = list()
    cmd1 = self.tools.skewer + " --quiet"
    cmd1 += " -f sanger"
    cmd1 += " -t {0}".format(cpus)
    cmd1 += " -m {0}".format(mode)
    cmd1 += " -x {0}".format(adapters)
    cmd1 += " -o {0}".format(output_prefix)
    cmd1 += " {0}".format(input_fastq1)
    if input_fastq2 is None:
        cmds.append(cmd1)
    else:
        cmd1 += " {0}".format(input_fastq2)
        cmds.append(cmd1)
    if input_fastq2 is None:
        cmd2 = "mv {0} {1}".format(output_prefix + "-trimmed.fastq",
                                   output_fastq1)
        cmds.append(cmd2)
    else:
        cmd2 = "mv {0} {1}".format(output_prefix + "-trimmed-pair1.fastq",
                                   output_fastq1)
        cmds.append(cmd2)
        cmd3 = "mv {0} {1}".format(output_prefix + "-trimmed-pair2.fastq",
                                   output_fastq2)
        cmds.append(cmd3)
    cmd4 = "mv {0} {1}".format(output_prefix + "-trimmed.log", log)
    cmds.append(cmd4)
    return cmds
Create commands with which to run skewer. :param str input_fastq1: Path to input (read 1) FASTQ file :param str output_prefix: Prefix for output FASTQ file names :param str output_fastq1: Path to (read 1) output FASTQ file :param str log: Path to file to which to write logging information :param int | str cpus: Number of processing cores to allow :param str adapters: Path to file with sequencing adapters :param str input_fastq2: Path to read 2 input FASTQ file :param str output_fastq2: Path to read 2 output FASTQ file :return list[str]: Sequence of commands to run to trim reads with skewer and rename files as desired.
def close(self):
    """Close the notification."""
    with self.selenium.context(self.selenium.CONTEXT_CHROME):
        self.find_close_button().click()
    self.window.wait_for_notification(None)
Close the notification.
def get_resolution(self) -> list:
    '''Show device resolution.'''
    output, _ = self._execute('-s', self.device_sn, 'shell', 'wm', 'size')
    return output.split()[2].split('x')
Show device resolution.
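The parsing step assumes `adb shell wm size` output of the form `Physical size: <width>x<height>`; extracted as a standalone helper (`parse_wm_size` is a name introduced here for illustration):

```python
def parse_wm_size(output):
    """Parse `adb shell wm size` output like 'Physical size: 1080x1920'."""
    # Third whitespace-separated token is '<width>x<height>'.
    return output.split()[2].split('x')

print(parse_wm_size("Physical size: 1080x1920"))  # → ['1080', '1920']
```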
def buscar_por_id(self, id_direito):
    """Gets the rights of a user group and an equipment group.

    :param id_direito: Identifier of the equipment group right.

    :return: Dictionary with the following structure:

    ::

        {'direito_grupo_equipamento': {'id_grupo_equipamento': < id_grupo_equipamento >,
        'exclusao': < exclusao >,
        'alterar_config': < alterar_config >,
        'nome_grupo_equipamento': < nome_grupo_equipamento >,
        'id_grupo_usuario': < id_grupo_usuario >,
        'escrita': < escrita >,
        'nome_grupo_usuario': < nome_grupo_usuario >,
        'id': < id >,
        'leitura': < leitura >}}

    :raise InvalidParameterError: The equipment group right identifier is null or invalid.
    :raise DireitoGrupoEquipamentoNaoExisteError: Equipment group right not registered.
    :raise DataBaseError: Failure in networkapi while accessing the database.
    :raise XMLError: Failure in networkapi while generating the response XML.
    """
    if not is_valid_int_param(id_direito):
        raise InvalidParameterError(
            u'O identificador do direito grupo equipamento é inválido ou não foi informado.')

    url = 'direitosgrupoequipamento/' + str(id_direito) + '/'

    code, map = self.submit(None, 'GET', url)

    return self.response(code, map)
Gets the rights of a user group and an equipment group. :param id_direito: Identifier of the equipment group right. :return: Dictionary with the following structure: :: {'direito_grupo_equipamento': {'id_grupo_equipamento': < id_grupo_equipamento >, 'exclusao': < exclusao >, 'alterar_config': < alterar_config >, 'nome_grupo_equipamento': < nome_grupo_equipamento >, 'id_grupo_usuario': < id_grupo_usuario >, 'escrita': < escrita >, 'nome_grupo_usuario': < nome_grupo_usuario >, 'id': < id >, 'leitura': < leitura >}} :raise InvalidParameterError: The equipment group right identifier is null or invalid. :raise DireitoGrupoEquipamentoNaoExisteError: Equipment group right not registered. :raise DataBaseError: Failure in networkapi while accessing the database. :raise XMLError: Failure in networkapi while generating the response XML.