<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot_contour( xall, yall, zall, ax=None, cmap=None, ncontours=100, vmin=None, vmax=None, levels=None, cbar=True, cax=None, cbar_label=None, cbar_orientation='vertical', norm=None, nbins=100, method='nearest', mask=False, **kwargs): """Plot a two-dimensional contour map by interpolating scattered data on a grid.

Parameters
----------
xall : ndarray(T)
    Sample x-coordinates.
yall : ndarray(T)
    Sample y-coordinates.
zall : ndarray(T)
    Sample z-coordinates.
ax : matplotlib.Axes object, optional, default=None
    The ax to plot to; if ax=None, a new ax (and fig) is created.
cmap : matplotlib colormap, optional, default=None
    The color map to use.
ncontours : int, optional, default=100
    Number of contour levels.
vmin : float, optional, default=None
    Lowest z-value to be plotted.
vmax : float, optional, default=None
    Highest z-value to be plotted.
levels : iterable of float, optional, default=None
    Contour levels to plot; use the legacy-style calculation if 'legacy'.
cbar : boolean, optional, default=True
    Plot a color bar.
cax : matplotlib.Axes object, optional, default=None
    Plot the colorbar into a custom axes object instead of stealing space from ax.
cbar_label : str, optional, default=None
    Colorbar label string; use None to suppress it.
cbar_orientation : str, optional, default='vertical'
    Colorbar orientation; choose 'vertical' or 'horizontal'.
norm : matplotlib norm, optional, default=None
    Use a norm when coloring the contour plot.
nbins : int, optional, default=100
    Number of grid points used in each dimension.
method : str, optional, default='nearest'
    Assignment method; scipy.interpolate.griddata supports the methods
    'nearest', 'linear', and 'cubic'.
mask : boolean, optional, default=False
    Hide unsampled areas if True.

Optional parameters for contourf (**kwargs)
-------------------------------------------
corner_mask : boolean, optional
    Enable/disable corner masking, which only has an effect if z is a masked
    array. If False, any quad touching a masked point is masked out. If True,
    only the triangular corners of quads nearest those points are always
    masked out; other triangular corners comprising three unmasked points are
    contoured as usual. Defaults to rcParams['contour.corner_mask'], which
    defaults to True.
alpha : float
    The alpha blending value.
locator : [ None | ticker.Locator subclass ]
    If locator is None, the default MaxNLocator is used. The locator is used
    to determine the contour levels if they are not given explicitly via the
    levels argument.
extend : [ 'neither' | 'both' | 'min' | 'max' ]
    Unless this is 'neither', contour levels are automatically added to one
    or both ends of the range so that all data are included. These added
    ranges are then mapped to the special colormap values which default to
    the ends of the colormap range, but can be set via the
    matplotlib.colors.Colormap.set_under() and
    matplotlib.colors.Colormap.set_over() methods.
xunits, yunits : [ None | registered units ]
    Override axis units by specifying an instance of a
    matplotlib.units.ConversionInterface.
antialiased : boolean, optional
    Enable antialiasing, overriding the defaults. For filled contours, the
    default is True. For line contours, it is taken from
    rcParams['lines.antialiased'].
nchunk : [ 0 | integer ]
    If 0, no subdivision of the domain. Specify a positive integer to divide
    the domain into subdomains of nchunk by nchunk quads. Chunking reduces
    the maximum length of polygons generated by the contouring algorithm,
    which reduces the rendering workload passed on to the backend and also
    requires slightly less RAM. It can, however, introduce rendering
    artifacts at chunk boundaries depending on the backend, the antialiased
    flag and the value of alpha.
hatches :
    A list of cross hatch patterns to use on the filled areas. If None, no
    hatching will be added to the contour. Hatching is supported in the
    PostScript, PDF, SVG and Agg backends only.
zorder : float
    Set the zorder for the artist. Artists with lower zorder values are drawn first.

Returns
-------
fig : matplotlib.Figure object
    The figure in which the used ax resides.
ax : matplotlib.Axes object
    The ax in which the map was plotted.
misc : dict
    Contains a matplotlib.contour.QuadContourSet 'mappable' and, if
    requested, a matplotlib.Colorbar object 'cbar'. """
x, y, z = get_grid_data(xall, yall, zall, nbins=nbins, method=method)
if vmin is None:
    vmin = _np.min(zall[zall > -_np.inf])
if vmax is None:
    vmax = _np.max(zall[zall < _np.inf])
if levels == 'legacy':
    eps = (vmax - vmin) / float(ncontours)
    levels = _np.linspace(vmin - eps, vmax + eps)
if mask:
    _, _, counts = get_histogram(
        xall, yall, nbins=nbins, weights=None, avoid_zero_count=None)
    z = _np.ma.masked_where(counts.T <= 0, z)
return plot_map(
    x, y, z, ax=ax, cmap=cmap, ncontours=ncontours,
    vmin=vmin, vmax=vmax, levels=levels,
    cbar=cbar, cax=cax, cbar_label=cbar_label,
    cbar_orientation=cbar_orientation, norm=norm, **kwargs)
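The gridding step that plot_contour delegates to get_grid_data can be sketched directly with scipy.interpolate.griddata, whose method names ('nearest', 'linear', 'cubic') match the docstring above. This is a minimal sketch under that assumption; grid_scattered is a hypothetical helper, not the library's actual implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def grid_scattered(xall, yall, zall, nbins=100, method='nearest'):
    """Interpolate scattered (x, y, z) samples onto a regular nbins x nbins grid."""
    xgrid = np.linspace(xall.min(), xall.max(), nbins)
    ygrid = np.linspace(yall.min(), yall.max(), nbins)
    # meshgrid gives the evaluation points for the interpolant
    X, Y = np.meshgrid(xgrid, ygrid)
    Z = griddata(np.column_stack([xall, yall]), zall, (X, Y), method=method)
    return X, Y, Z

rng = np.random.default_rng(42)
x, y = rng.uniform(size=500), rng.uniform(size=500)
z = np.sin(4 * x) * np.cos(4 * y)
X, Y, Z = grid_scattered(x, y, z, nbins=50)
print(X.shape, Y.shape, Z.shape)  # (50, 50) (50, 50) (50, 50)
```

With method='nearest' every grid cell gets the value of its closest sample, which is why the mask option above is needed to hide cells that contain no samples at all.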
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot_state_map( xall, yall, states, ax=None, ncontours=100, cmap=None, cbar=True, cax=None, cbar_label='state', cbar_orientation='vertical', nbins=100, mask=True, **kwargs): """Plot a two-dimensional contour map of states by interpolating labels of scattered data on a grid.

Parameters
----------
xall : ndarray(T)
    Sample x-coordinates.
yall : ndarray(T)
    Sample y-coordinates.
states : ndarray(T)
    Sample state labels.
ax : matplotlib.Axes object, optional, default=None
    The ax to plot to; if ax=None, a new ax (and fig) is created.
cmap : matplotlib colormap, optional, default=None
    The color map to use.
ncontours : int, optional, default=100
    Number of contour levels.
cbar : boolean, optional, default=True
    Plot a color bar.
cax : matplotlib.Axes object, optional, default=None
    Plot the colorbar into a custom axes object instead of stealing space from ax.
cbar_label : str, optional, default='state'
    Colorbar label string; use None to suppress it.
cbar_orientation : str, optional, default='vertical'
    Colorbar orientation; choose 'vertical' or 'horizontal'.
nbins : int, optional, default=100
    Number of grid points used in each dimension.
mask : boolean, optional, default=True
    Hide unsampled areas if True.

Optional parameters for contourf (**kwargs)
-------------------------------------------
corner_mask : boolean, optional
    Enable/disable corner masking, which only has an effect if z is a masked
    array. If False, any quad touching a masked point is masked out. If True,
    only the triangular corners of quads nearest those points are always
    masked out; other triangular corners comprising three unmasked points are
    contoured as usual. Defaults to rcParams['contour.corner_mask'], which
    defaults to True.
alpha : float
    The alpha blending value.
locator : [ None | ticker.Locator subclass ]
    If locator is None, the default MaxNLocator is used. The locator is used
    to determine the contour levels if they are not given explicitly via the
    levels argument.
extend : [ 'neither' | 'both' | 'min' | 'max' ]
    Unless this is 'neither', contour levels are automatically added to one
    or both ends of the range so that all data are included. These added
    ranges are then mapped to the special colormap values which default to
    the ends of the colormap range, but can be set via the
    matplotlib.colors.Colormap.set_under() and
    matplotlib.colors.Colormap.set_over() methods.
xunits, yunits : [ None | registered units ]
    Override axis units by specifying an instance of a
    matplotlib.units.ConversionInterface.
antialiased : boolean, optional
    Enable antialiasing, overriding the defaults. For filled contours, the
    default is True. For line contours, it is taken from
    rcParams['lines.antialiased'].
nchunk : [ 0 | integer ]
    If 0, no subdivision of the domain. Specify a positive integer to divide
    the domain into subdomains of nchunk by nchunk quads. Chunking reduces
    the maximum length of polygons generated by the contouring algorithm,
    which reduces the rendering workload passed on to the backend and also
    requires slightly less RAM. It can, however, introduce rendering
    artifacts at chunk boundaries depending on the backend, the antialiased
    flag and the value of alpha.
hatches :
    A list of cross hatch patterns to use on the filled areas. If None, no
    hatching will be added to the contour. Hatching is supported in the
    PostScript, PDF, SVG and Agg backends only.
zorder : float
    Set the zorder for the artist. Artists with lower zorder values are drawn first.

Returns
-------
fig : matplotlib.Figure object
    The figure in which the used ax resides.
ax : matplotlib.Axes object
    The ax in which the map was plotted.
misc : dict
    Contains a matplotlib.contour.QuadContourSet 'mappable' and, if
    requested, a matplotlib.Colorbar object 'cbar'.

Notes
-----
Please note that this plot is an approximate visualization: the underlying
matplotlib contourf function smooths transitions between different values
and, thus, coloring at state boundaries might be imprecise. """
from matplotlib.cm import get_cmap
nstates = int(_np.max(states) + 1)
cmap_ = get_cmap(cmap, nstates)
fig, ax, misc = plot_contour(
    xall, yall, states, ax=ax, cmap=cmap_,
    ncontours=ncontours, vmin=None, vmax=None, levels=None,
    cbar=cbar, cax=cax, cbar_label=cbar_label,
    cbar_orientation=cbar_orientation, norm=None,
    nbins=nbins, method='nearest', mask=mask, **kwargs)
if cbar:
    cmin, cmax = misc['mappable'].get_clim()
    f = (cmax - cmin) / float(nstates)
    n = _np.arange(nstates)
    misc['cbar'].set_ticks((n + 0.5) * f)
    misc['cbar'].set_ticklabels(n)
return fig, ax, misc
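The colorbar tick logic above places one tick at the center of each of the nstates equal color bands, at (n + 0.5) * f for band width f. A small stand-alone sketch of that arithmetic (state_ticks is a hypothetical helper; the real code reads cmin/cmax from the mappable's clim, and the snippet assumes cmin is simply an offset):

```python
import numpy as np

def state_ticks(nstates, cmin=0.0, cmax=None):
    """Tick positions centered inside each of nstates equal color bands."""
    if cmax is None:
        # assumption: integer state labels 0..nstates-1 span the color limits
        cmax = float(nstates)
    f = (cmax - cmin) / float(nstates)   # width of one color band
    n = np.arange(nstates)
    return cmin + (n + 0.5) * f          # band centers

print(state_ticks(4))  # [0.5 1.5 2.5 3.5]
```

Centering the ticks this way makes each tick label sit in the middle of its color patch instead of on a band boundary, which is what you want for a discrete state colorbar.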
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def single_traj_from_n_files(file_list, top): """ Creates a single trajectory object from a list of files """
traj = None
for ff in file_list:
    if traj is None:
        traj = md.load(ff, top=top)
    else:
        traj = traj.join(md.load(ff, top=top))
return traj
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def add_element(self, e): r""" Appends a pipeline stage. Appends the given element to the end of the current chain. """
if not isinstance(e, Iterable):
    raise TypeError("given element {} is not iterable in terms of "
                    "PyEMMAs coordinate pipeline.".format(e))
# only if we have more than one element
if not e.is_reader and len(self._chain) >= 1:
    data_producer = self._chain[-1]
    # avoid calling the setter of StreamingTransformer.data_producer, since this
    # triggers a re-parametrization even on readers (where it makes no sense)
    e._data_producer = data_producer
    e.chunksize = self.chunksize
self._chain.append(e)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def set_element(self, index, e): r""" Replaces a pipeline stage. Replace an element in chain and return replaced element. """
if index > len(self._chain):
    raise IndexError("tried to access element %i, but chain has only %i"
                     " elements" % (index, len(self._chain)))
if type(index) is not int:
    raise ValueError(
        "index is not an integer but '%s'" % str(type(index)))
# if e is already in chain, we're finished
if self._chain[index] is e:
    return
# remove current index and its data producer
replaced = self._chain.pop(index)
if not replaced.is_reader:
    replaced.data_producer = None
self._chain.insert(index, e)
if index == 0:
    e.data_producer = e
else:
    # rewire data_producers
    e.data_producer = self._chain[index - 1]
# if e has a successive element, need to set its data_producer
try:
    successor = self._chain[index + 1]
    successor.data_producer = e
except IndexError:
    pass
# set data_producer for predecessor of e
# self._chain[max(0, index - 1)].data_producer = self._chain[index]
# since the data producer of the element after insertion changed, reset its status
# TODO: make parametrized a property?
self._chain[index]._estimated = False
return replaced
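The rewiring that set_element performs can be illustrated with a toy chain: every non-reader stage points at the stage before it as its data producer, and replacing a stage must repoint both the new stage and its successor. This is a minimal sketch with hypothetical Stage/Chain classes, not PyEMMA's actual pipeline types.

```python
class Stage:
    """Minimal stand-in for a pipeline stage: knows only its upstream producer."""
    def __init__(self, name, is_reader=False):
        self.name = name
        self.is_reader = is_reader
        self.data_producer = None
        self._estimated = False

class Chain:
    def __init__(self):
        self._chain = []

    def add(self, stage):
        # a non-reader appended to a non-empty chain consumes the last stage
        if not stage.is_reader and self._chain:
            stage.data_producer = self._chain[-1]
        self._chain.append(stage)

    def set_element(self, index, stage):
        """Replace the stage at index and rewire its neighbours."""
        replaced = self._chain.pop(index)
        self._chain.insert(index, stage)
        # first element produces for itself; otherwise consume the predecessor
        stage.data_producer = stage if index == 0 else self._chain[index - 1]
        # the successor, if any, must now consume the new stage
        if index + 1 < len(self._chain):
            self._chain[index + 1].data_producer = stage
        stage._estimated = False  # force re-estimation downstream
        return replaced

chain = Chain()
for s in (Stage('reader', is_reader=True), Stage('tica'), Stage('kmeans')):
    chain.add(s)
old = chain.set_element(1, Stage('pca'))
print(old.name, chain._chain[2].data_producer.name)  # tica pca
```

After the replacement, 'pca' consumes the reader and 'kmeans' consumes 'pca', mirroring the rewiring steps in the method above.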
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def parametrize(self): r""" Reads all data and discretizes it into discrete trajectories. """
for element in self._chain:
    if not element.is_reader and not element._estimated:
        element.estimate(element.data_producer, stride=self.param_stride,
                         chunksize=self.chunksize)
self._estimated = True
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _is_estimated(self): r""" Iterates through the pipeline elements and checks if every element is parametrized. """
result = self._estimated
for el in self._chain:
    if not el.is_reader:
        result &= el._estimated
return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def dtrajs(self): """ get discrete trajectories """
if not self._estimated:
    self.logger.info("not yet parametrized, running now.")
    self.parametrize()
return self._chain[-1].dtrajs
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def save_dtrajs(self, prefix='', output_dir='.', output_format='ascii', extension='.dtraj'): r"""Saves calculated discrete trajectories. Filenames are taken from given reader. If data comes from memory dtrajs are written to a default filename. Parameters prefix : str prepend prefix to filenames. output_dir : str (optional) save files to this directory. Defaults to current working directory. output_format : str if format is 'ascii' dtrajs will be written as csv files, otherwise they will be written as NumPy .npy files. extension : str file extension to append (eg. '.itraj') """
clustering = self._chain[-1]
reader = self._chain[0]
from pyemma.coordinates.clustering.interface import AbstractClustering
assert isinstance(clustering, AbstractClustering)
trajfiles = None
if isinstance(reader, FeatureReader):
    trajfiles = reader.filenames
clustering.save_dtrajs(
    trajfiles, prefix, output_dir, output_format, extension)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def save(self, file_name, model_name='default', overwrite=False, save_streaming_chain=False): r""" saves the current state of this object to given file and name. Parameters file_name: str path to desired output file model_name: str, default='default' creates a group named 'model_name' in the given file, which will contain all of the data. If the name already exists, and overwrite is False (default) will raise a RuntimeError. overwrite: bool, default=False Should overwrite existing model names? save_streaming_chain : boolean, default=False if True, the data_producer(s) of this object will also be saved in the given file. Examples -------- """
from pyemma._base.serialization.h5file import H5File
try:
    with H5File(file_name=file_name, mode='a') as f:
        f.add_serializable(model_name, obj=self, overwrite=overwrite,
                           save_streaming_chain=save_streaming_chain)
except Exception as e:
    msg = ('During saving the object {obj} '
           'the following error occurred: {error}'.format(obj=self, error=e))
    if isinstance(self, Loggable):
        self.logger.exception(msg)
    else:
        logger.exception(msg)
    raise
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def load(cls, file_name, model_name='default'): """ Loads a previously saved PyEMMA object from disk. Parameters file_name : str or file like object (has to provide read method). The file like object tried to be read for a serialized object. model_name: str, default='default' if multiple models are contained in the file, these can be accessed by their name. Use :func:`pyemma.list_models` to get a representation of all stored models. Returns ------- obj : the de-serialized object """
from .h5file import H5File
with H5File(file_name, model_name=model_name, mode='r') as f:
    return f.model
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_version_for_class_from_state(state, klass): """ retrieves the version of the current klass from the state mapping from old locations to new ones. """
# klass may have been renamed, so we have to look this up in the class rename registry.
names = [_importable_name(klass)]
# lookup old names, handled by current klass.
from .util import class_rename_registry
names.extend(class_rename_registry.old_handled_by(klass))
for n in names:
    try:
        return state['class_tree_versions'][n]
    except KeyError:
        continue
# if we did not find a suitable version number, return infinity.
if _debug:
    logger.debug('unable to obtain a __serialize_version for class %s', klass)
return float('inf')
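The lookup strategy above (try the current import path, then every former name, then fall back to infinity) can be sketched without the PyEMMA registry machinery. version_for_class is a hypothetical stand-alone analogue; the names list plays the role of the rename registry.

```python
def version_for_class(state, names):
    """Look up a serialization version under any of a class's current or former names.

    Returns float('inf') when no name matches, so migration code treats the
    object as newer than any version it knows how to upgrade.
    """
    versions = state.get('class_tree_versions', {})
    for n in names:
        try:
            return versions[n]
        except KeyError:
            continue
    return float('inf')

state = {'class_tree_versions': {'pkg.old.Name': 2}}
print(version_for_class(state, ['pkg.new.Name', 'pkg.old.Name']))  # 2
print(version_for_class(state, ['pkg.unknown.Name']))  # inf
```

Returning infinity rather than raising keeps deserialization of unknown classes from crashing in the version comparison itself.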
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def partial_fit(self, X): """ incrementally update the estimates Parameters X: array, list of arrays, PyEMMA reader input data. """
from pyemma.coordinates import source
self._estimate(source(X), partial_fit=True)
self._estimated = True
return self
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def C00_(self): """ Instantaneous covariance matrix """
self._check_estimated()
return self._rc.cov_XX(bessel=self.bessel)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def C0t_(self): """ Time-lagged covariance matrix """
self._check_estimated()
return self._rc.cov_XY(bessel=self.bessel)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def Ctt_(self): """ Covariance matrix of the time shifted data"""
self._check_estimated()
return self._rc.cov_YY(bessel=self.bessel)
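The three properties above (C00_, C0t_, Ctt_) are the instantaneous, time-lagged, and time-shifted covariance matrices delivered by a running-covariance estimator. For intuition, here is a direct in-memory sketch for a single trajectory: X holds frames t = 0..T-lag-1 and Y the frames shifted by lag. This is an assumed simplification, not the streaming algorithm the estimator actually uses.

```python
import numpy as np

def lagged_covariances(data, lag, bessel=True):
    """Direct analogues of C00_, C0t_ and Ctt_ for one (T, d) trajectory."""
    X = data[:-lag] - data[:-lag].mean(axis=0)  # instantaneous block
    Y = data[lag:] - data[lag:].mean(axis=0)    # time-shifted block
    denom = X.shape[0] - 1 if bessel else X.shape[0]  # Bessel correction
    C00 = X.T @ X / denom  # instantaneous covariance
    C0t = X.T @ Y / denom  # time-lagged covariance
    Ctt = Y.T @ Y / denom  # covariance of the time-shifted data
    return C00, C0t, Ctt

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 3))
C00, C0t, Ctt = lagged_covariances(data, lag=10)
print(C00.shape, C0t.shape, Ctt.shape)  # (3, 3) (3, 3) (3, 3)
```

C00 and Ctt are symmetric by construction, while C0t generally is not; TICA-style methods build their generalized eigenvalue problem from exactly these three matrices.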
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def tram( ttrajs, dtrajs, bias, lag, unbiased_state=None, count_mode='sliding', connectivity='post_hoc_RE', maxiter=10000, maxerr=1.0E-15, save_convergence_info=0, dt_traj='1 step', connectivity_factor=1.0, nn=None, direct_space=False, N_dtram_accelerations=0, callback=None, init='mbar', init_maxiter=10000, init_maxerr=1e-8, equilibrium=None, overcounting_factor=1.0): r""" Transition-based reweighting analysis method Parameters ttrajs : numpy.ndarray(T), or list of numpy.ndarray(T_i) A single discrete trajectory or a list of discrete trajectories. The integers are indexes of the thermodynamic state the trajectory is in at any time. dtrajs : numpy.ndarray(T) of int, or list of numpy.ndarray(T_i) of int A single discrete trajectory or a list of discrete trajectories. The integers are indexes of the configuration (Markov) state the trajectory is in at any time. bias : numpy.ndarray(T, num_therm_states), or list of numpy.ndarray(T_i, num_therm_states) A single reduced bias energy trajectory or a list of reduced bias energy trajectories. For every simulation frame seen in trajectory i and time step t, btrajs[i][t, k] is the reduced bias energy of that frame evaluated in the k'th thermodynamic state (i.e. at the k'th umbrella/Hamiltonian/temperature). lag : int or list of int, optional, default=1 Integer lag time at which transitions are counted. Providing a list of lag times will trigger one estimation per lag time. unbiased_state : int, optional, default=None Index of the unbiased thermodynamic state or None if there is no unbiased data available. maxiter : int, optional, default=10000 The maximum number of self-consistent TRAM iterations before the estimator exits unsuccessfully. maxerr : float, optional, default=1e-15 Convergence criterion based on the maximal free energy change in a self-consistent iteration step. 
save_convergence_info : int, optional, default=0 Every save_convergence_info iteration steps, store the actual increment and the actual loglikelihood; 0 means no storage. dt_traj : str, optional, default='1 step' Description of the physical time corresponding to the lag. May be used by analysis algorithms such as plotting tools to pretty-print the axes. By default '1 step', i.e. there is no physical time unit. Specify by a number, whitespace and unit. Permitted units are (* is an arbitrary string): | 'fs', 'femtosecond*' | 'ps', 'picosecond*' | 'ns', 'nanosecond*' | 'us', 'microsecond*' | 'ms', 'millisecond*' | 's', 'second*' connectivity : str, optional, default='post_hoc_RE' One of 'post_hoc_RE', 'BAR_variance', 'reversible_pathways' or 'summed_count_matrix'. Defines what should be considered a connected set in the joint (product) space of conformations and thermodynamic ensembles. * 'reversible_pathways' : requires that every state in the connected set can be reached by following a pathway of reversible transitions. A reversible transition between two Markov states (within the same thermodynamic state k) is a pair of Markov states that belong to the same strongly connected component of the count matrix (from thermodynamic state k). A pathway of reversible transitions is a list of reversible transitions [(i_1, i_2), (i_2, i_3), ..., (i_(N-1), i_N)]. The thermodynamic state where the reversible transitions happen, is ignored in constructing the reversible pathways. This is equivalent to assuming that two ensembles overlap at some Markov state whenever there exist frames from both ensembles in that Markov state. * 'post_hoc_RE' : similar to 'reversible_pathways' but with a more strict requirement for the overlap between thermodynamic states. It is required that every state in the connected set can be reached by following a pathway of reversible transitions or jumping between overlapping thermodynamic states while staying in the same Markov state. 
A reversible transition between two Markov states (within the same thermodynamic state k) is a pair of Markov states that belong to the same strongly connected component of the count matrix (from thermodynamic state k). Two thermodynamic states k and l are defined to overlap at Markov state n if a replica exchange simulation [2]_ restricted to state n would show at least one transition from k to l or one transition from l to k. The expected number of replica exchanges is estimated from the simulation data. The minimal required number of replica exchanges per Markov state can be increased by decreasing `connectivity_factor`. * 'BAR_variance' : like 'post_hoc_RE' but with a different condition to define the thermodynamic overlap based on the variance of the BAR estimator [3]_. Two thermodynamic states k and l are defined to overlap at Markov state n if the variance of the free energy difference Delta f_{kl} computed with BAR (and restricted to conformations from Markov state n) is less than or equal to one. The minimally required variance can be controlled with `connectivity_factor`. * 'summed_count_matrix' : all thermodynamic states are assumed to overlap. The connected set is then computed by summing the count matrices over all thermodynamic states and taking its largest strongly connected set. Not recommended! For more details see :func:`pyemma.thermo.extensions.cset.compute_csets_TRAM`. connectivity_factor : float, optional, default=1.0 Only needed if connectivity='post_hoc_RE' or 'BAR_variance'. Values greater than 1.0 weaken the connectivity conditions. For 'post_hoc_RE' this multiplies the number of hypothetically observed transitions. For 'BAR_variance' this scales the threshold for the minimal allowed variance of free energy differences. direct_space : bool, optional, default=False Whether to perform the self-consistent iteration with Boltzmann factors (direct space) or free energies (log-space). 
When analyzing data from multi-temperature simulations, direct-space is not recommended. N_dtram_accelerations : int, optional, default=0 Convergence of TRAM can be sped up by interleaving the updates in the self-consistent iteration with a dTRAM-like update step. N_dtram_accelerations says how many times the dTRAM-like update step should be applied in every iteration of the TRAM equations. Currently this is only effective if direct_space=True. init : str, optional, default=None Use a specific initialization for the self-consistent iteration: | None: use a hard-coded guess for free energies and Lagrangian multipliers | 'wham': perform a short WHAM estimate to initialize the free energies init_maxiter : int, optional, default=10000 The maximum number of self-consistent iterations during the initialization. init_maxerr : float, optional, default=1.0E-8 Convergence criterion for the initialization. Returns ------- A :class:`MEMM <pyemma.thermo.models.memm.MEMM>` object or list thereof A multi-ensemble Markov state model (for each given lag time) which consists of stationary and kinetic quantities at all temperatures/thermodynamic states. Example ------- **Umbrella sampling**: Suppose we simulate in K umbrellas, centered at .. math:: b_k(x) = \frac{c_k}{2 \textrm{kT}} \cdot (x - y_k)^2 Suppose we have one simulation of length T in each umbrella, and they are ordered from 0 to K-1. We have discretized the x-coordinate into 100 bins. Then dtrajs and ttrajs should each be a list of :math:`K` arrays. dtrajs would look for example like this:: where each array has length T, and is the sequence of bins (in the range 0 to 99) visited along the trajectory. ttrajs would look like this:: Because trajectory 1 stays in umbrella 1 (index 0), trajectory 2 stays in umbrella 2 (index 1), and so forth. 
The bias would be a list of :math:`T \times K` arrays which specify each frame's bias energy in all thermodynamic states: Let us try the above example: array([[[1 1] [0 4]] [[0 3] [2 1]]], dtype=int32) See :class:`MEMM <pyemma.thermo.models.memm.MEMM>` for a full documentation. .. autoclass:: pyemma.thermo.models.memm.MEMM :members: :undoc-members: .. rubric:: Methods .. autoautosummary:: pyemma.thermo.models.memm.MEMM :methods: .. rubric:: Attributes .. autoautosummary:: pyemma.thermo.models.memm.MEMM :attributes: References .. [1] Wu, H. et al 2016 Multiensemble Markov models of molecular thermodynamics and kinetics Proc. Natl. Acad. Sci. USA 113 E3221--E3230 .. [2] Hukushima et al, Exchange Monte Carlo method and application to spin glass simulations, J. Phys. Soc. Jpn. 65, 1604 (1996) .. [3] Shirts and Chodera, Statistically optimal analysis of samples from multiple equilibrium states, J. Chem. Phys. 129, 124105 (2008) """
# prepare trajectories
ttrajs = _types.ensure_dtraj_list(ttrajs)
dtrajs = _types.ensure_dtraj_list(dtrajs)
if len(ttrajs) != len(dtrajs):
    raise ValueError("Unmatching number of dtraj/ttraj elements: %d!=%d" % (
        len(dtrajs), len(ttrajs)))
if len(ttrajs) != len(bias):
    raise ValueError("Unmatching number of ttraj/bias elements: %d!=%d" % (
        len(ttrajs), len(bias)))
for ttraj, dtraj, btraj in zip(ttrajs, dtrajs, bias):
    if len(ttraj) != len(dtraj):
        raise ValueError("Unmatching number of data points in ttraj/dtraj: %d!=%d" % (
            len(ttraj), len(dtraj)))
    if len(ttraj) != btraj.shape[0]:
        raise ValueError("Unmatching number of data points in ttraj/bias trajectory: %d!=%d" % (
            len(ttraj), btraj.shape[0]))
# check lag time(s)
lags = _np.asarray(lag, dtype=_np.intc).reshape((-1,)).tolist()
# build TRAM and run estimation
from pyemma.thermo import TRAM as _TRAM
tram_estimators = []
from pyemma._base.progress import ProgressReporter
pg = ProgressReporter()
pg.register(amount_of_work=len(lags), description='Estimating TRAM for lags')
with pg.context():
    for lag in lags:
        t = _TRAM(
            lag, count_mode=count_mode, connectivity=connectivity,
            maxiter=maxiter, maxerr=maxerr,
            save_convergence_info=save_convergence_info, dt_traj=dt_traj,
            connectivity_factor=connectivity_factor, nn=nn,
            direct_space=direct_space,
            N_dtram_accelerations=N_dtram_accelerations,
            callback=callback, init=init,
            init_maxiter=init_maxiter, init_maxerr=init_maxerr,
            equilibrium=equilibrium,
            overcounting_factor=overcounting_factor).estimate(
                (ttrajs, dtrajs, bias))
        tram_estimators.append(t)
        pg.update(1)
_assign_unbiased_state_label(tram_estimators, unbiased_state)
# return
if len(tram_estimators) == 1:
    return tram_estimators[0]
return tram_estimators
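The lag handling above normalizes a scalar or iterable `lag` to a flat list of ints with the reshape((-1,)).tolist() idiom, which is why a list of lag times produces one estimator per lag. A stand-alone sketch of just that idiom (as_lag_list is a hypothetical helper name):

```python
import numpy as np

def as_lag_list(lag):
    """Normalize a scalar or iterable lag specification to a flat list of ints."""
    # reshape((-1,)) flattens both 0-d (scalar) and 1-d inputs uniformly
    return np.asarray(lag, dtype=np.intc).reshape((-1,)).tolist()

print(as_lag_list(5))          # [5]
print(as_lag_list([1, 2, 4]))  # [1, 2, 4]
```

Downstream code can then always loop `for lag in lags:` regardless of how the caller specified the lag times.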
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def dtram( ttrajs, dtrajs, bias, lag, unbiased_state=None, count_mode='sliding', connectivity='reversible_pathways', maxiter=10000, maxerr=1.0E-15, save_convergence_info=0, dt_traj='1 step', init=None, init_maxiter=10000, init_maxerr=1.0E-8): r""" Discrete transition-based reweighting analysis method Parameters ttrajs : numpy.ndarray(T) of int, or list of numpy.ndarray(T_i) of int A single discrete trajectory or a list of discrete trajectories. The integers are indexes of the thermodynamic state the trajectory is in at any time. dtrajs : numpy.ndarray(T) of int, or list of numpy.ndarray(T_i) of int A single discrete trajectory or a list of discrete trajectories. The integers are indexes of the configuration (Markov) state the trajectory is in at any time. bias : numpy.ndarray(shape=(num_therm_states, num_conf_states)) object bias_energies_full[j, i] is the bias energy in units of kT for each discrete state i at thermodynamic state j. lag : int or list of int, optional, default=1 Integer lag time at which transitions are counted. Providing a list of lag times will trigger one estimation per lag time. unbiased_state : int, optional, default=None Index of the unbiased thermodynamic state or None if there is no unbiased data available. count_mode : str, optional, default='sliding' Mode to obtain count matrices from discrete trajectories. Should be one of: * 'sliding' : a trajectory of length T will have :math:`T-\tau` counts at time indexes .. math:: (0 \rightarrow \tau), (1 \rightarrow 1+\tau), ..., (T-\tau-1 \rightarrow T-1) * 'sample' : a trajectory of length T will have :math:`T/\tau` counts at time indexes .. math:: (0 \rightarrow \tau), (\tau \rightarrow 2\tau), ..., ((T/\tau-1)\tau \rightarrow T) Currently only 'sliding' is supported. connectivity : str, optional, default='reversible_pathways' One of 'reversible_pathways', 'summed_count_matrix' or None. Defines what should be considered a connected set in the joint (product) space of conformations and thermodynamic ensembles. 
* 'reversible_pathways' : requires that every state in the connected set can be reached by following a pathway of reversible transitions. A reversible transition between two Markov states (within the same thermodynamic state k) is a pair of Markov states that belong to the same strongly connected component of the count matrix (from thermodynamic state k). A pathway of reversible transitions is a list of reversible transitions [(i_1, i_2), (i_2, i_3), ..., (i_(N-1), i_N)]. The thermodynamic state where the reversible transitions happen, is ignored in constructing the reversible pathways. This is equivalent to assuming that two ensembles overlap at some Markov state whenever there exist frames from both ensembles in that Markov state. * 'summed_count_matrix' : all thermodynamic states are assumed to overlap. The connected set is then computed by summing the count matrices over all thermodynamic states and taking its largest strongly connected set. Not recommended! * None : assume that everything is connected. For debugging. For more details see :func:`pyemma.thermo.extensions.cset.compute_csets_dTRAM`. maxiter : int, optional, default=10000 The maximum number of dTRAM iterations before the estimator exits unsuccessfully. maxerr : float, optional, default=1e-15 Convergence criterion based on the maximal free energy change in a self-consistent iteration step. save_convergence_info : int, optional, default=0 Every save_convergence_info iteration steps, store the actual increment and the actual loglikelihood; 0 means no storage. dt_traj : str, optional, default='1 step' Description of the physical time corresponding to the lag. May be used by analysis algorithms such as plotting tools to pretty-print the axes. By default '1 step', i.e. there is no physical time unit. Specify by a number, whitespace and unit. 
Permitted units are (* is an arbitrary string): | 'fs', 'femtosecond*' | 'ps', 'picosecond*' | 'ns', 'nanosecond*' | 'us', 'microsecond*' | 'ms', 'millisecond*' | 's', 'second*' init : str, optional, default=None Use a specific initialization for self-consistent iteration: | None: use a hard-coded guess for free energies and Lagrangian multipliers | 'wham': perform a short WHAM estimate to initialize the free energies init_maxiter : int, optional, default=10000 The maximum number of self-consistent iterations during the initialization. init_maxerr : float, optional, default=1.0E-8 Convergence criterion for the initialization. Returns ------- A :class:`MEMM <pyemma.thermo.models.memm.MEMM>` object or list thereof A multi-ensemble Markov state model (for each given lag time) which consists of stationary and kinetic quantities at all temperatures/thermodynamic states. Example ------- **Umbrella sampling**: Suppose we simulate in K umbrellas, centered at .. math:: b_k(x) = \frac{c_k}{2 \textrm{kT}} \cdot (x - y_k)^2 Suppose we have one simulation of length T in each umbrella, and they are ordered from 0 to K-1. We have discretized the x-coordinate into 100 bins. Then dtrajs and ttrajs should each be a list of :math:`K` arrays. dtrajs would look for example like this:: where each array has length T, and is the sequence of bins (in the range 0 to 99) visited along the trajectory. ttrajs would look like this:: Because trajectory 1 stays in umbrella 1 (index 0), trajectory 2 stays in umbrella 2 (index 1), and so forth. bias is a :math:`K \times n` matrix with all reduced bias energies evaluated at all centers: .. math:: \left(\begin{array}{cccc} \end{array}\right) Let us try the above example: array([[[5, 1], [1, 2]], [[1, 4], [3, 1]]], dtype=int32) See :class:`MEMM <pyemma.thermo.models.memm.MEMM>` for a full documentation. .. autoclass:: pyemma.thermo.models.memm.MEMM :members: :undoc-members: .. rubric:: Methods .. 
autoautosummary:: pyemma.thermo.models.memm.MEMM :methods: .. rubric:: Attributes .. autoautosummary:: pyemma.thermo.models.memm.MEMM :attributes: References .. [1] Wu, H. et al 2014 Statistically optimal analysis of state-discretized trajectory data from multiple thermodynamic states J. Chem. Phys. 141, 214106 """
# prepare trajectories ttrajs = _types.ensure_dtraj_list(ttrajs) dtrajs = _types.ensure_dtraj_list(dtrajs) if len(ttrajs) != len(dtrajs): raise ValueError("Unmatching number of dtraj/ttraj elements: %d!=%d" % ( len(dtrajs), len(ttrajs)) ) for ttraj, dtraj in zip(ttrajs, dtrajs): if len(ttraj) != len(dtraj): raise ValueError("Unmatching number of data points in ttraj/dtraj: %d!=%d" % ( len(ttraj), len(dtraj))) # check lag time(s) lags = _np.asarray(lag, dtype=_np.intc).reshape((-1,)).tolist() # build DTRAM and run estimation from pyemma.thermo import DTRAM from pyemma._base.progress import ProgressReporter pg = ProgressReporter() pg.register(len(lags), description='Estimating DTRAM for lags') dtram_estimators = [] with pg.context(): for _lag in lags: d = DTRAM( bias, _lag, count_mode=count_mode, connectivity=connectivity, maxiter=maxiter, maxerr=maxerr, save_convergence_info=save_convergence_info, dt_traj=dt_traj, init=init, init_maxiter=init_maxiter, init_maxerr=init_maxerr).estimate((ttrajs, dtrajs)) dtram_estimators.append(d) pg.update(1) _assign_unbiased_state_label(dtram_estimators, unbiased_state) # return if len(dtram_estimators) == 1: return dtram_estimators[0] return dtram_estimators
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def wham( ttrajs, dtrajs, bias, maxiter=100000, maxerr=1.0E-15, save_convergence_info=0, dt_traj='1 step'): r""" Weighted histogram analysis method Parameters ttrajs : numpy.ndarray(T) of int, or list of numpy.ndarray(T_i) of int A single discrete trajectory or a list of discrete trajectories. The integers are indexes in 0,...,num_therm_states-1 enumerating the thermodynamic states the trajectory is in at any time. dtrajs : numpy.ndarray(T) of int, or list of numpy.ndarray(T_i) of int A single discrete trajectory or a list of discrete trajectories. The integers are indexes in 0,...,num_conf_states-1 enumerating the Markov states the trajectory is in at any time. bias : numpy.ndarray(shape=(num_therm_states, num_conf_states)) object bias_energies_full[j, i] is the bias energy in units of kT for each discrete state i at thermodynamic state j. maxiter : int, optional, default=100000 The maximum number of WHAM iterations before the estimator exits unsuccessfully. maxerr : float, optional, default=1e-15 Convergence criterion based on the maximal free energy change in a self-consistent iteration step. save_convergence_info : int, optional, default=0 Every save_convergence_info iteration steps, store the actual increment and the actual loglikelihood; 0 means no storage. dt_traj : str, optional, default='1 step' Description of the physical time corresponding to the lag. May be used by analysis algorithms such as plotting tools to pretty-print the axes. By default '1 step', i.e. there is no physical time unit. Specify by a number, whitespace and unit. Permitted units are (* is an arbitrary string): | 'fs', 'femtosecond*' | 'ps', 'picosecond*' | 'ns', 'nanosecond*' | 'us', 'microsecond*' | 'ms', 'millisecond*' | 's', 'second*' Returns ------- A :class:`MultiThermModel <pyemma.thermo.models.multi_therm.MultiThermModel>` object A stationary model which consists of thermodynamic quantities at all temperatures/thermodynamic states. 
Example ------- **Umbrella sampling**: Suppose we simulate in K umbrellas, centered at .. math:: b_k(x) = \frac{c_k}{2 \textrm{kT}} \cdot (x - y_k)^2 Suppose we have one simulation of length T in each umbrella, and they are ordered from 0 to K-1. We have discretized the x-coordinate into 100 bins. Then dtrajs and ttrajs should each be a list of :math:`K` arrays. dtrajs would look for example like this:: where each array has length T, and is the sequence of bins (in the range 0 to 99) visited along the trajectory. ttrajs would look like this:: Because trajectory 1 stays in umbrella 1 (index 0), trajectory 2 stays in umbrella 2 (index 1), and so forth. bias is a :math:`K \times n` matrix with all reduced bias energies evaluated at all centers: .. math:: \left(\begin{array}{cccc} \end{array}\right) Let us try the above example: array([[7, 3], [5, 5]]) See :class:`MultiThermModel <pyemma.thermo.models.multi_therm.MultiThermModel>` for a full documentation. .. autoclass:: pyemma.thermo.models.multi_therm.MultiThermModel :members: :undoc-members: .. rubric:: Methods .. autoautosummary:: pyemma.thermo.models.multi_therm.MultiThermModel :methods: .. rubric:: Attributes .. autoautosummary:: pyemma.thermo.models.multi_therm.MultiThermModel :attributes: References .. [1] Ferrenberg, A.M. and Swendsen, R.H. 1988. New Monte Carlo Technique for Studying Phase Transitions. Phys. Rev. Lett. 61, 2635--2638 .. [2] Kumar, S. et al 1992. The Weighted Histogram Analysis Method for Free-Energy Calculations on Biomolecules. I. The Method. J. Comp. Chem. 13, 1011--1021 """
# check trajectories ttrajs = _types.ensure_dtraj_list(ttrajs) dtrajs = _types.ensure_dtraj_list(dtrajs) if len(ttrajs) != len(dtrajs): raise ValueError("Unmatching number of dtraj/ttraj elements: %d!=%d" % ( len(dtrajs), len(ttrajs)) ) for ttraj, dtraj in zip(ttrajs, dtrajs): if len(ttraj) != len(dtraj): raise ValueError("Unmatching number of data points in ttraj/dtraj: %d!=%d" % ( len(ttraj), len(dtraj))) # build WHAM from pyemma.thermo import WHAM wham_estimator = WHAM( bias, maxiter=maxiter, maxerr=maxerr, save_convergence_info=save_convergence_info, dt_traj=dt_traj) # run estimation return wham_estimator.estimate((ttrajs, dtrajs))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def mbar( ttrajs, dtrajs, bias, maxiter=100000, maxerr=1.0E-15, save_convergence_info=0, dt_traj='1 step', direct_space=False): r""" Multi-state Bennett acceptance ratio Parameters ttrajs : numpy.ndarray(T) of int, or list of numpy.ndarray(T_i) of int A single discrete trajectory or a list of discrete trajectories. The integers are indexes in 0,...,num_therm_states-1 enumerating the thermodynamic states the trajectory is in at any time. dtrajs : numpy.ndarray(T) of int, or list of numpy.ndarray(T_i) of int A single discrete trajectory or a list of discrete trajectories. The integers are indexes in 0,...,num_conf_states-1 enumerating the Markov states the trajectory is in at any time. bias : numpy.ndarray(T, num_therm_states), or list of numpy.ndarray(T_i, num_therm_states) A single reduced bias energy trajectory or a list of reduced bias energy trajectories. For every simulation frame seen in trajectory i and time step t, btrajs[i][t, k] is the reduced bias energy of that frame evaluated in the k'th thermodynamic state (i.e. at the k'th umbrella/Hamiltonian/temperature) maxiter : int, optional, default=100000 The maximum number of MBAR iterations before the estimator exits unsuccessfully. maxerr : float, optional, default=1e-15 Convergence criterion based on the maximal free energy change in a self-consistent iteration step. save_convergence_info : int, optional, default=0 Every save_convergence_info iteration steps, store the actual increment and the actual loglikelihood; 0 means no storage. dt_traj : str, optional, default='1 step' Description of the physical time corresponding to the lag. May be used by analysis algorithms such as plotting tools to pretty-print the axes. By default '1 step', i.e. there is no physical time unit. Specify by a number, whitespace and unit. 
Permitted units are (* is an arbitrary string): | 'fs', 'femtosecond*' | 'ps', 'picosecond*' | 'ns', 'nanosecond*' | 'us', 'microsecond*' | 'ms', 'millisecond*' | 's', 'second*' direct_space : bool, optional, default=False Whether to perform the self-consistent iteration with Boltzmann factors (direct space) or free energies (log-space). When analyzing data from multi-temperature simulations, direct-space is not recommended. Returns ------- A :class:`MultiThermModel <pyemma.thermo.models.multi_therm.MultiThermModel>` object A stationary model which consists of thermodynamic quantities at all temperatures/thermodynamic states. Example ------- **Umbrella sampling**: Suppose we simulate in K umbrellas, centered at .. math:: b_k(x) = \frac{c_k}{2 \textrm{kT}} \cdot (x - y_k)^2 Suppose we have one simulation of length T in each umbrella, and they are ordered from 0 to K-1. We have discretized the x-coordinate into 100 bins. Then dtrajs and ttrajs should each be a list of :math:`K` arrays. dtrajs would look for example like this:: where each array has length T, and is the sequence of bins (in the range 0 to 99) visited along the trajectory. ttrajs would look like this:: Because trajectory 1 stays in umbrella 1 (index 0), trajectory 2 stays in umbrella 2 (index 1), and so forth. The bias would be a list of :math:`T \times K` arrays which specify each frame's bias energy in all thermodynamic states: Let us try the above example: See :class:`MultiThermModel <pyemma.thermo.models.multi_therm.MultiThermModel>` for a full documentation. .. autoclass:: pyemma.thermo.models.multi_therm.MultiThermModel :members: :undoc-members: .. rubric:: Methods .. autoautosummary:: pyemma.thermo.models.multi_therm.MultiThermModel :methods: .. rubric:: Attributes .. autoautosummary:: pyemma.thermo.models.multi_therm.MultiThermModel :attributes: References .. [1] Shirts, M.R. and Chodera, J.D. 2008 Statistically optimal analysis of samples from multiple equilibrium states J. Chem. Phys. 
129, 124105 """
# check trajectories ttrajs = _types.ensure_dtraj_list(ttrajs) dtrajs = _types.ensure_dtraj_list(dtrajs) if len(ttrajs) != len(dtrajs): raise ValueError("Unmatching number of dtraj/ttraj elements: %d!=%d" % ( len(dtrajs), len(ttrajs))) if len(ttrajs) != len(bias): raise ValueError("Unmatching number of ttraj/bias elements: %d!=%d" % ( len(ttrajs), len(bias))) for ttraj, dtraj, btraj in zip(ttrajs, dtrajs, bias): if len(ttraj) != len(dtraj): raise ValueError("Unmatching number of data points in ttraj/dtraj: %d!=%d" % ( len(ttraj), len(dtraj))) if len(ttraj) != btraj.shape[0]: raise ValueError("Unmatching number of data points in ttraj/bias trajectory: %d!=%d" % ( len(ttraj), len(btraj))) # build MBAR from pyemma.thermo import MBAR mbar_estimator = MBAR( maxiter=maxiter, maxerr=maxerr, save_convergence_info=save_convergence_info, dt_traj=dt_traj, direct_space=direct_space) # run estimation return mbar_estimator.estimate((ttrajs, dtrajs, bias))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def default_chunksize(self): """ How much data will be processed at once, in case no chunksize has been provided. Notes ----- This variable respects your setting for maximum memory in pyemma.config.default_chunksize """
if self._default_chunksize is None: try: # TODO: if dimension is not yet fixed (eg tica var cutoff, use dim of data_producer. self.dimension() self.output_type() except Exception: self._default_chunksize = Iterable._FALLBACK_CHUNKSIZE else: self._default_chunksize = Iterable._compute_default_cs(self.dimension(), self.output_type().itemsize, self.logger) return self._default_chunksize
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def fit(self, X, y, lengths): """Fit HMM model to data. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Feature matrix of individual samples. y : array-like, shape (n_samples,) Target labels. lengths : array-like of integers, shape (n_sequences,) Lengths of the individual sequences in X, y. The sum of these should be n_samples. Notes ----- Make sure the training set (X) is one-hot encoded; if more than one feature in X is on, the emission probabilities will be multiplied. Returns ------- self : MultinomialHMM """
alpha = self.alpha if alpha <= 0: raise ValueError("alpha should be >0, got {0!r}".format(alpha)) X = atleast2d_or_csr(X) classes, y = np.unique(y, return_inverse=True) lengths = np.asarray(lengths) Y = y.reshape(-1, 1) == np.arange(len(classes)) end = np.cumsum(lengths) start = end - lengths init_prob = np.log(Y[start].sum(axis=0) + alpha) init_prob -= logsumexp(init_prob) final_prob = np.log(Y[end - 1].sum(axis=0) + alpha) final_prob -= logsumexp(final_prob) feature_prob = np.log(safe_sparse_dot(Y.T, X) + alpha) feature_prob -= logsumexp(feature_prob, axis=0) trans_prob = np.log(count_trans(y, len(classes)) + alpha) trans_prob -= logsumexp(trans_prob, axis=0) self.coef_ = feature_prob self.intercept_init_ = init_prob self.intercept_final_ = final_prob self.intercept_trans_ = trans_prob self.classes_ = classes return self
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _lstree(files, dirs): """Make git ls-tree like output."""
for f, sha1 in files: yield "100644 blob {}\t{}\0".format(sha1, f) for d, sha1 in dirs: yield "040000 tree {}\t{}\0".format(sha1, d)
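As a standalone illustration of the entry format above, here is a minimal sketch (the `lstree` name and the placeholder 40-character SHA-1 strings are mine, not from the source): Git tree entries pair a mode and object type with a NUL-terminated name.

```python
def lstree(files, dirs):
    """Yield git ls-tree style entries for (name, sha1_hex) pairs."""
    for f, sha1 in files:
        # regular file: mode 100644, object type blob, tab + name + NUL
        yield "100644 blob {}\t{}\0".format(sha1, f)
    for d, sha1 in dirs:
        # directory: mode 040000, object type tree
        yield "040000 tree {}\t{}\0".format(sha1, d)

# hypothetical hashes, just to show the shape of the output
entries = list(lstree([("a.txt", "a" * 40)], [("src", "b" * 40)]))
```

Feeding such entries to `git mktree -z` (as the surrounding code presumably does) is what turns them into a tree object.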
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def hash_dir(path): """Write directory at path to Git index, return its SHA1 as a string."""
dir_hash = {} for root, dirs, files in os.walk(path, topdown=False): f_hash = ((f, hash_file(join(root, f))) for f in files) d_hash = ((d, dir_hash[join(root, d)]) for d in dirs) # split+join normalizes paths on Windows (note the imports) dir_hash[join(*split(root))] = _mktree(f_hash, d_hash) return dir_hash[path]
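The `hash_file` helper is not shown above, but the blob hashing it presumably performs is simple: Git's blob SHA-1 is computed over a `blob <size>\0` header followed by the raw content. A stdlib-only sketch (the function name is mine):

```python
import hashlib


def git_blob_sha1(data: bytes) -> str:
    """SHA-1 of a Git blob object: header b'blob <len>\\0' plus the content."""
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()


# the well-known hash of the empty blob
EMPTY_BLOB = git_blob_sha1(b"")
```

This matches what `git hash-object` produces for file contents.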
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def features(sentence, i): """Features for i'th token in sentence. Currently baseline named-entity recognition features, but these can easily be changed to do POS tagging or chunking. """
word = sentence[i] yield "word:" + word.lower() if word[0].isupper(): yield "CAP" if i > 0: yield "word-1:" + sentence[i - 1].lower() if i > 1: yield "word-2:" + sentence[i - 2].lower() if i + 1 < len(sentence): yield "word+1:" + sentence[i + 1].lower() if i + 2 < len(sentence): yield "word+2:" + sentence[i + 2].lower()
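To make the feature strings concrete, here is a trimmed, self-contained sketch of the same idea (the `"{}"` fragments in the snippet above look like leftovers from `str.format`, so this sketch uses plain concatenation; the function body is my simplification, not the library's exact code):

```python
def token_features(sentence, i):
    """Yield simple window features for the i'th token."""
    word = sentence[i]
    yield "word:" + word.lower()          # the token itself
    if word[0].isupper():
        yield "CAP"                       # capitalization cue for NER
    if i > 0:
        yield "word-1:" + sentence[i - 1].lower()  # left context
    if i + 1 < len(sentence):
        yield "word+1:" + sentence[i + 1].lower()  # right context


feats = list(token_features(["Flying", "circus"], 0))
```

Each yielded string is an opaque feature name; the hashing vectorizer used elsewhere only needs them to be distinct.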
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def whole_sequence_accuracy(y_true, y_pred, lengths): """Average accuracy measured on whole sequences. Returns the fraction of sequences in y_true that occur in y_pred without a single error. """
lengths = np.asarray(lengths) end = np.cumsum(lengths) start = end - lengths bounds = np.vstack([start, end]).T errors = sum(1. for i, j in bounds if np.any(y_true[i:j] != y_pred[i:j])) return 1 - errors / len(lengths)
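The same metric can be sketched without NumPy, which makes the per-sequence logic explicit (a minimal re-implementation under my own function name, assuming plain lists):

```python
def whole_seq_accuracy(y_true, y_pred, lengths):
    """Fraction of sequences predicted without a single labeling error."""
    start, correct = 0, 0
    for n in lengths:
        # a sequence counts only if every position matches
        if y_true[start:start + n] == y_pred[start:start + n]:
            correct += 1
        start += n
    return correct / len(lengths)


# two sequences of length 2: the first is perfect, the second has one error
score = whole_seq_accuracy([1, 2, 1, 1], [1, 2, 1, 0], [2, 2])
```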
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def load_conll(f, features, n_features=(2 ** 16), split=False): """Load CoNLL file, extract features on the tokens and vectorize them. The CoNLL file format is a line-oriented text format that describes sequences in a space-separated format, separating the sequences with blank lines. Typically, the last space-separated part is a label. Since the tab-separated parts are usually tokens (and maybe things like part-of-speech tags) rather than feature vectors, a function must be supplied that does the actual feature extraction. This function has access to the entire sequence, so that it can extract context features. A ``sklearn.feature_extraction.FeatureHasher`` (the "hashing trick") is used to map symbolic input feature names to columns, so this function does not remember the actual input feature names. Parameters f : {string, file-like} Input file. features : callable Feature extraction function. Must take a list of tokens l that represent a single sequence and an index i into this list, and must return an iterator over strings that represent the features of l[i]. n_features : integer, optional Number of columns in the output. split : boolean, default=False Whether to split lines on whitespace beyond what is needed to parse out the labels. This is useful for CoNLL files that have extra columns containing information like part of speech tags. Returns ------- X : scipy.sparse matrix, shape (n_samples, n_features) Samples (feature vectors), as a single sparse matrix. y : np.ndarray, dtype np.string, shape n_samples Per-sample labels. lengths : np.ndarray, dtype np.int32, shape n_sequences Lengths of sequences within (X, y). The sum of these is equal to n_samples. """
fh = FeatureHasher(n_features=n_features, input_type="string") labels = [] lengths = [] with _open(f) as f: raw_X = _conll_sequences(f, features, labels, lengths, split) X = fh.transform(raw_X) return X, np.asarray(labels), np.asarray(lengths, dtype=np.int32)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def atleast2d_or_csr(X, dtype=None, order=None, copy=False): """Like numpy.atleast_2d, but converts sparse matrices to CSR format Also, converts np.matrix to np.ndarray. """
return _atleast2d_or_sparse(X, dtype, order, copy, sp.csr_matrix, "tocsr", sp.isspmatrix_csr)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def validate_lengths(n_samples, lengths): """Validate lengths array against n_samples. Parameters n_samples : integer Total number of samples. lengths : array-like of integers, shape (n_sequences,), optional Lengths of individual sequences in the input. Returns ------- start : array of integers, shape (n_sequences,) Start indices of sequences. end : array of integers, shape (n_sequences,) One-past-the-end indices of sequences. """
if lengths is None: lengths = [n_samples] lengths = np.asarray(lengths, dtype=np.int32) if lengths.sum() > n_samples: msg = "More than {0:d} samples in lengths array {1!s}" raise ValueError(msg.format(n_samples, lengths)) end = np.cumsum(lengths) start = end - lengths return start, end
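The start/end bookkeeping above can be mirrored with `itertools.accumulate` in pure Python (a sketch under my own name, assuming list inputs):

```python
from itertools import accumulate


def seq_bounds(n_samples, lengths=None):
    """Start and one-past-the-end indices per sequence; None means one sequence."""
    if lengths is None:
        lengths = [n_samples]
    if sum(lengths) > n_samples:
        raise ValueError("lengths sum past n_samples")
    end = list(accumulate(lengths))          # cumulative sums
    start = [e - n for e, n in zip(end, lengths)]
    return start, end


starts, ends = seq_bounds(10, [3, 3, 4])
```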
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def make_trans_matrix(y, n_classes, dtype=np.float64): """Make a sparse transition matrix for y. Takes a label sequence y and returns an indicator matrix with n_classes² columns of the label transitions in y: M[i, j, k] means y[i-1] == j and y[i] == k. The first row will be empty. """
indices = np.empty(len(y), dtype=np.int32) for i in six.moves.xrange(len(y) - 1): indices[i] = y[i] * n_classes + y[i + 1] indptr = np.arange(len(y) + 1) indptr[-1] = indptr[-2] return csr_matrix((np.ones(len(y), dtype=dtype), indices, indptr), shape=(len(y), n_classes ** 2))
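The column-index convention (transition `(j, k)` maps to column `j * n_classes + k`) can be checked in isolation with a tiny helper (my own sketch, not library code):

```python
def trans_indices(y, n_classes):
    """Column index j * n_classes + k for each transition y[i-1]=j -> y[i]=k."""
    return [y[i - 1] * n_classes + y[i] for i in range(1, len(y))]


# labels 0 -> 1 -> 1 -> 0 with 2 classes
idx = trans_indices([0, 1, 1, 0], 2)
```

Filling ones at these column positions, one row per time step, gives exactly the indicator matrix the docstring describes.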
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: async def passthrough(self, request): """Make non-mocked network request"""
connector = TCPConnector() connector._resolve_host = partial(self._old_resolver_mock, connector) new_is_ssl = ClientRequest.is_ssl ClientRequest.is_ssl = self._old_is_ssl try: original_request = request.clone(scheme="https" if request.headers["AResponsesIsSSL"] else "http") headers = {k: v for k, v in request.headers.items() if k != "AResponsesIsSSL"} async with ClientSession(connector=connector) as session: async with getattr(session, request.method.lower())(original_request.url, headers=headers, data=(await request.read())) as r: headers = {k: v for k, v in r.headers.items() if k.lower() == "content-type"} text = await r.text() response = self.Response(text=text, status=r.status, headers=headers) return response finally: ClientRequest.is_ssl = new_is_ssl
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def to_one_line_string(tiles): """ Convert 136 tiles array to the one line string Example of output: 123m123p123s33z """
tiles = sorted(tiles) man = [t for t in tiles if t < 36] pin = [t for t in tiles if 36 <= t < 72] pin = [t - 36 for t in pin] sou = [t for t in tiles if 72 <= t < 108] sou = [t - 72 for t in sou] honors = [t for t in tiles if t >= 108] honors = [t - 108 for t in honors] sou = sou and ''.join([str((i // 4) + 1) for i in sou]) + 's' or '' pin = pin and ''.join([str((i // 4) + 1) for i in pin]) + 'p' or '' man = man and ''.join([str((i // 4) + 1) for i in man]) + 'm' or '' honors = honors and ''.join([str((i // 4) + 1) for i in honors]) + 'z' or '' return man + pin + sou + honors
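The suit bucketing above (man 0-35, pin 36-71, sou 72-107, honors 108+, four copies per rank) can be condensed into a short sketch (my own function name and structure, same notation):

```python
def tiles_to_string(tiles):
    """136-id tiles -> 'NNNm NNNp NNNs NNNz' style notation, man first."""
    out = ""
    for lo, hi, suit in ((0, 36, "m"), (36, 72, "p"), (72, 108, "s"), (108, 136, "z")):
        # four consecutive ids per rank, so rank = (id - offset) // 4 + 1
        ranks = sorted((t - lo) // 4 + 1 for t in tiles if lo <= t < hi)
        if ranks:
            out += "".join(str(r) for r in ranks) + suit
    return out


s = tiles_to_string([0, 4, 8, 108, 109])  # 1m 2m 3m plus two east winds
```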
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def to_136_array(tiles): """ Convert 34 array to the 136 tiles array """
temp = [] results = [] for x in range(0, 34): if tiles[x]: temp_value = [x * 4] * tiles[x] for tile in temp_value: if tile in results: count_of_tiles = len([x for x in temp if x == tile]) new_tile = tile + count_of_tiles results.append(new_tile) temp.append(tile) else: results.append(tile) temp.append(tile) return results
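The copy-offset trick in the loop above (give the n'th duplicate of a kind the id `kind * 4 + n`) reduces to a two-line sketch (my own simplification):

```python
def to_136(tiles34):
    """34-count array -> sorted 136 tile ids, one id per physical copy."""
    out = []
    for kind, count in enumerate(tiles34):
        # copies of the same kind get consecutive ids kind*4, kind*4+1, ...
        out.extend(kind * 4 + copy for copy in range(count))
    return out


arr34 = [0] * 34
arr34[0], arr34[5] = 2, 1   # two 1m, one 6m
ids = to_136(arr34)
```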
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def string_to_136_array(sou=None, pin=None, man=None, honors=None, has_aka_dora=False): """ Method to convert one line string tiles format to the 136 array. You can pass r instead of 5 for it to become a red five from that suit. To prevent old usage without red, has_aka_dora has to be True for this to do that. We need it to increase readability of our tests """
def _split_string(string, offset, red=None): data = [] temp = [] if not string: return [] for i in string: if i == 'r' and has_aka_dora: temp.append(red) data.append(red) else: tile = offset + (int(i) - 1) * 4 if tile == red and has_aka_dora: # prevent non reds to become red tile += 1 if tile in data: count_of_tiles = len([x for x in temp if x == tile]) new_tile = tile + count_of_tiles data.append(new_tile) temp.append(tile) else: data.append(tile) temp.append(tile) return data results = _split_string(man, 0, FIVE_RED_MAN) results += _split_string(pin, 36, FIVE_RED_PIN) results += _split_string(sou, 72, FIVE_RED_SOU) results += _split_string(honors, 108) return results
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def string_to_34_array(sou=None, pin=None, man=None, honors=None): """ Method to convert one line string tiles format to the 34 array We need it to increase readability of our tests """
results = TilesConverter.string_to_136_array(sou, pin, man, honors) results = TilesConverter.to_34_array(results) return results
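Going straight from notation strings to 34 counts, without the 136 detour, is also a handful of lines (a sketch under my own name; offsets 0/9/18/27 follow the usual man/pin/sou/honors layout):

```python
def string_to_34(man="", pin="", sou="", honors=""):
    """Digit strings per suit -> 34-element count array (9m, 9p, 9s, 7z)."""
    counts = [0] * 34
    for digits, offset in ((man, 0), (pin, 9), (sou, 18), (honors, 27)):
        for ch in digits:
            counts[offset + int(ch) - 1] += 1
    return counts


c = string_to_34(man="123", honors="55")
```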
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def find_34_tile_in_136_array(tile34, tiles): """ Our shanten calculator will operate with 34 tiles format, after calculations we need to find calculated 34 tile in player's 136 tiles. For example we had 0 tile from 34 array in 136 array it can be present as 0, 1, 2, 3 """
if tile34 is None or tile34 > 33: return None tile = tile34 * 4 possible_tiles = [tile] + [tile + i for i in range(1, 4)] found_tile = None for possible_tile in possible_tiles: if possible_tile in tiles: found_tile = possible_tile break return found_tile
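Since each 34-kind maps to the four ids `tile34*4 .. tile34*4+3`, the lookup is a four-candidate scan; a compact sketch (my own name, same behavior):

```python
def find_34_in_136(tile34, tiles):
    """First 136-id copy of the given 34-kind present in tiles, else None."""
    if tile34 is None or not 0 <= tile34 <= 33:
        return None
    for candidate in range(tile34 * 4, tile34 * 4 + 4):
        if candidate in tiles:
            return candidate
    return None


hit = find_34_in_136(0, [2, 3, 40])   # kind 0 is present as copies 2 and 3
miss = find_34_in_136(1, [0, 40])     # no copy of kind 1 in hand
```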
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def value(self): """Compute option value according to BSM model."""
return self._sign[1] * self.S0 * norm.cdf( self._sign[1] * self.d1, 0.0, 1.0 ) - self._sign[1] * self.K * np.exp(-self.r * self.T) * norm.cdf( self._sign[1] * self.d2, 0.0, 1.0 )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def add_option(self, K=None, price=None, St=None, kind="call", pos="long"): """Add an option to the object's `options` container."""
kinds = { "call": Call, "Call": Call, "c": Call, "C": Call, "put": Put, "Put": Put, "p": Put, "P": Put, } St = self.St if St is None else St option = kinds[kind](St=St, K=K, price=price, pos=pos) self.options.append(option)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _rolling_lstsq(x, y): """Finds solution for the rolling case. Matrix formulation."""
if x.ndim == 2: # Treat everything as 3d and avoid AxisError on .swapaxes(1, 2) below # This means an original input of: # array([0., 1., 2., 3., 4., 5., 6.]) # becomes: # array([[[0.], # [1.], # [2.], # [3.]], # # [[1.], # [2.], # ... x = x[:, :, None] elif x.ndim <= 1: raise np.AxisError("x should have ndim >= 2") return np.squeeze( np.matmul( np.linalg.inv(np.matmul(x.swapaxes(1, 2), x)), np.matmul(x.swapaxes(1, 2), np.atleast_3d(y)), ) )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _confirm_constant(a): """Confirm `a` has a column vector of 1s."""
a = np.asanyarray(a) return np.isclose(a, 1.0).all(axis=0).any()
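The check ("does any column consist entirely of 1s, i.e. an intercept column?") can be sketched without NumPy for row-major nested lists (my own helper name):

```python
def has_constant_column(rows, tol=1e-9):
    """True if any column of the row-major matrix is approximately all 1.0."""
    ncols = len(rows[0])
    return any(
        all(abs(row[j] - 1.0) <= tol for row in rows)  # every entry close to 1
        for j in range(ncols)
    )


with_const = has_constant_column([[1.0, 2.0], [1.0, 3.0]])      # intercept present
without = has_constant_column([[2.0, 2.0], [3.0, 3.0]])          # no 1s column
```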
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _pvalues_all(self): """Two-tailed p values for t-stats of all parameters."""
return 2.0 * (1.0 - scs.t.cdf(np.abs(self._tstat_all), self.df_err))
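The two-tailed formula `p = 2 * (1 - CDF(|t|))` can be illustrated with the standard-normal CDF via `math.erf`; note this is only a large-sample approximation of the Student-t CDF used above, adequate when `df_err` is large (my own sketch):

```python
import math


def two_tailed_p_normal(t):
    """Two-tailed p-value under a standard-normal approximation of the t-dist."""
    phi = 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0)))  # Phi(|t|)
    return 2.0 * (1.0 - phi)


p_zero = two_tailed_p_normal(0.0)   # a t-stat of 0 is maximally insignificant
p_big = two_tailed_p_normal(3.0)    # a t-stat of 3 is strongly significant
```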
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def best(self): """The resulting best-fit distribution, its parameters, and SSE."""
return pd.Series( { "name": self.best_dist.name, "params": self.best_param, "sse": self.best_sse, } )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def all(self, by="name", ascending=True): """All tested distributions, their parameters, and SSEs."""
res = pd.DataFrame( { "name": self.distributions, "params": self.params, "sse": self.sses, } )[["name", "sse", "params"]] res.sort_values(by=by, ascending=ascending, inplace=True) return res
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot(self): """Plot the empirical histogram versus best-fit distribution's PDF."""
plt.plot(self.bin_edges, self.hist, self.bin_edges, self.best_pdf)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def eigen_table(self): """Eigenvalues, expl. variance, and cumulative expl. variance."""
idx = ["Eigenvalue", "Variability (%)", "Cumulative (%)"] table = pd.DataFrame( np.array( [self.eigenvalues, self.inertia, self.cumulative_inertia] ), columns=["F%s" % i for i in range(1, self.keep + 1)], index=idx, ) return table
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def optimize(self): """Analogous to `sklearn`'s fit. Returns `self` to enable chaining."""
def te(weights, r, proxies): """Helper func. `pyfinance.tracking_error` doesn't work here.""" if isinstance(weights, list): weights = np.array(weights) proxy = np.sum(proxies * weights, axis=1) te = np.std(proxy - r) # not anlzd... return te ew = utils.equal_weights(n=self.n, sumto=self.sumto) bnds = tuple((0, 1) for x in range(self.n)) cons = {"type": "eq", "fun": lambda x: np.sum(x) - self.sumto} xs = [] funs = [] for i, j in zip(self._r, self._proxies): opt = sco.minimize( te, x0=ew, args=(i, j), method="SLSQP", bounds=bnds, constraints=cons, ) x, fun = opt["x"], opt["fun"] xs.append(x) funs.append(fun) self._xs = np.array(xs) self._funs = np.array(funs) return self
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def replicate(self): """Forward-month returns of the replicating portfolio."""
return np.sum( self.proxies[self.window :] * self._xs[:-1], axis=1 ).reindex(self.r.index)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _try_to_squeeze(obj, raise_=False): """Attempt to squeeze to 1d Series. Parameters obj : {pd.Series, pd.DataFrame} raise_ : bool, default False """
if isinstance(obj, pd.Series): return obj elif isinstance(obj, pd.DataFrame) and obj.shape[-1] == 1: return obj.squeeze() else: if raise_: raise ValueError("Input cannot be squeezed.") return obj
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def anlzd_stdev(self, ddof=0, freq=None, **kwargs): """Annualized standard deviation with `ddof` degrees of freedom. Parameters ddof : int, default 0 Degrees of freedom, passed to pd.Series.std(). freq : str or None, default None A frequency string used to create an annualization factor. If None, `self.freq` will be used. If that is also None, a frequency will be inferred. If none can be inferred, an exception is raised. It may be any frequency string or anchored offset string recognized by Pandas, such as 'D', '5D', 'Q', 'Q-DEC', or 'BQS-APR'. **kwargs Passed to pd.Series.std(). TODO: freq Returns ------- float """
if freq is None:
    freq = self._try_get_freq()
    if freq is None:
        raise FrequencyError(
            "Could not infer an annualization frequency;"
            " pass `freq` explicitly."
        )
return nanstd(self, ddof=ddof) * freq ** 0.5
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def batting_avg(self, benchmark): """Percentage of periods when `self` outperformed `benchmark`. Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. Returns ------- float """
diff = self.excess_ret(benchmark) return np.count_nonzero(diff > 0.0) / diff.count()
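As a hedged, standalone sketch of the same calculation (plain NumPy rather than the TSeries API; the sample returns are made up):

```python
import numpy as np

# Hypothetical periodic returns for a fund and its benchmark.
port = np.array([0.02, -0.01, 0.03, 0.00])
bench = np.array([0.01, 0.00, 0.01, 0.01])

diff = port - bench                                # excess return per period
batting = np.count_nonzero(diff > 0.0) / diff.size
print(batting)  # 0.5 -> outperformed in 2 of 4 periods
```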
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def beta_adj(self, benchmark, adj_factor=2 / 3, **kwargs): """Adjusted beta. Beta that is adjusted to reflect the tendency of beta to be mean reverting. [Source: CFA Institute] Formula: adj_factor * raw_beta + (1 - adj_factor) Parameters benchmark : {pd.Series, TSeries, pd.DataFrame, np.ndarray} The benchmark securitie(s) to which `self` is compared. Returns ------- float or np.ndarray If `benchmark` is 1d, returns a scalar. If `benchmark` is 2d, returns a 1d ndarray. Reference --------- .. _Blume, Marshall. "Betas and Their Regression Tendencies." http://www.stat.ucla.edu/~nchristo/statistics417/blume_betas.pdf """
beta = self.beta(benchmark=benchmark, **kwargs) return adj_factor * beta + (1 - adj_factor)
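The Blume adjustment above reduces to one line of arithmetic; a standalone sketch (the raw beta here is invented for illustration):

```python
# Blume adjustment: shrink the raw regression beta toward 1.0,
# reflecting beta's tendency to mean-revert.
def adjusted_beta(raw_beta, adj_factor=2 / 3):
    return adj_factor * raw_beta + (1 - adj_factor)

print(adjusted_beta(1.5))  # ~1.33 -- pulled toward 1.0
print(adjusted_beta(1.0))  # 1.0 -- a beta of 1 is unchanged
```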
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def down_capture(self, benchmark, threshold=0.0, compare_op="lt"): """Downside capture ratio. Measures the performance of `self` relative to benchmark conditioned on periods where `benchmark` is lt or le to `threshold`. Downside capture ratios are calculated by taking the fund's monthly return during the periods of negative benchmark performance and dividing it by the benchmark return. [Source: CFA Institute] Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. threshold : float, default 0. The threshold at which the comparison should be done. `self` and `benchmark` are "filtered" to periods where `benchmark` is lt/le `threshold`. compare_op : {'lt', 'le'} Comparison operator used to compare to `threshold`. 'lt' is less-than; 'le' is less-than-or-equal. Returns ------- float Note ---- This metric uses geometric, not arithmetic, mean return. """
slf, bm = self.downmarket_filter( benchmark=benchmark, threshold=threshold, compare_op=compare_op, include_benchmark=True, ) return slf.geomean() / bm.geomean()
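A minimal self-contained sketch of the geometric-mean ratio the method computes, assuming the down-market filtering has already been done (the two down months are made up):

```python
import numpy as np

def geomean(r):
    # Geometric mean periodic return.
    return np.prod(1 + r) ** (1 / r.size) - 1

# Hypothetical returns over two months where the benchmark was negative.
port = np.array([-0.02, -0.04])
bench = np.array([-0.03, -0.05])

down_capture = geomean(port) / geomean(bench)
print(down_capture)  # ~0.75: the fund captured ~75% of the benchmark's losses
```

A ratio below 1.0 indicates the fund lost less than its benchmark in down markets.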
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def downmarket_filter( self, benchmark, threshold=0.0, compare_op="lt", include_benchmark=False, ): """Drop elementwise samples where `benchmark` > `threshold`. Filters `self` (and optionally, `benchmark`) to periods where `benchmark` < `threshold`. (Or <= `threshold`.) Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. threshold : float, default 0.0 The threshold at which the comparison should be done. `self` and `benchmark` are "filtered" to periods where `benchmark` is lt/le `threshold`. compare_op : {'lt', 'le'} Comparison operator used to compare to `threshold`. 'lt' is less-than; 'le' is less-than-or-equal. include_benchmark : bool, default False If True, return tuple of (`self`, `benchmark`) both filtered. If False, return only `self` filtered. Returns ------- TSeries or tuple of TSeries TSeries if `include_benchmark=False`, otherwise, tuple. """
return self._mkt_filter( benchmark=benchmark, threshold=threshold, compare_op=compare_op, include_benchmark=include_benchmark, )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def drawdown_end(self, return_date=False): """The date of the drawdown trough. Date at which the drawdown was most negative. Parameters return_date : bool, default False If True, return a `datetime.date` object. If False, return a Pandas Timestamp object. Returns ------- datetime.date or pandas._libs.tslib.Timestamp """
end = self.drawdown_idx().idxmin() if return_date: return end.date() return end
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def drawdown_idx(self): """Drawdown index; TSeries of drawdown from running HWM. Returns ------- TSeries """
ri = self.ret_idx() return ri / np.maximum(ri.cummax(), 1.0) - 1.0
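A simplified standalone sketch of the same idea (plain `cummax()` as the high-water mark, without the 1.0 floor used above; sample returns are invented):

```python
import pandas as pd

rets = pd.Series([0.10, -0.05, -0.10, 0.20])
ret_idx = (1 + rets).cumprod()          # cumulative return index
dd = ret_idx / ret_idx.cummax() - 1.0   # drawdown from the running high-water mark
print(dd.min())  # ~ -0.145, the maximum drawdown
```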
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def drawdown_length(self, return_int=False): """Length of drawdown in days. This is the duration from peak to trough. Parameters return_int : bool, default False If True, return the number of days as an int. If False, return a Pandas Timedelta object. Returns ------- int or pandas._libs.tslib.Timedelta """
td = self.drawdown_end() - self.drawdown_start() if return_int: return td.days return td
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def drawdown_recov(self, return_int=False): """Length of drawdown recovery in days. This is the duration from trough to recovery date. Parameters return_int : bool, default False If True, return the number of days as an int. If False, return a Pandas Timedelta object. Returns ------- int or pandas._libs.tslib.Timedelta """
td = self.recov_date() - self.drawdown_end() if return_int: return td.days return td
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def drawdown_start(self, return_date=False): """The date of the peak at which most severe drawdown began. Parameters return_date : bool, default False If True, return a `datetime.date` object. If False, return a Pandas Timestamp object. Returns ------- datetime.date or pandas._libs.tslib.Timestamp """
# Thank you @cᴏʟᴅsᴘᴇᴇᴅ
# https://stackoverflow.com/a/47892766/7954504
dd = self.drawdown_idx()
mask = nancumsum(dd == nanmin(dd)).astype(bool)
start = dd.mask(mask)[::-1].idxmax()
if return_date:
    return start.date()
return start
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def excess_drawdown_idx(self, benchmark, method="caer"): """Excess drawdown index; TSeries of excess drawdowns. There are several ways of computing this metric. For highly volatile returns, the `method` specified will have a non-negligible effect on the result. Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. method : {'caer' (0), 'cger' (1), 'ecr' (2), 'ecrr' (3)} Indicates the methodology used. """
# TODO: plot these (compared) in docs.
if isinstance(method, (int, float)):
    method = ["caer", "cger", "ecr", "ecrr"][method]
method = method.lower()
if method == "caer":
    er = self.excess_ret(benchmark=benchmark, method="arithmetic")
    return er.drawdown_idx()
elif method == "cger":
    er = self.excess_ret(benchmark=benchmark, method="geometric")
    return er.drawdown_idx()
elif method == "ecr":
    er = self.ret_idx() - benchmark.ret_idx() + 1
    if er.isnull().any():
        return er / er.cummax() - 1.0
    else:
        return er / np.maximum.accumulate(er) - 1.0
elif method == "ecrr":
    # Credit to: SO @piRSquared
    # https://stackoverflow.com/a/36848867/7954504
    p = self.ret_idx().values
    b = benchmark.ret_idx().values
    er = p - b
    if np.isnan(er).any():
        # The slower route, but NaN-friendly.
        cam = self.expanding(min_periods=1).apply(lambda x: x.argmax())
    else:
        cam = utils.cumargmax(er)
    p0 = p[cam]
    b0 = b[cam]
    return (p * b0 - b * p0) / (p0 * b0)
else:
    raise ValueError(
        "`method` must be one of"
        " ('caer', 'cger', 'ecr', 'ecrr'),"
        " case-insensitive, or"
        " an integer mapping to these methods"
        " (0 thru 3)."
    )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def excess_ret(self, benchmark, method="arithmetic"): """Excess return. Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. method : {{'arith', 'arithmetic'}, {'geo', 'geometric'}} The methodology used. An arithmetic excess return is a straightforward subtraction. A geometric excess return is the ratio of return-relatives of `self` to `benchmark`, minus one. Also known as: active return. Reference --------- .. _Essex River Analytics - A Case for Arithmetic Attribution http://www.northinfo.com/documents/563.pdf .. _Bacon, Carl. Excess Returns - Arithmetic or Geometric? https://www.cfapubs.org/doi/full/10.2469/dig.v33.n1.1235 """
if method.startswith("arith"):
    return self - _try_to_squeeze(benchmark)
elif method.startswith("geo"):
    # Geometric excess return,
    # (1 + `self`) / (1 + `benchmark`) - 1.
    return (
        self.ret_rels() / _try_to_squeeze(benchmark).ret_rels() - 1.0
    )
else:
    raise ValueError("`method` must start with 'arith' or 'geo'.")
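The two methodologies differ slightly; a standalone sketch with invented returns makes the gap visible:

```python
import pandas as pd

port = pd.Series([0.05, 0.02])
bench = pd.Series([0.03, 0.01])

arith = port - bench                  # arithmetic excess: plain subtraction
geo = (1 + port) / (1 + bench) - 1.0  # geometric excess: ratio of return-relatives
# For positive benchmark returns, the geometric excess is slightly
# smaller than the arithmetic excess.
```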
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def gain_to_loss_ratio(self): """Gain-to-loss ratio, ratio of positive to negative returns. Formula: (n pos. / n neg.) * (avg. up-month return / avg. down-month return) [Source: CFA Institute] Returns ------- float """
gt = self > 0 lt = self < 0 return (nansum(gt) / nansum(lt)) * (self[gt].mean() / self[lt].mean())
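A hedged NumPy sketch of the formula (sample returns invented; note the result is negative here because the down-period mean is negative, matching the code above, which does not take an absolute value):

```python
import numpy as np

r = np.array([0.04, -0.02, 0.03, -0.01, 0.05])
gains, losses = r[r > 0], r[r < 0]

# (n pos. / n neg.) * (avg. up return / avg. down return)
ratio = (gains.size / losses.size) * (gains.mean() / losses.mean())
print(ratio)  # ~ -4.0
```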
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def msquared(self, benchmark, rf=0.02, ddof=0): """M-squared, return scaled by relative total risk. A measure of what a portfolio would have returned if it had taken on the same *total* risk as the market index. [Source: CFA Institute] Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. rf : {float, TSeries, pd.Series}, default 0.02 If float, this represents a *compounded annualized* risk-free rate; 2.0% is the default. If a TSeries or pd.Series, this represents a time series of periodic returns to a risk-free security. To download a risk-free rate return series using 3-month US T-bill yields, see `pyfinance.datasets.load_rf`. ddof : int, default 0 Degrees of freedom, passed to pd.Series.std(). Returns ------- float """
rf = self._validate_rf(rf) scaling = benchmark.anlzd_stdev(ddof) / self.anlzd_stdev(ddof) diff = self.anlzd_ret() - rf return rf + diff * scaling
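Reduced to scalars, the M-squared calculation is a sketch like this (all inputs invented for illustration):

```python
# M-squared: scale the portfolio's excess return by the ratio of
# benchmark risk to portfolio risk, then add back the risk-free rate.
anlzd_ret, rf = 0.10, 0.02
sigma_port, sigma_bench = 0.20, 0.15

m2 = rf + (anlzd_ret - rf) * (sigma_bench / sigma_port)
print(m2)  # ~0.08: the return had the portfolio borne the benchmark's risk
```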
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def pct_negative(self, threshold=0.0): """Pct. of periods in which `self` is less than `threshold.` Parameters threshold : {float, TSeries, pd.Series}, default 0. Returns ------- float """
return np.count_nonzero(self < threshold) / self.count()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def pct_positive(self, threshold=0.0): """Pct. of periods in which `self` is greater than `threshold.` Parameters threshold : {float, TSeries, pd.Series}, default 0. Returns ------- float """
return np.count_nonzero(self > threshold) / self.count()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def recov_date(self, return_date=False): """Drawdown recovery date. Date at which `self` recovered to previous high-water mark. Parameters return_date : bool, default False If True, return a `datetime.date` object. If False, return a Pandas Timestamp object. Returns ------- {datetime.date, pandas._libs.tslib.Timestamp, pd.NaT} Returns NaT if recovery has not occured. """
dd = self.drawdown_idx()
# False beginning on trough date and all later dates.
mask = nancumprod(dd != nanmin(dd)).astype(bool)
res = dd.mask(mask) == 0
# If `res` is all False (recovery has not occurred),
# .idxmax() would return `res.index[0]`, so return NaT instead.
if not res.any():
    recov = pd.NaT
else:
    recov = res.idxmax()
if return_date:
    return recov.date()
return recov
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def rollup(self, freq, **kwargs): """Downsample `self` through geometric linking. Parameters freq : {'D', 'W', 'M', 'Q', 'A'} The frequency of the result. **kwargs Passed to `self.resample()`. Returns ------- TSeries Example ------- # Derive quarterly returns from monthly returns. 2016-03-31 0.0274 2016-06-30 -0.0032 2016-09-30 -0.0028 2016-12-31 0.0127 Freq: Q-DEC, dtype: float64 """
return self.ret_rels().resample(freq, **kwargs).prod() - 1.0
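A standalone pandas sketch of the same geometric linking (the monthly returns are invented; newer pandas versions may warn in favor of the 'ME'/'QE' aliases):

```python
import pandas as pd

idx = pd.date_range("2016-01-31", periods=6, freq="M")
monthly = pd.Series([0.01, 0.02, -0.01, 0.03, 0.00, 0.01], index=idx)

# Geometric linking: compound the return-relatives within each quarter.
quarterly = (1 + monthly).resample("Q").prod() - 1.0
```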
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def semi_stdev(self, threshold=0.0, ddof=0, freq=None): """Semi-standard deviation; stdev of downside returns. It is designed to address the fact that plain standard deviation penalizes "upside volatility." Formula: `sqrt( sum( min(self - thresh, 0) ** 2 ) / (n - ddof) )` Also known as: downside deviation. Parameters threshold : {float, TSeries, pd.Series}, default 0. While zero is the default, it is also customary to use a "minimum acceptable return" (MAR) or a risk-free rate. Note: this is assumed to be a *periodic*, not necessarily annualized, return. ddof : int, default 0 Degrees of freedom, passed to pd.Series.std(). freq : str or None, default None A frequency string used to create an annualization factor. If None, `self.freq` will be used. If that is also None, a frequency will be inferred. If none can be inferred, an exception is raised. It may be any frequency string or anchored offset string recognized by Pandas, such as 'D', '5D', 'Q', 'Q-DEC', or 'BQS-APR'. Returns ------- float """
if freq is None:
    freq = self._try_get_freq()
    if freq is None:
        raise FrequencyError(
            "Could not infer an annualization frequency;"
            " pass `freq` explicitly."
        )
n = self.count() - ddof
ss = (nansum(np.minimum(self - threshold, 0.0) ** 2) / n) ** 0.5
return ss * freq ** 0.5
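A hedged NumPy sketch of the formula with `threshold=0` and `ddof=0` (sample returns invented; the annualization factor assumes monthly data):

```python
import numpy as np

r = np.array([0.02, -0.03, 0.01, -0.01])
downside = np.minimum(r - 0.0, 0.0)              # zero out returns above the threshold
semi = np.sqrt(np.sum(downside ** 2) / r.size)   # periodic semi-deviation
anlzd = semi * np.sqrt(12)                       # annualize, assuming monthly periods
```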
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def sharpe_ratio(self, rf=0.02, ddof=0): """Return over `rf` per unit of total risk. The average return in excess of the risk-free rate divided by the standard deviation of return; a measure of the average excess return earned per unit of standard deviation of return. [Source: CFA Institute] Parameters rf : {float, TSeries, pd.Series}, default 0.02 If float, this represents a *compounded annualized* risk-free rate; 2.0% is the default. If a TSeries or pd.Series, this represents a time series of periodic returns to a risk-free security. To download a risk-free rate return series using 3-month US T-bill yields, see `pyfinance.datasets.load_rf`. ddof : int, default 0 Degrees of freedom, passed to pd.Series.std(). Returns ------- float """
rf = self._validate_rf(rf) stdev = self.anlzd_stdev(ddof=ddof) return (self.anlzd_ret() - rf) / stdev
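With the annualized inputs in hand, the ratio itself is one division; a scalar sketch with invented figures:

```python
# Sharpe ratio: annualized excess return per unit of annualized volatility.
anlzd_ret, anlzd_stdev, rf = 0.08, 0.12, 0.02
sharpe = (anlzd_ret - rf) / anlzd_stdev
print(round(sharpe, 2))  # 0.5
```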
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def sortino_ratio(self, threshold=0.0, ddof=0, freq=None): """Return over a threshold per unit of downside deviation. A performance appraisal ratio that replaces standard deviation in the Sharpe ratio with downside deviation. [Source: CFA Institute] Parameters threshold : {float, TSeries, pd.Series}, default 0. While zero is the default, it is also customary to use a "minimum acceptable return" (MAR) or a risk-free rate. Note: this is assumed to be a *periodic*, not necessarily annualized, return. ddof : int, default 0 Degrees of freedom, passed to pd.Series.std(). freq : str or None, default None A frequency string used to create an annualization factor. If None, `self.freq` will be used. If that is also None, a frequency will be inferred. If none can be inferred, an exception is raised. It may be any frequency string or anchored offset string recognized by Pandas, such as 'D', '5D', 'Q', 'Q-DEC', or 'BQS-APR'. Returns ------- float """
stdev = self.semi_stdev(threshold=threshold, ddof=ddof, freq=freq) return (self.anlzd_ret() - threshold) / stdev
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def tracking_error(self, benchmark, ddof=0): """Standard deviation of excess returns. The standard deviation of the differences between a portfolio's returns and its benchmark's returns. [Source: CFA Institute] Also known as: tracking risk; active risk Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. ddof : int, default 0 Degrees of freedom, passed to pd.Series.std(). Returns ------- float """
er = self.excess_ret(benchmark=benchmark) return er.anlzd_stdev(ddof=ddof)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def treynor_ratio(self, benchmark, rf=0.02): """Return over `rf` per unit of systematic risk. A measure of risk-adjusted performance that relates a portfolio's excess returns to the portfolio's beta. [Source: CFA Institute] Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. rf : {float, TSeries, pd.Series}, default 0.02 If float, this represents a *compounded annualized* risk-free rate; 2.0% is the default. If a TSeries or pd.Series, this represents a time series of periodic returns to a risk-free security. To download a risk-free rate return series using 3-month US T-bill yields, see `pyfinance.datasets.load_rf`. Returns ------- float """
benchmark = _try_to_squeeze(benchmark) if benchmark.ndim > 1: raise ValueError("Treynor ratio requires a single benchmark") rf = self._validate_rf(rf) beta = self.beta(benchmark) return (self.anlzd_ret() - rf) / beta
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def up_capture(self, benchmark, threshold=0.0, compare_op="ge"): """Upside capture ratio. Measures the performance of `self` relative to benchmark conditioned on periods where `benchmark` is gt or ge to `threshold`. Upside capture ratios are calculated by taking the fund's monthly return during the periods of positive benchmark performance and dividing it by the benchmark return. [Source: CFA Institute] Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. threshold : float, default 0. The threshold at which the comparison should be done. `self` and `benchmark` are "filtered" to periods where `benchmark` is gt/ge `threshold`. compare_op : {'ge', 'gt'} Comparison operator used to compare to `threshold`. 'gt' is greater-than; 'ge' is greater-than-or-equal. Returns ------- float Note ---- This metric uses geometric, not arithmetic, mean return. """
slf, bm = self.upmarket_filter( benchmark=benchmark, threshold=threshold, compare_op=compare_op, include_benchmark=True, ) return slf.geomean() / bm.geomean()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def upmarket_filter( self, benchmark, threshold=0.0, compare_op="ge", include_benchmark=False, ): """Drop elementwise samples where `benchmark` < `threshold`. Filters `self` (and optionally, `benchmark`) to periods where `benchmark` > `threshold`. (Or >= `threshold`.) Parameters benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. threshold : float, default 0.0 The threshold at which the comparison should be done. `self` and `benchmark` are "filtered" to periods where `benchmark` is gt/ge `threshold`. compare_op : {'ge', 'gt'} Comparison operator used to compare to `threshold`. 'gt' is greater-than; 'ge' is greater-than-or-equal. include_benchmark : bool, default False If True, return tuple of (`self`, `benchmark`) both filtered. If False, return only `self` filtered. Returns ------- TSeries or tuple of TSeries TSeries if `include_benchmark=False`, otherwise, tuple. """
return self._mkt_filter( benchmark=benchmark, threshold=threshold, compare_op=compare_op, include_benchmark=include_benchmark, )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def CAPM(self, benchmark, has_const=False, use_const=True): """Interface to OLS regression against `benchmark`. `self.alpha()`, `self.beta()` and several other methods stem from here. For the full method set, see `pyfinance.ols.OLS`. Parameters benchmark : {pd.Series, TSeries, pd.DataFrame, np.ndarray} The benchmark securitie(s) to which `self` is compared. has_const : bool, default False Specifies whether `benchmark` includes a user-supplied constant (a column vector). If False, it is added at instantiation. use_const : bool, default True Whether to include an intercept term in the model output. Note the difference between `has_const` and `use_const`: the former specifies whether a column vector of 1s is included in the input; the latter specifies whether the model itself should include a constant (intercept) term. Exogenous data that is ~N(0,1) would have a constant equal to zero; specify use_const=False in this situation. Returns ------- pyfinance.ols.OLS """
return ols.OLS( y=self, x=benchmark, has_const=has_const, use_const=use_const )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def load_retaildata(): """Monthly retail trade data from census.gov."""
# full = 'https://www.census.gov/retail/mrts/www/mrtssales92-present.xls'
# indiv = 'https://www.census.gov/retail/marts/www/timeseries.html'
db = {
    "Auto, other Motor Vehicle": "https://www.census.gov/retail/marts/www/adv441x0.txt",
    "Building Material and Garden Equipment and Supplies Dealers": "https://www.census.gov/retail/marts/www/adv44400.txt",
    "Clothing and Clothing Accessories Stores": "https://www.census.gov/retail/marts/www/adv44800.txt",
    "Dept. Stores (ex. leased depts)": "https://www.census.gov/retail/marts/www/adv45210.txt",
    "Electronics and Appliance Stores": "https://www.census.gov/retail/marts/www/adv44300.txt",
    "Food Services and Drinking Places": "https://www.census.gov/retail/marts/www/adv72200.txt",
    "Food and Beverage Stores": "https://www.census.gov/retail/marts/www/adv44500.txt",
    "Furniture and Home Furnishings Stores": "https://www.census.gov/retail/marts/www/adv44200.txt",
    "Gasoline Stations": "https://www.census.gov/retail/marts/www/adv44700.txt",
    "General Merchandise Stores": "https://www.census.gov/retail/marts/www/adv45200.txt",
    "Grocery Stores": "https://www.census.gov/retail/marts/www/adv44510.txt",
    "Health and Personal Care Stores": "https://www.census.gov/retail/marts/www/adv44600.txt",
    "Miscellaneous Store Retailers": "https://www.census.gov/retail/marts/www/adv45300.txt",
    "Motor Vehicle and Parts Dealers": "https://www.census.gov/retail/marts/www/adv44100.txt",
    "Nonstore Retailers": "https://www.census.gov/retail/marts/www/adv45400.txt",
    "Retail and Food Services, total": "https://www.census.gov/retail/marts/www/adv44x72.txt",
    "Retail, total": "https://www.census.gov/retail/marts/www/adv44000.txt",
    "Sporting Goods, Hobby, Book, and Music Stores": "https://www.census.gov/retail/marts/www/adv45100.txt",
    "Total (excl. Motor Vehicle)": "https://www.census.gov/retail/marts/www/adv44y72.txt",
    "Retail (excl. Motor Vehicle and Parts Dealers)": "https://www.census.gov/retail/marts/www/adv4400a.txt",
}
dct = {}
for key, value in db.items():
    data = pd.read_csv(
        value,
        skiprows=5,
        skip_blank_lines=True,
        header=None,
        sep=r"\s+",
        index_col=0,
    )
    try:
        cut = data.index.get_loc("SEASONAL")
    except KeyError:
        cut = data.index.get_loc("NO")
    data = data.iloc[:cut]
    data = data.apply(lambda col: pd.to_numeric(col, downcast="float"))
    data = data.stack()
    year = data.index.get_level_values(0)
    month = data.index.get_level_values(1)
    idx = pd.to_datetime(
        {"year": year, "month": month, "day": 1}
    ) + offsets.MonthEnd(1)
    data.index = idx
    data.name = key
    dct[key] = data
sales = pd.DataFrame(dct)
sales = sales.reindex(
    pd.date_range(sales.index[0], sales.index[-1], freq="M")
)
# TODO: account for any skipped months; could specify a DateOffset to
# `freq` param of `pandas.DataFrame.shift`.
yoy = sales.pct_change(periods=12)
return sales, yoy
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def avail(df): """Return start & end availability for each column in a DataFrame."""
avail = DataFrame( { "start": df.apply(lambda col: col.first_valid_index()), "end": df.apply(lambda col: col.last_valid_index()), } ) return avail[["start", "end"]]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _uniquewords(*args): """Dictionary of words to their indices. Helper function to `encode.`"""
words = {} n = 0 for word in itertools.chain(*args): if word not in words: words[word] = n n += 1 return words
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def encode(*args): """One-hot encode the given input strings."""
args = [arg.split() for arg in args] unique = _uniquewords(*args) feature_vectors = np.zeros((len(args), len(unique))) for vec, s in zip(feature_vectors, args): for word in s: vec[unique[word]] = 1 return feature_vectors
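A self-contained restatement of the two helpers above (same logic, standalone names) shows the shape of the output:

```python
import itertools
import numpy as np

def uniquewords(*sentences):
    # Map each word to an index in order of first appearance.
    words = {}
    for word in itertools.chain(*sentences):
        words.setdefault(word, len(words))
    return words

def encode(*sentences):
    split = [s.split() for s in sentences]
    vocab = uniquewords(*split)
    vectors = np.zeros((len(split), len(vocab)))
    for vec, words in zip(vectors, split):
        for word in words:
            vec[vocab[word]] = 1  # mark word presence (one-hot / bag-of-words)
    return vectors

vecs = encode("the cat sat", "the dog sat")
print(vecs)
# [[1. 1. 1. 0.]
#  [1. 0. 1. 1.]]
```

Columns correspond to `the`, `cat`, `sat`, `dog` in order of first appearance.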
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def unique_everseen(iterable, filterfalse_=itertools.filterfalse): """Unique elements, preserving order."""
# Itertools recipes: # https://docs.python.org/3/library/itertools.html#itertools-recipes seen = set() seen_add = seen.add for element in filterfalse_(seen.__contains__, iterable): seen_add(element) yield element
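A quick usage demonstration of the recipe (restated standalone so it runs on its own):

```python
import itertools

def unique_everseen(iterable, filterfalse_=itertools.filterfalse):
    # Yield elements in order, skipping any already seen.
    seen = set()
    seen_add = seen.add
    for element in filterfalse_(seen.__contains__, iterable):
        seen_add(element)
        yield element

print(list(unique_everseen("AAAABBBCCDAABBB")))  # ['A', 'B', 'C', 'D']
```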
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def dispatch(self, *args, **kwargs): """ Only staff members can access this view """
return super(GetAppListJsonView, self).dispatch(*args, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get(self, request): """ Returns a json representing the menu voices in a format eaten by the js menu. Raised ImproperlyConfigured exceptions can be viewed in the browser console """
self.app_list = site.get_app_list(request) self.apps_dict = self.create_app_list_dict() # no menu provided items = get_config('MENU') if not items: voices = self.get_default_voices() else: voices = [] for item in items: self.add_voice(voices, item) return JsonResponse(voices, safe=False)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def add_voice(self, voices, item): """ Adds a voice to the list """
voice = None if item.get('type') == 'title': voice = self.get_title_voice(item) elif item.get('type') == 'app': voice = self.get_app_voice(item) elif item.get('type') == 'model': voice = self.get_app_model_voice(item) elif item.get('type') == 'free': voice = self.get_free_voice(item) if voice: voices.append(voice)
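A hypothetical `MENU` setting illustrating the four voice types this dispatcher handles; the keys (`type`, `label`, `icon`, `perms`, `apps`, `name`, `models`, `app`, `url`) are only those the voice builders below actually read, while the app/model names are invented:

```python
# Illustrative MENU config (assumed structure; names are made up).
MENU = [
    {'type': 'title', 'label': 'Content', 'apps': ['blog']},
    {'type': 'app', 'name': 'blog', 'label': 'Blog', 'icon': 'fa-pencil',
     'models': [{'name': 'article', 'label': 'Articles'}]},
    {'type': 'model', 'app': 'auth', 'name': 'user', 'label': 'Users'},
    {'type': 'free', 'label': 'Docs', 'url': '/docs/'},
]
```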
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_title_voice(self, item): """ Title voice Returns the js menu compatible voice dict if the user can see it, None otherwise """
view = True if item.get('perms', None): view = self.check_user_permission(item.get('perms', [])) elif item.get('apps', None): view = self.check_apps_permission(item.get('apps', [])) if view: return { 'type': 'title', 'label': item.get('label', ''), 'icon': item.get('icon', None) } return None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_free_voice(self, item): """ Free voice Returns the js menu compatible voice dict if the user can see it, None otherwise """
view = True if item.get('perms', None): view = self.check_user_permission(item.get('perms', [])) elif item.get('apps', None): view = self.check_apps_permission(item.get('apps', [])) if view: return { 'type': 'free', 'label': item.get('label', ''), 'icon': item.get('icon', None), 'url': item.get('url', None) } return None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_app_voice(self, item): """ App voice Returns the js menu compatible voice dict if the user can see it, None otherwise """
if item.get('name', None) is None: raise ImproperlyConfigured('App menu voices must have a name key') if self.check_apps_permission([item.get('name', None)]): children = [] if item.get('models', None) is None: for name, model in self.apps_dict[item.get('name')]['models'].items(): # noqa children.append({ 'type': 'model', 'label': model.get('name', ''), 'url': model.get('admin_url', '') }) else: for model_item in item.get('models', []): voice = self.get_model_voice(item.get('name'), model_item) if voice: children.append(voice) return { 'type': 'app', 'label': item.get('label', ''), 'icon': item.get('icon', None), 'children': children } return None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_app_model_voice(self, app_model_item): """ App Model voice Returns the js menu compatible voice dict if the user can see it, None otherwise """
if app_model_item.get('name', None) is None: raise ImproperlyConfigured('Model menu voices must have a name key') # noqa if app_model_item.get('app', None) is None: raise ImproperlyConfigured('Model menu voices must have an app key') # noqa return self.get_model_voice(app_model_item.get('app'), app_model_item)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_model_voice(self, app, model_item): """ Model voice Returns the js menu compatible voice dict if the user can see it, None otherwise """
if model_item.get('name', None) is None: raise ImproperlyConfigured('Model menu voices must have a name key') # noqa if self.check_model_permission(app, model_item.get('name', None)): return { 'type': 'model', 'label': model_item.get('label', ''), 'icon': model_item.get('icon', None), 'url': self.apps_dict[app]['models'][model_item.get('name')]['admin_url'], # noqa } return None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def create_app_list_dict(self): """ Creates a more efficient to check dictionary from the app_list list obtained from django admin """
d = {}
for app in self.app_list:
    models = {}
    for model in app.get('models', []):
        models[model.get('object_name').lower()] = model
    d[app.get('app_label').lower()] = {
        'app_url': app.get('app_url', ''),
        'app_label': app.get('app_label'),
        'models': models
    }
return d
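The transformation above can be sketched as a standalone function, run on a hypothetical `app_list` shaped like the one django admin provides (the sample app and model names below are illustrative only):

```python
# Minimal standalone version of the create_app_list_dict transformation:
# re-keys the admin app_list by lowercase app label, and each app's models
# by lowercase object name, for O(1) permission lookups later.

def create_app_list_dict(app_list):
    d = {}
    for app in app_list:
        models = {}
        for model in app.get('models', []):
            models[model.get('object_name').lower()] = model
        d[app.get('app_label').lower()] = {
            'app_url': app.get('app_url', ''),
            'app_label': app.get('app_label'),
            'models': models,
        }
    return d

# Hypothetical app_list entry, mimicking django admin's structure.
app_list = [{
    'app_label': 'Auth',
    'app_url': '/admin/auth/',
    'models': [{'object_name': 'User', 'name': 'Users',
                'admin_url': '/admin/auth/user/'}],
}]

result = create_app_list_dict(app_list)
print('auth' in result)                    # True
print('user' in result['auth']['models'])  # True
```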
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def check_apps_permission(self, apps): """ Checks if one of apps is listed in apps_dict Since apps_dict is derived from the app_list given by django admin, it lists only the apps the user can view """
for app in apps:
    if app in self.apps_dict:
        return True
return False
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def check_model_permission(self, app, model): """ Checks if model is listed in apps_dict Since apps_dict is derived from the app_list given by django admin, it lists only the apps and models the user can view """
if self.apps_dict.get(app, False) and model in self.apps_dict[app]['models']:
    return True
return False
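Both permission checks reduce to dictionary lookups against the structure built by `create_app_list_dict`. Here is a standalone sketch of the same logic, using a hypothetical `apps_dict` (the sample app and model names are assumptions for illustration):

```python
# Standalone versions of the two permission checks: visibility is decided
# purely by membership in apps_dict, which only contains what django admin
# already deemed visible to the current user.

def check_apps_permission(apps_dict, apps):
    """True if any of the given app labels is visible to the user."""
    return any(app in apps_dict for app in apps)

def check_model_permission(apps_dict, app, model):
    """True if the model is listed under a visible app."""
    return bool(apps_dict.get(app)) and model in apps_dict[app]['models']

# Hypothetical apps_dict, shaped like create_app_list_dict's output.
apps_dict = {
    'auth': {
        'app_url': '/admin/auth/',
        'app_label': 'auth',
        'models': {'user': {'admin_url': '/admin/auth/user/'}},
    },
}

print(check_apps_permission(apps_dict, ['auth', 'blog']))  # True
print(check_model_permission(apps_dict, 'auth', 'user'))   # True
print(check_model_permission(apps_dict, 'auth', 'group'))  # False
```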
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_default_voices(self): """ When no custom menu is defined in settings Retrieves a js menu ready dict from the django admin app list """
voices = []
for app in self.app_list:
    children = []
    for model in app.get('models', []):
        child = {
            'type': 'model',
            'label': model.get('name', ''),
            'url': model.get('admin_url', '')
        }
        children.append(child)
    voice = {
        'type': 'app',
        'label': app.get('name', ''),
        'url': app.get('app_url', ''),
        'children': children
    }
    voices.append(voice)
return voices
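The default-menu construction is a plain list-of-dicts transformation and can be sketched without Django at all; the sample `app_list` below is a hypothetical stand-in for what django admin supplies:

```python
# Standalone sketch of get_default_voices: each admin app becomes an 'app'
# voice whose children are 'model' voices, preserving labels and URLs.

def get_default_voices(app_list):
    voices = []
    for app in app_list:
        children = [{'type': 'model',
                     'label': m.get('name', ''),
                     'url': m.get('admin_url', '')}
                    for m in app.get('models', [])]
        voices.append({'type': 'app',
                       'label': app.get('name', ''),
                       'url': app.get('app_url', ''),
                       'children': children})
    return voices

# Hypothetical admin app_list entry.
app_list = [{
    'name': 'Authentication',
    'app_url': '/admin/auth/',
    'models': [{'name': 'Users', 'admin_url': '/admin/auth/user/'}],
}]

voices = get_default_voices(app_list)
print(voices[0]['label'])              # Authentication
print(voices[0]['children'][0]['url'])  # /admin/auth/user/
```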
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def uncheckButton(self): """Removes the checked state of all buttons in this widget. This method is also a slot. """
for button in self.buttons:
    # suppress editButtons toggled event
    button.blockSignals(True)
    if button.isChecked():
        button.setChecked(False)
    button.blockSignals(False)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def addColumn(self, columnName, dtype, defaultValue): """Adds a column with the given parameters to the underlying model This method is also a slot. If no model is set, nothing happens. Args: columnName (str): The name of the new column. dtype (numpy.dtype): The datatype of the new column. defaultValue (object): Fill the column with this value. """
model = self.tableView.model()
if model is not None:
    model.addDataFrameColumn(columnName, dtype, defaultValue)
self.addColumnButton.setChecked(False)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def showAddColumnDialog(self, triggered): """Display the dialog to add a column to the model. This method is also a slot. Args: triggered (bool): If the corresponding button was activated, the dialog will be created and shown. """
if triggered:
    dialog = AddAttributesDialog(self)
    dialog.accepted.connect(self.addColumn)
    dialog.rejected.connect(self.uncheckButton)
    dialog.show()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def addRow(self, triggered): """Adds a row to the model. This method is also a slot. Args: triggered (bool): If the corresponding button was activated, the row will be appended to the end. """
if triggered:
    model = self.tableView.model()
    model.addDataFrameRows()
    self.sender().setChecked(False)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def removeRow(self, triggered): """Removes a row from the model. This method is also a slot. Args: triggered (bool): If the corresponding button was activated, the selected row will be removed from the model. """
if triggered:
    model = self.tableView.model()
    selection = self.tableView.selectedIndexes()
    rows = [index.row() for index in selection]
    model.removeDataFrameRows(set(rows))
    self.sender().setChecked(False)
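One subtlety above: `selectedIndexes()` yields one index per selected cell, so the same row can appear many times; collapsing them into a `set` before deletion avoids removing (or re-counting) a row twice. The step can be sketched without Qt using a hypothetical stand-in for the index objects:

```python
# Sketch of the row-gathering step: duplicate row numbers from a
# cell-level selection are deduplicated with a set before removal.

class FakeIndex:
    """Hypothetical stand-in for QModelIndex exposing only .row()."""
    def __init__(self, row):
        self._row = row

    def row(self):
        return self._row

# A selection spanning several cells across rows 2 and 5.
selection = [FakeIndex(r) for r in (2, 2, 5, 2, 5)]
rows = set(index.row() for index in selection)
print(sorted(rows))  # [2, 5]
```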
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def removeColumns(self, columnNames): """Removes one or multiple columns from the model. This method is also a slot. Args: columnNames (list): A list of columns, which shall be removed from the model. """
model = self.tableView.model()
if model is not None:
    model.removeDataFrameColumns(columnNames)
self.removeColumnButton.setChecked(False)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def setViewModel(self, model): """Sets the model for the enclosed TableView in this widget. Args: model (DataFrameModel): The model to be displayed by the Table View. """
if isinstance(model, DataFrameModel):
    self.enableEditing(False)
    self.uncheckButton()
    selectionModel = self.tableView.selectionModel()
    self.tableView.setModel(model)
    model.dtypeChanged.connect(self.updateDelegate)
    model.dataChanged.connect(self.updateDelegates)
    del selectionModel
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def updateDelegates(self): """reset all delegates"""
for index, column in enumerate(self.tableView.model().dataFrame().columns):
    dtype = self.tableView.model().dataFrame()[column].dtype
    self.updateDelegate(index, dtype)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def createThread(parent, worker, deleteWorkerLater=False): """Create a new thread for given worker. Args: parent (QObject): parent of thread and worker. worker (ProgressWorker): worker to use in thread. deleteWorkerLater (bool, optional): delete the worker if thread finishes. Returns: QThread """
thread = QtCore.QThread(parent)
thread.started.connect(worker.doWork)
worker.finished.connect(thread.quit)
if deleteWorkerLater:
    thread.finished.connect(worker.deleteLater)
worker.moveToThread(thread)
worker.setParent(parent)
return thread