<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_samples_bqm(cls, samples_like, bqm, **kwargs):
"""Build a SampleSet from raw samples using a BinaryQuadraticModel to get
energies and vartype.

Args:
    samples_like:
        A collection of raw samples. 'samples_like' is an extension of
        NumPy's array_like. See :func:`.as_samples`.

    bqm (:obj:`.BinaryQuadraticModel`):
        A binary quadratic model. It is used to calculate the energies
        and set the vartype.

    info (dict, optional):
        Information about the :class:`SampleSet` as a whole formatted as
        a dict.

    num_occurrences (array_like, optional):
        Number of occurrences for each sample. If not provided, defaults
        to a vector of 1s.

    aggregate_samples (bool, optional, default=False):
        If true, returned :obj:`.SampleSet` will have all unique samples.

    sort_labels (bool, optional, default=True):
        If true, :attr:`.SampleSet.variables` will be in sorted-order.
        Note that mixed types are not sortable in which case the given
        order will be maintained.

    **vectors (array_like):
        Other per-sample data.

Returns:
    :obj:`.SampleSet`

""" |
# more performant to do this once, here rather than again in bqm.energies
# and in cls.from_samples
samples_like = as_samples(samples_like)
energies = bqm.energies(samples_like)
return cls.from_samples(samples_like, energy=energies, vartype=bqm.vartype, **kwargs) |
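`bqm.energies` does the heavy lifting above. As an illustrative sketch only (not dimod's implementation; the function name and arguments here are hypothetical), a batch energy calculation for an Ising-form model over a samples matrix looks like this:

```python
import numpy as np

def ising_energies(samples, h, J, offset=0.0):
    """Compute Ising energies for a batch of +/-1 spin samples.

    samples: (num_samples, num_variables) array-like of spins,
    h: length-num_variables vector of linear biases,
    J: dict mapping index pairs (i, j) to quadratic biases.
    """
    samples = np.asarray(samples)
    # linear contribution for every sample at once, plus constant offset
    energies = samples @ np.asarray(h) + offset
    # add each coupling's contribution column-wise
    for (i, j), bias in J.items():
        energies += bias * samples[:, i] * samples[:, j]
    return energies

samples = [[-1, 1], [1, 1]]
h = [0.5, -0.5]
energies = ising_energies(samples, h, {(0, 1): 1.0})
```

Computing the energies once and passing them to `from_samples`, as the code above does, avoids converting the samples twice.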
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def data_vectors(self):
"""The per-sample data in a vector.

Returns:
    dict: A dict where the keys are the fields in the record and the
    values are the corresponding arrays.

Note:
    This is equivalent to, and less performant than, accessing the
    corresponding field of :attr:`.SampleSet.record` directly.

""" |
return {field: self.record[field] for field in self.record.dtype.names
if field != 'sample'} |
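The dict comprehension above reads every non-sample field off a numpy structured array. A minimal standalone sketch, with a hand-built record array standing in for `SampleSet.record`:

```python
import numpy as np

# a record array mimicking SampleSet.record: a 'sample' field plus
# per-sample vectors (data here is made up)
record = np.rec.array(
    [([-1, 1], -1.0, 1), ([1, -1], 1.0, 2)],
    dtype=[('sample', np.int8, (2,)), ('energy', float),
           ('num_occurrences', int)])

# collect every field except 'sample' into a dict of vectors
vectors = {field: record[field] for field in record.dtype.names
           if field != 'sample'}
```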
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def first(self):
"""Sample with the lowest energy.

Raises:
    ValueError: If empty.

Example:
    Sample(sample={'a': -1, 'b': 1}, energy=-2.0, num_occurrences=1)

""" |
try:
return next(self.data(sorted_by='energy', name='Sample'))
except StopIteration:
raise ValueError('{} is empty'.format(self.__class__.__name__)) |
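The try/except above translates the `StopIteration` from an exhausted iterator into a friendlier `ValueError`. The same pattern in isolation (helper name is illustrative):

```python
def first_item(iterable, name="collection"):
    """Return the first item of an iterable, translating StopIteration
    into a ValueError, as SampleSet.first does."""
    try:
        return next(iter(iterable))
    except StopIteration:
        raise ValueError('{} is empty'.format(name))
```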
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def done(self):
"""Return True if a pending computation is done.

Used when a :class:`SampleSet` is constructed with
:meth:`SampleSet.from_future`.

Examples:
    This example uses a :class:`~concurrent.futures.Future` object
    directly. Typically a :class:`~concurrent.futures.Executor` sets
    the result of the future (see documentation for
    :mod:`concurrent.futures`).

""" |
return (not hasattr(self, '_future')) or (not hasattr(self._future, 'done')) or self._future.done() |
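The short-circuiting `or` chain above treats the object as done unless it wraps a future that reports otherwise. A standalone sketch with a toy class (`PendingResult` is hypothetical, not dimod's):

```python
from concurrent.futures import Future

class PendingResult:
    """Sketch of the done() check: resolved unless a wrapped future
    says otherwise."""
    def done(self):
        return (not hasattr(self, '_future')
                or not hasattr(self._future, 'done')
                or self._future.done())

p = PendingResult()
no_future_done = p.done()      # nothing pending: trivially done

p._future = Future()           # attach an unresolved future
pending_done = p.done()        # False until a result is set

p._future.set_result([1, -1])
resolved_done = p.done()       # True again
```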
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def samples(self, n=None, sorted_by='energy'):
"""Return an iterable over the samples.

Args:
    n (int, optional, default=None):
        Maximum number of samples to return in the view.

    sorted_by (str/None, optional, default='energy'):
        Selects the record field used to sort the samples. If None,
        samples are returned in record order.

Returns:
    :obj:`.SamplesArray`: A view object mapping variable labels to
    values.

""" |
if n is not None:
return self.samples(sorted_by=sorted_by)[:n]
if sorted_by is None:
samples = self.record.sample
else:
order = np.argsort(self.record[sorted_by])
samples = self.record.sample[order]
return SamplesArray(samples, self.variables) |
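The sorted view above boils down to `np.argsort` on the chosen field followed by fancy indexing of the sample matrix. A standalone sketch with made-up data:

```python
import numpy as np

samples = np.array([[1, -1], [-1, -1], [1, 1]], dtype=np.int8)
energy = np.array([1.0, -1.0, 0.0])

# sorted_by=None would return `samples` in record order;
# otherwise reorder the rows by the chosen field
order = np.argsort(energy)
sorted_samples = samples[order]
```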
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def copy(self):
"""Create a shallow copy.""" |
return self.__class__(self.record.copy(),
self.variables, # a new one is made in all cases
self.info.copy(),
self.vartype) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def aggregate(self):
"""Create a new SampleSet with repeated samples aggregated.

Returns:
    :obj:`.SampleSet`

Note:
    :attr:`.SampleSet.record.num_occurrences` are accumulated but no
    other fields are.

""" |
_, indices, inverse = np.unique(self.record.sample, axis=0,
return_index=True, return_inverse=True)
# unique also sorts the array which we don't want, so we undo the sort
order = np.argsort(indices)
indices = indices[order]
record = self.record[indices]
# map positions in the sorted-unique array to positions in `record`;
# `order` itself maps the other way, so we need its inverse permutation
new_position = np.argsort(order)
# fix the number of occurrences
record.num_occurrences = 0
for old_idx, new_idx in enumerate(inverse):
    new_idx = new_position[new_idx]
    record[new_idx].num_occurrences += self.record[old_idx].num_occurrences
# dev note: we don't check the energies as they should be the same
# for individual samples
return type(self)(record, self.variables, copy.deepcopy(self.info),
self.vartype) |
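The core of `aggregate` is `np.unique(..., axis=0)` plus bookkeeping to undo the sort it imposes. A self-contained sketch of the same accounting on plain numpy arrays (data is made up):

```python
import numpy as np

samples = np.array([[1, 0], [0, 1], [1, 0]], dtype=np.int8)
num_occurrences = np.array([1, 2, 3])

_, indices, inverse = np.unique(samples, axis=0,
                                return_index=True, return_inverse=True)
inverse = np.ravel(inverse)  # keep 1-D across numpy versions

# np.unique sorts the rows; reorder to first-appearance order
order = np.argsort(indices)
unique_rows = samples[indices[order]]

# accumulate occurrence counts onto the reordered unique rows
position = np.argsort(order)  # sorted-unique index -> reordered position
counts = np.zeros(len(unique_rows), dtype=int)
np.add.at(counts, position[inverse], num_occurrences)
```

Here row `[1, 0]` appears twice (counts 1 and 3) and `[0, 1]` once (count 2), so the aggregated counts are 4 and 2, in first-appearance order.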
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append_variables(self, samples_like, sort_labels=True):
"""Create a new sample set with the given variables and values added.

Not defined for empty sample sets. Note that when `samples_like` is a
:obj:`.SampleSet`, the data vectors and info are ignored.

Args:
    samples_like:
        Samples to add to the sample set. Should either be a single
        sample or should match the length of the sample set. See
        :func:`.as_samples` for what is allowed to be `samples_like`.

    sort_labels (bool, optional, default=True):
        If true, returned :attr:`.SampleSet.variables` will be in
        sorted-order. Note that mixed types are not sortable in which
        case the given order will be maintained.

Returns:
    :obj:`.SampleSet`: A new sample set with the variables/values added.

Examples:
       a  b  c energy num_oc.
    0 -1 +1 -1   -1.0       1
    1 +1 +1 -1    1.0       1
    ['SPIN', 2 rows, 2 samples, 3 variables]

    Add variables from another sample set to the original above. Note
    that the energies do not change.

       a  b  c  d energy num_oc.
    0 -1 +1 -1 +1   -1.0       1
    1 +1 +1 +1 +1    1.0       1
    ['SPIN', 2 rows, 2 samples, 4 variables]

""" |
samples, labels = as_samples(samples_like)
num_samples = len(self)
# we don't handle multiple values
if samples.shape[0] == num_samples:
# we don't need to do anything, it's already the correct shape
pass
elif samples.shape[0] == 1 and num_samples:
samples = np.repeat(samples, num_samples, axis=0)
else:
msg = ("mismatched shape. The samples to append should either be "
"a single sample or should match the length of the sample "
"set. Empty sample sets cannot be appended to.")
raise ValueError(msg)
# append requires the new variables to be unique
variables = self.variables
if any(v in variables for v in labels):
msg = "Appended samples cannot contain variables in sample set"
raise ValueError(msg)
new_variables = list(variables) + labels
new_samples = np.hstack((self.record.sample, samples))
return type(self).from_samples((new_samples, new_variables),
self.vartype,
info=copy.deepcopy(self.info), # make a copy
sort_labels=sort_labels,
**self.data_vectors) |
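The shape handling above (tile a single appended sample to the sample set's length, then column-stack) can be seen in isolation:

```python
import numpy as np

existing = np.array([[-1, 1], [1, 1]], dtype=np.int8)  # two samples over a, b
new = np.array([[-1]], dtype=np.int8)                  # one value for new var c

# a single appended sample is repeated to match the sample set length
if new.shape[0] == 1 and existing.shape[0] > 1:
    new = np.repeat(new, existing.shape[0], axis=0)

combined = np.hstack((existing, new))
```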
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lowest(self, rtol=1.e-5, atol=1.e-8):
"""Return a sample set containing the lowest-energy samples.

A sample is included if its energy is within tolerance of the lowest
energy in the sample set. The following equation is used to determine
if two values are equivalent:

    absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`))

See :func:`numpy.isclose` for additional details and caveats.

Args:
    rtol (float, optional, default=1.e-5):
        The relative tolerance (see above).

    atol (float, optional, default=1.e-8):
        The absolute tolerance (see above).

Returns:
    :obj:`.SampleSet`: A new sample set containing the lowest energy
    samples as delimited by configured tolerances from the lowest
    energy sample in the current sample set.

Examples:
       a  b energy num_oc.
    0 -1 -1 -1.001       1
    ['SPIN', 1 rows, 1 samples, 2 variables]

       a  b energy num_oc.
    0 -1 -1 -1.001       1
    1 +1 +1 -0.999       1
    ['SPIN', 2 rows, 2 samples, 2 variables]

Note:
    "Lowest energy" is the lowest energy in the sample set. This is
    not always the "ground energy", which is the lowest energy
    possible for a binary quadratic model.

""" |
if len(self) == 0:
# empty so all are lowest
return self.copy()
record = self.record
# want all the rows within tolerance of the minimal energy
close = np.isclose(record.energy,
np.min(record.energy),
rtol=rtol, atol=atol)
record = record[close]
return type(self)(record, self.variables, copy.deepcopy(self.info),
self.vartype) |
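The tolerance check is exactly `np.isclose` against the minimum; widening `atol` pulls in near-ties, matching the two docstring examples above. A standalone sketch:

```python
import numpy as np

energy = np.array([-1.001, -0.999, 0.5])

# default tolerances keep only the strict minimum
strict = energy[np.isclose(energy, energy.min(), rtol=1.e-5, atol=1.e-8)]

# a looser absolute tolerance also admits the near-tie at -0.999
loose = energy[np.isclose(energy, energy.min(), rtol=1.e-5, atol=0.01)]
```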
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def slice(self, *slice_args, **kwargs):
"""Create a new SampleSet with rows sliced according to standard
Python slicing syntax.

Args:
    start (int, optional, default=None):
        Start index for `slice`.

    stop (int):
        Stop index for `slice`.

    step (int, optional, default=None):
        Step value for `slice`.

    sorted_by (str/None, optional, default='energy'):
        Selects the record field used to sort the samples before
        slicing. Note that `sorted_by` determines the sample order in
        the returned SampleSet.

Returns:
    :obj:`.SampleSet`

Examples:
       0  1  2  3  4  5  6  7  8  9 energy num_oc.
    0  1  0  0  0  0  0  0  0  0  0      0       1
    1  0  1  0  0  0  0  0  0  0  0      1       1
    2  0  0  1  0  0  0  0  0  0  0      2       1
    3  0  0  0  1  0  0  0  0  0  0      3       1
    4  0  0  0  0  1  0  0  0  0  0      4       1
    5  0  0  0  0  0  1  0  0  0  0      5       1
    6  0  0  0  0  0  0  1  0  0  0      6       1
    7  0  0  0  0  0  0  0  1  0  0      7       1
    8  0  0  0  0  0  0  0  0  1  0      8       1
    9  0  0  0  0  0  0  0  0  0  1      9       1
    ['BINARY', 10 rows, 10 samples, 10 variables]

       0  1  2  3  4  5  6  7  8  9 energy num_oc.
    0  1  0  0  0  0  0  0  0  0  0      0       1
    1  0  1  0  0  0  0  0  0  0  0      1       1
    2  0  0  1  0  0  0  0  0  0  0      2       1
    ['BINARY', 3 rows, 3 samples, 10 variables]

       0  1  2  3  4  5  6  7  8  9 energy num_oc.
    0  0  0  0  0  0  0  0  1  0  0      7       1
    1  0  0  0  0  0  0  0  0  1  0      8       1
    2  0  0  0  0  0  0  0  0  0  1      9       1
    ['BINARY', 3 rows, 3 samples, 10 variables]

       0  1  2  3  4  5  6  7  8  9 energy num_oc.
    0  0  0  0  1  0  0  0  0  0  0      3       1
    1  0  0  0  0  0  1  0  0  0  0      5       1
    ['BINARY', 2 rows, 2 samples, 10 variables]

""" |
# handle `sorted_by` kwarg with a default value in a python2-compatible way
sorted_by = kwargs.pop('sorted_by', 'energy')
if kwargs:
# be strict about allowed kwargs: throw the same error as python3 would
raise TypeError('slice got an unexpected '
'keyword argument {!r}'.format(kwargs.popitem()[0]))
# follow Python's slice syntax
if slice_args:
selector = slice(*slice_args)
else:
selector = slice(None)
if sorted_by is None:
record = self.record[selector]
else:
sort_indices = np.argsort(self.record[sorted_by])
record = self.record[sort_indices[selector]]
return type(self)(record, self.variables, copy.deepcopy(self.info),
self.vartype) |
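The selector logic (build a `slice` from the positional args, then index either the raw or argsorted record) can be sketched standalone; the helper name and `sort_key` parameter are illustrative:

```python
import numpy as np

def sliced(field, *slice_args, sort_key=None):
    """Apply Python slice syntax to an array, optionally after
    sorting rows by another vector."""
    # slice(*args) mirrors Python's own slice-construction rules
    selector = slice(*slice_args) if slice_args else slice(None)
    if sort_key is None:
        return field[selector]
    return field[np.argsort(sort_key)][selector]

energies = np.array([3.0, 1.0, 2.0, 0.0])
```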
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_pandas_dataframe(self, sample_column=False):
"""Convert a SampleSet to a Pandas DataFrame.

Args:
    sample_column (bool, optional, default=False):
        If True, samples are stored in a single column of dicts rather
        than one column per variable.

Returns:
    :obj:`pandas.DataFrame`

Examples:
       a  b  c  energy  num_occurrences
    0 -1  1 -1    -0.5                1
    1 -1 -1  1    -0.5                1

                           sample  energy  num_occurrences
    0  {'a': -1, 'b': 1, 'c': -1}    -0.5                1
    1  {'a': -1, 'b': -1, 'c': 1}    -0.5                1

""" |
import pandas as pd
if sample_column:
df = pd.DataFrame(self.data(sorted_by=None, sample_dict_cast=True))
else:
# work directly with the record, it's much faster
df = pd.DataFrame(self.record.sample, columns=self.variables)
for field in sorted(self.record.dtype.fields): # sort for consistency
if field == 'sample':
continue
df.loc[:, field] = self.record[field]
return df |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def penalty_satisfaction(response, bqm):
"""Create a penalty-satisfaction list.

Given a SampleSet and a bqm object, creates a binary list indicating,
for each sample in the SampleSet, whether the penalties introduced
during degree reduction are satisfied.

Args:
    response (:obj:`.SampleSet`):
        Samples corresponding to the provided bqm.

    bqm (:obj:`.BinaryQuadraticModel`):
        A bqm object that contains its reduction info.

Returns:
    :obj:`numpy.ndarray`: A binary array of penalty-satisfaction
    information.

""" |
record = response.record
label_dict = response.variables.index
if len(bqm.info['reduction']) == 0:
return np.array([1] * len(record.sample))
penalty_vector = np.prod([record.sample[:, label_dict[qi]] *
record.sample[:, label_dict[qj]]
== record.sample[:,
label_dict[valdict['product']]]
for (qi, qj), valdict in
bqm.info['reduction'].items()], axis=0)
return penalty_vector |
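The `np.prod` over elementwise comparisons yields 1 only where every product constraint holds. A small standalone version with one hypothetical reduction (`p` should equal the spin product `x*y`):

```python
import numpy as np

# samples over variables [x, y, p] where p should equal x*y
samples = np.array([[ 1,  1,  1],
                    [ 1, -1,  1],
                    [-1, -1,  1]], dtype=np.int8)
label = {'x': 0, 'y': 1, 'p': 2}
reductions = {('x', 'y'): {'product': 'p'}}

# 1 where every product constraint holds, 0 otherwise
penalty = np.prod([samples[:, label[a]] * samples[:, label[b]]
                   == samples[:, label[d['product']]]
                   for (a, b), d in reductions.items()], axis=0)
```

Only the middle sample violates the constraint (1 * -1 != 1), so it alone gets a 0.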
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def polymorph_response(response, poly, bqm, penalty_strength=None, keep_penalty_variables=True, discard_unsatisfied=False):
"""Transform the sample set for the higher-order problem.

Given a response of a penalized HUBO, this function creates a new
sample set object, taking into account penalty information, and
calculates the energies of samples for the higher-order problem.

Args:
    response (:obj:`.SampleSet`):
        Response for a penalized HUBO.

    poly (:obj:`.BinaryPolynomial`):
        A binary polynomial.

    bqm (:obj:`dimod.BinaryQuadraticModel`):
        Binary quadratic model of the reduced problem.

    penalty_strength (float, optional):
        Default is None. If provided, will be added to the info field
        of the returned sample set object.

    keep_penalty_variables (bool, optional):
        Default is True. If False, removes the variables used for
        penalty from the samples.

    discard_unsatisfied (bool, optional):
        Default is False. If True, discards samples that do not
        satisfy the penalty conditions.

Returns:
    :obj:`.SampleSet`: A sample set object that has additional penalty
    information. The energies of samples are calculated for the HUBO
    ignoring the penalty variables.

""" |
record = response.record
penalty_vector = penalty_satisfaction(response, bqm)
original_variables = bqm.variables
if discard_unsatisfied:
samples_to_keep = list(map(bool, list(penalty_vector)))
penalty_vector = np.array([True] * np.sum(samples_to_keep))
else:
samples_to_keep = list(map(bool, [1] * len(record.sample)))
samples = record.sample[samples_to_keep]
energy_vector = poly.energies((samples, response.variables))
if not keep_penalty_variables:
original_variables = poly.variables
idxs = [response.variables.index[v] for v in original_variables]
samples = np.asarray(samples[:, idxs])
num_samples, num_variables = np.shape(samples)
datatypes = [('sample', np.dtype(np.int8), (num_variables,)),
('energy', energy_vector.dtype),
('penalty_satisfaction',
penalty_vector.dtype)]
datatypes.extend((name, record[name].dtype, record[name].shape[1:])
for name in record.dtype.names if
name not in {'sample',
'energy'})
data = np.rec.array(np.empty(num_samples, dtype=datatypes))
data.sample = samples
data.energy = energy_vector
for name in record.dtype.names:
if name not in {'sample', 'energy'}:
data[name] = record[name][samples_to_keep]
data['penalty_satisfaction'] = penalty_vector
response.info['reduction'] = bqm.info['reduction']
if penalty_strength is not None:
response.info['penalty_strength'] = penalty_strength
return SampleSet(data, original_variables, response.info,
response.vartype) |
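Assembling the new record (samples, energies, plus extra per-sample vectors) relies on `np.rec.array` over a custom dtype, as the code above does. A minimal sketch with made-up data:

```python
import numpy as np

num_samples, num_variables = 2, 3
samples = np.array([[0, 1, 1], [1, 0, 1]], dtype=np.int8)
energy = np.array([-1.5, 2.0])
satisfied = np.array([1, 0], dtype=np.int8)

# assemble a structured record with sample, energy and an extra vector
datatypes = [('sample', np.int8, (num_variables,)),
             ('energy', energy.dtype),
             ('penalty_satisfaction', satisfied.dtype)]
data = np.rec.array(np.empty(num_samples, dtype=datatypes))
data.sample = samples
data.energy = energy
data['penalty_satisfaction'] = satisfied
```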
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sample_poly(self, poly, penalty_strength=1.0, keep_penalty_variables=False, discard_unsatisfied=False, **parameters):
"""Sample from the given binary polynomial.

Takes the given binary polynomial, introduces penalties, reduces the
higher-order problem into a quadratic problem and sends it to its
child sampler.

Args:
    poly (:obj:`.BinaryPolynomial`):
        A binary polynomial.

    penalty_strength (float, optional):
        Strength of the reduction constraint. Insufficient strength
        can result in the binary quadratic model not having the same
        minimization as the polynomial.

    keep_penalty_variables (bool, optional):
        Default is False. If False, removes the variables used for
        penalty from the samples.

    discard_unsatisfied (bool, optional):
        Default is False. If True, discards samples that do not
        satisfy the penalty conditions.

    **parameters:
        Parameters for the sampling method, specified by the child
        sampler.

Returns:
    :obj:`dimod.SampleSet`

""" |
bqm = make_quadratic(poly, penalty_strength, vartype=poly.vartype)
response = self.child.sample(bqm, **parameters)
return polymorph_response(response, poly, bqm,
penalty_strength=penalty_strength,
keep_penalty_variables=keep_penalty_variables,
discard_unsatisfied=discard_unsatisfied) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sample_poly(self, poly, scalar=None, bias_range=1, poly_range=None, ignored_terms=None, **parameters):
"""Scale and sample from the given binary polynomial.

If scalar is not given, the problem is scaled based on bias and
polynomial ranges. See :meth:`.BinaryPolynomial.scale` and
:meth:`.BinaryPolynomial.normalize`.

Args:
    poly (:obj:`.BinaryPolynomial`):
        A binary polynomial.

    scalar (number, optional):
        Value by which to scale the energy range of the binary
        polynomial.

    bias_range (number/pair, optional, default=1):
        Value/range by which to normalize all the biases, or if
        `poly_range` is provided, just the linear biases.

    poly_range (number/pair, optional):
        Value/range by which to normalize the higher-order biases.

    ignored_terms (iterable, optional):
        Biases associated with these terms are not scaled.

    **parameters:
        Other parameters for the sampling method, specified by the
        child sampler.

""" |
if ignored_terms is None:
ignored_terms = set()
else:
ignored_terms = {frozenset(term) for term in ignored_terms}
# scale and normalize happen in-place so we need to make a copy
original, poly = poly, poly.copy()
if scalar is not None:
poly.scale(scalar, ignored_terms=ignored_terms)
else:
poly.normalize(bias_range=bias_range, poly_range=poly_range,
ignored_terms=ignored_terms)
# we need to know how much we scaled by, which we can do by looking
# at the biases
try:
v = next(v for v, bias in original.items()
if bias and v not in ignored_terms)
except StopIteration:
# nothing to scale
scalar = 1
else:
scalar = poly[v] / original[v]
sampleset = self.child.sample_poly(poly, **parameters)
if ignored_terms:
# we need to recalculate the energy
sampleset.record.energy = original.energies((sampleset.record.sample,
sampleset.variables))
else:
sampleset.record.energy /= scalar
return sampleset |
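Recovering the scale factor from the ratio of one surviving bias, then dividing it back out of the sampled energies, can be shown with plain dicts (the term keys and values here are made up):

```python
# hypothetical polynomial biases before and after normalization
original = {('a',): 2.0, ('a', 'b'): -4.0}
scaled = {('a',): 0.5, ('a', 'b'): -1.0}

# pick any term with a nonzero bias and take the ratio
term = next(t for t, bias in original.items() if bias)
scalar = scaled[term] / original[term]

# undo the scaling on the sampled energies
scaled_energies = [-0.75, 0.25]
energies = [e / scalar for e in scaled_energies]
```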
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sample_poly(self, poly, **kwargs):
"""Sample from the binary polynomial and truncate output.

Args:
    poly (:obj:`.BinaryPolynomial`):
        A binary polynomial.

    **kwargs:
        Parameters for the sampling method, specified by the child
        sampler.

Returns:
    :obj:`dimod.SampleSet`

""" |
tkw = self._truncate_kwargs
if self._aggregate:
return self.child.sample_poly(poly, **kwargs).aggregate().truncate(**tkw)
else:
return self.child.sample_poly(poly, **kwargs).truncate(**tkw) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _samples_dicts_to_array(samples_dicts, labels):
"""Convert an iterable of samples, where each sample is a dict, to a
numpy 2d array. Also determines the labels if they are None.
""" |
itersamples = iter(samples_dicts)
first_sample = next(itersamples)
if labels is None:
labels = list(first_sample)
num_variables = len(labels)
def _iter_samples():
yield np.fromiter((first_sample[v] for v in labels),
count=num_variables, dtype=np.int8)
try:
for sample in itersamples:
yield np.fromiter((sample[v] for v in labels),
count=num_variables, dtype=np.int8)
except KeyError:
msg = ("Each dict in 'samples' must have the same keys.")
raise ValueError(msg)
return np.stack(list(_iter_samples())), labels |
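Condensed, the helper consumes the first dict to fix the labels, then converts each dict to a row with `np.fromiter`. A simplified, self-contained version (without the mismatched-keys check):

```python
import numpy as np

def dicts_to_array(samples_dicts, labels=None):
    """Convert an iterable of sample dicts to a 2-D int8 array,
    deriving the labels from the first dict when not given."""
    samples_dicts = iter(samples_dicts)
    first = next(samples_dicts)
    if labels is None:
        labels = list(first)
    n = len(labels)
    rows = [np.fromiter((first[v] for v in labels), count=n, dtype=np.int8)]
    for sample in samples_dicts:
        rows.append(np.fromiter((sample[v] for v in labels),
                                count=n, dtype=np.int8))
    return np.stack(rows), labels

arr, labels = dicts_to_array([{'a': -1, 'b': 1}, {'a': 1, 'b': 1}])
```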
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def data_struct_array(sample, **vectors):
# data_struct_array(sample, *, energy, **vectors):
"""Combine samples and per-sample data into a numpy structured array.

Args:
    sample (array_like):
        Samples, in any form that can be converted into a numpy array.

    energy (array_like, required):
        Required keyword argument. Energies, in any form that can be
        converted into a numpy 1-dimensional array.

    **kwargs (array_like):
        Other per-sample data, in any form that can be converted into
        a numpy array.

Returns:
    :obj:`~numpy.ndarray`: A numpy structured array. Has fields
    ['sample', 'energy', 'num_occurrences', **kwargs].

""" |
if not len(sample):
# if samples are empty
sample = np.zeros((0, 0), dtype=np.int8)
else:
sample = np.asarray(sample, dtype=np.int8)
if sample.ndim < 2:
sample = np.expand_dims(sample, 0)
num_samples, num_variables = sample.shape
if 'num_occurrences' not in vectors:
vectors['num_occurrences'] = [1] * num_samples
datavectors = {}
datatypes = [('sample', np.dtype(np.int8), (num_variables,))]
for kwarg, vector in vectors.items():
dtype = float if kwarg == 'energy' else None
datavectors[kwarg] = vector = np.asarray(vector, dtype)
if len(vector.shape) < 1 or vector.shape[0] != num_samples:
msg = ('{} and sample have a mismatched shape {}, {}. They must have the same size '
'in the first axis.').format(kwarg, vector.shape, sample.shape)
raise ValueError(msg)
datatypes.append((kwarg, vector.dtype, vector.shape[1:]))
if 'energy' not in datavectors:
# consistent error with the one thrown in python3
raise TypeError('data_struct_array() needs keyword-only argument energy')
elif datavectors['energy'].shape != (num_samples,):
raise ValueError('energy should be a vector of length {}'.format(num_samples))
data = np.rec.array(np.zeros(num_samples, dtype=datatypes))
data['sample'] = sample
for kwarg, vector in datavectors.items():
data[kwarg] = vector
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_samples(cls, samples_like, vectors, info, vartype, variable_labels=None):
"""Build a response from samples.

Args:
    samples_like:
        A collection of samples. 'samples_like' is an extension of
        NumPy's array_like to include an iterable of sample
        dictionaries (as returned by :meth:`.Response.samples`).

    data_vectors (dict[field, :obj:`numpy.array`/list]):
        Additional per-sample data as a dict of vectors. Each vector
        is the same length as `samples_matrix`. The key 'energy' and
        its vector are required.

    info (dict):
        Information about the response as a whole formatted as a dict.

    vartype (:class:`.Vartype`/str/set):
        Variable type for the response. Accepted input values:

        * :class:`.Vartype.SPIN`, ``'SPIN'``, ``{-1, 1}``
        * :class:`.Vartype.BINARY`, ``'BINARY'``, ``{0, 1}``

    variable_labels (list, optional):
        Determines the variable labels if samples_like is not an
        iterable of dictionaries. If samples_like is not an iterable
        of dictionaries and variable_labels is not provided, then
        index labels are used.

Returns:
    :obj:`.Response`

""" |
# there is no np.is_array_like so we use a try-except block
try:
# trying to cast it to int8 rules out list of dictionaries. If we didn't try to cast
# then it would just create a vector of np.object
samples = np.asarray(samples_like, dtype=np.int8)
except TypeError:
# if labels are None, they are set here
samples, variable_labels = _samples_dicts_to_array(samples_like, variable_labels)
assert samples.dtype == np.int8, 'sanity check'
record = data_struct_array(samples, **vectors)
# if labels are still None, set them here. We could do this in an else in the try-except
# block, but the samples-array might not have the correct shape
if variable_labels is None:
__, num_variables = record.sample.shape
variable_labels = list(range(num_variables))
return cls(record, variable_labels, info, vartype) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def breathe_identifier(self):
"""
The unique identifier for breathe directives.

.. note::

   This method is currently assumed to only be called for nodes that
   are in :data:`exhale.utils.LEAF_LIKE_KINDS` (see also
   :func:`exhale.graph.ExhaleRoot.generateSingleNodeRST` where it is
   used).

**Return** (:class:`python:str`)
    Usually, this will just be ``self.name``. However, for functions
    in particular the signature must be included to distinguish
    overloads.
""" |
if self.kind == "function":
# TODO: breathe bug with templates and overloads, don't know what to do...
return "{name}({parameters})".format(
name=self.name,
parameters=", ".join(self.parameters)
)
return self.name |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def full_signature(self):
"""
The full signature of a ``"function"`` node.

**Return** (:class:`python:str`)
    The full signature of the function, including template, return
    type, name, and parameter types.

**Raises** (:class:`python:RuntimeError`)
    If ``self.kind != "function"``.
""" |
if self.kind == "function":
return "{template}{return_type} {name}({parameters})".format(
template="template <{0}> ".format(", ".join(self.template)) if self.template else "",
return_type=self.return_type,
name=self.name,
parameters=", ".join(self.parameters)
)
raise RuntimeError(
"full_signature may only be called for a 'function', but {name} is a '{kind}' node.".format(
name=self.name, kind=self.kind
)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def findNestedNamespaces(self, lst):
'''
Recursive helper function for finding nested namespaces. If this node is a
namespace node, it is appended to ``lst``. Each node also calls each of its
child ``findNestedNamespaces`` with the same list.
:Parameters:
``lst`` (list)
The list each namespace node is to be appended to.
'''
if self.kind == "namespace":
lst.append(self)
for c in self.children:
c.findNestedNamespaces(lst) |
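The three `findNested*` helpers (namespaces, directories, class-like nodes) share one pre-order accumulation pattern: append `self` on a kind match, then recurse into children with the same list. A generic sketch with a toy node class (names are illustrative, not exhale's):

```python
class Node:
    """Tiny stand-in for ExhaleNode: a kind plus children."""
    def __init__(self, kind, children=()):
        self.kind = kind
        self.children = list(children)

    def find_nested(self, kind, lst):
        # append self when the kind matches, then recurse into children
        if self.kind == kind:
            lst.append(self)
        for c in self.children:
            c.find_nested(kind, lst)

inner = Node("namespace")
tree = Node("namespace", [Node("class", [Node("namespace")]), inner])
found = []
tree.find_nested("namespace", found)  # pre-order: root first, `inner` last
```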
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def findNestedDirectories(self, lst):
'''
Recursive helper function for finding nested directories. If this node is a
directory node, it is appended to ``lst``. Each node also calls each of its
child ``findNestedDirectories`` with the same list.
:Parameters:
``lst`` (list)
The list each directory node is to be appended to.
'''
if self.kind == "dir":
lst.append(self)
for c in self.children:
c.findNestedDirectories(lst) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def findNestedClassLike(self, lst):
'''
Recursive helper function for finding nested classes and structs. If this node
is a class or struct, it is appended to ``lst``. Each node also calls each of
its child ``findNestedClassLike`` with the same list.
:Parameters:
``lst`` (list)
The list each class or struct node is to be appended to.
'''
if self.kind == "class" or self.kind == "struct":
lst.append(self)
for c in self.children:
c.findNestedClassLike(lst) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def generateDirectoryNodeDocuments(self):
'''
Generates all of the directory reStructuredText documents.
'''
all_dirs = []
for d in self.dirs:
d.findNestedDirectories(all_dirs)
for d in all_dirs:
self.generateDirectoryNodeRST(d) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def gerrymanderNodeFilenames(self):
'''
When creating nodes, the filename needs to be relative to ``conf.py``, so it
will include ``self.root_directory``. However, when generating the API, the
file we are writing to is in the same directory as the generated node files so
we need to remove the directory path from a given ExhaleNode's ``file_name``
before we can ``include`` it or use it in a ``toctree``.
'''
for node in self.all_nodes:
node.file_name = os.path.basename(node.file_name)
if node.kind == "file":
node.program_file = os.path.basename(node.program_file) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def generateClassView(self):
'''
Generates the class view hierarchy, writing it to ``self.class_hierarchy_file``.
'''
class_view_stream = StringIO()
for n in self.namespaces:
n.toHierarchy(True, 0, class_view_stream)
# Add everything that was not nested in a namespace.
missing = []
# class-like objects (structs and classes)
for cl in sorted(self.class_like):
if not cl.in_class_hierarchy:
missing.append(cl)
# enums
for e in sorted(self.enums):
if not e.in_class_hierarchy:
missing.append(e)
# unions
for u in sorted(self.unions):
if not u.in_class_hierarchy:
missing.append(u)
if len(missing) > 0:
idx = 0
last_missing_child = len(missing) - 1
for m in missing:
m.toHierarchy(True, 0, class_view_stream, idx == last_missing_child)
idx += 1
elif configs.createTreeView:
# need to restart since there were no missing children found, otherwise the
# last namespace will not correctly have a lastChild
class_view_stream.close()
class_view_stream = StringIO()
last_nspace_index = len(self.namespaces) - 1
for idx in range(last_nspace_index + 1):
nspace = self.namespaces[idx]
nspace.toHierarchy(True, 0, class_view_stream, idx == last_nspace_index)
# extract the value from the stream and close it down
class_view_string = class_view_stream.getvalue()
class_view_stream.close()
return class_view_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def generateDirectoryView(self):
'''
Generates the file view hierarchy, writing it to ``self.file_hierarchy_file``.
'''
file_view_stream = StringIO()
for d in self.dirs:
d.toHierarchy(False, 0, file_view_stream)
# add potential missing files (not sure if this is possible though)
missing = []
for f in sorted(self.files):
if not f.in_file_hierarchy:
missing.append(f)
found_missing = len(missing) > 0
if found_missing:
idx = 0
last_missing_child = len(missing) - 1
for m in missing:
m.toHierarchy(False, 0, file_view_stream, idx == last_missing_child)
idx += 1
elif configs.createTreeView:
# need to restart since there were no missing children found, otherwise the
# last directory will not correctly have a lastChild
file_view_stream.close()
file_view_stream = StringIO()
last_dir_index = len(self.dirs) - 1
for idx in range(last_dir_index + 1):
curr_d = self.dirs[idx]
curr_d.toHierarchy(False, 0, file_view_stream, idx == last_dir_index)
# extract the value from the stream and close it down
file_view_string = file_view_stream.getvalue()
file_view_stream.close()
return file_view_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def toConsole(self):
'''
Convenience function for printing out the entire API being generated to the
console. Unused in the release, but is helpful for debugging ;)
'''
fmt_spec = {
"class": utils.AnsiColors.BOLD_MAGENTA,
"struct": utils.AnsiColors.BOLD_CYAN,
"define": utils.AnsiColors.BOLD_YELLOW,
"enum": utils.AnsiColors.BOLD_MAGENTA,
"enumvalue": utils.AnsiColors.BOLD_RED, # red means unused in framework
"function": utils.AnsiColors.BOLD_CYAN,
"file": utils.AnsiColors.BOLD_YELLOW,
"dir": utils.AnsiColors.BOLD_MAGENTA,
"group": utils.AnsiColors.BOLD_RED, # red means unused in framework
"namespace": utils.AnsiColors.BOLD_CYAN,
"typedef": utils.AnsiColors.BOLD_YELLOW,
"union": utils.AnsiColors.BOLD_MAGENTA,
"variable": utils.AnsiColors.BOLD_CYAN
}
self.consoleFormat(
"{0} and {1}".format(
utils._use_color("Classes", fmt_spec["class"], sys.stderr),
utils._use_color("Structs", fmt_spec["struct"], sys.stderr),
),
self.class_like,
fmt_spec
)
self.consoleFormat(
utils._use_color("Defines", fmt_spec["define"], sys.stderr),
self.defines,
fmt_spec
)
self.consoleFormat(
utils._use_color("Enums", fmt_spec["enum"], sys.stderr),
self.enums,
fmt_spec
)
self.consoleFormat(
utils._use_color("Enum Values (unused)", fmt_spec["enumvalue"], sys.stderr),
self.enum_values,
fmt_spec
)
self.consoleFormat(
utils._use_color("Functions", fmt_spec["function"], sys.stderr),
self.functions,
fmt_spec
)
self.consoleFormat(
utils._use_color("Files", fmt_spec["file"], sys.stderr),
self.files,
fmt_spec
)
self.consoleFormat(
utils._use_color("Directories", fmt_spec["dir"], sys.stderr),
self.dirs,
fmt_spec
)
self.consoleFormat(
utils._use_color("Groups (unused)", fmt_spec["group"], sys.stderr),
self.groups,
fmt_spec
)
self.consoleFormat(
utils._use_color("Namespaces", fmt_spec["namespace"], sys.stderr),
self.namespaces,
fmt_spec
)
self.consoleFormat(
utils._use_color("Typedefs", fmt_spec["typedef"], sys.stderr),
self.typedefs,
fmt_spec
)
self.consoleFormat(
utils._use_color("Unions", fmt_spec["union"], sys.stderr),
self.unions,
fmt_spec
)
self.consoleFormat(
utils._use_color("Variables", fmt_spec["variable"], sys.stderr),
self.variables,
fmt_spec
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sanitize(name):
""" Sanitize the specified ``name`` for use with breathe directives. **Parameters** ``name`` (:class:`python:str`) The name to be sanitized. **Return** :class:`python:str` The input ``name`` sanitized to use with breathe directives (primarily for use with ``.. doxygenfunction::``). Replacements such as ``"<" -> "<"`` are performed, as well as removing spaces ``"< " -> "<"`` must be done. Breathe is particularly sensitive with respect to whitespace. """ |
return name.replace(
"<", "<"
).replace(
">", ">"
).replace(
"&", "&"
).replace(
"< ", "<"
).replace(
" >", ">"
).replace(
" &", "&"
).replace(
"& ", "&"
) |
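The chained `.replace()` calls can be condensed with the standard library's `html.unescape`, which handles `&lt;`, `&gt;`, and `&amp;` in one pass. A sketch of the same idea (an alternative, not the module's actual implementation):

```python
import html

def sanitize(name):
    # html.unescape turns &lt;/&gt;/&amp; back into </>/& in one call
    name = html.unescape(name)
    # then strip the stray spaces breathe chokes on
    for src, dst in (("< ", "<"), (" >", ">"), (" &", "&"), ("& ", "&")):
        name = name.replace(src, dst)
    return name

print(sanitize("std::vector&lt; int &gt;"))  # std::vector<int>
```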
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def doxygenLanguageToPygmentsLexer(location, language):
'''
Given an input location and language specification, acquire the Pygments lexer to
use for this file.
1. If :data:`configs.lexerMapping <exhale.configs.lexerMapping>` has been specified,
then :data:`configs._compiled_lexer_mapping <exhale.configs._compiled_lexer_mapping>`
will be queried first using the ``location`` parameter.
2. If no matching was found, then the appropriate lexer defined in
:data:`LANG_TO_LEX <exhale.utils.LANG_TO_LEX>` is used.
3. If no matching language is found, ``"none"`` is returned (indicating to Pygments
that no syntax highlighting should occur).
'''
if configs._compiled_lexer_mapping:
for regex in configs._compiled_lexer_mapping:
if regex.match(location):
return configs._compiled_lexer_mapping[regex]
if language in LANG_TO_LEX:
return LANG_TO_LEX[language]
return "none" |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def getBriefAndDetailedRST(textRoot, node):
'''
Given an input ``node``, return a tuple of strings where the first element of
the return is the ``brief`` description and the second is the ``detailed``
description.
.. todo:: actually document this
'''
node_xml_contents = utils.nodeCompoundXMLContents(node)
if not node_xml_contents:
return "", ""
try:
node_soup = BeautifulSoup(node_xml_contents, "lxml-xml")
except:
utils.fancyError("Unable to parse [{0}] xml using BeautifulSoup".format(node.name))
try:
# In the file xml definitions, things such as enums or defines are listed inside
# of <sectiondef> tags, which may have some nested <briefdescription> or
# <detaileddescription> tags. So as long as we make sure not to search
# recursively, then the following will extract the file descriptions only
# process the brief description if provided
brief = node_soup.doxygen.compounddef.find_all("briefdescription", recursive=False)
brief_desc = ""
if len(brief) == 1:
brief = brief[0]
# Empty descriptions will usually get parsed as a single newline, which we
# want to ignore ;)
if not brief.get_text().isspace():
brief_desc = convertDescriptionToRST(textRoot, node, brief, None)
# process the detailed description if provided
detailed = node_soup.doxygen.compounddef.find_all("detaileddescription", recursive=False)
detailed_desc = ""
if len(detailed) == 1:
detailed = detailed[0]
if not detailed.get_text().isspace():
detailed_desc = convertDescriptionToRST(textRoot, node, detailed, "Detailed Description")
return brief_desc, detailed_desc
except:
utils.fancyError(
"Could not acquire soup.doxygen.compounddef; likely not a doxygen xml file."
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _build_url(self, endpoint):
""" Builds the absolute URL using the target and desired endpoint. """ |
try:
path = self.endpoints[endpoint]
except KeyError:
msg = 'Unknown endpoint `{0}`'
raise ValueError(msg.format(endpoint))
absolute_url = urljoin(self.target, path)
return absolute_url |
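`urljoin` does the path splicing here; note that it drops the last path segment of the base URL when the target lacks a trailing slash. A self-contained sketch with a hypothetical endpoint table:

```python
from urllib.parse import urljoin

# hypothetical endpoint table; the real client keeps one entry per API call
ENDPOINTS = {'schedule': 'schedule.json'}

def build_url(target, endpoint):
    try:
        path = ENDPOINTS[endpoint]
    except KeyError:
        raise ValueError('Unknown endpoint `{0}`'.format(endpoint))
    return urljoin(target, path)

print(build_url('http://localhost:6800/', 'schedule'))
# urljoin gotcha: without a trailing slash the last segment is replaced
print(urljoin('http://localhost:6800/api', 'schedule.json'))
```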
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_version(self, project, version, egg):
""" Adds a new project egg to the Scrapyd service. First class, maps to Scrapyd's add version endpoint. """ |
url = self._build_url(constants.ADD_VERSION_ENDPOINT)
data = {
'project': project,
'version': version
}
files = {
'egg': egg
}
json = self.client.post(url, data=data, files=files,
timeout=self.timeout)
return json['spiders'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cancel(self, project, job, signal=None):
""" Cancels a job from a specific project. First class, maps to Scrapyd's cancel job endpoint. """ |
url = self._build_url(constants.CANCEL_ENDPOINT)
data = {
'project': project,
'job': job,
}
if signal is not None:
data['signal'] = signal
json = self.client.post(url, data=data, timeout=self.timeout)
return json['prevstate'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_project(self, project):
""" Deletes all versions of a project. First class, maps to Scrapyd's delete project endpoint. """ |
url = self._build_url(constants.DELETE_PROJECT_ENDPOINT)
data = {
'project': project,
}
self.client.post(url, data=data, timeout=self.timeout)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_version(self, project, version):
""" Deletes a specific version of a project. First class, maps to Scrapyd's delete version endpoint. """ |
url = self._build_url(constants.DELETE_VERSION_ENDPOINT)
data = {
'project': project,
'version': version
}
self.client.post(url, data=data, timeout=self.timeout)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def job_status(self, project, job_id):
""" Retrieves the 'status' of a specific job specified by its id. Derived, utilises Scrapyd's list jobs endpoint to provide the answer. """ |
all_jobs = self.list_jobs(project)
for state in constants.JOB_STATES:
job_ids = [job['id'] for job in all_jobs[state]]
if job_id in job_ids:
return state
return '' |
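Because Scrapyd has no per-job status endpoint, the method scans each state bucket of the list-jobs payload in order. A sketch with a hypothetical payload:

```python
# hypothetical fixture shaped like Scrapyd's list-jobs response
JOB_STATES = ['pending', 'running', 'finished']
all_jobs = {
    'pending': [{'id': 'a1'}],
    'running': [{'id': 'b2'}],
    'finished': [{'id': 'c3'}, {'id': 'd4'}],
}

def job_status(job_id):
    # first state bucket containing the id wins; '' means unknown job
    for state in JOB_STATES:
        if job_id in [job['id'] for job in all_jobs[state]]:
            return state
    return ''

print(job_status('b2'))  # running
```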
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_jobs(self, project):
""" Lists all known jobs for a project. First class, maps to Scrapyd's list jobs endpoint. """ |
url = self._build_url(constants.LIST_JOBS_ENDPOINT)
params = {'project': project}
jobs = self.client.get(url, params=params, timeout=self.timeout)
return jobs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_projects(self):
""" Lists all deployed projects. First class, maps to Scrapyd's list projects endpoint. """ |
url = self._build_url(constants.LIST_PROJECTS_ENDPOINT)
json = self.client.get(url, timeout=self.timeout)
return json['projects'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_spiders(self, project):
""" Lists all known spiders for a specific project. First class, maps to Scrapyd's list spiders endpoint. """ |
url = self._build_url(constants.LIST_SPIDERS_ENDPOINT)
params = {'project': project}
json = self.client.get(url, params=params, timeout=self.timeout)
return json['spiders'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_versions(self, project):
""" Lists all deployed versions of a specific project. First class, maps to Scrapyd's list versions endpoint. """ |
url = self._build_url(constants.LIST_VERSIONS_ENDPOINT)
params = {'project': project}
json = self.client.get(url, params=params, timeout=self.timeout)
return json['versions'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def schedule(self, project, spider, settings=None, **kwargs):
""" Schedules a spider from a specific project to run. First class, maps to Scrapyd's scheduling endpoint. """ |
url = self._build_url(constants.SCHEDULE_ENDPOINT)
data = {
'project': project,
'spider': spider
}
data.update(kwargs)
if settings:
setting_params = []
for setting_name, value in iteritems(settings):
setting_params.append('{0}={1}'.format(setting_name, value))
data['setting'] = setting_params
json = self.client.post(url, data=data, timeout=self.timeout)
return json['jobid'] |
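The settings loop flattens a Scrapy settings dict into repeated `NAME=value` strings; posting the resulting list makes `requests` send the `setting` form field once per element. A minimal sketch of that encoding step:

```python
def encode_settings(settings):
    # one 'NAME=value' string per Scrapy setting
    return ['{0}={1}'.format(name, value) for name, value in settings.items()]

print(encode_settings({'DOWNLOAD_DELAY': 2}))  # ['DOWNLOAD_DELAY=2']
```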
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _handle_response(self, response):
""" Handles the response received from Scrapyd. """ |
if not response.ok:
raise ScrapydResponseError(
"Scrapyd returned a {0} error: {1}".format(
response.status_code,
response.text))
try:
json = response.json()
except ValueError:
raise ScrapydResponseError("Scrapyd returned an invalid JSON "
"response: {0}".format(response.text))
if json['status'] == 'ok':
json.pop('status')
return json
elif json['status'] == 'error':
raise ScrapydResponseError(json['message']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self):
r"""Returns all content in this node, regardless of whitespace or not. This includes all LaTeX needed to reconstruct the original source. ['\n', \newcommand{reverseconcat}[3]{#3#2#1}, '\n'] """ |
for child in self.expr.all:
if isinstance(child, TexExpr):
node = TexNode(child)
node.parent = self
yield node
else:
yield child |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def children(self):
r"""Immediate children of this TeX element that are valid TeX objects. This is equivalent to contents, excluding text elements and keeping only Tex expressions. :return: generator of all children :rtype: Iterator[TexExpr] \item Hello <BLANKLINE> """ |
for child in self.expr.children:
node = TexNode(child)
node.parent = self
yield node |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def string(self):
r"""This is valid if and only if 1. the expression is a :class:`.TexCmd` AND 2. the command has only one argument. :rtype: Union[None,str] 'Hello' 'Hello World' \textbf{Hello World} """ |
if isinstance(self.expr, TexCmd) and len(self.expr.args) == 1:
return self.expr.args[0].value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def text(self):
r"""All text in descendant nodes. This is equivalent to contents, keeping text elements and excluding Tex expressions. 'Nested\n ' """ |
for descendant in self.contents:
if isinstance(descendant, TokenWithPosition):
yield descendant
elif hasattr(descendant, 'text'):
yield from descendant.text |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def count(self, name=None, **attrs):
r"""Number of descendants matching criteria. :param Union[None,str] name: name of LaTeX expression :param attrs: LaTeX expression attributes, such as item text. :return: number of matching expressions :rtype: int 1 2 """ |
return len(list(self.find_all(name, **attrs))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
r"""Delete this node from the parse tree. Where applicable, this will remove all descendants of this node from the parse tree. \textit{}\textit{keep me!} \textit{keep me!} """ |
# TODO: needs better abstraction for supports contents
parent = self.parent
if parent.expr._supports_contents():
parent.remove(self)
return
# TODO: needs abstraction for removing from arg
for arg in parent.args:
if self.expr in arg.contents:
arg.contents.remove(self.expr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(self, name=None, **attrs):
r"""First descendant node matching criteria. Returns None if no descendant node found. :return: descendant node matching criteria :rtype: Union[None,TexExpr] \textit{eee} """ |
try:
return next(self.find_all(name, **attrs))
except StopIteration:
return None |
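The try/except around `next()` can also be written with `next()`'s default argument, which is the usual idiom for "first match or None" over a generator:

```python
def find_first(iterable, predicate):
    # next() with a default replaces the try/except StopIteration dance
    return next((item for item in iterable if predicate(item)), None)

print(find_first([1, 4, 9], lambda n: n > 3))  # 4
print(find_first([1, 2], lambda n: n > 3))     # None
```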
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_all(self, name=None, **attrs):
r"""Return all descendant nodes matching criteria. :param Union[None,str] name: name of LaTeX expression :param attrs: LaTeX expression attributes, such as item text. :return: All descendant nodes matching criteria :rtype: Iterator[TexNode] \textit{eee} \textit{ooo} Traceback (most recent call last):
StopIteration """ |
for descendant in self.__descendants():
if hasattr(descendant, '__match__') and \
descendant.__match__(name, attrs):
yield descendant |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self):
r"""Returns all content in this expression, regardless of whitespace or not. This includes all LaTeX needed to reconstruct the original source. True """ |
for arg in self.args:
for expr in arg:
yield expr
for content in self._contents:
yield content |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def contents(self):
r"""Returns all contents in this expression. Optionally includes whitespace if set when node was created. ['hi'] ['\n', 'hi'] """ |
for content in self.all:
is_whitespace = isinstance(content, str) and content.isspace()
if not is_whitespace or self.preserve_whitespace:
yield content |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokens(self):
"""Further breaks down all tokens for a particular expression into words and other expressions. ['var x = 10'] """ |
for content in self.contents:
if isinstance(content, TokenWithPosition):
for word in content.split():
yield word
else:
yield content |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insert(self, i, *exprs):
"""Insert content at specified position into expression. :param int i: Position to add content to :param Union[TexExpr,str] exprs: List of contents to add TexExpr('textbf', ['hello']) TexExpr('textbf', ['world', 'hello']) """ |
self._assert_supports_contents()
for j, expr in enumerate(exprs):
self._contents.insert(i + j, expr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove(self, expr):
"""Remove a provided expression from its list of contents. :param Union[TexExpr,str] expr: Content to add :return: index of the expression removed :rtype: int 0 TexExpr('textbf', []) """ |
self._assert_supports_contents()
index = self._contents.index(expr)
self._contents.remove(expr)
return index |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse(s):
"""Parse a string or list and return an Argument object :param Union[str,iterable] s: Either a string or a list, where the first and last elements are valid argument delimiters. RArg('arg0') OArg('arg0') """ |
if isinstance(s, arg_type):
return s
if isinstance(s, (list, tuple)):
for arg in arg_type:
if [s[0], s[-1]] == arg.delims():
return arg(*s[1:-1])
raise TypeError('Malformed argument. First and last elements must '
'match a valid argument format. In this case, TexSoup'
' could not find matching punctuation for: %s.\n'
'Common issues include: Unescaped special characters,'
' mistyped closing punctuation, misalignment.' % (str(s)))
for arg in arg_type:
if arg.__is__(s):
return arg(arg.__strip__(s))
raise TypeError('Malformed argument. Must be an Arg or a string in '
'either brackets or curly braces.') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insert(self, i, arg):
r"""Insert whitespace, an unparsed argument string, or an argument object. :param int i: Index to insert argument into :param Arg arg: Argument to insert 3 [RArg('arg0'), OArg('arg1'), OArg('arg2')] ['\n', RArg('arg0'), OArg('arg1'), OArg('arg2')] OArg('arg3') """ |
arg = self.__coerce(arg)
if isinstance(arg, Arg):
super().insert(i, arg)
if len(self) <= 1:
self.all.append(arg)
else:
if i > len(self):
i = len(self) - 1
before = self[i - 1]
index_before = self.all.index(before)
self.all.insert(index_before + 1, arg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove(self, item):
"""Remove either an unparsed argument string or an argument object. :param Union[str,Arg] item: Item to remove 2 OArg('arg2') """ |
item = self.__coerce(item)
self.all.remove(item)
super().remove(item) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pop(self, i):
"""Pop argument object at provided index. :param int i: Index to pop from the list OArg('arg2') 2 RArg('arg0') """ |
item = super().pop(i)
j = self.all.index(item)
return self.all.pop(j) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forward(self, j=1):
"""Move forward by j steps. 'abc' 'bc' """ |
if j < 0:
return self.backward(-j)
self.__i += j
return self[self.__i-j:self.__i] |
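`forward` returns the slice it just skipped over and delegates negative steps to `backward`. A toy cursor (assumed to mirror the parser's `Buffer` semantics) makes the contract concrete:

```python
class MiniBuffer:
    """Toy cursor over a string; assumed to mirror the parser's Buffer."""

    def __init__(self, data):
        self._data = data
        self._i = 0

    def forward(self, j=1):
        # negative steps delegate to backward, as in the original
        if j < 0:
            return self.backward(-j)
        self._i += j
        return self._data[self._i - j:self._i]

    def backward(self, j=1):
        if j < 0:
            return self.forward(-j)
        self._i -= j
        return self._data[self._i:self._i + j]

buf = MiniBuffer('abc')
print(buf.forward(1))   # 'a'
print(buf.forward(2))   # 'bc'
print(buf.backward(2))  # 'bc' (cursor moves back, the skipped slice returns)
```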
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(tex):
"""Read and parse all LaTeX source :param Union[str,iterable] tex: LaTeX source :return TexEnv: the global environment """ |
if not isinstance(tex, str):
    tex = ''.join(itertools.chain(*tex))
buf, children = Buffer(tokenize(tex)), []
while buf.hasNext():
content = read_tex(buf)
if content is not None:
children.append(content)
return TexEnv('[tex]', children), tex |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def resolve(tex):
"""Resolve all imports and update the parse tree. Reads from a tex file and once finished, writes to a tex file. """ |
# soupify
soup = TexSoup(tex)
# resolve subimports
for subimport in soup.find_all('subimport'):
path = subimport.args[0] + subimport.args[1]
subimport.replace_with(*resolve(open(path)).contents)
# resolve imports
for _import in soup.find_all('import'):
_import.replace_with(*resolve(open(_import.args[0])).contents)
# resolve includes
for include in soup.find_all('include'):
include.replace_with(*resolve(open(include.args[0])).contents)
return soup |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sollen(tex, command):
r"""Measure solution length :param Union[str,buffer] tex: the LaTeX source as a string or file buffer :param str command: the command denoting a solution i.e., if the tex file uses '\answer{<answer here>}', then the command is 'answer'. :return int: the solution length """ |
return sum(len(a.string) for a in TexSoup(tex).find_all(command)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def count(tex):
"""Extract all labels, then count the number of times each is referenced in the provided file. Does not follow \includes. """ |
# soupify
soup = TexSoup(tex)
# extract all unique labels
labels = set(label.string for label in soup.find_all('label'))
# create dictionary mapping label to number of references
return dict((label, soup.count(r'\ref{%s}' % label)) for label in labels)
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def next_token(text):
r"""Returns the next possible token, advancing the iterator to the next position to start processing from. :param Union[str,iterator,Buffer] text: LaTeX to process :return str: the token \textbf { Do play \textit { nice } . } . ' ' '$$' \gamma = \beta """ |
while text.hasNext():
for name, f in tokenizers:
current_token = f(text)
if current_token is not None:
return current_token |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokenize(text):
r"""Generator for LaTeX tokens on text, ignoring comments. :param Union[str,iterator,Buffer] text: LaTeX to process \textbf { Do play \textit { nice } . } \begin { tabular } 0 & 1 \\ 2 & 0 \end { tabular } """ |
current_token = next_token(text)
while current_token is not None:
yield current_token
current_token = next_token(text) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def token(name):
"""Marker for a token :param str name: Name of tokenizer """ |
def wrap(f):
tokenizers.append((name, f))
return f
return wrap |
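This is the standard decorator-registry pattern: each decorated tokenizer is appended to a module-level list in definition order, which later fixes dispatch priority in `next_token`. A self-contained sketch:

```python
tokenizers = []

def token(name):
    # decorator factory: records (name, fn) and returns fn unchanged,
    # so registration order doubles as dispatch priority
    def wrap(f):
        tokenizers.append((name, f))
        return f
    return wrap

@token('comment')
def read_comment(text):
    return None

@token('string')
def read_string(text):
    return text

print([name for name, _ in tokenizers])  # ['comment', 'string']
```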
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokenize_punctuation_command(text):
"""Process command that augments or modifies punctuation. This is important to the tokenization of a string, as opening or closing punctuation is not supposed to match. :param Buffer text: iterator over text, with current position """ |
if text.peek() == '\\':
for point in PUNCTUATION_COMMANDS:
if text.peek((1, len(point) + 1)) == point:
return text.forward(len(point) + 1) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokenize_line_comment(text):
r"""Process a line comment :param Buffer text: iterator over line, with current position '%hello world' '%hello' """ |
result = TokenWithPosition('', text.position)
if text.peek() == '%' and text.peek(-1) != '\\':
result += text.forward(1)
while text.peek() != '\n' and text.hasNext():
result += text.forward(1)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokenize_argument(text):
"""Process both optional and required arguments. :param Buffer text: iterator over line, with current position """ |
for delim in ARG_TOKENS:
if text.startswith(delim):
return text.forward(len(delim)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokenize_math(text):
r"""Prevents math from being tokenized. :param Buffer text: iterator over line, with current position '$' '$$' """ |
if text.startswith('$') and (
text.position == 0 or text.peek(-1) != '\\' or text.endswith(r'\\')):
starter = '$$' if text.startswith('$$') else '$'
return TokenWithPosition(text.forward(len(starter)), text.position) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokenize_string(text, delimiters=None):
r"""Process a string of text :param Buffer text: iterator over line, with current position :param Union[None,iterable,str] delimiters: defines the delimiters 'hello' 'hello again' \ 0 & 1 \\ """ |
if delimiters is None:
delimiters = ALL_TOKENS
result = TokenWithPosition('', text.position)
for c in text:
if c == '\\' and str(text.peek()) in delimiters and str(c + text.peek()) not in delimiters:
c += next(text)
elif str(c) in delimiters: # assumes all tokens are single characters
text.backward(1)
return result
result += c
if text.peek((0, 2)) == '\\\\':
result += text.forward(2)
if text.peek((0, 2)) == '\n\n':
result += text.forward(2)
return result
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_tex(src):
r"""Read next expression from buffer :param Buffer src: a buffer of tokens """ |
c = next(src)
if c.startswith('%'):
return c
elif c.startswith('$'):
name = '$$' if c.startswith('$$') else '$'
expr = TexEnv(name, [], nobegin=True)
return read_math_env(src, expr)
elif c.startswith(r'\[') or c.startswith(r'\('):
if c.startswith(r'\['):
name = 'displaymath'
begin = r'\['
end = r'\]'
else:
name = "math"
begin = r"\("
end = r"\)"
expr = TexEnv(name, [], nobegin=True, begin=begin, end=end)
return read_math_env(src, expr)
elif c.startswith('\\'):
command = TokenWithPosition(c[1:], src.position)
if command == 'item':
contents, arg = read_item(src)
mode, expr = 'command', TexCmd(command, contents, arg)
elif command == 'begin':
mode, expr, _ = 'begin', TexEnv(src.peek(1)), src.forward(3)
else:
mode, expr = 'command', TexCmd(command)
expr.args = read_args(src, expr.args)
if mode == 'begin':
read_env(src, expr)
return expr
if c in ARG_START_TOKENS:
return read_arg(src, c)
return c |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_item(src):
r"""Read the item content. There can be any number of whitespace characters between \item and the first non-whitespace character. However, after that first non-whitespace character, the item can only tolerate one successive line break at a time. \item can also take an argument. :param Buffer src: a buffer of tokens :return: contents of the item and any item arguments """ |
def stringify(s):
return TokenWithPosition.join(s.split(' '), glue=' ')
def forward_until_new(s):
"""Catch the first non-whitespace character"""
t = TokenWithPosition('', s.peek().position)
while (s.hasNext() and
any([s.peek().startswith(substr) for substr in string.whitespace]) and
not t.strip(" ").endswith('\n')):
t += s.forward(1)
return t
# Item argument such as in description environment
arg = []
extra = []
if src.peek() in ARG_START_TOKENS:
c = next(src)
a = read_arg(src, c)
arg.append(a)
if not src.hasNext():
return extra, arg
last = stringify(forward_until_new(src))
extra.append(last.lstrip(" "))
while (src.hasNext() and not str(src).strip(" ").startswith('\n\n') and
not src.startswith(r'\item') and
not src.startswith(r'\end') and
not (isinstance(last, TokenWithPosition) and last.strip(" ").endswith('\n\n') and len(extra) > 1)):
last = read_tex(src)
extra.append(last)
return extra, arg |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_args(src, args=None):
r"""Read all arguments from buffer. Advances buffer until end of last valid arguments. There can be any number of whitespace characters between command and the first argument. However, after that first argument, the command can only tolerate one successive line break, before discontinuing the chain of arguments. :param TexArgs args: existing arguments to extend :return: parsed arguments :rtype: TexArgs """ |
args = args or TexArgs()
# Unlimited whitespace before first argument
candidate_index = src.num_forward_until(lambda s: not s.isspace())
while src.peek().isspace():
args.append(read_tex(src))
# Restricted to only one line break after first argument
line_breaks = 0
while src.peek() in ARG_START_TOKENS or \
(src.peek().isspace() and line_breaks == 0):
space_index = src.num_forward_until(lambda s: not s.isspace())
if space_index > 0:
line_breaks += 1
if src.peek((0, space_index)).count("\n") <= 1 and src.peek(space_index) in ARG_START_TOKENS:
args.append(read_tex(src))
else:
line_breaks = 0
tex_text = read_tex(src)
args.append(tex_text)
if not args:
src.backward(candidate_index)
return args |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_arg(src, c):
"""Read the argument from buffer. Advances buffer until right before the end of the argument. :param Buffer src: a buffer of tokens :param str c: argument token (starting token) :return: the parsed argument :rtype: Arg """ |
content = [c]
while src.hasNext():
if src.peek() in ARG_END_TOKENS:
content.append(next(src))
break
else:
content.append(read_tex(src))
return Arg.parse(content) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hub_scores(msm, waypoints=None):
""" Calculate the hub score for one or more waypoints The "hub score" is a measure of how well traveled a certain state or set of states is in a network. Specifically, it is the fraction of times that a walker visits a state en route from some state A to another state B, averaged over all combinations of A and B. Parameters msm : msmbuilder.MarkovStateModel MSM to analyze waypoints : array_like, int, optional The index of the intermediate state (or more than one). If None, all states are used as waypoints. Returns ------- hub_scores : np.ndarray The hub score for each waypoint References .. [1] Dickson & Brooks (2012), J. Chem. Theory Comput., 8, 3044-3052. """ |
n_states = msm.n_states_
if isinstance(waypoints, int):
waypoints = [waypoints]
elif waypoints is None:
waypoints = range(n_states)
elif not (isinstance(waypoints, list) or
isinstance(waypoints, np.ndarray)):
raise ValueError("waypoints (%s) must be an int, a list, or None" %
str(waypoints))
hub_scores = []
for waypoint in waypoints:
other_states = (i for i in range(n_states) if i != waypoint)
# calculate the hub score for this waypoint
hub_score = 0.0
for (source, sink) in itertools.permutations(other_states, 2):
hub_score += fraction_visited(source, sink, waypoint, msm)
hub_score /= float((n_states - 1) * (n_states - 2))
hub_scores.append(hub_score)
return np.array(hub_scores) |
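The normalization above divides by the number of ordered (source, sink) pairs that exclude the waypoint. A small self-contained check of that count (illustrative values only):

```python
import itertools

# For n states and a single waypoint, sources and sinks range over the
# remaining n - 1 states, giving (n - 1) * (n - 2) ordered pairs.
n_states = 5
waypoint = 0
other_states = [i for i in range(n_states) if i != waypoint]
pairs = list(itertools.permutations(other_states, 2))
n_pairs = len(pairs)
```

This is why the hub score is divided by `(n_states - 1) * (n_states - 2)` after the loop.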
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit(self, sequences, y=None):
"""Fit a BACE lumping model using a sequence of cluster assignments. Parameters sequences : list(np.ndarray(dtype='int')) List of arrays of cluster assignments y : None Unused, present for sklearn compatibility only. Returns ------- self """ |
super(BACE, self).fit(sequences, y=y)
if self.n_macrostates is not None:
self._do_lumping()
else:
raise RuntimeError('n_macrostates must not be None to fit')
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _do_lumping(self):
"""Do the BACE lumping. """ |
c = copy.deepcopy(self.countsmat_)
if self.sliding_window:
c *= self.lag_time
c, macro_map, statesKeep = self._filterFunc(c)
w = np.array(c.sum(axis=1)).flatten()
w[statesKeep] += 1
unmerged = np.zeros(w.shape[0], dtype=np.int8)
unmerged[statesKeep] = 1
# get nonzero indices in upper triangle
indRecalc = self._getInds(c, statesKeep)
dMat = np.zeros(c.shape, dtype=np.float32)
i = 0
nCurrentStates = statesKeep.shape[0]
self.bayesFactors = {}
dMat, minX, minY = self._calcDMat(c, w, indRecalc, dMat,
statesKeep, unmerged)
while nCurrentStates > self.n_macrostates:
c, w, indRecalc, dMat, macro_map, statesKeep, unmerged, minX, minY = self._mergeTwoClosestStates(
c, w, indRecalc, dMat, macro_map,
statesKeep, minX, minY, unmerged)
nCurrentStates -= 1
if self.save_all_maps:
saved_map = copy.deepcopy(macro_map)
self.map_dict[nCurrentStates] = saved_map
if nCurrentStates - 1 == self.n_macrostates:
self.microstate_mapping_ = macro_map |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def percentage(self):
"""Returns the progress as a percentage.""" |
if self.currval >= self.maxval:
return 100.0
return self.currval * 100.0 / self.maxval |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_packages():
"""Find all of mdtraj's python packages. Adapted from IPython's setupbase.py. Copyright IPython contributors, licensed under the BSD license. """ |
packages = ['mdtraj.scripts']
for dir,subdirs,files in os.walk('MDTraj'):
package = dir.replace(os.path.sep, '.')
if '__init__.py' not in files:
# not a package
continue
packages.append(package.replace('MDTraj', 'mdtraj'))
return packages |
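A self-contained sketch of the same `os.walk`-based discovery, exercised against a temporary directory tree (the `find_packages(root)` signature here is generalized for the demo; the original hard-codes the `MDTraj` directory):

```python
import os
import tempfile

def find_packages(root):
    # Walk the tree; any directory containing __init__.py is a package,
    # named by replacing path separators with dots.
    packages = []
    for dirpath, dirnames, filenames in os.walk(root):
        if '__init__.py' not in filenames:
            continue
        rel = os.path.relpath(dirpath, os.path.dirname(root))
        packages.append(rel.replace(os.path.sep, '.'))
    return packages

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, 'mypkg', 'sub'))
for d in ('mypkg', os.path.join('mypkg', 'sub')):
    open(os.path.join(base, d, '__init__.py'), 'w').close()
found = sorted(find_packages(os.path.join(base, 'mypkg')))
```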
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _detect_sse3(self):
"Does this compiler support SSE3 intrinsics?" |
self._print_support_start('SSE3')
result = self.hasfunction('__m128 v; _mm_hadd_ps(v,v)',
include='<pmmintrin.h>',
extra_postargs=['-msse3'])
self._print_support_end('SSE3', result)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _detect_sse41(self):
"Does this compiler support SSE4.1 intrinsics?" |
self._print_support_start('SSE4.1')
result = self.hasfunction('__m128 v; _mm_round_ps(v,0x00)',
include='<smmintrin.h>',
extra_postargs=['-msse4'])
self._print_support_end('SSE4.1', result)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def uncertainty_K(self):
"""Estimate of the element-wise asymptotic standard deviation in the rate matrix """ |
if self.information_ is None:
self._build_information()
sigma_K = _ratematrix.sigma_K(
self.information_, theta=self.theta_, n=self.n_states_)
return sigma_K |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def uncertainty_pi(self):
"""Estimate of the element-wise asymptotic standard deviation in the stationary distribution. """ |
if self.information_ is None:
self._build_information()
sigma_pi = _ratematrix.sigma_pi(
self.information_, theta=self.theta_, n=self.n_states_)
return sigma_pi |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def uncertainty_eigenvalues(self):
"""Estimate of the element-wise asymptotic standard deviation in the model eigenvalues """ |
if self.information_ is None:
self._build_information()
sigma_eigenvalues = _ratematrix.sigma_eigenvalues(
self.information_, theta=self.theta_, n=self.n_states_)
if self.n_timescales is None:
return sigma_eigenvalues
return np.nan_to_num(sigma_eigenvalues[:self.n_timescales+1]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def uncertainty_timescales(self):
"""Estimate of the element-wise asymptotic standard deviation in the model relaxation timescales. """ |
if self.information_ is None:
self._build_information()
sigma_timescales = _ratematrix.sigma_timescales(
self.information_, theta=self.theta_, n=self.n_states_)
if self.n_timescales is None:
return sigma_timescales
return sigma_timescales[:self.n_timescales] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _initial_guess(self, countsmat):
r"""Generate an initial guess for \theta. """ |
if self.theta_ is not None:
return self.theta_
if self.guess == 'log':
transmat, pi = _transmat_mle_prinz(countsmat)
K = np.real(scipy.linalg.logm(transmat)) / self.lag_time
elif self.guess == 'pseudo':
transmat, pi = _transmat_mle_prinz(countsmat)
K = (transmat - np.eye(self.n_states_)) / self.lag_time
elif isinstance(self.guess, np.ndarray):
pi = _solve_ratemat_eigensystem(self.guess)[1][:, 0]
K = self.guess
else:
raise ValueError("guess must be 'log', 'pseudo', or an ndarray, not %r" % self.guess)
S = np.multiply(np.sqrt(np.outer(pi, 1/pi)), K)
sflat = np.maximum(S[np.triu_indices_from(countsmat, k=1)], 0)
theta0 = np.concatenate((sflat, np.log(pi)))
return theta0 |
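The similarity transform `S_ij = sqrt(pi_i / pi_j) * K_ij` symmetrizes a reversible rate matrix, which is why the upper triangle of `S` plus `log(pi)` fully parameterizes the guess. A hand-built 2-state check (numbers are illustrative):

```python
import numpy as np

# A 2-state reversible rate matrix: rows sum to zero and detailed balance
# pi_i K_ij = pi_j K_ji holds for pi = [0.25, 0.75].
pi = np.array([0.25, 0.75])
K = np.array([[-3.0, 3.0],
              [1.0, -1.0]])

# The similarity transform used when building the initial guess of theta.
S = np.multiply(np.sqrt(np.outer(pi, 1 / pi)), K)
sflat = np.maximum(S[np.triu_indices_from(K, k=1)], 0)
theta0 = np.concatenate((sflat, np.log(pi)))
```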
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _build_information(self):
"""Build the inverse of hessian of the log likelihood at theta_ """ |
lag_time = float(self.lag_time)
# only the "active set" of variables not at the bounds of the
# feasible set.
inds = np.where(self.theta_ != 0)[0]
hessian = _ratematrix.hessian(
self.theta_, self.countsmat_, t=lag_time, inds=inds)
self.information_ = np.zeros((len(self.theta_), len(self.theta_)))
self.information_[np.ix_(inds, inds)] = scipy.linalg.pinv(-hessian) |
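The `np.ix_` indexing scatters the pseudo-inverse of the active-set Hessian back into the full parameter space, leaving rows and columns for parameters pinned at zero untouched. A toy sketch with two active parameters (`np.linalg.pinv` stands in for `scipy.linalg.pinv`; the Hessian values are made up):

```python
import numpy as np

theta = np.array([0.0, 1.2, 0.0, 0.7])
inds = np.where(theta != 0)[0]  # active parameters: indices 1 and 3

# Toy Hessian of the log-likelihood restricted to the active set
# (negative definite, as expected at a maximum).
hessian = np.array([[-2.0, 0.5],
                    [0.5, -1.0]])

information = np.zeros((len(theta), len(theta)))
information[np.ix_(inds, inds)] = np.linalg.pinv(-hessian)
```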
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _log_posterior(theta, counts, alpha, beta, n):
"""Log of the posterior probability and gradient Parameters theta : ndarray, shape=(n_params,) The free parameters of the reversible rate matrix counts : ndarray, shape=(n, n) The count matrix (sufficient statistics for the likielihood) alpha : ndarray, shape=(n,) Dirichlet concentration parameters beta : ndarray, shape=(n_params-n,) Scale parameter for the exponential prior on the symmetric rate matrix. """ |
# likelihood + grad
logp1, grad = loglikelihood(theta, counts)
# exponential prior on s_{ij}
logp2 = lexponential(theta[:-n], beta, grad=grad[:-n])
# dirichlet prior on \pi
logp3 = ldirichlet_softmax(theta[-n:], alpha=alpha, grad=grad[-n:])
logp = logp1 + logp2 + logp3
return logp, grad |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def categorical(pvals, size=None, random_state=None):
"""Return random integer from a categorical distribution Parameters pvals : sequence of floats, length p Probabilities of each of the ``p`` different outcomes. These should sum to 1. size : int or tuple of ints, optional Defines the shape of the returned array of random integers. If None (the default), returns a single float. random_state: RandomState or an int seed, optional A random number generator instance. """ |
cumsum = np.cumsum(pvals)
if size is None:
size = (1,)
axis = 0
elif isinstance(size, int):
size = (size, 1)
axis = 1
elif isinstance(size, tuple):
size = size + (1,)
axis = len(size) - 1
else:
raise TypeError('size must be an int or tuple of ints')
random_state = check_random_state(random_state)
return np.sum(cumsum < random_state.random_sample(size), axis=axis) |
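A standalone sketch of the cumulative-sum trick for a single draw, using NumPy's `Generator` in place of `check_random_state` (the function name here is illustrative):

```python
import numpy as np

def categorical_draw(pvals, seed=None):
    # A uniform draw u lands in bin i exactly when it exceeds the first i
    # cumulative thresholds, so counting exceeded thresholds yields i.
    rng = np.random.default_rng(seed)
    cumsum = np.cumsum(pvals)
    return int(np.sum(cumsum < rng.random()))

draws = [categorical_draw([0.2, 0.3, 0.5], seed=s) for s in range(200)]
certain = categorical_draw([1.0, 0.0, 0.0], seed=0)
```

With `pvals = [1.0, 0.0, 0.0]` every threshold is 1.0 and the uniform draw lies in [0, 1), so the first outcome is always selected.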
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def metzner_mcmc_slow(Z, n_samples, n_thin=1, random_state=None):
r"""Metropolis Markov chain Monte Carlo sampler for reversible transition matrices Parameters Z : np.array, shape=(n_states, n_states) The effective count matrix, the number of observed transitions between states plus the number of prior counts n_samples : int Number of steps to iterate the chain for n_thin : int Yield every ``n_thin``-th sample from the MCMC chain random_state : int or RandomState instance or None (default) Pseudo-random number generator seed control. If None, use the numpy.random singleton. Notes ----- The transition matrix posterior distribution is :: P(T | Z) \propto \prod_{ij} T_{ij}^{Z_{ij}} and constrained to be reversible, such that there exists a \pi s.t. :: \pi_i T_{ij} = \pi_j T_{ji} Yields ------ T : np.array, shape=(n_states, n_states) This generator yields samples from the transition matrix posterior References .. [1] P. Metzner, F. Noe and C. Schutte, "Estimating the sampling error: Distribution of transition matrices and functions of transition matrices for given trajectory data." Phys. Rev. E 80 021106 (2009) See Also -------- metzner_mcmc_fast """ |
# Upper and lower bounds on the sum of the K matrix, to ensure proper
# proposal weights. See Eq. 17 of [1].
K_MINUS = 0.9
K_PLUS = 1.1
Z = np.asarray(Z)
n_states = Z.shape[0]
if not (Z.ndim == 2 and Z.shape[1] == n_states):
raise ValueError("Z must be square. Z.shape=%s" % str(Z.shape))
K = 0.5 * (Z + Z.T) / np.sum(Z, dtype=float)
random = check_random_state(random_state)
n_accept = 0
for t in range(n_samples):
# proposal
# Select two indices in [0...n_states). We draw them by drawing a
# random floats in [0,1) and then rounding to int so that this method
# is exactly analogous to `metzner_mcmc_fast`, which, for each MCMC
# iteration, draws 4 random floats in [0,1) from the same numpy PSRNG,
# and then inside the C step kernel (src/metzner_mcmc.c) uses two of
# them like this. This ensures that this function and
# `metzner_mcmc_fast` give _exactly_ the same sequence of transition
# matricies, given the same random seed.
i, j = (random.rand(2) * n_states).astype(int)
sc = np.sum(K)
if i == j:
a, b = max(-K[i,j], K_MINUS - sc), K_PLUS - sc
else:
a, b = max(-K[i,j], 0.5*(K_MINUS - sc)), 0.5*(K_PLUS - sc)
epsilon = random.uniform(a, b)
K_proposal = np.copy(K)
K_proposal[i, j] += epsilon
if i != j:
K_proposal[j, i] += epsilon
# acceptance?
cutoff = np.exp(_logprob_T(_K_to_T(K_proposal), Z) -
_logprob_T(_K_to_T(K), Z))
r = random.rand()
# print 'i', i, 'j', j
# print 'a', a, 'b', b
# print 'cutoff', cutoff
# print 'r', r
# print 'sc', sc
if r < cutoff:
n_accept += 1
K = K_proposal
if (t+1) % n_thin == 0:
yield _K_to_T(K) |
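The helper `_K_to_T` is used but not shown. Assuming it row-normalizes the virtual-count matrix `K` (as in the reference implementation), the resulting `T` is stochastic and reversible with respect to the distribution given by the row sums of `K` — a quick check:

```python
import numpy as np

def K_to_T(K):
    # Row-normalize the symmetric matrix K into a stochastic matrix.
    return K / K.sum(axis=1, keepdims=True)

K = np.array([[0.2, 0.1],
              [0.1, 0.6]])
T = K_to_T(K)
pi = K.sum(axis=1) / K.sum()  # stationary distribution implied by K
```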
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_layout():
"""Specify a hierarchy of our templates.""" |
tica_msm = TemplateDir(
'tica',
[
'tica/tica.py',
'tica/tica-plot.py',
'tica/tica-sample-coordinate.py',
'tica/tica-sample-coordinate-plot.py',
],
[
TemplateDir(
'cluster',
[
'cluster/cluster.py',
'cluster/cluster-plot.py',
'cluster/sample-clusters.py',
'cluster/sample-clusters-plot.py',
],
[
TemplateDir(
'msm',
[
'msm/timescales.py',
'msm/timescales-plot.py',
'msm/microstate.py',
'msm/microstate-plot.py',
'msm/microstate-traj.py',
],
[],
)
]
)
]
)
layout = TemplateDir(
'',
[
'0-test-install.py',
'1-get-example-data.py',
'README.md',
],
[
TemplateDir(
'analysis',
[
'analysis/gather-metadata.py',
'analysis/gather-metadata-plot.py',
],
[
TemplateDir(
'rmsd',
[
'rmsd/rmsd.py',
'rmsd/rmsd-plot.py',
],
[],
),
TemplateDir(
'landmarks',
[
'landmarks/find-landmarks.py',
'landmarks/featurize.py',
'landmarks/featurize-plot.py',
],
[tica_msm],
),
TemplateDir(
'dihedrals',
[
'dihedrals/featurize.py',
'dihedrals/featurize-plot.py',
],
[tica_msm],
)
]
)
]
)
return layout |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(self, name, limit=None):
"""Find the named TemplateDir in the hierarchy""" |
if name == self.name:
if limit is not None:
assert limit == 1
self.subdirs = []
return self
for subdir in self.subdirs:
res = subdir.find(name, limit)
if res is not None:
return res
return None |
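A minimal `TemplateDir` with the same `find` method (the constructor signature is inferred from its use in `get_layout`), showing both plain lookup and the `limit=1` pruning behavior:

```python
class TemplateDir:
    # Signature inferred from get_layout: (name, files, subdirs).
    def __init__(self, name, files, subdirs):
        self.name = name
        self.files = files
        self.subdirs = subdirs

    def find(self, name, limit=None):
        # Depth-first search by directory name; limit=1 additionally
        # prunes the matched directory's children.
        if name == self.name:
            if limit is not None:
                assert limit == 1
                self.subdirs = []
            return self
        for subdir in self.subdirs:
            res = subdir.find(name, limit)
            if res is not None:
                return res
        return None

msm = TemplateDir('msm', ['msm/timescales.py'], [])
root = TemplateDir('', [], [TemplateDir('tica', [], [msm])])
hit = root.find('msm')
missing = root.find('landmarks')
pruned = root.find('tica', limit=1)
```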
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def implied_timescales(sequences, lag_times, n_timescales=10, msm=None, n_jobs=1, verbose=0):
""" Calculate the implied timescales for a given MSM. Parameters sequences : list of array-like List of sequences, or a single sequence. Each sequence should be a 1D iterable of state labels. Labels can be integers, strings, or other orderable objects. lag_times : array-like Lag times to calculate implied timescales at. n_timescales : int, optional Number of timescales to calculate. msm : msmbuilder.msm.MarkovStateModel, optional Instance of an MSM to specify parameters other than the lag time. If None, then the default parameters (as implemented by msmbuilder.msm.MarkovStateModel) will be used. n_jobs : int, optional Number of jobs to run in parallel Returns ------- timescales : np.ndarray, shape = [n_models, n_timescales] The slowest timescales (in units of lag times) for each model. """ |
if msm is None:
msm = MarkovStateModel()
param_grid = {'lag_time' : lag_times}
models = param_sweep(msm, sequences, param_grid, n_jobs=n_jobs,
verbose=verbose)
timescales = [m.timescales_ for m in models]
n_timescales = min(n_timescales, min(len(ts) for ts in timescales))
timescales = np.array([ts[:n_timescales] for ts in timescales])
return timescales |
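Each implied timescale comes from an eigenvalue lambda_i of the lag-time transition matrix via t_i = -tau / ln(lambda_i). A sketch of that conversion (the function name is illustrative; the real model computes eigenvalues internally):

```python
import numpy as np

def timescales_from_eigenvalues(eigenvalues, lag_time):
    # t_i = -tau / ln(lambda_i); valid for 0 < lambda_i < 1 (the stationary
    # eigenvalue lambda = 1 would give an infinite timescale).
    lam = np.asarray(eigenvalues, dtype=float)
    return -lag_time / np.log(lam)

ts = timescales_from_eigenvalues([0.9, 0.5], lag_time=10)
```

Slower processes (eigenvalues closer to 1) map to longer timescales, which is why sorting eigenvalues descending sorts timescales descending as well.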
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def experimental(name=None):
"""A simple decorator to mark functions and methods as experimental.""" |
def inner(func):
@functools.wraps(func)
def wrapper(*fargs, **kw):
fname = name
if name is None:
fname = func.__name__
warnings.warn("%s is an experimental feature" % fname,
category=ExperimentalWarning, stacklevel=2)
return func(*fargs, **kw)
return wrapper
return inner |
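A self-contained usage sketch of the decorator. `ExperimentalWarning` is defined elsewhere in the original module, so a stand-in class is declared here:

```python
import functools
import warnings

class ExperimentalWarning(UserWarning):
    # Stand-in for the module's warning class, defined elsewhere.
    pass

def experimental(name=None):
    def inner(func):
        @functools.wraps(func)
        def wrapper(*fargs, **kw):
            fname = name if name is not None else func.__name__
            warnings.warn("%s is an experimental feature" % fname,
                          category=ExperimentalWarning, stacklevel=2)
            return func(*fargs, **kw)
        return wrapper
    return inner

@experimental()
def double(x):
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = double(21)
```

Thanks to `functools.wraps`, the wrapped function keeps its original `__name__`, so the default warning message names the decorated function.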
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _replace_labels(doc):
"""Really hacky find-and-replace method that modifies one of the sklearn docstrings to change the semantics of labels_ for the subclasses""" |
lines = doc.splitlines()
labelstart, labelend = None, None
foundattributes = False
for i, line in enumerate(lines):
stripped = line.strip()
if stripped == 'Attributes':
foundattributes = True
if foundattributes and not labelstart and stripped.startswith('labels_'):
labelstart = len('\n'.join(lines[:i])) + 1
if labelstart and not labelend and stripped == '':
labelend = len('\n'.join(lines[:i + 1]))
if labelstart is None or labelend is None:
return doc
replace = '\n'.join([
' labels_ : list of arrays, each of shape [sequence_length, ]',
' The label of each point is an integer in [0, n_clusters).',
'',
])
return doc[:labelstart] + replace + doc[labelend:] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dump(value, filename, compress=None, cache_size=None):
"""Save an arbitrary python object using pickle. Parameters value : any Python object The object to store to disk using pickle. filename : string The name of the file in which it is to be stored compress : None No longer used cache_size : positive number, optional No longer used See Also -------- load : corresponding loader """ |
if compress is not None or cache_size is not None:
warnings.warn("compress and cache_size are no longer valid options")
with open(filename, 'wb') as f:
pickle.dump(value, f) |
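A round-trip sketch pairing `dump` with the corresponding loader mentioned in its docstring (the `load` shown here is the obvious pickle counterpart, written as an assumption):

```python
import os
import pickle
import tempfile

def dump(value, filename):
    # Serialize any picklable object to disk.
    with open(filename, 'wb') as f:
        pickle.dump(value, f)

def load(filename):
    # The corresponding loader: a plain pickle round-trip.
    with open(filename, 'rb') as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), 'model.pkl')
dump({'lag_time': 10, 'timescales': [1.5, 0.3]}, path)
restored = load(path)
```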