<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_gtf_url(ensembl_release, species, server=ENSEMBL_FTP_SERVER):
""" Returns a URL and a filename, which can be joined together. """ |
ensembl_release, species, _ = \
normalize_release_properties(ensembl_release, species)
subdir = _species_subdir(
ensembl_release,
species=species,
filetype="gtf",
server=server)
url_subdir = urllib_parse.urljoin(server, subdir)
filename = make_gtf_filename(
ensembl_release=ensembl_release,
species=species)
return join(url_subdir, filename) |
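The URL assembly above can be sketched standalone with the standard library. The server, subdirectory, and filename values below are hypothetical stand-ins for `ENSEMBL_FTP_SERVER` and the helpers used in the function; the real layout may differ by release.

```python
from posixpath import join
from urllib.parse import urljoin

# Hypothetical values standing in for ENSEMBL_FTP_SERVER and the
# _species_subdir / make_gtf_filename helpers above.
server = "ftp://ftp.ensembl.org/pub/"
subdir = "release-75/gtf/homo_sapiens/"
filename = "Homo_sapiens.GRCh37.75.gtf.gz"

# urljoin resolves the subdirectory against the server root,
# posixpath.join then appends the filename with forward slashes
url_subdir = urljoin(server, subdir)
full_url = join(url_subdir, filename)
print(full_url)
```

Note `posixpath.join` is used rather than `os.path.join` so the separator is always `/`, regardless of the local platform.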
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_fasta_url( ensembl_release, species, sequence_type, server=ENSEMBL_FTP_SERVER):
"""Construct URL to FASTA file with cDNA transcript or protein sequences Parameter examples: ensembl_release = 75 species = "Homo_sapiens" sequence_type = "cdna" (other option: "pep") """ |
ensembl_release, species, reference_name = normalize_release_properties(
ensembl_release, species)
subdir = _species_subdir(
ensembl_release,
species=species,
filetype="fasta",
server=server)
server_subdir = urllib_parse.urljoin(server, subdir)
server_sequence_subdir = join(server_subdir, sequence_type)
filename = make_fasta_filename(
ensembl_release=ensembl_release,
species=species,
sequence_type=sequence_type)
return join(server_sequence_subdir, filename) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _transcript_feature_positions(self, feature):
""" Get unique positions for feature, raise an error if feature is absent. """ |
ranges = self._transcript_feature_position_ranges(
feature, required=True)
results = []
    # a feature (such as a stop codon) may be split across multiple
    # ranges. Collect all the nucleotide positions into a
    # single list.
for (start, end) in ranges:
# since ranges are [inclusive, inclusive] and
# Python ranges are [inclusive, exclusive) we have to increment
# the end position
for position in range(start, end + 1):
assert position not in results, \
"Repeated position %d for %s" % (position, feature)
results.append(position)
return results |
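The range-expansion loop above can be exercised in isolation. `expand_ranges` below is a hypothetical standalone helper mirroring the method body, without the database lookup:

```python
def expand_ranges(ranges, feature="stop_codon"):
    """Expand [inclusive, inclusive] ranges into a flat list of unique
    positions (standalone sketch of the loop in the method above)."""
    results = []
    for (start, end) in ranges:
        # ranges are inclusive on both ends, so add 1 for Python's
        # half-open range()
        for position in range(start, end + 1):
            assert position not in results, \
                "Repeated position %d for %s" % (position, feature)
            results.append(position)
    return results

# a stop codon split across two non-adjacent ranges
print(expand_ranges([(100, 101), (205, 205)]))  # → [100, 101, 205]
```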
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def spliced_offset(self, position):
""" Convert from an absolute chromosomal position to the offset into this transcript"s spliced mRNA. Position must be inside some exon (otherwise raise exception). """ |
# this code is performance sensitive, so switching from
# typechecks.require_integer to a simpler assertion
assert type(position) == int, \
"Position argument must be an integer, got %s : %s" % (
position, type(position))
if position < self.start or position > self.end:
raise ValueError(
"Invalid position: %d (must be between %d and %d)" % (
position,
self.start,
self.end))
# offset from beginning of unspliced transcript (including introns)
unspliced_offset = self.offset(position)
total_spliced_offset = 0
# traverse exons in order of their appearance on the strand
# Since absolute positions may decrease if on the negative strand,
# we instead use unspliced offsets to get always increasing indices.
#
# Example:
#
# Exon Name: exon 1 exon 2
# Spliced Offset: 123456 789...
# Intron vs. Exon: ...iiiiiieeeeeeiiiiiiiiiiiiiiiieeeeeeiiiiiiiiiii...
for exon in self.exons:
exon_unspliced_start, exon_unspliced_end = self.offset_range(
exon.start, exon.end)
# If the relative position is not within this exon, keep a running
# total of the total exonic length-so-far.
#
# Otherwise, if the relative position is within an exon, get its
# offset into that exon by subtracting the exon"s relative start
# position from the relative position. Add that to the total exonic
# length-so-far.
if exon_unspliced_start <= unspliced_offset <= exon_unspliced_end:
# all offsets are base 0, can be used as indices into
# sequence string
exon_offset = unspliced_offset - exon_unspliced_start
return total_spliced_offset + exon_offset
else:
exon_length = len(exon) # exon_end_position - exon_start_position + 1
total_spliced_offset += exon_length
raise ValueError(
"Couldn't find position %d on any exon of %s" % (
position, self.id)) |
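The exon-walking logic above can be reduced to a standalone sketch. `spliced_offset_sketch` below is hypothetical: it takes the exons' [start, end] unspliced-offset ranges directly instead of computing them from `self.offset_range`:

```python
def spliced_offset_sketch(unspliced_offset, exon_offset_ranges):
    """Map an offset on the unspliced transcript to an offset on the
    spliced mRNA, given inclusive unspliced-offset ranges for each exon
    in strand order (simplified stand-in for the method above)."""
    total_spliced_offset = 0
    for exon_start, exon_end in exon_offset_ranges:
        if exon_start <= unspliced_offset <= exon_end:
            # offset within this exon, plus all exonic bases before it
            return total_spliced_offset + (unspliced_offset - exon_start)
        # not in this exon: accumulate its (inclusive) length
        total_spliced_offset += exon_end - exon_start + 1
    raise ValueError("Offset %d not contained in any exon" % unspliced_offset)

# two exons covering unspliced offsets 3..5 and 10..13:
# unspliced offset 11 lands 1 base into exon 2, after 3 exonic bases
print(spliced_offset_sketch(11, [(3, 5), (10, 13)]))  # → 4
```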
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _contiguous_offsets(self, offsets):
""" Sorts the input list of integer offsets, ensures that values are contiguous. """ |
offsets.sort()
for i in range(len(offsets) - 1):
assert offsets[i] + 1 == offsets[i + 1], \
"Offsets not contiguous: %s" % (offsets,)
return offsets |
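A standalone version of this contiguity check (using `sorted` rather than an in-place sort, so the caller's list is untouched):

```python
def contiguous_offsets(offsets):
    """Sort offsets and verify they form a contiguous run
    (standalone sketch of the helper above)."""
    offsets = sorted(offsets)
    for i in range(len(offsets) - 1):
        # each offset must be exactly one more than its predecessor
        assert offsets[i] + 1 == offsets[i + 1], \
            "Offsets not contiguous: %s" % (offsets,)
    return offsets

print(contiguous_offsets([12, 10, 11]))  # → [10, 11, 12]
```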
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_codon_spliced_offsets(self):
""" Offsets from start of spliced mRNA transcript of nucleotides in start codon. """ |
offsets = [
self.spliced_offset(position)
for position
in self.start_codon_positions
]
return self._contiguous_offsets(offsets) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop_codon_spliced_offsets(self):
""" Offsets from start of spliced mRNA transcript of nucleotides in stop codon. """ |
offsets = [
self.spliced_offset(position)
for position
in self.stop_codon_positions
]
return self._contiguous_offsets(offsets) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def complete(self):
""" Consider a transcript complete if it has start and stop codons and a coding sequence whose length is divisible by 3 """ |
return (
self.contains_start_codon and
self.contains_stop_codon and
self.coding_sequence is not None and
len(self.coding_sequence) % 3 == 0
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _all_possible_indices(self, column_names):
""" Create list of tuples containing all possible index groups we might want to create over tables in this database. If a set of genome annotations is missing some column we want to index on, we have to drop any indices which use that column. A specific table may later drop some of these indices if they're missing values for that feature or are the same as the table's primary key. """ |
candidate_column_groups = [
['seqname', 'start', 'end'],
['gene_name'],
['gene_id'],
['transcript_id'],
['transcript_name'],
['exon_id'],
['protein_id'],
['ccds_id'],
]
indices = []
column_set = set(column_names)
# Since queries are often restricted by feature type
# we should include that column in combination with all
# other indices we anticipate might improve performance
for column_group in candidate_column_groups:
skip = False
for column_name in column_group:
# some columns, such as 'exon_id',
# are not available in all releases of Ensembl (or
# other GTFs)
if column_name not in column_set:
logger.info(
"Skipping database index for {%s}",
", ".join(column_group))
skip = True
if skip:
continue
indices.append(column_group)
return indices |
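The filtering step above, dropping any index group whose columns are missing from the GTF, can be sketched as a pure function. The candidate groups below are a subset of those in the method:

```python
def possible_indices(column_names):
    """Keep only the candidate index groups whose columns all exist
    in this GTF (standalone sketch of the method above)."""
    candidate_column_groups = [
        ['seqname', 'start', 'end'],
        ['gene_name'],
        ['gene_id'],
        ['transcript_id'],
        ['exon_id'],
    ]
    column_set = set(column_names)
    # a group survives only if every one of its columns is present
    return [
        group for group in candidate_column_groups
        if all(col in column_set for col in group)
    ]

# a GTF lacking 'exon_id' drops that index group
print(possible_indices(
    ['seqname', 'start', 'end', 'gene_id', 'gene_name', 'transcript_id']))
```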
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connection(self):
""" Get a connection to the database or raise an exception """ |
connection = self._get_connection()
if connection:
return connection
else:
message = "GTF database needs to be created"
if self.install_string:
message += ", run: %s" % self.install_string
raise ValueError(message) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect_or_create(self, overwrite=False):
""" Return a connection to the database if it exists, otherwise create it. Overwrite the existing database if `overwrite` is True. """ |
connection = self._get_connection()
if connection:
return connection
else:
return self.create(overwrite=overwrite) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_sql_query(self, sql, required=False, query_params=[]):
""" Given an arbitrary SQL query, run it against the database and return the results. Parameters sql : str SQL query required : bool Raise an error if no results found in the database query_params : list For each '?' in the query there must be a corresponding value in this list. """ |
try:
cursor = self.connection.execute(sql, query_params)
except sqlite3.OperationalError as e:
error_message = e.message if hasattr(e, 'message') else str(e)
        logger.warning(
"Encountered error \"%s\" from query \"%s\" with parameters %s",
error_message,
sql,
query_params)
raise
results = cursor.fetchall()
if required and not results:
raise ValueError(
"No results found for query:\n%s\nwith parameters: %s" % (
sql, query_params))
return results |
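The query-and-require pattern above can be demonstrated against an in-memory SQLite database. The table name and rows below are made up for illustration:

```python
import sqlite3

# in-memory stand-in for the GTF database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (feature TEXT, gene_id TEXT)")
conn.executemany(
    "INSERT INTO features VALUES (?, ?)",
    [("gene", "ENSG01"), ("transcript", "ENSG01")])

def run_sql_query(sql, required=False, query_params=()):
    """Run a parameterized query; optionally fail loudly on no results
    (simplified sketch of the method above, minus logging)."""
    cursor = conn.execute(sql, query_params)
    results = cursor.fetchall()
    if required and not results:
        raise ValueError("No results found for query:\n%s" % sql)
    return results

rows = run_sql_query(
    "SELECT gene_id FROM features WHERE feature = ?",
    required=True,
    query_params=("gene",))
print(rows)  # → [('ENSG01',)]
```

Each `?` placeholder is bound to the corresponding entry of `query_params`, which is what keeps the query safe from SQL injection.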
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_loci(self, filter_column, filter_value, feature):
""" Query for loci satisfying a given filter and feature type. Parameters filter_column : str Name of column to filter results by. filter_value : str Only return loci which have this value in the their filter_column. feature : str Feature names such as 'transcript', 'gene', and 'exon' Returns list of Locus objects """ |
# list of values containing (contig, start, stop, strand)
result_tuples = self.query(
select_column_names=["seqname", "start", "end", "strand"],
filter_column=filter_column,
filter_value=filter_value,
feature=feature,
distinct=True,
required=True)
return [
Locus(contig, start, end, strand)
for (contig, start, end, strand)
in result_tuples
] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_locus(self, filter_column, filter_value, feature):
""" Query for unique locus, raises error if missing or more than one locus in the database. Parameters filter_column : str Name of column to filter results by. filter_value : str Only return loci which have this value in the their filter_column. feature : str Feature names such as 'transcript', 'gene', and 'exon' Returns single Locus object. """ |
loci = self.query_loci(
filter_column=filter_column,
filter_value=filter_value,
feature=feature)
if len(loci) == 0:
raise ValueError("Couldn't find locus for %s with %s = %s" % (
feature, filter_column, filter_value))
elif len(loci) > 1:
raise ValueError("Too many loci for %s with %s = %s: %s" % (
feature, filter_column, filter_value, loci))
return loci[0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_gtf_as_dataframe(self, usecols=None, features=None):
""" Parse this genome source's GTF file and load it as a Pandas DataFrame """ |
logger.info("Reading GTF from %s", self.gtf_path)
df = read_gtf(
self.gtf_path,
column_converters={
"seqname": normalize_chromosome,
"strand": normalize_strand,
},
infer_biotype_column=True,
usecols=usecols,
features=features)
column_names = set(df.keys())
expect_gene_feature = features is None or "gene" in features
expect_transcript_feature = features is None or "transcript" in features
observed_features = set(df["feature"])
# older Ensembl releases don't have "gene" or "transcript"
# features, so fill in those rows if they're missing
if expect_gene_feature and "gene" not in observed_features:
# if we have to reconstruct gene feature rows then
# fill in values for 'gene_name' and 'gene_biotype'
# but only if they're actually present in the GTF
logger.info("Creating missing gene features...")
df = create_missing_features(
dataframe=df,
unique_keys={"gene": "gene_id"},
extra_columns={
"gene": {
"gene_name",
"gene_biotype"
}.intersection(column_names),
},
missing_value="")
logger.info("Done.")
if expect_transcript_feature and "transcript" not in observed_features:
logger.info("Creating missing transcript features...")
df = create_missing_features(
dataframe=df,
unique_keys={"transcript": "transcript_id"},
extra_columns={
"transcript": {
"gene_id",
"gene_name",
"gene_biotype",
"transcript_name",
"transcript_biotype",
"protein_id",
}.intersection(column_names)
},
missing_value="")
logger.info("Done.")
return df |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transcripts(self):
""" Property which dynamically construct transcript objects for all transcript IDs associated with this gene. """ |
transcript_id_results = self.db.query(
select_column_names=['transcript_id'],
filter_column='gene_id',
filter_value=self.id,
feature='transcript',
distinct=False,
required=False)
# We're doing a SQL query for each transcript ID to fetch
# its particular information, might be more efficient if we
# just get all the columns here, but how do we keep that modular?
return [
self.genome.transcript_by_id(result[0])
for result in transcript_id_results
] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clone_bs4_elem(el):
"""Clone a bs4 tag before modifying it. Code from `http://stackoverflow.com/questions/23057631/clone-element-with -beautifulsoup` """ |
if isinstance(el, NavigableString):
return type(el)(el)
copy = Tag(None, el.builder, el.name, el.namespace, el.nsprefix)
# work around bug where there is no builder set
# https://bugs.launchpad.net/beautifulsoup/+bug/1307471
copy.attrs = dict(el.attrs)
for attr in ('can_be_empty_element', 'hidden'):
setattr(copy, attr, getattr(el, attr))
for child in el.contents:
copy.append(clone_bs4_elem(child))
return copy |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clean_ticker(ticker):
""" Cleans a ticker for easier use throughout MoneyTree Splits by space and only keeps first bit. Also removes any characters that are not letters. Returns as lowercase. 'vix' 'spx' """ |
    pattern = re.compile(r'[\W_]+')
res = pattern.sub('', ticker.split(' ')[0])
return res.lower() |
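A self-contained version with example inputs (note the raw-string regex, which avoids an invalid-escape warning on `\W`):

```python
import re

def clean_ticker(ticker):
    """Keep only the first space-delimited token, strip characters
    that are not letters or digits, and lowercase the result
    (same logic as above)."""
    pattern = re.compile(r'[\W_]+')
    return pattern.sub('', ticker.split(' ')[0]).lower()

print(clean_ticker('SPX Index'))  # → 'spx'
print(clean_ticker('^VIX'))       # → 'vix'
```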
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def scale(val, src, dst):
""" Scale value from src range to dst range. If value outside bounds, it is clipped and set to the low or high bound of dst. Ex: scale(0, (0.0, 99.0), (-1.0, 1.0)) == -1.0 scale(-5, (0.0, 99.0), (-1.0, 1.0)) == -1.0 """ |
if val < src[0]:
return dst[0]
if val > src[1]:
return dst[1]
return ((val - src[0]) / (src[1] - src[0])) * (dst[1] - dst[0]) + dst[0] |
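The clip-then-interpolate behavior is easy to verify directly:

```python
def scale(val, src, dst):
    """Linearly map val from the src range to the dst range,
    clipping to the dst bounds (same function as above)."""
    if val < src[0]:
        return dst[0]
    if val > src[1]:
        return dst[1]
    # linear interpolation between the two ranges
    return ((val - src[0]) / (src[1] - src[0])) * (dst[1] - dst[0]) + dst[0]

print(scale(49.5, (0.0, 99.0), (-1.0, 1.0)))  # midpoint maps to 0.0
print(scale(-5, (0.0, 99.0), (-1.0, 1.0)))    # below range clips to -1.0
```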
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_format(item, format_str='.2f'):
""" Map a format string over a pandas object. """ |
if isinstance(item, pd.Series):
return item.map(lambda x: format(x, format_str))
elif isinstance(item, pd.DataFrame):
return item.applymap(lambda x: format(x, format_str)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_price_index(returns, start=100):
""" Returns a price index given a series of returns. Args: * returns: Expects a return series * start (number):
Starting level Assumes arithmetic returns. Formula is: cumprod (1+r) """ |
return (returns.replace(to_replace=np.nan, value=0) + 1).cumprod() * start |
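The cumprod-of-(1+r) idea can be sketched without pandas; `None` stands in for NaN here (a missing return contributes a factor of 1, matching the `replace` above):

```python
def to_price_index(returns, start=100):
    """Build a price index as the running product of (1 + r),
    scaled to a starting level (pure-Python sketch of the above)."""
    level = start
    index = []
    for r in returns:
        # missing returns (None, standing in for NaN) leave the level flat
        level *= 1 + (0 if r is None else r)
        index.append(level)
    return index

# +100%, then a flat (missing) period, then -50%
print(to_price_index([1.0, None, -0.5]))  # → [200.0, 200.0, 100.0]
```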
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_stats(prices):
""" Calculates performance stats of a given object. If object is Series, a PerformanceStats object is returned. If object is DataFrame, a GroupStats object is returned. Args: * prices (Series, DataFrame):
Set of prices """ |
if isinstance(prices, pd.Series):
return PerformanceStats(prices)
elif isinstance(prices, pd.DataFrame):
return GroupStats(*[prices[x] for x in prices.columns])
else:
raise NotImplementedError('Unsupported type') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def asfreq_actual(series, freq, method='ffill', how='end', normalize=False):
""" Similar to pandas' asfreq but keeps the actual dates. For example, if last data point in Jan is on the 29th, that date will be used instead of the 31st. """ |
orig = series
is_series = False
if isinstance(series, pd.Series):
is_series = True
name = series.name if series.name else 'data'
orig = pd.DataFrame({name: series})
# add date column
t = pd.concat([orig, pd.DataFrame({'dt': orig.index.values},
index=orig.index.values)], axis=1)
# fetch dates
dts = t.asfreq(freq=freq, method=method, how=how,
normalize=normalize)['dt']
res = orig.loc[dts.values]
if is_series:
return res[name]
else:
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_inv_vol_weights(returns):
""" Calculates weights proportional to inverse volatility of each column. Returns weights that are inversely proportional to the column's volatility resulting in a set of portfolio weights where each position has the same level of volatility. Note, that assets with returns all equal to NaN or 0 are excluded from the portfolio (their weight is set to NaN). Returns: Series {col_name: weight} """ |
# calc vols
vol = np.divide(1., np.std(returns, ddof=1))
    vol[np.isinf(vol)] = np.nan
volsum = vol.sum()
return np.divide(vol, volsum) |
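The inverse-volatility idea, each weight proportional to 1/σ, normalized to sum to 1, can be checked with plain Python (the NaN handling for zero-volatility columns is omitted in this sketch):

```python
from statistics import stdev

def inv_vol_weights(columns):
    """Weights proportional to inverse volatility, normalized to sum
    to 1; `columns` is a hypothetical {name: [returns]} mapping."""
    inv_vol = {name: 1.0 / stdev(rets) for name, rets in columns.items()}
    total = sum(inv_vol.values())
    return {name: v / total for name, v in inv_vol.items()}

# asset 'b' is exactly twice as volatile as 'a',
# so it should receive half the weight of 'a'
w = inv_vol_weights({'a': [0.01, -0.01, 0.01, -0.01],
                     'b': [0.02, -0.02, 0.02, -0.02]})
print(w)
```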
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_mean_var_weights(returns, weight_bounds=(0., 1.), rf=0., covar_method='ledoit-wolf', options=None):
""" Calculates the mean-variance weights given a DataFrame of returns. Args: * returns (DataFrame):
Returns for multiple securities. * weight_bounds ((low, high)):
Weigh limits for optimization. * rf (float):
`Risk-free rate <https://www.investopedia.com/terms/r/risk-freerate.asp>`_ used in utility calculation * covar_method (str):
Covariance matrix estimation method. Currently supported: - `ledoit-wolf <http://www.ledoit.net/honey.pdf>`_ - standard * options (dict):
options for minimizing, e.g. {'maxiter': 10000 } Returns: Series {col_name: weight} """ |
def fitness(weights, exp_rets, covar, rf):
# portfolio mean
mean = sum(exp_rets * weights)
# portfolio var
var = np.dot(np.dot(weights, covar), weights)
# utility - i.e. sharpe ratio
util = (mean - rf) / np.sqrt(var)
# negative because we want to maximize and optimizer
# minimizes metric
return -util
n = len(returns.columns)
    # expected return defaults to the mean historical return
exp_rets = returns.mean()
# calc covariance matrix
if covar_method == 'ledoit-wolf':
covar = sklearn.covariance.ledoit_wolf(returns)[0]
elif covar_method == 'standard':
covar = returns.cov()
else:
raise NotImplementedError('covar_method not implemented')
weights = np.ones([n]) / n
bounds = [weight_bounds for i in range(n)]
# sum of weights must be equal to 1
constraints = ({'type': 'eq', 'fun': lambda W: sum(W) - 1.})
optimized = minimize(fitness, weights, (exp_rets, covar, rf),
method='SLSQP', constraints=constraints,
bounds=bounds, options=options)
# check if success
if not optimized.success:
raise Exception(optimized.message)
# return weight vector
return pd.Series({returns.columns[i]: optimized.x[i] for i in range(n)}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_num_days_required(offset, period='d', perc_required=0.90):
""" Estimates the number of days required to assume that data is OK. Helper function used to determine if there are enough "good" data days over a given period. Args: * offset (DateOffset):
Offset (lookback) period. * period (str):
Period string. * perc_required (float):
percentage of number of days expected required. """ |
x = pd.to_datetime('2010-01-01')
delta = x - (x - offset)
# convert to 'trading days' - rough guestimate
days = delta.days * 0.69
if period == 'd':
req = days * perc_required
elif period == 'm':
req = (days / 20) * perc_required
elif period == 'y':
req = (days / 252) * perc_required
else:
raise NotImplementedError(
'period not supported. Supported periods are d, m, y')
return req |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_clusters(returns, n=None, plot=False):
""" Calculates the clusters based on k-means clustering. Args: * returns (pd.DataFrame):
DataFrame of returns * n (int):
Specify # of clusters. If None, this will be automatically determined * plot (bool):
Show plot? Returns: * dict with structure: {cluster# : [col names]} """ |
# calculate correlation
corr = returns.corr()
# calculate dissimilarity matrix
diss = 1 - corr
# scale down to 2 dimensions using MDS
# (multi-dimensional scaling) using the
# dissimilarity matrix
mds = sklearn.manifold.MDS(dissimilarity='precomputed')
xy = mds.fit_transform(diss)
def routine(k):
# fit KMeans
km = sklearn.cluster.KMeans(n_clusters=k)
km_fit = km.fit(xy)
labels = km_fit.labels_
centers = km_fit.cluster_centers_
# get {ticker: label} mappings
mappings = dict(zip(returns.columns, labels))
# print % of var explained
totss = 0
withinss = 0
        # column average for totss
avg = np.array([np.mean(xy[:, 0]), np.mean(xy[:, 1])])
for idx, lbl in enumerate(labels):
withinss += sum((xy[idx] - centers[lbl]) ** 2)
totss += sum((xy[idx] - avg) ** 2)
pvar_expl = 1.0 - withinss / totss
return mappings, pvar_expl, labels
if n:
result = routine(n)
else:
n = len(returns.columns)
n1 = int(np.ceil(n * 0.6666666666))
for i in range(2, n1 + 1):
result = routine(i)
if result[1] > 0.9:
break
if plot:
fig, ax = plt.subplots()
ax.scatter(xy[:, 0], xy[:, 1], c=result[2], s=90)
for i, txt in enumerate(returns.columns):
ax.annotate(txt, (xy[i, 0], xy[i, 1]), size=14)
# sanitize return value
tmp = result[0]
# map as such {cluster: [list of tickers], cluster2: [...]}
inv_map = {}
for k, v in iteritems(tmp):
inv_map[v] = inv_map.get(v, [])
inv_map[v].append(k)
return inv_map |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def limit_weights(weights, limit=0.1):
""" Limits weights and redistributes excedent amount proportionally. ex: - weights are {a: 0.7, b: 0.2, c: 0.1} - call with limit=0.5 - excess 0.2 in a is ditributed to b and c proportionally. - result is {a: 0.5, b: 0.33, c: 0.167} Args: * weights (Series):
A series describing the weights * limit (float):
Maximum weight allowed """ |
if 1.0 / limit > len(weights):
raise ValueError('invalid limit -> 1 / limit must be <= len(weights)')
if isinstance(weights, dict):
weights = pd.Series(weights)
if np.round(weights.sum(), 1) != 1.0:
raise ValueError('Expecting weights (that sum to 1) - sum is %s'
% weights.sum())
res = np.round(weights.copy(), 4)
to_rebalance = (res[res > limit] - limit).sum()
ok = res[res < limit]
ok += (ok / ok.sum()) * to_rebalance
res[res > limit] = limit
res[res < limit] = ok
if any(x > limit for x in res):
return limit_weights(res, limit=limit)
return res |
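The cap-and-redistribute step can be sketched on a plain dict, reproducing the docstring's example (the pandas rounding and sum check are dropped; the recursion handles the case where redistribution pushes another weight over the cap):

```python
def limit_weights(weights, limit=0.5):
    """Cap each weight at `limit` and hand the excess to the uncapped
    weights in proportion to their size (dict-based sketch of the above)."""
    if 1.0 / limit > len(weights):
        raise ValueError('invalid limit -> 1 / limit must be <= len(weights)')
    excess = sum(w - limit for w in weights.values() if w > limit)
    under = {k: w for k, w in weights.items() if w < limit}
    under_total = sum(under.values())
    res = {}
    for k, w in weights.items():
        if w > limit:
            res[k] = limit
        elif w < limit:
            # proportional share of the clipped excess
            res[k] = w + (w / under_total) * excess
        else:
            res[k] = w
    # redistribution may push an uncapped weight over the limit; recurse
    if any(w > limit + 1e-12 for w in res.values()):
        return limit_weights(res, limit=limit)
    return res

print(limit_weights({'a': 0.7, 'b': 0.2, 'c': 0.1}, limit=0.5))
```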
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def random_weights(n, bounds=(0., 1.), total=1.0):
""" Generate pseudo-random weights. Returns a list of random weights that is of length n, where each weight is in the range bounds, and where the weights sum up to total. Useful for creating random portfolios when benchmarking. Args: * n (int):
number of random weights * bounds ((low, high)):
bounds for each weight * total (float):
total sum of the weights """ |
low = bounds[0]
high = bounds[1]
if high < low:
raise ValueError('Higher bound must be greater or '
'equal to lower bound')
if n * high < total or n * low > total:
raise ValueError('solution not possible with given n and bounds')
w = [0] * n
tgt = -float(total)
for i in range(n):
rn = n - i - 1
rhigh = rn * high
rlow = rn * low
lowb = max(-rhigh - tgt, low)
highb = min(-rlow - tgt, high)
rw = random.uniform(lowb, highb)
w[i] = rw
tgt += rw
random.shuffle(w)
return w |
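The feasibility trick above, shrinking each draw's bounds so the remaining weights can still reach the target, reads more directly with a `remaining` accumulator in place of the negated `tgt`:

```python
import random

def random_weights(n, bounds=(0., 1.), total=1.0):
    """Draw n weights within `bounds` summing to `total`, keeping the
    remainder feasible at every step (sketch of the function above)."""
    low, high = bounds
    if high < low:
        raise ValueError('Higher bound must be greater or equal to lower bound')
    if n * high < total or n * low > total:
        raise ValueError('solution not possible with given n and bounds')
    w = [0.0] * n
    remaining = total
    for i in range(n):
        rn = n - i - 1  # weights still to be drawn after this one
        # the rn later weights can contribute at most rn*high and
        # at least rn*low, which pins this draw's feasible interval
        lowb = max(remaining - rn * high, low)
        highb = min(remaining - rn * low, high)
        w[i] = random.uniform(lowb, highb)
        remaining -= w[i]
    random.shuffle(w)  # remove ordering bias from sequential draws
    return w

random.seed(0)
w = random_weights(4, bounds=(0.0, 0.5), total=1.0)
print(w, sum(w))
```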
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_heatmap(data, title='Heatmap', show_legend=True, show_labels=True, label_fmt='.2f', vmin=None, vmax=None, figsize=None, label_color='w', cmap='RdBu', **kwargs):
""" Plot a heatmap using matplotlib's pcolor. Args: * data (DataFrame):
DataFrame to plot. Usually small matrix (ex. correlation matrix). * title (string):
Plot title * show_legend (bool):
Show color legend * show_labels (bool):
Show value labels * label_fmt (str):
Label format string * vmin (float):
Min value for scale * vmax (float):
Max value for scale * cmap (string):
Color map * kwargs: Passed to matplotlib's pcolor """ |
fig, ax = plt.subplots(figsize=figsize)
heatmap = ax.pcolor(data, vmin=vmin, vmax=vmax, cmap=cmap)
# for some reason heatmap has the y values backwards....
ax.invert_yaxis()
if title is not None:
plt.title(title)
if show_legend:
fig.colorbar(heatmap)
if show_labels:
vals = data.values
for x in range(data.shape[0]):
for y in range(data.shape[1]):
plt.text(x + 0.5, y + 0.5, format(vals[y, x], label_fmt),
horizontalalignment='center',
verticalalignment='center',
color=label_color)
plt.yticks(np.arange(0.5, len(data.index), 1), data.index)
plt.xticks(np.arange(0.5, len(data.columns), 1), data.columns)
return plt |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rollapply(data, window, fn):
""" Apply a function fn over a rolling window of size window. Args: * data (Series or DataFrame):
Series or DataFrame * window (int):
Window size * fn (function):
Function to apply over the rolling window. For a series, the return value is expected to be a single number. For a DataFrame, it should return a new row. Returns: * Object of same dimensions as data """ |
res = data.copy()
res[:] = np.nan
n = len(data)
if window > n:
return res
for i in range(window - 1, n):
res.iloc[i] = fn(data.iloc[i - window + 1:i + 1])
return res |
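A list-based sketch of the same rolling loop, with `None` standing in for the leading NaNs:

```python
def rollapply(data, window, fn):
    """Apply fn over each full rolling window of size `window`;
    positions without a full window stay None (sketch of the above)."""
    n = len(data)
    res = [None] * n
    if window > n:
        return res
    for i in range(window - 1, n):
        # slice covers the window ending at (and including) index i
        res[i] = fn(data[i - window + 1:i + 1])
    return res

print(rollapply([1, 2, 3, 4, 5], 3, sum))  # → [None, None, 6, 9, 12]
```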
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _winsorize_wrapper(x, limits):
""" Wraps scipy winsorize function to drop na's """ |
if isinstance(x, pd.Series):
if x.count() == 0:
return x
notnanx = ~np.isnan(x)
x[notnanx] = scipy.stats.mstats.winsorize(x[notnanx],
limits=limits)
return x
else:
return scipy.stats.mstats.winsorize(x, limits=limits) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_excess_returns(returns, rf, nperiods=None):
""" Given a series of returns, it will return the excess returns over rf. Args: * returns (Series, DataFrame):
Returns * rf (float, Series):
`Risk-Free rate(s) <https://www.investopedia.com/terms/r/risk-freerate.asp>`_ expressed in annualized term or return series * nperiods (int):
Optional. If provided, will convert rf to different frequency using deannualize only if rf is a float Returns: * excess_returns (Series, DataFrame):
Returns - rf """ |
if type(rf) is float and nperiods is not None:
_rf = deannualize(rf, nperiods)
else:
_rf = rf
return returns - _rf |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def resample_returns( returns, func, seed=0, num_trials=100 ):
""" Resample the returns and calculate any statistic on every new sample. https://en.wikipedia.org/wiki/Resampling_(statistics) :param returns (Series, DataFrame):
Returns :param func: Given the resampled returns calculate a statistic :param seed: Seed for random number generator :param num_trials: Number of times to resample and run the experiment :return: Series of resampled statistics """ |
# stats = []
if type(returns) is pd.Series:
stats = pd.Series(index=range(num_trials))
elif type(returns) is pd.DataFrame:
stats = pd.DataFrame(
index=range(num_trials),
columns=returns.columns
)
else:
raise(TypeError("returns needs to be a Series or DataFrame!"))
n = returns.shape[0]
for i in range(num_trials):
random_indices = resample(returns.index, n_samples=n, random_state=seed + i)
stats.loc[i] = func(returns.loc[random_indices])
return stats |
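The bootstrap idea, resampling the series with replacement and recomputing the statistic on each sample, can be sketched without sklearn or pandas:

```python
import random
from statistics import mean

def resample_statistic(returns, func, seed=0, num_trials=100):
    """Bootstrap: draw `num_trials` resamples (with replacement) of the
    return series and apply `func` to each (pure-Python sketch)."""
    rng = random.Random(seed)
    n = len(returns)
    return [func([returns[rng.randrange(n)] for _ in range(n)])
            for _ in range(num_trials)]

stats = resample_statistic([0.01, -0.02, 0.03, 0.005], mean, num_trials=200)
# the spread of the resampled means estimates the statistic's variability
print(min(stats), max(stats))
```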
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_riskfree_rate(self, rf):
""" Set annual risk-free rate property and calculate properly annualized monthly and daily rates. Then performance stats are recalculated. Affects only this instance of the PerformanceStats. Args: * rf (float):
Annual `risk-free rate <https://www.investopedia.com/terms/r/risk-freerate.asp>`_ """ |
self.rf = rf
# Note, that we recalculate everything.
self._update(self.prices) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def display_monthly_returns(self):
""" Display a table containing monthly returns and ytd returns for every year in range. """ |
data = [['Year', 'Jan', 'Feb', 'Mar', 'Apr', 'May',
'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec', 'YTD']]
for k in self.return_table.index:
r = self.return_table.loc[k].values
data.append([k] + [fmtpn(x) for x in r])
print(tabulate(data, headers='firstrow')) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_histogram(self, freq=None, figsize=(15, 5), title=None, bins=20, **kwargs):
""" Plots a histogram of returns given a return frequency. Args: * freq (str):
Data frequency used for display purposes. This will dictate the type of returns Refer to pandas docs for valid period strings. * figsize ((x,y)):
figure size * title (str):
Title if default not appropriate * bins (int):
number of bins for the histogram * kwargs: passed to pandas' hist method """ |
if title is None:
title = self._get_default_plot_title(
self.name, freq, 'Return Histogram')
ser = self._get_series(freq).to_returns().dropna()
plt.figure(figsize=figsize)
    ax = ser.hist(bins=bins, figsize=figsize, density=True, **kwargs)
ax.set_title(title)
plt.axvline(0, linewidth=4)
return ser.plot(kind='kde') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_date_range(self, start=None, end=None):
""" Update date range of stats, charts, etc. If None then the original date range is used. So to reset to the original range, just call with no args. Args: * start (date):
start date * end (end):
end date """ |
start = self._start if start is None else pd.to_datetime(start)
end = self._end if end is None else pd.to_datetime(end)
self._update(self._prices.loc[start:end]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def display(self):
""" Display summary stats table. """ |
data = []
first_row = ['Stat']
first_row.extend(self._names)
data.append(first_row)
stats = self._stats()
for stat in stats:
k, n, f = stat
# blank row
if k is None:
row = [''] * len(data[0])
data.append(row)
continue
row = [n]
for key in self._names:
raw = getattr(self[key], k)
# if rf is a series print nan
if k == 'rf' and not isinstance(raw, float):
row.append(np.nan)
elif f is None:
row.append(raw)
elif f == 'p':
row.append(fmtp(raw))
elif f == 'n':
row.append(fmtn(raw))
elif f == 'dt':
row.append(raw.strftime('%Y-%m-%d'))
else:
raise NotImplementedError('unsupported format %s' % f)
data.append(row)
print(tabulate(data, headers='firstrow')) |
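The loop above builds a plain list-of-lists, using a `None` stat key to emit a blank separator row sized to the header. A self-contained sketch of that row-building pattern (the stat names and values are invented):

```python
# Build the tabular structure display() feeds to tabulate: a header row,
# data rows, and blank spacer rows wherever the stat key is None.
names = ['fund_a', 'fund_b']  # hypothetical series names
stats = [('total_return', 'Total Return'),
         (None, None),  # spacer
         ('max_drawdown', 'Max Drawdown')]
fake_values = {'total_return': 0.12, 'max_drawdown': -0.08}

data = [['Stat'] + names]
for key, label in stats:
    if key is None:
        data.append([''] * len(data[0]))  # blank separator row
    else:
        data.append([label] + [fake_values[key] for _ in names])
print(data)
```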
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def display_lookback_returns(self):
""" Displays the current lookback returns for each series. """ |
return self.lookback_returns.apply(
lambda x: x.map('{:,.2%}'.format), axis=1) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot(self, freq=None, figsize=(15, 5), title=None, logy=False, **kwargs):
""" Helper function for plotting the series. Args: * freq (str):
Data frequency used for display purposes. Refer to pandas docs for valid freq strings. * figsize ((x,y)):
figure size * title (str):
Title if default not appropriate * logy (bool):
log-scale for y axis * kwargs: passed to pandas' plot method """ |
if title is None:
title = self._get_default_plot_title(
freq, 'Equity Progression')
ser = self._get_series(freq).rebase()
return ser.plot(figsize=figsize, logy=logy,
title=title, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_scatter_matrix(self, freq=None, title=None, figsize=(10, 10), **kwargs):
""" Wrapper around pandas' scatter_matrix. Args: * freq (str):
Data frequency used for display purposes. Refer to pandas docs for valid freq strings. * figsize ((x,y)):
figure size * title (str):
Title if default not appropriate * kwargs: passed to pandas' scatter_matrix method """ |
if title is None:
title = self._get_default_plot_title(
freq, 'Return Scatter Matrix')
plt.figure()
ser = self._get_series(freq).to_returns().dropna()
pd.plotting.scatter_matrix(ser, figsize=figsize, **kwargs)  # moved from pd.scatter_matrix in pandas 0.20
return plt.suptitle(title) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_histograms(self, freq=None, title=None, figsize=(10, 10), **kwargs):
""" Wrapper around pandas' hist. Args: * freq (str):
Data frequency used for display purposes. Refer to pandas docs for valid freq strings. * figsize ((x,y)):
figure size * title (str):
Title if default not appropriate * kwargs: passed to pandas' hist method """ |
if title is None:
title = self._get_default_plot_title(
freq, 'Return Histogram Matrix')
plt.figure()
ser = self._get_series(freq).to_returns().dropna()
ser.hist(figsize=figsize, **kwargs)
return plt.suptitle(title) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_correlation(self, freq=None, title=None, figsize=(12, 6), **kwargs):
""" Utility function to plot correlations. Args: * freq (str):
Pandas data frequency alias string * title (str):
Plot title * figsize (tuple (x,y)):
figure size * kwargs: passed to Pandas' plot_corr_heatmap function """ |
if title is None:
title = self._get_default_plot_title(
freq, 'Return Correlation Matrix')
rets = self._get_series(freq).to_returns().dropna()
return rets.plot_corr_heatmap(title=title, figsize=figsize, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(tickers, provider=None, common_dates=True, forward_fill=False, clean_tickers=True, column_names=None, ticker_field_sep=':', mrefresh=False, existing=None, **kwargs):
""" Helper function for retrieving data as a DataFrame. Args: * tickers (list, string, csv string):
Tickers to download. * provider (function):
Provider to use for downloading data. By default it will be ffn.DEFAULT_PROVIDER if not provided. * common_dates (bool):
Keep common dates only? Drop na's. * forward_fill (bool):
forward fill values if missing. Only works if common_dates is False, since common_dates will remove all nan's, so no filling forward necessary. * clean_tickers (bool):
Should the tickers be 'cleaned' using ffn.utils.clean_tickers? Basically remove non-standard characters (^VIX -> vix) and standardize to lower case. * column_names (list):
List of column names if clean_tickers is not satisfactory. * ticker_field_sep (char):
separator used to determine the ticker and field. This is in case we want to specify particular, non-default fields. For example, we might want: AAPL:Low,AAPL:High,AAPL:Close. ':' is the separator. * mrefresh (bool):
Ignore memoization. * existing (DataFrame):
Existing DataFrame to append returns to - used when we download from multiple sources * kwargs: passed to provider """ |
if provider is None:
provider = DEFAULT_PROVIDER
tickers = utils.parse_arg(tickers)
data = {}
for ticker in tickers:
t = ticker
f = None
# check for field
bits = ticker.split(ticker_field_sep, 1)
if len(bits) == 2:
t = bits[0]
f = bits[1]
# call provider - check if supports memoization
if hasattr(provider, 'mcache'):
data[ticker] = provider(ticker=t, field=f,
mrefresh=mrefresh, **kwargs)
else:
data[ticker] = provider(ticker=t, field=f, **kwargs)
df = pd.DataFrame(data)
# ensure same order as provided
df = df[tickers]
if existing is not None:
df = ffn.merge(existing, df)
if common_dates:
df = df.dropna()
if forward_fill:
df = df.ffill()
if column_names:
cnames = utils.parse_arg(column_names)
if len(cnames) != len(df.columns):
raise ValueError(
'column_names must be of same length as tickers')
df.columns = cnames
elif clean_tickers:
df.columns = map(utils.clean_ticker, df.columns)
return df |
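The ticker/field split above relies on `str.split(sep, 1)` so that only the first separator is consumed, leaving any later separators inside the field. A sketch of that parsing step:

```python
def parse_ticker(ticker, sep=':'):
    """Split 'AAPL:Close' into ('AAPL', 'Close'); a plain 'AAPL' keeps field None."""
    bits = ticker.split(sep, 1)  # maxsplit=1: only the first separator counts
    if len(bits) == 2:
        return bits[0], bits[1]
    return ticker, None

print(parse_ticker('AAPL:Close'))  # ('AAPL', 'Close')
print(parse_ticker('SPY'))         # ('SPY', None)
```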
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def web(ticker, field=None, start=None, end=None, mrefresh=False, source='yahoo'):
""" Data provider wrapper around pandas.io.data provider. Provides memoization. """ |
if source == 'yahoo' and field is None:
field = 'Adj Close'
tmp = _download_web(ticker, data_source=source,
start=start, end=end)
if tmp is None:
raise ValueError('failed to retrieve data for %s:%s' % (ticker, field))
if field:
return tmp[field]
else:
return tmp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def csv(ticker, path='data.csv', field='', mrefresh=False, **kwargs):
""" Data provider wrapper around pandas' read_csv. Provides memoization. """ |
# set defaults if not specified
if 'index_col' not in kwargs:
kwargs['index_col'] = 0
if 'parse_dates' not in kwargs:
kwargs['parse_dates'] = True
# read in dataframe from csv file
df = pd.read_csv(path, **kwargs)
tf = ticker
if field not in ('', None):  # 'field is not ""' compared identity, not equality
tf = '%s:%s' % (tf, field)
# check that required column exists
if tf not in df:
raise ValueError('Ticker(field) not present in csv file!')
return df[tf] |
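The provider above looks up a column named `ticker:field` in the CSV. A stdlib-only sketch of that lookup (file contents invented), using `csv.DictReader` in place of pandas:

```python
import csv
import io

raw = ("date,AAPL:Close,MSFT:Close\n"
       "2020-01-02,75.1,160.6\n"
       "2020-01-03,74.4,158.6\n")

def csv_column(text, ticker, field=''):
    # Reproduce the ticker(:field) column-name convention used by the provider.
    tf = ticker if not field else '%s:%s' % (ticker, field)
    rows = list(csv.DictReader(io.StringIO(text)))
    if tf not in rows[0]:
        raise ValueError('Ticker(field) not present in csv file!')
    return [float(r[tf]) for r in rows]

print(csv_column(raw, 'AAPL', 'Close'))  # [75.1, 74.4]
```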
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def display(table, limit=0, vrepr=None, index_header=None, caption=None, tr_style=None, td_styles=None, encoding=None, truncate=None, epilogue=None):
""" Display a table inline within an IPython notebook. """ |
from IPython.core.display import display_html
html = _display_html(table, limit=limit, vrepr=vrepr,
index_header=index_header, caption=caption,
tr_style=tr_style, td_styles=td_styles,
encoding=encoding, truncate=truncate,
epilogue=epilogue)
display_html(html, raw=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fromxlsx(filename, sheet=None, range_string=None, row_offset=0, column_offset=0, **kwargs):
""" Extract a table from a sheet in an Excel .xlsx file. N.B., the sheet name is case sensitive. The `sheet` argument can be omitted, in which case the first sheet in the workbook is used by default. The `range_string` argument can be used to provide a range string specifying a range of cells to extract. The `row_offset` and `column_offset` arguments can be used to specify offsets. Any other keyword arguments are passed through to :func:`openpyxl.load_workbook()`. """ |
return XLSXView(filename, sheet=sheet, range_string=range_string,
row_offset=row_offset, column_offset=column_offset,
**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def toxlsx(tbl, filename, sheet=None, encoding=None):
""" Write a table to a new Excel .xlsx file. """ |
import openpyxl
if encoding is None:
encoding = locale.getpreferredencoding()
wb = openpyxl.Workbook(write_only=True)
ws = wb.create_sheet(title=sheet)
for row in tbl:
ws.append(row)
wb.save(filename) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def teepickle(table, source=None, protocol=-1, write_header=True):
""" Return a table that writes rows to a pickle file as they are iterated over. """ |
return TeePickleView(table, source=source, protocol=protocol,
write_header=write_header) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def format(table, field, fmt, **kwargs):
""" Convenience function to format all values in the given `field` using the `fmt` format string. The ``where`` keyword argument can be given with a callable or expression which is evaluated on each row and which should return True if the conversion should be applied on that row, else False. """ |
conv = lambda v: fmt.format(v)
return convert(table, field, conv, **kwargs) |
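The `conv` closure above simply applies `str.format` to each value. For example, with `fmt='{:.2f}'` on a numeric field:

```python
fmt = '{:.2f}'
conv = lambda v: fmt.format(v)  # same closure format() builds before delegating to convert()

print([conv(v) for v in (3.14159, 2.0, 10)])  # ['3.14', '2.00', '10.00']
```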
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def formatall(table, fmt, **kwargs):
""" Convenience function to format all values in all fields using the `fmt` format string. The ``where`` keyword argument can be given with a callable or expression which is evaluated on each row and which should return True if the conversion should be applied on that row, else False. """ |
conv = lambda v: fmt.format(v)
return convertall(table, conv, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def interpolate(table, field, fmt, **kwargs):
""" Convenience function to interpolate all values in the given `field` using the `fmt` string. The ``where`` keyword argument can be given with a callable or expression which is evaluated on each row and which should return True if the conversion should be applied on that row, else False. """ |
conv = lambda v: fmt % v
return convert(table, field, conv, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def interpolateall(table, fmt, **kwargs):
""" Convenience function to interpolate all values in all fields using the `fmt` string. The ``where`` keyword argument can be given with a callable or expression which is evaluated on each row and which should return True if the conversion should be applied on that row, else False. """ |
conv = lambda v: fmt % v
return convertall(table, conv, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def recordlookup(table, key, dictionary=None):
""" Load a dictionary with data from the given table, mapping to record objects. """ |
if dictionary is None:
dictionary = dict()
it = iter(table)
hdr = next(it)
flds = list(map(text_type, hdr))
keyindices = asindices(hdr, key)
assert len(keyindices) > 0, 'no key selected'
getkey = operator.itemgetter(*keyindices)
for row in it:
k = getkey(row)
rec = Record(row, flds)
if k in dictionary:
# work properly with shelve
l = dictionary[k]
l.append(rec)
dictionary[k] = l
else:
dictionary[k] = [rec]
return dictionary |
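The read-back-then-reassign dance above (`l = dictionary[k]; l.append(rec); dictionary[k] = l`) exists so the function also works with `shelve` mappings, which do not observe in-place mutation. With a plain dict, the grouping itself reduces to:

```python
from operator import itemgetter

# Hypothetical table body: key field at index 0
rows = [('a', 1), ('b', 2), ('a', 3)]

getkey = itemgetter(0)
lookup = {}
for row in rows:
    lookup.setdefault(getkey(row), []).append(row)
print(lookup)  # {'a': [('a', 1), ('a', 3)], 'b': [('b', 2)]}
```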
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def appendbcolz(table, obj, check_names=True):
"""Append data into a bcolz ctable. The `obj` argument can be either an existing ctable or the name of a directory were an on-disk ctable is stored. .. versionadded:: 1.1.0 """ |
import bcolz
import numpy as np
if isinstance(obj, string_types):
ctbl = bcolz.open(obj, mode='a')
else:
assert hasattr(obj, 'append') and hasattr(obj, 'names'), \
'expected rootdir or ctable, found %r' % obj
ctbl = obj
# setup
dtype = ctbl.dtype
it = iter(table)
hdr = next(it)
flds = list(map(text_type, hdr))
# check names match
if check_names:
assert tuple(flds) == tuple(ctbl.names), 'column names do not match'
# fill chunk-wise
chunklen = sum(ctbl.cols[name].chunklen
for name in ctbl.names) // len(ctbl.names)
while True:
data = list(itertools.islice(it, chunklen))
data = np.array(data, dtype=dtype)
ctbl.append(data)
if len(data) < chunklen:
break
ctbl.flush()
return ctbl |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def teetext(table, source=None, encoding=None, errors='strict', template=None, prologue=None, epilogue=None):
""" Return a table that writes rows to a text file as they are iterated over. """ |
assert template is not None, 'template is required'
return TeeTextView(table, source=source, encoding=encoding, errors=errors,
template=template, prologue=prologue, epilogue=epilogue) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def groupcountdistinctvalues(table, key, value):
"""Group by the `key` field then count the number of distinct values in the `value` field.""" |
s1 = cut(table, key, value)
s2 = distinct(s1)
s3 = aggregate(s2, key, len)
return s3 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def appendtextindex(table, index_or_dirname, indexname=None, merge=True, optimize=False):
""" Load all rows from `table` into a Whoosh index, adding them to any existing data in the index. Keyword arguments: table A table container with the data to be loaded. index_or_dirname Either an instance of `whoosh.index.Index` or a string containing the directory path where the index is to be stored. indexname String containing the name of the index, if multiple indexes are stored in the same directory. merge Merge small segments during commit? optimize Merge all segments together? """ |
import whoosh.index
# deal with polymorphic argument
if isinstance(index_or_dirname, string_types):
dirname = index_or_dirname
index = whoosh.index.open_dir(dirname, indexname=indexname,
readonly=False)
needs_closing = True
elif isinstance(index_or_dirname, whoosh.index.Index):
index = index_or_dirname
needs_closing = False
else:
raise ArgumentError('expected string or index, found %r'
% index_or_dirname)
writer = index.writer()
try:
for d in dicts(table):
writer.add_document(**d)
writer.commit(merge=merge, optimize=optimize)
except Exception:
writer.cancel()
raise
finally:
if needs_closing:
index.close() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def searchtextindexpage(index_or_dirname, query, pagenum, pagelen=10, indexname=None, docnum_field=None, score_field=None, fieldboosts=None, search_kwargs=None):
""" Search an index using a query, returning a result page. Keyword arguments: index_or_dirname Either an instance of `whoosh.index.Index` or a string containing the directory path where the index is to be stored. query Either a string or an instance of `whoosh.query.Query`. If a string, it will be parsed as a multi-field query, i.e., any terms not bound to a specific field will match **any** field. pagenum Number of the page to return (e.g., 1 = first page). pagelen Number of results per page. indexname String containing the name of the index, if multiple indexes are stored in the same directory. docnum_field If not None, an extra field will be added to the output table containing the internal document number stored in the index. The name of the field will be the value of this argument. score_field If not None, an extra field will be added to the output table containing the score of the result. The name of the field will be the value of this argument. fieldboosts An optional dictionary mapping field names to boosts. search_kwargs Any extra keyword arguments to be passed through to the Whoosh `search()` method. """ |
return SearchTextIndexView(index_or_dirname, query, pagenum=pagenum,
pagelen=pagelen, indexname=indexname,
docnum_field=docnum_field,
score_field=score_field, fieldboosts=fieldboosts,
search_kwargs=search_kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fromxls(filename, sheet=None, use_view=True):
""" Extract a table from a sheet in an Excel .xls file. Sheet is identified by its name or index number. N.B., the sheet name is case sensitive. """ |
return XLSView(filename, sheet=sheet, use_view=use_view) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def toxls(tbl, filename, sheet, encoding=None, style_compression=0, styles=None):
""" Write a table to a new Excel .xls file. """ |
import xlwt
if encoding is None:
encoding = locale.getpreferredencoding()
wb = xlwt.Workbook(encoding=encoding, style_compression=style_compression)
ws = wb.add_sheet(sheet)
if styles is None:
# simple version, don't worry about styles
for r, row in enumerate(tbl):
for c, v in enumerate(row):
ws.write(r, c, label=v)
else:
# handle styles
it = iter(tbl)
hdr = next(it)
flds = list(map(str, hdr))
for c, f in enumerate(flds):
ws.write(0, c, label=f)
if f not in styles or styles[f] is None:
styles[f] = xlwt.Style.default_style
# convert to list for easy zipping
styles = [styles[f] for f in flds]
for r, row in enumerate(it):
for c, (v, style) in enumerate(izip_longest(row, styles,
fillvalue=None)):
ws.write(r+1, c, label=v, style=style)
wb.save(filename) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def asindices(hdr, spec):
"""Convert the given field `spec` into a list of field indices.""" |
flds = list(map(text_type, hdr))
indices = list()
if not isinstance(spec, (list, tuple)):
spec = (spec,)
for s in spec:
# spec could be a field index (takes priority)
if isinstance(s, int) and s < len(hdr):
indices.append(s) # index fields from 0
# spec could be a field
elif s in flds:
indices.append(flds.index(s))
else:
raise FieldSelectionError(s)
return indices |
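The resolution above accepts either integer positions or field names, with a valid integer taking priority over a name. A self-contained sketch (a plain `ValueError` stands in for `FieldSelectionError`):

```python
def as_indices(hdr, spec):
    """Resolve a field spec (name, index, or list of either) to 0-based indices."""
    flds = [str(f) for f in hdr]
    if not isinstance(spec, (list, tuple)):
        spec = (spec,)
    indices = []
    for s in spec:
        if isinstance(s, int) and s < len(hdr):
            indices.append(s)              # positional spec takes priority
        elif s in flds:
            indices.append(flds.index(s))  # name spec
        else:
            raise ValueError('field not found: %r' % s)
    return indices

print(as_indices(('foo', 'bar', 'baz'), ['bar', 0]))  # [1, 0]
```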
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def expr(s):
""" Construct a function operating on a table record. The expression string is converted into a lambda function by prepending the string with ``'lambda rec: '``, then replacing anything enclosed in curly braces (e.g., ``"{foo}"``) with a lookup on the record (e.g., ``"rec['foo']"``), then finally calling :func:`eval`. So, e.g., the expression string ``"{foo} * {bar}"`` is converted to the function ``lambda rec: rec['foo'] * rec['bar']`` """ |
prog = re.compile(r'\{([^}]+)\}')  # raw string avoids the invalid-escape warning
def repl(matchobj):
return "rec['%s']" % matchobj.group(1)
return eval("lambda rec: " + prog.sub(repl, s)) |
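Concretely, the curly-brace substitution turns a template string into a function of one record:

```python
import re

def expr(s):
    # '{foo} * {bar}' -> eval("lambda rec: rec['foo'] * rec['bar']")
    prog = re.compile(r'\{([^}]+)\}')
    return eval('lambda rec: ' + prog.sub(lambda m: "rec['%s']" % m.group(1), s))

f = expr('{foo} * {bar}')
print(f({'foo': 2, 'bar': 21}))  # 42
```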
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def teecsv(table, source=None, encoding=None, errors='strict', write_header=True, **csvargs):
""" Returns a table that writes rows to a CSV file as they are iterated over. """ |
source = write_source_from_arg(source)
csvargs.setdefault('dialect', 'excel')
return teecsv_impl(table, source=source, encoding=encoding,
errors=errors, write_header=write_header,
**csvargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def distinct(table, key=None, count=None, presorted=False, buffersize=None, tempdir=None, cache=True):
""" Return only distinct rows in the table. If the `count` argument is not None, it will be used as the name for an additional field, and the values of the field will be the number of duplicate rows. If the `key` keyword argument is passed, the comparison is done on the given key instead of the full row. See also :func:`petl.transform.dedup.duplicates`, :func:`petl.transform.dedup.unique`, :func:`petl.transform.reductions.groupselectfirst`, :func:`petl.transform.reductions.groupselectlast`. """ |
return DistinctView(table, key=key, count=count, presorted=presorted,
buffersize=buffersize, tempdir=tempdir, cache=cache) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_sqlalchemy_column(col, colname, constraints=True):
""" Infer an appropriate SQLAlchemy column type based on a sequence of values. Keyword arguments: col : sequence A sequence of values to use to infer type, length etc. colname : string Name of column constraints : bool If True use length and nullable constraints """ |
import sqlalchemy
col_not_none = [v for v in col if v is not None]
sql_column_kwargs = {}
sql_type_kwargs = {}
if len(col_not_none) == 0:
sql_column_type = sqlalchemy.String
if constraints:
sql_type_kwargs['length'] = NULL_COLUMN_MAX_LENGTH
elif all(isinstance(v, bool) for v in col_not_none):
sql_column_type = sqlalchemy.Boolean
elif all(isinstance(v, int) for v in col_not_none):
if max(col_not_none) > SQL_INTEGER_MAX \
or min(col_not_none) < SQL_INTEGER_MIN:
sql_column_type = sqlalchemy.BigInteger
else:
sql_column_type = sqlalchemy.Integer
elif all(isinstance(v, long) for v in col_not_none):
sql_column_type = sqlalchemy.BigInteger
elif all(isinstance(v, (int, long)) for v in col_not_none):
sql_column_type = sqlalchemy.BigInteger
elif all(isinstance(v, (int, long, float)) for v in col_not_none):
sql_column_type = sqlalchemy.Float
elif all(isinstance(v, datetime.datetime) for v in col_not_none):
sql_column_type = sqlalchemy.DateTime
elif all(isinstance(v, datetime.date) for v in col_not_none):
sql_column_type = sqlalchemy.Date
elif all(isinstance(v, datetime.time) for v in col_not_none):
sql_column_type = sqlalchemy.Time
else:
sql_column_type = sqlalchemy.String
if constraints:
sql_type_kwargs['length'] = max([len(text_type(v)) for v in col])
if constraints:
sql_column_kwargs['nullable'] = len(col_not_none) < len(col)
return sqlalchemy.Column(colname, sql_column_type(**sql_type_kwargs),
**sql_column_kwargs) |
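The cascade of `all(isinstance(...))` checks above can be tried out without SQLAlchemy by returning type names instead of column objects. A simplified sketch covering only a few of the branches (no BigInteger range check, no length/nullable constraints):

```python
import datetime

def infer_sql_type(col):
    # Simplified echo of the isinstance() cascade in make_sqlalchemy_column.
    vals = [v for v in col if v is not None]
    if not vals:
        return 'String'
    if all(isinstance(v, bool) for v in vals):   # bool before int: bool is an int subclass
        return 'Boolean'
    if all(isinstance(v, int) for v in vals):
        return 'Integer'
    if all(isinstance(v, (int, float)) for v in vals):
        return 'Float'
    if all(isinstance(v, datetime.date) for v in vals):
        return 'Date'
    return 'String'

print(infer_sql_type([1, 2, None]))  # Integer
print(infer_sql_type([1, 2.5]))      # Float
print(infer_sql_type(['a', 1]))      # String
```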
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_sqlalchemy_table(table, tablename, schema=None, constraints=True, metadata=None):
""" Create an SQLAlchemy table definition based on data in `table`. Keyword arguments: table : table container Table data to use to infer types etc. tablename : text Name of the table schema : text Name of the database schema to create the table in constraints : bool If True use length and nullable constraints metadata : sqlalchemy.MetaData Custom table metadata """ |
import sqlalchemy
if not metadata:
metadata = sqlalchemy.MetaData()
sql_table = sqlalchemy.Table(tablename, metadata, schema=schema)
cols = columns(table)
flds = list(cols.keys())
for f in flds:
sql_column = make_sqlalchemy_column(cols[f], f,
constraints=constraints)
sql_table.append_column(sql_column)
return sql_table |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_create_table_statement(table, tablename, schema=None, constraints=True, metadata=None, dialect=None):
""" Generate a CREATE TABLE statement based on data in `table`. Keyword arguments: table : table container Table data to use to infer types etc. tablename : text Name of the table schema : text Name of the database schema to create the table in constraints : bool If True use length and nullable constraints metadata : sqlalchemy.MetaData Custom table metadata dialect : text One of {'access', 'sybase', 'sqlite', 'informix', 'firebird', 'mysql', 'oracle', 'maxdb', 'postgresql', 'mssql'} """ |
import sqlalchemy
sql_table = make_sqlalchemy_table(table, tablename, schema=schema,
constraints=constraints,
metadata=metadata)
if dialect:
module = __import__('sqlalchemy.dialects.%s' % DIALECTS[dialect],
fromlist=['dialect'])
sql_dialect = module.dialect()
else:
sql_dialect = None
return text_type(sqlalchemy.schema.CreateTable(sql_table)
.compile(dialect=sql_dialect)).strip() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_table(table, dbo, tablename, schema=None, commit=True, constraints=True, metadata=None, dialect=None, sample=1000):
""" Create a database table based on a sample of data in the given `table`. Keyword arguments: table : table container Table data to load dbo : database object DB-API 2.0 connection, callable returning a DB-API 2.0 cursor, or SQLAlchemy connection, engine or session tablename : text Name of the table schema : text Name of the database schema to create the table in commit : bool If True commit the changes constraints : bool If True use length and nullable constraints metadata : sqlalchemy.MetaData Custom table metadata dialect : text One of {'access', 'sybase', 'sqlite', 'informix', 'firebird', 'mysql', 'oracle', 'maxdb', 'postgresql', 'mssql'} sample : int Number of rows to sample when inferring types etc., set to 0 to use the whole table """ |
if sample > 0:
table = head(table, sample)
sql = make_create_table_statement(table, tablename, schema=schema,
constraints=constraints,
metadata=metadata, dialect=dialect)
_execute(sql, dbo, commit=commit) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def drop_table(dbo, tablename, schema=None, commit=True):
""" Drop a database table. Keyword arguments: dbo : database object DB-API 2.0 connection, callable returning a DB-API 2.0 cursor, or SQLAlchemy connection, engine or session tablename : text Name of the table schema : text Name of the database schema the table is in commit : bool If True commit the changes """ |
# sanitise table name
tablename = _quote(tablename)
if schema is not None:
tablename = _quote(schema) + '.' + tablename
sql = u'DROP TABLE %s' % tablename
_execute(sql, dbo, commit) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def typecounter(table, field):
""" Count the number of values found for each Python type. Counter({'str': 5}) Counter({'str': 3, 'int': 2}) Counter({'str': 2, 'int': 1, 'float': 1, 'NoneType': 1}) The `field` argument can be a field name or index (starting from zero). """ |
counter = Counter()
for v in values(table, field):
try:
counter[v.__class__.__name__] += 1
except IndexError:
pass # ignore short rows
return counter |
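The docstring's example counters can be reproduced directly with `collections.Counter` over class names:

```python
from collections import Counter

# Hypothetical column of mixed-type values
values_col = ['a', 1, 2.0, None, 'b']
counter = Counter(v.__class__.__name__ for v in values_col)
print(counter)  # Counter({'str': 2, 'int': 1, 'float': 1, 'NoneType': 1})
```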
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def teehtml(table, source=None, encoding=None, errors='strict', caption=None, vrepr=text_type, lineterminator='\n', index_header=False, tr_style=None, td_styles=None, truncate=None):
""" Return a table that writes rows to a Unicode HTML file as they are iterated over. """ |
source = write_source_from_arg(source)
return TeeHTMLView(table, source=source, encoding=encoding, errors=errors,
caption=caption, vrepr=vrepr,
lineterminator=lineterminator, index_header=index_header,
tr_style=tr_style, td_styles=td_styles,
truncate=truncate) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tupletree(table, start='start', stop='stop', value=None):
""" Construct an interval tree for the given table, where each node in the tree is a row of the table. """ |
import intervaltree
tree = intervaltree.IntervalTree()
it = iter(table)
hdr = next(it)
flds = list(map(text_type, hdr))
assert start in flds, 'start field not recognised'
assert stop in flds, 'stop field not recognised'
getstart = itemgetter(flds.index(start))
getstop = itemgetter(flds.index(stop))
if value is None:
getvalue = tuple
else:
valueindices = asindices(hdr, value)
assert len(valueindices) > 0, 'invalid value field specification'
getvalue = itemgetter(*valueindices)
for row in it:
tree.addi(getstart(row), getstop(row), getvalue(row))
return tree |
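Without the `intervaltree` dependency, the same start/stop extraction can be sketched with a linear scan (fine for small tables; the tree exists to make point queries sub-linear). Note the half-open `[start, stop)` convention, matching intervaltree's default:

```python
from operator import itemgetter

# Hypothetical table body with fields (start, stop, value)
rows = [(1, 4, 'a'), (3, 7, 'b'), (8, 9, 'c')]

getstart, getstop = itemgetter(0), itemgetter(1)

def search(rows, point):
    # Half-open interval containment: start <= point < stop
    return [r for r in rows if getstart(r) <= point < getstop(r)]

print(search(rows, 3))  # [(1, 4, 'a'), (3, 7, 'b')]
print(search(rows, 7))  # []
```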
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def facettupletrees(table, key, start='start', stop='stop', value=None):
""" Construct faceted interval trees for the given table, where each node in the tree is a row of the table. """ |
import intervaltree
it = iter(table)
hdr = next(it)
flds = list(map(text_type, hdr))
assert start in flds, 'start field not recognised'
assert stop in flds, 'stop field not recognised'
getstart = itemgetter(flds.index(start))
getstop = itemgetter(flds.index(stop))
if value is None:
getvalue = tuple
else:
valueindices = asindices(hdr, value)
assert len(valueindices) > 0, 'invalid value field specification'
getvalue = itemgetter(*valueindices)
keyindices = asindices(hdr, key)
assert len(keyindices) > 0, 'invalid key'
getkey = itemgetter(*keyindices)
trees = dict()
for row in it:
k = getkey(row)
if k not in trees:
trees[k] = intervaltree.IntervalTree()
trees[k].addi(getstart(row), getstop(row), getvalue(row))
return trees |
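The faceting step above is just key-based grouping; a minimal stdlib sketch, with plain lists standing in for the per-key `IntervalTree` instances:

```python
from operator import itemgetter

# Stdlib-only sketch of the faceting pattern in facettupletrees: rows
# are grouped by a key field into per-key containers.
table = [('chrom', 'start', 'stop'),
         ('chr1', 1, 4),
         ('chr2', 2, 5),
         ('chr1', 3, 7)]
it = iter(table)
hdr = next(it)
getkey = itemgetter(hdr.index('chrom'))
groups = {}
for row in it:
    groups.setdefault(getkey(row), []).append(row)
print(sorted(groups))  # ['chr1', 'chr2']
```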
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def recordtree(table, start='start', stop='stop'):
""" Construct an interval tree for the given table, where each node in the tree is a row of the table represented as a record object. """ |
import intervaltree
getstart = attrgetter(start)
getstop = attrgetter(stop)
tree = intervaltree.IntervalTree()
for rec in records(table):
tree.addi(getstart(rec), getstop(rec), rec)
return tree |
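The difference from `tupletree` is attribute-based access: records expose fields as attributes, so `attrgetter` replaces positional `itemgetter`. A small sketch, using `namedtuple` as a stand-in for petl's record objects:

```python
from collections import namedtuple
from operator import attrgetter

# Sketch of the attribute-based access used in recordtree.
Rec = namedtuple('Rec', ['start', 'stop'])
getstart = attrgetter('start')
getstop = attrgetter('stop')
recs = [Rec(1, 4), Rec(3, 7)]
spans = [(getstart(r), getstop(r)) for r in recs]
print(spans)  # [(1, 4), (3, 7)]
```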
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def facetrecordtrees(table, key, start='start', stop='stop'):
""" Construct faceted interval trees for the given table, where each node in the tree is a record. """ |
import intervaltree
getstart = attrgetter(start)
getstop = attrgetter(stop)
getkey = attrgetter(key)
trees = dict()
for rec in records(table):
k = getkey(rec)
if k not in trees:
trees[k] = intervaltree.IntervalTree()
trees[k].addi(getstart(rec), getstop(rec), rec)
return trees |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def facetintervallookupone(table, key, start='start', stop='stop', value=None, include_stop=False, strict=True):
""" Construct a faceted interval lookup for the given table, returning at most one result for each query. If ``strict=True``, queries returning more than one result will raise a `DuplicateKeyError`. If ``strict=False`` and there is more than one result, the first result is returned. """ |
trees = facettupletrees(table, key, start=start, stop=stop, value=value)
out = dict()
for k in trees:
out[k] = IntervalTreeLookupOne(trees[k], include_stop=include_stop,
strict=strict)
return out |
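The "at most one result" contract described above can be illustrated with a naive stdlib overlap search; note this is a sketch, not petl's implementation, and a plain `KeyError` stands in for petl's `DuplicateKeyError`:

```python
# Naive sketch of the strict/non-strict lookup semantics: raise on
# duplicates when strict, return the first hit otherwise.
def lookup_one(intervals, start, stop, strict=True):
    # half-open overlap test: iv.start < stop and iv.stop > start
    hits = [iv for iv in intervals if iv[0] < stop and iv[1] > start]
    if not hits:
        return None
    if strict and len(hits) > 1:
        raise KeyError('more than one result')
    return hits[0]

ivs = [(1, 4, 'a'), (3, 7, 'b')]
print(lookup_one(ivs, 5, 6))                # (3, 7, 'b')
print(lookup_one(ivs, 2, 4, strict=False))  # (1, 4, 'a')
```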
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def intervalantijoin(left, right, lstart='start', lstop='stop', rstart='start', rstop='stop', lkey=None, rkey=None, include_stop=False, missing=None):
""" Return rows from the `left` table with no overlapping rows from the `right` table. Note start coordinates are included and stop coordinates are excluded from the interval. Use the `include_stop` keyword argument to include the upper bound of the interval when finding overlaps. """ |
assert (lkey is None) == (rkey is None), \
'facet key field must be provided for both or neither table'
return IntervalAntiJoinView(left, right, lstart=lstart, lstop=lstop,
rstart=rstart, rstop=rstop, lkey=lkey,
rkey=rkey, include_stop=include_stop,
missing=missing) |
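The anti-join semantics above (keep left rows with no overlapping right rows, using half-open intervals) can be sketched naively in pure Python; the real view uses interval trees rather than this O(n·m) scan:

```python
# Naive sketch of intervalantijoin semantics: keep left rows whose
# [start, stop) interval overlaps no right interval.
def interval_antijoin(left, right):
    return [l for l in left
            if not any(l[0] < r[1] and l[1] > r[0] for r in right)]

left = [(1, 3), (4, 6), (8, 10)]
right = [(2, 5)]
print(interval_antijoin(left, right))  # [(8, 10)]
```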
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def intervaljoinvalues(left, right, value, lstart='start', lstop='stop', rstart='start', rstop='stop', lkey=None, rkey=None, include_stop=False):
""" Convenience function to join the left table with values from a specific field in the right hand table. Note start coordinates are included and stop coordinates are excluded from the interval. Use the `include_stop` keyword argument to include the upper bound of the interval when finding overlaps. """ |
assert (lkey is None) == (rkey is None), \
'facet key field must be provided for both or neither table'
if lkey is None:
lkp = intervallookup(right, start=rstart, stop=rstop, value=value,
include_stop=include_stop)
f = lambda row: lkp.search(row[lstart], row[lstop])
else:
lkp = facetintervallookup(right, rkey, start=rstart, stop=rstop,
value=value, include_stop=include_stop)
f = lambda row: lkp[row[lkey]].search(row[lstart], row[lstop])
return addfield(left, value, f) |
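The core pattern above (look up overlapping right-hand values for each left row and append them as a new field) can be sketched with a naive scan; `value_idx` here is a hypothetical positional stand-in for the `value` field name:

```python
# Sketch of the intervaljoinvalues pattern: for each left row, collect
# the value field from every overlapping right row as a new column.
def join_values(left, right, value_idx=2):
    out = []
    for lstart, lstop in left:
        vals = [r[value_idx] for r in right
                if lstart < r[1] and lstop > r[0]]
        out.append((lstart, lstop, vals))
    return out

left = [(1, 4), (5, 6)]
right = [(2, 5, 'x'), (5, 9, 'y')]
print(join_values(left, right))  # [(1, 4, ['x']), (5, 6, ['y'])]
```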
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def intervalsubtract(left, right, lstart='start', lstop='stop', rstart='start', rstop='stop', lkey=None, rkey=None, include_stop=False):
""" Subtract intervals in the right hand table from intervals in the left hand table. """ |
assert (lkey is None) == (rkey is None), \
'facet key field must be provided for both or neither table'
return IntervalSubtractView(left, right, lstart=lstart, lstop=lstop,
rstart=rstart, rstop=rstop, lkey=lkey,
rkey=rkey, include_stop=include_stop) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _collapse(intervals):
""" Collapse an iterable of intervals sorted by start coord. """ |
span = None
for start, stop in intervals:
if span is None:
span = _Interval(start, stop)
elif start <= span.stop < stop:
span = _Interval(span.start, stop)
elif start > span.stop:
yield span
span = _Interval(start, stop)
if span is not None:
yield span |
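A self-contained version of the collapsing logic above, assuming `_Interval` is a simple `(start, stop)` named tuple as in petl's implementation:

```python
from collections import namedtuple

_Interval = namedtuple('_Interval', ['start', 'stop'])

def collapse(intervals):
    """Collapse an iterable of intervals sorted by start coord."""
    span = None
    for start, stop in intervals:
        if span is None:
            span = _Interval(start, stop)
        elif start <= span.stop < stop:
            # overlapping or touching: extend the current span
            span = _Interval(span.start, stop)
        elif start > span.stop:
            # disjoint: emit the finished span, start a new one
            yield span
            span = _Interval(start, stop)
    if span is not None:
        yield span

print(list(collapse([(1, 3), (2, 5), (7, 8)])))
# [_Interval(start=1, stop=5), _Interval(start=7, stop=8)]
```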
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _subtract(start, stop, intervals):
""" Subtract intervals from a spanning interval. """ |
remainder_start = start
sub_stop = None
for sub_start, sub_stop in _collapse(intervals):
if remainder_start < sub_start:
yield _Interval(remainder_start, sub_start)
remainder_start = sub_stop
if sub_stop is not None and sub_stop < stop:
yield _Interval(sub_stop, stop) |
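The subtraction above walks the collapsed intervals left to right, emitting the gaps between them plus any trailing remainder. A self-contained sketch, assuming the intervals to remove are already collapsed (non-overlapping, sorted by start):

```python
from collections import namedtuple

_Interval = namedtuple('_Interval', ['start', 'stop'])

def subtract(start, stop, collapsed):
    """Subtract already-collapsed intervals from a spanning interval."""
    remainder_start = start
    sub_stop = None
    for sub_start, sub_stop in collapsed:
        if remainder_start < sub_start:
            # gap before this subtracted interval
            yield _Interval(remainder_start, sub_start)
        remainder_start = sub_stop
    if sub_stop is not None and sub_stop < stop:
        # trailing remainder after the last subtracted interval
        yield _Interval(sub_stop, stop)

print(list(subtract(0, 10, [(2, 4), (6, 7)])))
# [_Interval(start=0, stop=2), _Interval(start=4, stop=6), _Interval(start=7, stop=10)]
```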
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rowgroupmap(table, key, mapper, header=None, presorted=False, buffersize=None, tempdir=None, cache=True):
""" Group rows under the given key then apply `mapper` to yield zero or more output rows for each input group of rows. """ |
return RowGroupMapView(table, key, mapper, header=header,
presorted=presorted,
buffersize=buffersize, tempdir=tempdir, cache=cache) |
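The group-then-map idea above can be sketched with `itertools.groupby` on pre-sorted rows; the `total` mapper below is a hypothetical example that emits one summary row per key:

```python
from itertools import groupby
from operator import itemgetter

# Stdlib sketch of rowgroupmap: group rows (already sorted by key),
# then let a mapper yield zero or more output rows per group.
def row_group_map(rows, keyidx, mapper):
    for k, grp in groupby(rows, key=itemgetter(keyidx)):
        for outrow in mapper(k, list(grp)):
            yield outrow

def total(key, rows):
    # example mapper: one summary row per key
    yield (key, sum(r[1] for r in rows))

data = [('a', 1), ('a', 2), ('b', 5)]
print(list(row_group_map(data, 0, total)))  # [('a', 3), ('b', 5)]
```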
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rowlenselect(table, n, complement=False):
"""Select rows of length `n`.""" |
where = lambda row: len(row) == n
return select(table, where, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectop(table, field, value, op, complement=False):
"""Select rows where the function `op` applied to the given field and the given value returns `True`.""" |
return select(table, field, lambda v: op(v, value),
complement=complement) |
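The delegation above fixes `value` as the second operand of `op` to build a one-argument predicate. A stdlib sketch of the same shape, with dicts standing in for petl rows:

```python
import operator

# Sketch of the selectop pattern: build a predicate from a binary
# operator and a fixed value, then filter on a single field.
def select_op(rows, field, value, op, complement=False):
    where = lambda row: op(row[field], value)
    if complement:
        return [r for r in rows if not where(r)]
    return [r for r in rows if where(r)]

rows = [{'n': 1}, {'n': 5}, {'n': 9}]
print(select_op(rows, 'n', 5, operator.ge))                   # [{'n': 5}, {'n': 9}]
print(select_op(rows, 'n', 5, operator.ge, complement=True))  # [{'n': 1}]
```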
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selecteq(table, field, value, complement=False):
"""Select rows where the given field equals the given value.""" |
return selectop(table, field, value, operator.eq, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectne(table, field, value, complement=False):
"""Select rows where the given field does not equal the given value.""" |
return selectop(table, field, value, operator.ne, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectlt(table, field, value, complement=False):
"""Select rows where the given field is less than the given value.""" |
value = Comparable(value)
return selectop(table, field, value, operator.lt, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectle(table, field, value, complement=False):
"""Select rows where the given field is less than or equal to the given value.""" |
value = Comparable(value)
return selectop(table, field, value, operator.le, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectgt(table, field, value, complement=False):
"""Select rows where the given field is greater than the given value.""" |
value = Comparable(value)
return selectop(table, field, value, operator.gt, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectge(table, field, value, complement=False):
"""Select rows where the given field is greater than or equal to the given value.""" |
value = Comparable(value)
return selectop(table, field, value, operator.ge, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectcontains(table, field, value, complement=False):
"""Select rows where the given field contains the given value.""" |
return selectop(table, field, value, operator.contains,
complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectin(table, field, value, complement=False):
"""Select rows where the given field is a member of the given value.""" |
return select(table, field, lambda v: v in value,
complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectnotin(table, field, value, complement=False):
"""Select rows where the given field is not a member of the given value.""" |
return select(table, field, lambda v: v not in value,
complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectis(table, field, value, complement=False):
"""Select rows where the given field `is` the given value.""" |
return selectop(table, field, value, operator.is_, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectisnot(table, field, value, complement=False):
"""Select rows where the given field `is not` the given value.""" |
return selectop(table, field, value, operator.is_not, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectisinstance(table, field, value, complement=False):
"""Select rows where the given field is an instance of the given type.""" |
return selectop(table, field, value, isinstance, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectrangeopenleft(table, field, minv, maxv, complement=False):
"""Select rows where the given field is greater than or equal to `minv` and less than `maxv`.""" |
minv = Comparable(minv)
maxv = Comparable(maxv)
return select(table, field, lambda v: minv <= v < maxv,
complement=complement) |
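The half-open semantics in the docstring above (lower bound included, upper bound excluded) can be checked directly:

```python
# The range test used by selectrangeopenleft: minv <= v < maxv.
in_range = lambda v, minv, maxv: minv <= v < maxv

print(in_range(2, 2, 5))  # True  (minv is included)
print(in_range(5, 2, 5))  # False (maxv is excluded)
```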