<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's first download an example file with some CTD data
Step2: The profile dPIRX003.cnv was loaded with the default rule cnv.yaml
Step3: Or for an overview of all the attributes and data
Step4: The data
Step5: Each variable is returned as a masked array, so all values equal to profile.attributes['bad_flag'] come back as masked entries
Step6: As a regular masked array, let's check the mean and standard deviation between the two temperature sensors
Step7: We can also export the data into a pandas DataFrame for easier data manipulation later on
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from seabird.cnv import fCNV
!wget https://raw.githubusercontent.com/castelao/seabird/master/sampledata/CTD/dPIRX003.cnv
profile = fCNV('dPIRX003.cnv')
print ("The profile coordinates is latitude: %.4f, and longitude: %.4f" % \
(profile.attributes['LATITUDE'], profile.attributes['LONGITUDE']))
print("Header: %s" % profile.attributes.keys())
print(profile.attributes)
print(profile.keys())
profile['TEMP2'][:25]
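# Quick check of the masked-array claim above (illustrative): values equal
# to the bad_flag attribute come back masked.
print(profile.attributes['bad_flag'])
print(profile['TEMP2'][:25].mask)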
print(profile['TEMP'].mean(), profile['TEMP'].std())
print(profile['TEMP2'].mean(), profile['TEMP2'].std())
from matplotlib import pyplot as plt
plt.plot(profile['TEMP'], profile['PRES'],'b')
plt.plot(profile['TEMP2'], profile['PRES'],'g')
plt.gca().invert_yaxis()
plt.xlabel('temperature')
plt.ylabel('pressure [dbar]')
plt.title(profile.attributes['filename'])
df = profile.as_DataFrame()
df.head()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear regression without cross-validation
Step2: Transform data structure
Step3: Build linear regression model
Step4: Fit the model
Step5: Prediction
Step6: Model evaluation
Step7: Compare results with R
Step8: Build cross-validation model
Step9: Fit cross-validation model
Step10: Prediction
Step11: Evaluation
Step12: Intercept and coefficients
Step13: Get parameter values from the best model
<ASSISTANT_TASK:>
Python Code:
from pyspark import SparkContext
sc = SparkContext(master = 'local')
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
ad = spark.read.csv('data/Advertising.csv', header=True, inferSchema=True)
ad.show(5)
from pyspark.ml.linalg import Vectors
ad_df = ad.rdd.map(lambda x: [Vectors.dense(x[0:3]), x[-1]]).toDF(['features', 'label'])
ad_df.show(5)
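# An equivalent route (sketch) that stays in the DataFrame API: build the
# 'features' column with VectorAssembler instead of mapping over the RDD.
# This assumes the last column of `ad` is the label, as in the map above.
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(inputCols=ad.columns[:3], outputCol='features')
ad_df2 = assembler.transform(ad).withColumnRenamed(ad.columns[-1], 'label').select('features', 'label')
ad_df2.show(5)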
from pyspark.ml.regression import LinearRegression
lr = LinearRegression(featuresCol = 'features', labelCol = 'label')
lr_model = lr.fit(ad_df)
pred = lr_model.transform(ad_df)
pred.show(5)
from pyspark.ml.evaluation import RegressionEvaluator
evaluator = RegressionEvaluator(predictionCol='prediction', labelCol='label')
evaluator.setMetricName('r2').evaluate(pred)
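# The same evaluator can report other metrics supported by
# RegressionEvaluator, e.g. the root mean squared error:
evaluator.setMetricName('rmse').evaluate(pred)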
training, test = ad_df.randomSplit([0.8, 0.2], seed=123)
##=====build cross-validation model======
# estimator
lr = LinearRegression(featuresCol = 'features', labelCol = 'label')
# parameter grid
from pyspark.ml.tuning import ParamGridBuilder
param_grid = ParamGridBuilder().\
addGrid(lr.regParam, [0, 0.5, 1]).\
addGrid(lr.elasticNetParam, [0, 0.5, 1]).\
build()
# evaluator
evaluator = RegressionEvaluator(predictionCol='prediction', labelCol='label', metricName='r2')
# cross-validation model
from pyspark.ml.tuning import CrossValidator
cv = CrossValidator(estimator=lr, estimatorParamMaps=param_grid, evaluator=evaluator, numFolds=4)
cv_model = cv.fit(training)
pred_training_cv = cv_model.transform(training)
pred_test_cv = cv_model.transform(test)
# performance on training data
evaluator.setMetricName('r2').evaluate(pred_training_cv)
# performance on test data
evaluator.setMetricName('r2').evaluate(pred_test_cv)
print('Intercept: ', cv_model.bestModel.intercept, "\n",
'coefficients: ', cv_model.bestModel.coefficients)
print('best regParam: ' + str(cv_model.bestModel._java_obj.getRegParam()) + "\n" +
'best ElasticNetParam:' + str(cv_model.bestModel._java_obj.getElasticNetParam()))
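# Average cross-validation metric (r2 here) for each parameter-grid
# combination, exposed by the CrossValidatorModel API:
print(cv_model.avgMetrics)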
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
d = {'l': ['left', 'right', 'left', 'right', 'left', 'right'],
'r': ['right', 'left', 'right', 'left', 'right', 'left'],
'v': [-1, 1, -1, 1, -1, np.nan]}
df = pd.DataFrame(d)
def g(df):
return df.groupby('r')['v'].apply(pd.Series.sum,skipna=False)
result = g(df.copy())
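# For comparison (illustrative): pandas' default GroupBy sum skips NaN,
# while the skipna=False version above propagates it into the result.
print(df.groupby('r')['v'].sum())  # NaN is skipped
print(result)                      # the group containing NaN sums to NaN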
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Get Cloud Project ID
Step2: 3. Get Client Credentials
Step3: 4. Enter Column Mapping Parameters
Step4: 5. Execute Column Mapping
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'sheet': '',
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'name': 'sheet', 'kind': 'string', 'order': 1, 'default': ''}},
'tab': {'field': {'name': 'tab', 'kind': 'string', 'order': 2, 'default': ''}},
'in': {
'dataset': {'field': {'name': 'in_dataset', 'kind': 'string', 'order': 3, 'default': ''}},
'table': {'field': {'name': 'in_table', 'kind': 'string', 'order': 4, 'default': ''}}
},
'out': {
'dataset': {'field': {'name': 'out_dataset', 'kind': 'string', 'order': 7, 'default': ''}},
'view': {'field': {'name': 'out_view', 'kind': 'string', 'order': 8, 'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Step4: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step5: Hyperparameters
Step6: Write out the graph for TensorBoard
Step7: Training
Step8: Sampling
<ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    '''
    Split character data into training and validation sets, inputs and targets for each set.

    Arguments
    ---------
    chars: character array
    batch_size: Number of sequences in each batch
    num_steps: Number of sequence steps to keep in the input and pass to the network
    split_frac: Fraction of batches to keep in the training set

    Returns train_x, train_y, val_x, val_y
    '''
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
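# Sanity check (illustrative): each batch yielded by get_batch should have
# shape batch_size x num_steps; with the split above that is (10, 200).
xb, yb = next(get_batch([train_x, train_y], 200))
print(xb.shape, yb.shape)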
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
    # Now connect the RNN outputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
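# Toy check of pick_top_n (illustrative): with top_n=2 the sampled index
# must come from the two highest-probability entries (indices 1 and 3 here).
toy_preds = np.array([[0.10, 0.50, 0.05, 0.30, 0.05]])
print(pick_top_n(toy_preds, vocab_size=5, top_n=2))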
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pull out the photometry of the nearest source matched in the photometric catalog.
Step2: Scaling the spectrum to the photometry
Step3: An offset is clearly visible between the spectrum and the photometry. In the next step, we use an internal function to compute a correction factor to bring the spectrum in line with the photometry.
Step4: The function computed a multiplicative factor 10/8.9=1.1 to apply to the spectrum, which now falls nicely on top of the photometry. Next we fit for a first-order linear scaling (normalization and slope).
Step5: A bit of an improvement. But be careful with corrections with order>0 if there is limited photometry available in bands that overlap the spectrum.
Step6: Redshift PDF
Step7: Grizli internal photometry
Step8: Compare chi-squared
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import astropy.io.fits as pyfits
import drizzlepac
import grizli
from grizli.pipeline import photoz
from grizli import utils, prep, multifit, fitting
utils.set_warnings()
print('\n Grizli version: ', grizli.__version__)
# Requires eazy-py: https://github.com/gbrammer/eazy-py
import eazy
# Run in the directory where you ran the Grizli-Pipeline notebook and
# extracted spectra of two objects
root = 'j0332m2743'
os.chdir('{0}/Extractions/'.format(root))
# Fetch 3D-HST catalogs
if not os.path.exists('goodss_3dhst.v4.1.cats.tar.gz'):
os.system('wget https://archive.stsci.edu/missions/hlsp/3d-hst/RELEASE_V4.0/Photometry/GOODS-S/goodss_3dhst.v4.1.cats.tar.gz')
os.system('tar xzvf goodss_3dhst.v4.1.cats.tar.gz')
# Preparation for eazy-py
eazy.symlink_eazy_inputs(path=os.path.dirname(eazy.__file__)+'/data',
path_is_env=False)
### Initialize **eazy.photoz** object
field = 'goodss'
version = 'v4.1'
params = {}
params['CATALOG_FILE'] = '{0}_3dhst.{1}.cats/Catalog/{0}_3dhst.{1}.cat'.format(field, version)
params['Z_STEP'] = 0.002
params['Z_MAX'] = 10
params['MAIN_OUTPUT_FILE'] = '{0}_3dhst.{1}.eazypy'.format(field, version)
params['PRIOR_FILTER'] = 205
# Galactic extinction
params['MW_EBV'] = {'aegis':0.0066, 'cosmos':0.0148, 'goodss':0.0069,
'uds':0.0195, 'goodsn':0.0103}[field]
params['TEMPLATES_FILE'] = 'templates/fsps_full/tweak_fsps_QSF_12_v3.param'
translate_file = '{0}_3dhst.{1}.cats/Eazy/{0}_3dhst.{1}.translate'.format(field, version)
ez = eazy.photoz.PhotoZ(param_file=None, translate_file=translate_file,
zeropoint_file=None, params=params,
load_prior=True, load_products=False)
## Grism fitting arguments created in Grizli-Pipeline
args = np.load('fit_args.npy')[0]
## First-pass redshift templates, similar to the eazy templates but
## with separate emission lines
t0 = args['t0']
#############
## Make a helper object for generating photometry in a format that grizli
## understands.
## Passing the parameters precomputes a function to quickly interpolate
## the templates through the broad-band filters. It's not required,
## but makes the fitting much faster.
##
## `zgrid` defaults to ez.zgrid, be explicit here to show you can
## change it.
phot_obj = photoz.EazyPhot(ez, grizli_templates=t0, zgrid=ez.zgrid)
### Find IDs of specific objects to extract, same ones from the notebook
import astropy.units as u
tab = utils.GTable()
tab['ra'] = [53.0657456, 53.0624459]
tab['dec'] = [-27.720518, -27.707018]
# Internal grizli catalog
gcat = utils.read_catalog('{0}_phot.fits'.format(root))
idx, dr = gcat.match_to_catalog_sky(tab)
source_ids = gcat['number'][idx]
tab['id'] = source_ids
tab['dr'] = dr.to(u.mas)
tab['dr'].format='.1f'
tab.show_in_notebook()
## Find indices in the 3D-HST photometric catalog
idx3, dr3 = ez.cat.match_to_catalog_sky(tab)
## Run the photozs just for comparison. Not needed for the grism fitting
## but the photozs and SEDs give you a check that the photometry looks
## reasonable
ez.param['VERBOSITY'] = 1.
ez.fit_parallel(idx=idx3, verbose=False)
# or could run on the whole catalog by not specifying `idx`
# Show SEDs with best-fit templates and p(z)
for ix in idx3:
ez.show_fit(ix, id_is_idx=True)
### Spline templates for dummy grism continuum fits
wspline = np.arange(4200, 2.5e4)
Rspline = 50
df_spl = len(utils.log_zgrid(zr=[wspline[0], wspline[-1]], dz=1./Rspline))
tspline = utils.bspline_templates(wspline, df=df_spl+2, log=True, clip=0.0001)
i=1 # red galaxy
id=tab['id'][i]
ix = idx3[i]
## This isn't necessary for general fitting, but
## load the grism spectrum here for demonstrating the grism/photometry scaling
beams_file = '{0}_{1:05d}.beams.fits'.format(args['group_name'], id)
mb = multifit.MultiBeam(beams_file, MW_EBV=args['MW_EBV'],
fcontam=args['fcontam'], sys_err=args['sys_err'],
group_name=args['group_name'])
# Generate the `phot` dictionary
phot, ii, dd = phot_obj.get_phot_dict(mb.ra, mb.dec)
label = "3DHST Catalog ID: {0}, dr={1:.2f}, zphot={2:.3f}"
print(label.format(ez.cat['id'][ii], dd, ez.zbest[ii]))
print('\n`phot` keys:', list(phot.keys()))
for k in phot:
print('\n'+k+':\n', phot[k])
# Initialize photometry for the MultiBeam object
mb.set_photometry(**phot)
# parametric template fit to get reasonable background
sfit = mb.template_at_z(templates=tspline, fit_background=True,
include_photometry=False)
fig = mb.oned_figure(tfit=sfit)
ax = fig.axes[0]
ax.errorbar(mb.photom_pivot/1.e4, mb.photom_flam/1.e-19,
mb.photom_eflam/1.e-19,
marker='s', color='k', alpha=0.4, linestyle='None',
label='3D-HST photometry')
ax.legend(loc='upper left', fontsize=8)
## First example: no rescaling
z_phot = ez.zbest[ix]
# Reset scale parameter
if hasattr(mb,'pscale'):
delattr(mb, 'pscale')
t1 = args['t1']
tfit = mb.template_at_z(z=z_phot)
print('No rescaling, chi-squared={0:.1f}'.format(tfit['chi2']))
fig = fitting.full_sed_plot(mb, tfit, zfit=None, bin=4)
# Reset scale parameter
if hasattr(mb,'pscale'):
delattr(mb, 'pscale')
# Template rescaling, simple multiplicative factor
scl = mb.scale_to_photometry(order=0)
# has funny units of polynomial coefficients times 10**power,
# see `grizli.fitting.GroupFitter.compute_scale_array`
# Scale value is the inverse, so, e.g.,
# scl.x = [8.89] means scale the grism spectrum by 10/8.89=1.12
print(scl.x)
mb.pscale = scl.x
# Redo template fit
tfit = mb.template_at_z(z=z_phot)
print('Simple scaling, chi-squared={0:.1f}'.format(tfit['chi2']))
fig = fitting.full_sed_plot(mb, tfit, zfit=None, bin=4)
# Reset scale parameter
if hasattr(mb,'pscale'):
delattr(mb, 'pscale')
# Template rescaling, linear fit
scl = mb.scale_to_photometry(order=1)
# has funny units of polynomial coefficients times 10**power,
# see `grizli.fitting.GroupFitter.compute_scale_array`
# Scale value is the inverse, so, e.g.,
# scl.x = [8.89] means scale the grism spectrum by 10/8.89=1.12
print(scl.x)
mb.pscale = scl.x
# Redo template fit
tfit = mb.template_at_z(z=z_phot)
print('Linear scaling, chi-squared={0:.1f}'.format(tfit['chi2']))
fig = fitting.full_sed_plot(mb, tfit, zfit=None, bin=4)
# Now run the full redshift fit script with the photometry, which will also do the scaling
order=1
fitting.run_all_parallel(id, phot=phot, verbose=False,
scale_photometry=order+1, zr=[1.5, 2.4])
zfit = pyfits.open('{0}_{1:05d}.full.fits'.format(root, id))
z_grism = zfit['ZFIT_STACK'].header['Z_MAP']
print('Best redshift: {0:.4f}'.format(z_grism))
# Compare PDFs
pztab = utils.GTable.gread(zfit['ZFIT_STACK'])
plt.plot(pztab['zgrid'], pztab['pdf'], label='grism+3D-HST')
plt.plot(ez.zgrid, ez.pz[ix,:], label='photo-z')
plt.semilogy()
plt.xlim(z_grism-0.05, z_grism+0.05); plt.ylim(1.e-2, 1000)
plt.xlabel(r'$z$'); plt.ylabel(r'$p(z)$')
plt.grid()
plt.legend()
import grizli.pipeline.photoz
# The catalog is automatically generated with a number of aperture sizes. A total
# correction is computed in the detection band, usually a weighted sum of all available
# WFC3/IR filters, with the correction as the ratio between the aperture flux and the
# flux over the isophotal segment region, the 'flux' column in the SEP catalog.
aper_ix = 1
total_flux_column = 'flux'
# Get external photometry from Vizier
get_external_photometry = True
# Set `object_only=True` to generate the `eazy.photoz.Photoz` object from the
# internal photometric catalog without actually running the photo-zs on the catalog
# with few bands.
int_ez = grizli.pipeline.photoz.eazy_photoz(root, force=False, object_only=True,
apply_background=True, aper_ix=aper_ix,
apply_prior=True, beta_prior=True,
get_external_photometry=get_external_photometry,
external_limits=3, external_sys_err=0.3, external_timeout=300,
sys_err=0.05, z_step=0.01, z_min=0.01, z_max=12,
total_flux=total_flux_column)
# Available apertures
for k in int_ez.cat.meta:
if k.startswith('APER'):
print('Aperture {0}: R={1:>4.1f} pix = {2:>4.2f}"'.format(k, int_ez.cat.meta[k],
int_ez.cat.meta[k]*0.06))
#
k = 'APER_{0}'.format(aper_ix)
print('\nAperture used {0}: R={1:>4.1f} pix = {2:>4.2f}"'.format(k, int_ez.cat.meta[k],
int_ez.cat.meta[k]*0.06))
# Integrate the grism templates through the filters on the redshift grid.
int_phot_obj = photoz.EazyPhot(int_ez, grizli_templates=t0, zgrid=int_ez.zgrid)
# Reset scale parameter
if hasattr(mb,'pscale'):
delattr(mb, 'pscale')
# Show the SED
int_phot, ii, dd = int_phot_obj.get_phot_dict(mb.ra, mb.dec)
mb.unset_photometry()
mb.set_photometry(**int_phot)
tfit = mb.template_at_z(z=z_phot)
print('No rescaling, chi-squared={0:.1f}'.format(tfit['chi2']))
fig = fitting.full_sed_plot(mb, tfit, zfit=None, bin=4)
fig.axes[0].set_ylim(-5,80)
# Run the grism fit with the direct image photometry
# Note that here we show that you can just pass the full photometry object
# and the script will match the nearest photometric entry to the grism object.
order=0
fitting.run_all_parallel(id, phot_obj=int_phot_obj, verbose=False,
scale_photometry=order+1, zr=[1.5, 2.4])
zfit2 = pyfits.open('{0}_{1:05d}.full.fits'.format(root, id))
pztab2 = utils.GTable.gread(zfit2['ZFIT_STACK'])
plt.plot(ez.zgrid, ez.fit_chi2[ix,:] - ez.fit_chi2[ix,:].min(),
label='3D-HST photo-z')
plt.plot(pztab['zgrid'], pztab['chi2'] - pztab['chi2'].min(),
label='grism + 3D-HST photometry')
plt.plot(pztab2['zgrid'], pztab2['chi2'] - pztab2['chi2'].min(),
label='grism + direct image photometry')
plt.legend()
plt.xlabel('z'); plt.ylabel(r'$\chi^2$')
plt.xlim(1.5, 2.4); plt.ylim(-200, 4000); plt.grid()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CC
Step2: text.similar(w) finds words that appear in similar contexts to w
Step3: words similar to 'bought' -- mostly verbs
Step4: words similar to 'over' -- mostly prepositions
Step5: Tagged Corpora
Step6: Reading Tagged Corpora
Step7: not all corpora have the same tag sets, so can force mapping to universal tagset
Step8: Tagged corpora in other languages...
Step9: A Universal Part-of-Speech tagset
Step10: Nouns
Step11: Verbs
Step12: words and tags are paired. treat the word as a condition and the tag as an event and create a cfd
Step13: reverse the order... conditions are now the tags
Step14: Adjectives and Adverbs
Step15: instead, use tagged_words() to look at POS tags
Step16: The POS tags that most often follow 'often' are verbs
Step17: Identifying words that have ambiguous POS tags
Step18: Mapping Words to Properties Using Python Dictionaries
Step19: Example
Step20: this "tagger" performs poorly, of course
Step21: Default taggers are still useful because after processing several thousand words of English text, most new words will be nouns
Step22: The Lookup Tagger
Step23: ~46% of tags were correct, just based on the 100 most frequent words
Step24: lots of None tags. These are words that were not in the 100 most frequent words.
Step25: better performance... 58% vs 46% without backoff!
Step26: N-Gram Tagging
Step27: Separating Training and Testing Data -- train/test split
Step28: General N-Gram Tagging -- NgramTagger class
Step29: Lots of None tags! The BigramTagger never saw a lot of word pairs during training
Step30: as N-Gram 'N' increases, data sparsity increases -- tradeoff between precision and recall
Step31: Tagging Unknown Words
Step32: ~5% of trigrams are ambiguous
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import nltk
from nltk import word_tokenize
text = word_tokenize("And now for something completely different")
nltk.pos_tag(text)
text = word_tokenize("They refuse to permit us to obtain the refuse permit")
nltk.pos_tag(text)
nltk.help.upenn_tagset()
text = nltk.Text(word.lower() for word in nltk.corpus.brown.words())
text.similar('woman')
text.similar('bought')
text.similar('over')
text.similar('the')
tagged_token = nltk.tag.str2tuple('fly/NN')
tagged_token
sent = '''
The/AT grand/JJ jury/NN commented/VBD on/IN a/AT number/NN of/IN
other/AP topics/NNS ,/, AMONG/IN them/PPO the/AT Atlanta/NP and/CC
Fulton/NP-tl County/NN-tl purchasing/VBG departments/NNS which/WDT it/PPS
said/VBD ``/`` ARE/BER well/QL operated/VBN and/CC follow/VB generally/RB
accepted/VBN practices/NNS which/WDT inure/VB to/IN the/AT best/JJT
interest/NN of/IN both/ABX governments/NNS ''/'' ./.
'''
[nltk.tag.str2tuple(t) for t in sent.split()]
nltk.corpus.brown.tagged_words()
nltk.corpus.brown.tagged_words(tagset='universal')
print(nltk.corpus.nps_chat.tagged_words())
nltk.corpus.conll2000.tagged_words()
nltk.corpus.treebank.tagged_words()
nltk.corpus.brown.tagged_words(tagset='universal')
nltk.corpus.treebank.tagged_words(tagset='universal')
nltk.corpus.sinica_treebank.tagged_words()
nltk.corpus.indian.tagged_words()
nltk.corpus.mac_morpho.tagged_words()
nltk.corpus.conll2002.tagged_words()
nltk.corpus.cess_cat.tagged_words()
brn = nltk.corpus.brown
brown_news_tagged = brn.tagged_words(categories='news', tagset='universal')
tag_fd = nltk.FreqDist(tag for (word, tag) in brown_news_tagged)
tag_fd.most_common()
plot = plt.figure(figsize=(18,10))
tag_fd.plot(cumulative=True)
tag_fd.plot()
word_tag_pairs = nltk.bigrams(brown_news_tagged)
list(word_tag_pairs)[:20]
word_tag_pairs = nltk.bigrams(brown_news_tagged) # generator needs to be redefined
noun_preceders = [a[1] for a, b in word_tag_pairs if b[1] == 'NOUN']
noun_preceders[:20]
fdist = nltk.FreqDist(noun_preceders)
[tag for tag, _ in fdist.most_common()]
fdist.plot(cumulative=True)
wsj = nltk.corpus.treebank.tagged_words(tagset='universal')
word_tag_fd = nltk.FreqDist(wsj)
[wt[0] for wt, _ in word_tag_fd.most_common() if wt[1] == 'VERB'][:25]
cfd1 = nltk.ConditionalFreqDist(wsj)
cfd1['yield'].most_common()
cfd1['cut'].most_common()
cfd2 = nltk.ConditionalFreqDist((tag, word) for (word, tag) in wsj)
list(cfd2['ADJ'])[:25]
brown_learned_text = brn.words(categories='learned')
sorted(set(b for a, b in nltk.bigrams(brown_learned_text) if a == 'often'))
brown_lrnd_tagged = brn.tagged_words(categories='learned', tagset='universal')
brown_lrnd_tagged
tags = [b[1] for a, b in nltk.bigrams(brown_lrnd_tagged) if a[0] == 'often']
tags[:5]
fd = nltk.FreqDist(tags)
fd.tabulate()
def process(sentence):
for (w1, t1), (w2, t2), (w3, t3) in nltk.trigrams(sentence):
if (t1.startswith('V') and t2 == 'TO' and t3.startswith('V')):
print(w1, w2, w3)
for tagged_sent in brn.tagged_sents()[:200]:
process(tagged_sent)
brown_news_tagged = brn.tagged_words(categories='news', tagset='universal')
data = nltk.ConditionalFreqDist((word.lower(), tag)
for word, tag in brown_news_tagged)
for word in sorted(data.conditions()):
if len(data[word]) > 3:
tags = [tag for tag, _ in data[word].most_common()]
print(word, ' '.join(tags))
from nltk.corpus import brown
brown_tagged_sents = brown.tagged_sents(categories='news')
brown_sents = brown.sents(categories='news')
brown_sents[:1]
raw = 'I do not like green eggs and ham, I do not like them Sam I am!'
tokens = word_tokenize(raw)
default_tagger = nltk.DefaultTagger('NN')
default_tagger.tag(tokens)
default_tagger.evaluate(brown_tagged_sents)
patterns = [
(r'.*ing$', 'VBG'), # gerunds
(r'.*ed$', 'VBD'), # simple past
(r'.*es$', 'VBZ'), # 3rd singular present
(r'.*ould$', 'MD'), # modals
(r'.*\'s$', 'NN$'), # possessive nouns
(r'.*s$', 'NNS'), # plural nouns
    (r'^-?[0-9]+(\.[0-9]+)?$', 'CD'), # cardinal numbers
(r'.*', 'NN') # nouns (default)
]
brown_sents[3]
regexp_tagger = nltk.RegexpTagger(patterns)
regexp_tagger.tag(brown_sents[3])
regexp_tagger.evaluate(brown_tagged_sents)
fd = nltk.FreqDist(brown.words(categories='news'))
cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories='news'))
most_freq_words = fd.most_common(100)
likely_tags = dict((word, cfd[word].max()) for word, _ in most_freq_words)
baseline_tagger = nltk.UnigramTagger(model=likely_tags)
baseline_tagger.evaluate(brown_tagged_sents)
sent = brown.sents(categories='news')[3]
sent
baseline_tagger.tag(sent)
baseline_tagger = nltk.UnigramTagger(model=likely_tags, backoff=nltk.DefaultTagger('NN'))
baseline_tagger.evaluate(brown_tagged_sents)
def performance(cfd, wordlist):
''' return evaluated performance value (0 to 1) for input cfd, wordlist
'''
lt = dict((word, cfd[word].max()) for word in wordlist)
baseline_tagger = nltk.UnigramTagger(model=lt, backoff=nltk.DefaultTagger('NN'))
return baseline_tagger.evaluate(brown.tagged_sents(categories='news'))
word_freqs = nltk.FreqDist(brown.words(categories='news')).most_common()
words_by_freq = [w for w, _ in word_freqs]
words_by_freq[:25]
cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories='news'))
sizes = 2 ** np.arange(15)
sizes
perfs = [performance(cfd, words_by_freq[:size]) for size in sizes]
perfs
plt.figure(figsize=(12,8))
plt.plot(sizes, perfs, marker='o')
plt.title('Lookup Tagger Performance with Varying Model Size')
plt.xlabel('Model Size')
plt.ylabel('Performance')
plt.show()
from nltk.corpus import brown
brown_tagged_sents = brown.tagged_sents(categories='news')
brown_sents = brown.sents(categories='news')
unigram_tagger = nltk.UnigramTagger(brown_tagged_sents)
unigram_tagger.tag(brown_sents[2007])
split_size = int(len(brown_tagged_sents) * 0.9)
split_size
train_sents = brown_tagged_sents[:split_size]
test_sents = brown_tagged_sents[split_size:]
unigram_tagger = nltk.UnigramTagger(train_sents)
unigram_tagger.evaluate(test_sents)
bigram_tagger = nltk.BigramTagger(train_sents)
bigram_tagger.tag(brown_sents[2007])
unseen_sent = brown_sents[4203]
bigram_tagger.tag(unseen_sent)
bigram_tagger.evaluate(test_sents)
t0 = nltk.DefaultTagger('NN')
t1 = nltk.UnigramTagger(train_sents, backoff=t0)
t2 = nltk.BigramTagger(train_sents, backoff=t1)
t2.evaluate(test_sents)
t3 = nltk.TrigramTagger(train_sents, backoff=t2)
t3.evaluate(test_sents)
cfd = nltk.ConditionalFreqDist(((x[1], y[1], z[0]), z[1])
for sent in brown_tagged_sents
for x, y, z in nltk.trigrams(sent))
ambiguous_contexts = [c for c in cfd.conditions() if len(cfd[c]) > 1]
ambiguous_contexts[:10]
sum(cfd[c].N() for c in ambiguous_contexts) / cfd.N()
test_tags = [tag for sent in brown.sents(categories='editorial')
for word, tag in t2.tag(sent)]
gold_tags = [tag for word, tag in brown.tagged_words(categories='editorial')]
nltk.ConfusionMatrix(gold_tags, test_tags)
print(nltk.ConfusionMatrix(gold_tags, test_tags))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Capital Bikeshare station information .xml file
Step2: create dictionary of bikeshare station (key) and its location (value)
Step3: save dictionary of bikeshare stations to pickle file
<ASSISTANT_TASK:>
Python Code:
import pickle
import xml.etree.ElementTree as ET
import urllib.request
xml_path = 'https://feeds.capitalbikeshare.com/stations/stations.xml'
tree = ET.parse(urllib.request.urlopen(xml_path))
root = tree.getroot()
station_location = dict()
for child in root:
tmp_lst = [float(child[4].text), float(child[5].text)]
station_location[child[1].text] = tmp_lst
station_location['10th & E St NW']
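# A sturdier alternative (sketch): look children up by tag name instead of
# position. The tag names 'name', 'lat' and 'long' are assumptions about
# the feed schema -- verify them against the XML before relying on this.
# station_location = {s.find('name').text: [float(s.find('lat').text),
#                                           float(s.find('long').text)]
#                     for s in root.iter('station')}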
pickle.dump( station_location, open( "bike_location.p", "wb" ) )
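# To load the saved dictionary back later (standard pickle round-trip):
station_location = pickle.load(open("bike_location.p", "rb"))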
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SF Main sequence with full sample
Step2: Statistics
Step3: Number with HI detections
Step4: Table 1
Step5: Figure 1
Step6: With BT cut
Step7: Figure 2
Step8: Figure 3
Step9: Figure 4
Step10: Figure 5
Step11: Figure 6
Step12: Figure 7
Step13: Figure 8
Step14: Figure 9
Step15: Figure 10
Step16: Figure 11
Step17: Figure 12 - cutting this
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import os
import sys
import warnings
warnings.filterwarnings('ignore')
import time
from scipy.stats import ks_2samp
from astropy.io import fits,ascii
from astropy.table import Table
from astropy.coordinates import SkyCoord
from astropy import units as u
homedir = os.getenv("HOME")
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --HIdef --minssfr -11.5 --cutBT --BT 0.3
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.6 --ellip 0.75 --HIdef --minssfr -11.5
b.plot_full_ms()
#xline = np.linspace(8,11,100)
#yline = 0.61*xline-6.20
#plt.plot(xline,yline)
# write the tables that will be used to make the latex table
os.chdir(homedir+'/research/LCS/tables/')
%run ~/github/LCS/python/lcs_paper2_v2.py --ellip 0.75 --minmass 9.7 --minssfr -11.5
b.ks_stats(massmatch=False)
print()
print()
print('##### WITH BT CUT #####')
print()
print()
%run ~/github/LCS/python/lcs_paper2_v2.py --ellip 0.75 --minmass 9.7 --minssfr -11.5 --cutBT --BT 0.3
b.ks_stats(massmatch=False)
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --HIdef --minssfr -11.5
print('number of core galaxies = ',sum(b.lcs_mass_sfr_flag & b.lcs.membflag))
print('\t with size measurements = ',sum(b.lcs_mass_sfr_flag & b.lcs.membflag & b.lcs.sampleflag))
print('number of infall galaxies = ',sum(b.lcs_mass_sfr_flag & b.lcs.infallflag))
print('\t with size measurements = ',sum(b.lcs_mass_sfr_flag & b.lcs.infallflag & b.lcs.sampleflag))
print('number of GSW galaxies = ',sum(b.gsw_mass_sfr_flag) )
print()
print('############# WITH BT CUT ###################')
print()
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --HIdef --minssfr -11.5 --cutBT --BT 0.3
print('number of core galaxies = ',sum(b.lcs_mass_sfr_flag & b.lcs.membflag))
print('\t with size measurements = ',sum(b.lcs_mass_sfr_flag & b.lcs.membflag & b.lcs.sampleflag))
print('number of infall galaxies = ',sum(b.lcs_mass_sfr_flag & b.lcs.infallflag))
print('\t with size measurements = ',sum(b.lcs_mass_sfr_flag & b.lcs.infallflag & b.lcs.sampleflag))
print('number of GSW galaxies = ',sum(b.gsw_mass_sfr_flag) )
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --HIdef --minssfr -11.5
print('number of core galaxies = ',sum(b.lcs_mass_sfr_flag & b.lcs.membflag & b.lcs.cat['HIdef_flag']))
print('number of infall galaxies = ',sum(b.lcs_mass_sfr_flag & b.lcs.infallflag & b.lcs.cat['HIdef_flag']))
print('number of GSW galaxies = ',sum(b.gsw_mass_sfr_flag & b.gsw.HIdef['HIdef_flag']) )
print()
print('############# WITH BT CUT ###################')
print()
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --HIdef --minssfr -11.5 --cutBT --BT 0.3
print('number of core galaxies = ',sum(b.lcs_mass_sfr_flag & b.lcs.membflag & b.lcs.cat['HIdef_flag']))
print('number of infall galaxies = ',sum(b.lcs_mass_sfr_flag & b.lcs.infallflag & b.lcs.cat['HIdef_flag']))
print('number of GSW galaxies = ',sum(b.gsw_mass_sfr_flag & b.gsw.HIdef['HIdef_flag']) )
%%time
os.chdir(homedir+'/research/LCS/tables/')
%run ~/github/LCS/python/writelatexstats.py
t = writetable()
t.read_tables()
t.open_output()
t.get_stats()
t.write_header()
t.write_data()
t.write_footer()
t.close_output()
%%time
os.chdir(homedir+'/research/LCS/tables/')
%run ~/github/LCS/python/writelatexstats-massmatch.py
t = writetable()
t.read_tables()
t.open_output()
t.get_stats()
t.write_header()
t.write_data()
t.write_footer()
t.close_output()
%%time
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --ellip 0.75 --minmass 9.7 --minssfr -11.5
mmatch = False
if mmatch:
flag = b.lcs.membflag
outfile1 = homedir+'/research/LCS/plots/lcscore-gsw-sfrmstar-BTcut1-e0p75-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/lcscore-gsw-sfrmstar-BTcut1-e0p75-mmatch.png'
b.plot_sfr_mstar(lcsflag=flag,coreflag=True,plotMS=True,plotlegend=False,outfile1=outfile1,outfile2=outfile2,massmatch=True,hexbinflag=True,marker2='s')
print("")
print("")
flag = b.lcs.infallflag
outfile1 = homedir+'/research/LCS/plots/lcsinfall-gsw-sfrmstar-BTcut1-e0p75-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/lcsinfall-gsw-sfrmstar-BTcut1-e0p75-mmatch.png'
b.plot_sfr_mstar(lcsflag=flag,label='Infall',outfile1=outfile1,outfile2=outfile2,coreflag=False,hexbinflag=True,massmatch=True)
else:
flag = b.lcs.membflag
outfile1 = homedir+'/research/LCS/plots/lcscore-gsw-sfrmstar-BTcut1-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/lcscore-gsw-sfrmstar-BTcut1-e0p75.png'
b.plot_sfr_mstar(lcsflag=flag,coreflag=True,plotMS=True,plotlegend=False,outfile1=outfile1,outfile2=outfile2,massmatch=False,hexbinflag=True,marker2='s')
print("")
print("")
flag = b.lcs.infallflag
outfile1 = homedir+'/research/LCS/plots/lcsinfall-gsw-sfrmstar-BTcut1-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/lcsinfall-gsw-sfrmstar-BTcut1-e0p75.png'
b.plot_sfr_mstar(lcsflag=flag,label='Infall',outfile1=outfile1,outfile2=outfile2,coreflag=False,hexbinflag=True,massmatch=False)
%%time
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --ellip 0.75 --minmass 9.7 --minssfr -11.5 --cutBT --BT 0.3
mmatch=False
if mmatch:
flag = b.lcs.membflag
outfile1 = homedir+'/research/LCS/plots/lcscore-gsw-sfrmstar-BTcut03-e0p75-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/lcscore-gsw-sfrmstar-BTcut03-e0p75-mmatch.png'
b.plot_sfr_mstar(lcsflag=flag,coreflag=True,plotMS=True,plotlegend=False,outfile1=outfile1,outfile2=outfile2,massmatch=True,hexbinflag=False,marker2='s')
print("")
print("")
flag = b.lcs.infallflag
outfile1 = homedir+'/research/LCS/plots/lcsinfall-gsw-sfrmstar-BTcut03-e0p75-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/lcsinfall-gsw-sfrmstar-BTcut03-e0p75-mmatch.png'
b.plot_sfr_mstar(lcsflag=flag,label='Infall',outfile1=outfile1,outfile2=outfile2,coreflag=False,hexbinflag=True,massmatch=True)
else:
flag = b.lcs.membflag
outfile1 = homedir+'/research/LCS/plots/lcscore-gsw-sfrmstar-BTcut03-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/lcscore-gsw-sfrmstar-BTcut03-e0p75.png'
b.plot_sfr_mstar(lcsflag=flag,coreflag=True,plotMS=True,plotlegend=False,outfile1=outfile1,outfile2=outfile2,massmatch=False,hexbinflag=False,marker2='s')
print("")
print("")
flag = b.lcs.infallflag
outfile1 = homedir+'/research/LCS/plots/lcsinfall-gsw-sfrmstar-BTcut03-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/lcsinfall-gsw-sfrmstar-BTcut03-e0p75.png'
b.plot_sfr_mstar(lcsflag=flag,label='Infall',outfile1=outfile1,outfile2=outfile2,coreflag=False,hexbinflag=True,massmatch=False)
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --minssfr -11.5
mmatch = False
if mmatch:
outfile1 = homedir+'/research/LCS/plots/delta-sfr-hist-BTcut1-e0p75-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/delta-sfr-hist-BTcut1-e0p75-mmatch.png'
b.plot_dsfr_hist(outfile1=outfile1,outfile2=outfile2,massmatch=True,nbins=15)
else:
outfile1 = homedir+'/research/LCS/plots/delta-sfr-hist-BTcut1-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/delta-sfr-hist-BTcut1-e0p75.png'
b.plot_dsfr_hist(outfile1=outfile1,outfile2=outfile2,massmatch=False,nbins=15)
1.5*.22
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --minssfr -11.5
mmatch=False
if mmatch:
outfile1 = homedir+'/research/LCS/plots/fsuppressed-btcut1-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/fsuppressed-btcut1-mmatch.png'
b.plot_frac_suppressed(massmatch=True)#outfile1=outfile1,outfile2=outfile2,nbins=12)
print()
print('##### WITH BT CUT ######')
print()
btcut = 0.3
b.plot_frac_suppressed(BTcut=btcut,plotsingle=False,massmatch=True)
plt.ylim(0,.4)
plt.legend([r'$\rm All$','_nolegend_','_nolegend_',r'$\rm B/T<0.3$'],loc='upper left')
plt.savefig(outfile1)
plt.savefig(outfile2)
else:
outfile1 = homedir+'/research/LCS/plots/fsuppressed-btcut1.pdf'
outfile2 = homedir+'/research/LCS/plots/fsuppressed-btcut1.png'
b.plot_frac_suppressed(massmatch=False)#outfile1=outfile1,outfile2=outfile2,nbins=12)
print()
print('##### WITH BT CUT ######')
print()
btcut = 0.3
b.plot_frac_suppressed(BTcut=btcut,plotsingle=False,massmatch=False)
plt.ylim(0,.4)
plt.legend([r'$\rm All$','_nolegend_','_nolegend_',r'$\rm B/T<0.3$'],loc='upper left')
plt.savefig(outfile1)
plt.savefig(outfile2)
os.chdir(homedir+'/research/LCS/plots/')
btmax=1
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --minssfr -11.5
mmatch=False
if mmatch:
b.compare_morph_mmatch(nbins=10,xmax=btmax,coreonly=False)#outfile1=outfile1,outfile2=outfile2,nbins=12)
outfile1 = homedir+'/research/LCS/plots/prop-lowsfr-lcsall-BTcut1-e0p75-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/prop-lowsfr-lcsall-BTcut1-e0p75-mmatch.png'
plt.savefig(outfile1)
plt.savefig(outfile2)
else:
b.compare_morph(nbins=10,xmax=btmax,coreonly=False)#outfile1=outfile1,outfile2=outfile2,nbins=12)
outfile1 = homedir+'/research/LCS/plots/prop-lowsfr-lcsall-BTcut1-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/prop-lowsfr-lcsall-BTcut1-e0p75.png'
plt.savefig(outfile1)
plt.savefig(outfile2)
%%time
os.chdir(homedir+'/research/LCS/plots/')
btmax=1
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip .75 --minssfr -11.5
b.compare_BT_lowsfr_field(nbins=12,BTmax=btmax)#outfile1=outfile1,outfile2=outfile2,nbins=12)
outfile1 = homedir+'/research/LCS/plots/morphhist-field-normal-lowsfr.pdf'
outfile2 = homedir+'/research/LCS/plots/morphhist-field-normal-lowsfr.png'
plt.savefig(outfile1)
plt.savefig(outfile2)
%%time
os.chdir(homedir+'/research/LCS/plots/')
btmax=1.1
nbins=8
#btmax=1
#nbins=7
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip 0.75 --minssfr -11.5
mmatch = False
if mmatch:
xvars,yvars = b.plot_dsfr_BT(nbins=nbins,xmax=btmax,writefiles=True,nsersic_cut=10,BTline=.3,mmatch=True)#outfile1=outfile1,outfile2=outfile2,nbins=12)
outfile1 = homedir+'/research/LCS/plots/dsfr-BTcut1-e0p75-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/dsfr-BTcut1-e0p75-mmatch.png'
plt.savefig(outfile1)
plt.savefig(outfile2)
else:
xvars,yvars = b.plot_dsfr_BT(nbins=nbins,xmax=btmax,writefiles=True,nsersic_cut=10,BTline=.3,mmatch=False)#outfile1=outfile1,outfile2=outfile2,nbins=12)
outfile1 = homedir+'/research/LCS/plots/dsfr-BTcut1-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/dsfr-BTcut1-e0p75.png'
plt.savefig(outfile1)
plt.savefig(outfile2)
%%time
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --cutBT --BT .3 --minmass 9.7 --ellip 0.75 --minssfr -11.5
mmatch=False
if mmatch:
outfile1 = homedir+'/research/LCS/plots/delta-sfr-hist-BTcut0p3-e0p75-mmatch.pdf'
outfile2 = homedir+'/research/LCS/plots/delta-sfr-hist-BTcut0p3-e0p75-mmatch.png'
b.plot_dsfr_hist(outfile1=outfile1,outfile2=outfile2,massmatch=True)
else:
outfile1 = homedir+'/research/LCS/plots/delta-sfr-hist-BTcut0p3-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/delta-sfr-hist-BTcut0p3-e0p75.png'
b.plot_dsfr_hist(outfile1=outfile1,outfile2=outfile2,massmatch=False)
%%time
# compare BT of low SFR field galaxies with BT distribution of mass-matched normal SF field galaxies
os.chdir(homedir+'/research/LCS/plots/')
btmax=.4
%run ~/github/LCS/python/lcs_paper2_v2.py --cutBT --BT 0.3 --minmass 9.7 --ellip 0.75 --minssfr -11.5
#b.compare_BT_lowsfr_field_core(nbins=10,coreonly=False,BTmax=btmax)
#outfile1 = homedir+'/research/LCS/plots/lowsfr-prop-lcsall-mmfield-BTcut0p4-e0p75.pdf'
#outfile2 = homedir+'/research/LCS/plots/lowsfr-prop-lcsall-mmfield-BTcut0p4-e0p75.png'
#plt.savefig(outfile1)
#plt.savefig(outfile2)
print()
print('TESTING')
print()
plt.figure(figsize=(12,6))
plt.subplots_adjust(wspace=.3,bottom=.2,hspace=.08)
print()
print('TESTING')
print()
b.compare_BT_lowsfr_field_core(nbins=10,coreonly=False,infallonly=True,BTmax=btmax,plotsingle=False,nrow=2,show_xlabel=False)
#outfile1 = homedir+'/research/LCS/plots/lowsfr-prop-lcsinfall-mmfield-BTcut0p4-e0p75.pdf'
#outfile2 = homedir+'/research/LCS/plots/lowsfr-prop-lcsinfall-mmfield-BTcut0p4-e0p75.png'
b.compare_BT_lowsfr_field_core(nbins=10,coreonly=True,BTmax=btmax,plotsingle=False,nrow=2,subplot_offset=4)
#outfile1 = homedir+'/research/LCS/plots/lowsfr-prop-lcscore-mmfield-BTcut0p4-e0p75.pdf'
#outfile2 = homedir+'/research/LCS/plots/lowsfr-prop-lcscore-mmfield-BTcut0p4-e0p75.png'
#plt.savefig(outfile1)
#plt.savefig(outfile2)
outfile1 = homedir+'/research/LCS/plots/lowsfr-prop-lcscore-infall-mmfield-BTcut0p3-e0p75.pdf'
outfile2 = homedir+'/research/LCS/plots/lowsfr-prop-lcscore-infall-mmfield-BTcut0p3-e0p75.png'
plt.savefig(outfile1)
plt.savefig(outfile2)
%%time
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --cutBT --BT 0.3 --minmass 9.7 --HIdef --minssfr -11.5
b.compare_HIdef()
%%time
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip .75 --HIdef --minssfr -11.5 --cutBT --BT 0.3
b.get_HIfrac_SFR_env(plotsingle=True)
figname1 = homedir+'/research/LCS/plots/frac-HI-SFR-env.png'
figname2 = homedir+'/research/LCS/plots/frac-HI-SFR-env.pdf'
plt.savefig(figname1)
plt.savefig(figname2)
%%time
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --BT .3 --cutBT --ellip .75 --HIdef --minssfr -11.5
figname1 = homedir+'/research/LCS/plots/lcs-dvdr-sfgals_2panel.png'
figname2 = homedir+'/research/LCS/plots/lcs-dvdr-sfgals_2panel.pdf'
b.plot_dvdr_sfgals_2panel(figname1=figname1,figname2=figname2,HIflag=True)
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --BT .4 --cutBT --ellip .75 --HIdef
figname1 = homedir+'/research/LCS/plots/frac-suppressed-Lx.png'
figname2 = homedir+'/research/LCS/plots/frac-suppressed-Lx.pdf'
b.frac_suppressed_Lx()#figname1=figname1,figname2=figname2,HIflag=True)
plt.savefig(figname1)
plt.savefig(figname2)
os.chdir(homedir+'/research/LCS/plots/')
%run ~/github/LCS/python/lcs_paper2_v2.py --minmass 9.7 --ellip .75 --HIdef
figname1 = homedir+'/research/LCS/plots/frac-HI-Lx.png'
figname2 = homedir+'/research/LCS/plots/frac-HI-Lx.pdf'
b.frac_HI_Lx()#figname1=figname1,figname2=figname2,HIflag=True)
plt.savefig(figname1)
plt.savefig(figname2)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Rotations
Step5: PCA
Step8: Fast Fourier Transformation
Step9: Save python object with pickle
Step10: Progress Bar
Step11: Check separations by histogram and scatter plot
Step13: Plot Cumulative Lift
Step15: GBM scikit-learn
Step17: Xgboost
Step18: LightGBM
Step21: Control plots
Step25: Tuning parameters of a model
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import precision_recall_curve
df = pd.read_csv("iris.csv")
def rotMat3D(a,r):
    '''Return the matrix that rotates vector a onto vector r; numpy arrays are required.'''
a = a/np.linalg.norm(a)
r = r/np.linalg.norm(r)
I = np.eye(3)
v = np.cross(a,r)
c = np.inner(a,r)
v_x = np.array([[0,-v[2],v[1]],[v[2],0,-v[0]],[-v[1],v[0],0]])
return I + v_x + np.matmul(v_x,v_x)/(1+c)
# example usage
z_old = np.array([0, 0, 1])
z = np.array([1, 1, 1])
R = rotMat3D(z, z_old)
print(z, R.dot(z))
print(z_old, R.dot(z_old))
print(np.linalg.norm(z), np.linalg.norm(R.dot(z)))
def createR2D(vector):
rotate the vector to [0,1], require numpy array
m = np.linalg.norm(vector)
c, s = vector[1]/m , vector[0]/m
R2 = np.array([c, -s, s, c]).reshape(2,2)
return R2
# example usage
y_old = np.array([3,4])
R2 = createR2D(y_old)
print(y_old, R2.dot(y_old))
from sklearn import decomposition
def pca_decomposition(df):
Perform sklearn PCA. The returned components are already ordered by the explained variance
pca = decomposition.PCA()
pca.fit(df)
return pca
def pca_stats(pca):
print("variance explained:\n", pca.explained_variance_ratio_)
print("pca components:\n", pca.components_)
def plot_classcolor(df, x='y', y='x', hue=None):
sns.lmplot(x, y, data=df, hue=hue, fit_reg=False)
sns.plt.title("({} vs {})".format(y, x))
plt.show()
def add_pca_to_df(df, allvars, pca):
df[["pca_" + str(i) for i, j in enumerate(pca.components_)
]] = pd.DataFrame(pca.fit_transform(df[allvars]))
pca = pca_decomposition( df[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']] )
pca_stats(pca)
add_pca_to_df(df, ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'], pca)
plot_classcolor(df, 'pca_0', 'pca_1', 'species_id')
from scipy.fftpack import fft, rfft, irfft, fftfreq
def rfourier_transformation(df, var, pass_high=-1, pass_low=-1, verbose=True, plot=True):
Return the signal after low and high filter applied.
Use verbose and plot to see stats and plot the signal before and after the filter.
low = pass_high
high = pass_low
if (high < low) and (high>0):
print("Cannot be pass_low < pass_high!!")
return -1
time = pd.Series(df.index.values[1:10] -
df.index.values[:10 - 1]) # using the first 10 data
dt = time.describe()['50%']
if (verbose):
        print('''
        sampling time: {0} s
        sampling frequency: {1} hz
        max freq in rfft: {2} hz
        '''.format(dt, 1 / dt, 1 / (dt * 2), 1 / (dt)))
signal = df[var]
freq = fftfreq(signal.size, d=dt)
f_signal = rfft(signal)
m = {}
if (low > 0):
f_signal_lowcut = f_signal.copy()
f_signal_lowcut[(freq < low)] = 0
cutted_signal_low = irfft(f_signal_lowcut)
m['low'] = 1
if (high > 0):
f_signal_highcut = f_signal.copy()
f_signal_highcut[(freq > high)] = 0
cutted_signal_high = irfft(f_signal_highcut)
m['high'] = 1
if (high > 0) & (low > 0):
f_signal_bwcut = f_signal.copy()
f_signal_bwcut[(freq < low) | (freq > high)] = 0
cutted_signal_bw = irfft(f_signal_bwcut)
m['bw'] = 1
m['low'] = 2
m['high'] = 3
n = len(freq)
if (plot):
f, axarr = plt.subplots(len(m) + 1, 1, sharex=True, figsize=(18,15))
f.canvas.set_window_title(var)
# time plot
axarr[0].plot(signal)
axarr[0].set_title('Signal')
if 'bw' in m:
axarr[m['bw']].plot(df.index, cutted_signal_bw)
axarr[m['bw']].set_title('Signal after low-high cut')
if 'low' in m:
axarr[m['low']].plot(df.index, cutted_signal_low)
axarr[m['low']].set_title('Signal after high filter (low frequencies rejected)')
if 'high' in m:
axarr[m['high']].plot(df.index, cutted_signal_high)
axarr[m['high']].set_title('Signal after low filter (high frequencies rejected)')
plt.show()
# spectrum
f = plt.figure(figsize=(18,8))
plt.plot(freq[0:n // 2], f_signal[:n // 2])
f.suptitle('Frequency spectrum')
if 'low' in m:
plt.axvline(x=low, ymin=0., ymax=1, linewidth=2, color='red')
if 'high' in m:
plt.axvline(x=high, ymin=0., ymax=1, linewidth=2, color='red')
plt.show()
if 'bw' in m:
return cutted_signal_bw
elif 'low' in m:
return cutted_signal_low
elif 'high' in m:
return cutted_signal_high
else:
return signal
acc = pd.read_csv('accelerations.csv')
signal = rfourier_transformation(acc, 'x', pass_high=0.1, pass_low=0.5, verbose=True, plot=True)
# save in pickle with gzip compression
import pickle
import gzip
def save(obj, filename, protocol=0):
file = gzip.GzipFile(filename, 'wb')
file.write(pickle.dumps(obj, protocol))
file.close()
def load(filename):
file = gzip.GzipFile(filename, 'rb')
buffer = ""
while True:
data = file.read()
if data == "":
break
buffer += data
obj = pickle.loads(buffer)
file.close()
return obj
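# Round-trip sketch for the helpers above (the file name is illustrative):
save({'a': 1, 'b': [1, 2, 3]}, 'example.pkl.gz', protocol=2)
print(load('example.pkl.gz'))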
# Simple bar, the one to be used in a general python code
import tqdm
for i in tqdm.tqdm(range(0, 1000)):
pass
# Bar to be used in a jupyter notebook
for i in tqdm.tqdm_notebook(range(0, 1000)):
pass
# custom update bar
import time
tot = 4000
bar = tqdm.tqdm_notebook(desc='Status ', total=tot, mininterval=0.5, miniters=5, unit='cm', unit_scale=True)
# with the file options you can show the progress bar into a file
# mininterval: time in seconds to see an update on the progressbar
# miniters: Tweak this and `mininterval` to get very efficient loops, if 0 will only use mininterval
# unit_scale: use international scale for the units (k, M, m, etc...)
# bar_format: specify the bar format, default is '{l_bar}{bar}{r_bar}'. It can impact the performance if you ask for complicate bar format
# unit_divisor: [default: 1000], ignored unless `unit_scale` is True
# ncols: The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound.
for l in range(0, tot):
if ((l-1) % 10) == 0:
bar.update(10)
if l % 1000 == 0:
bar.write('to print something without duplicate the progress bar (if you are using tqdm.tqdm instead of tqdm.tqdm_notebook)')
print('or use the simple print if you are using tqdm.tqdm_notebook')
time.sleep(0.001)
# with this extension you can use tqdm_notebook().pandas(...) instead of tqdm.pandas(...)
from tqdm import tqdm_notebook
!jupyter nbextension enable --py --sys-prefix widgetsnbextension
import pandas as pd
import numpy as np
import time
df = pd.DataFrame(np.random.randint(0, int(1e8), (100, 3)))
# Create and register a new `tqdm` instance with `pandas`
# (can use tqdm_gui, optional kwargs, etc.)
print('set tqdm_notebook for pandas, show the bar')
tqdm_notebook().pandas()
# Now you can use `progress_apply` instead of `apply`
print('example usage of progressbar in a groupby pandas statement')
df_g = df.groupby(0).progress_apply(lambda x: time.sleep(0.01))
print('example usage of progressbar in an apply pandas statement')
df_a = df.progress_apply(lambda x: time.sleep(0.01))
def plot_classcolor(df, x='y', y='x', hue=None):
    sns.lmplot(x=x, y=y, data=df, hue=hue, fit_reg=False)
    plt.title("({} vs {})".format(y, x))
plt.show()
plot_classcolor(df, 'sepal_length', 'sepal_width', hue='species')
def plot_histo_per_class(df, var, target):
t_list = df[target].unique()
for t in t_list:
sns.distplot(
df[df[target] == t][var], kde=False, norm_hist=True, label=str(t))
    plt.legend()
    plt.show()
plot_histo_per_class(df, 'sepal_length', "species_id")
def plotLift(df, features, target, ascending=False, multiclass_level=None):
    """Plot the Lift function for all the features.
    Ascending can be a list of the same feature length or a single boolean value.
    For the multiclass case you can give the value of a class and the lift is calculated
    considering the selected class vs all the others."""
if multiclass_level != None:
df = df[features+[target]].copy()
if multiclass_level != 0:
df.loc[df[target] != multiclass_level, target] = 0
df.loc[df[target] == multiclass_level, target] = 1
else :
df.loc[df[target] == multiclass_level, target] = 1
df.loc[df[target] != multiclass_level, target] = 0
npoints = 100
n = len(df)
st = n / npoints
df_shuffled = df.sample(frac=1)
flat = np.array([[(i * st) / n, df_shuffled[0:int(i * st)][target].sum()]
for i in range(1, npoints + 1)])
flat = flat.transpose()
to_leg = []
if not isinstance(features, list):
features = [features]
if not isinstance(ascending, list):
ascending = [ascending for i in features]
for f, asc in zip(features, ascending):
a = df[[f, target]].sort_values(f, ascending=asc)
b = np.array([[(i * st) / n, a[0:int(i * st)][target].sum()]
for i in range(1, npoints + 1)])
b = b.transpose()
to_leg.append(plt.plot(b[0], b[1], label=f)[0])
to_leg.append(plt.plot(flat[0], flat[1], label="no_gain")[0])
plt.legend(handles=to_leg, loc=4)
plt.xlabel('faction of data', fontsize=18)
plt.ylabel(target+' (cumulative sum)', fontsize=16)
plt.show()
# Lift for regression
titanic = sns.load_dataset("titanic")
plotLift(titanic, ['sibsp', 'survived', 'class'], 'fare', ascending=[False,False, True])
# Lift plot example for multiclass
plotLift(
df, ['sepal_length', 'sepal_width', 'petal_length'],
'species_id',
ascending=[False, True, False],
multiclass_level=3)
def plot_var_imp_skitlearn(features, clf_fit):
    """Plot variable importances for a fitted scikit-learn model."""
my_ff = np.array(features)
importances = clf_fit.feature_importances_
indices = np.argsort(importances)
pos = np.arange(len(my_ff[indices])) + .5
plt.figure(figsize=(20, 0.75 * len(my_ff[indices])))
plt.barh(pos, importances[indices], align='center')
plt.yticks(pos, my_ff[indices], size=25)
plt.xlabel('rank')
plt.title('Feature importances', size=25)
plt.grid(True)
plt.show()
importance_dict = dict(zip(my_ff[indices], importances[indices]))
return importance_dict
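# Usage sketch for plot_var_imp_skitlearn, assuming the iris-like frame `df`
# used above (feature columns + 'species_id'); model settings are illustrative.
from sklearn.ensemble import RandomForestClassifier
iris_features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
clf = RandomForestClassifier(n_estimators=50).fit(df[iris_features], df['species_id'])
imp_dict = plot_var_imp_skitlearn(iris_features, clf)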
import operator  # needed by plot_var_imp_xgboost below
import xgboost as xgb  # aliased, since later cells call xgb.train(...)
#### correct version (validated at work)
def plot_var_imp_xgboost(model, mode='gain', ntop=-1):
    """Plot the variable importances for an xgboost model, where mode in ['weight', 'gain', 'cover']:
    'weight' - the number of times a feature is used to split the data across all trees.
    'gain' - the average gain of the feature when it is used in trees.
    'cover' - the average coverage of the feature when it is used in trees.
    """
importance = model.get_score(importance_type=mode)
importance = sorted(
importance.items(), key=operator.itemgetter(1), reverse=True)
if ntop == -1: ntop = len(importance)
importance = importance[0:ntop]
my_ff = np.array([i[0] for i in importance])
imp = np.array([i[1] for i in importance])
indices = np.argsort(imp)
pos = np.arange(len(my_ff[indices])) + .5
plt.figure(figsize=(20, 0.75 * len(my_ff[indices])))
plt.barh(pos, imp[indices], align='center')
plt.yticks(pos, my_ff[indices], size=25)
plt.xlabel('rank')
plt.title('Feature importances (' + mode + ')', size=25)
plt.grid(True)
plt.show()
return
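# Usage sketch for plot_var_imp_xgboost on the same frame (illustrative only;
# assumes small integer class labels in df['species_id']).
dtrain = xgb.DMatrix(df[iris_features], label=df['species_id'])
bst = xgb.train({'objective': 'multi:softmax', 'num_class': 4, 'silent': 1},
                dtrain, num_boost_round=20)
plot_var_imp_xgboost(bst, mode='gain', ntop=10)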
import lightgbm as lgb
from sklearn.metrics import roc_curve, precision_recall_curve, auc  # required by the functions below
### correct version (validated at work)
def plot_ROC_PrecisionRecall(y_test, y_pred):
    """Plot ROC curve and Precision-Recall plot.
    numpy arrays are required."""
fpr_clf, tpr_clf, _ = roc_curve(y_test, y_pred)
precision, recall, thresholds = precision_recall_curve(y_test, y_pred)
f1 = np.array([2 * p * r / (p + r) for p, r in zip(precision, recall)])
f1[np.isnan(f1)] = 0
t_best_f1 = thresholds[np.argmax(f1)]
roc_auc = auc(fpr_clf, tpr_clf)
plt.figure(figsize=(25, 25))
# plot_ROC
plt.subplot(221)
plt.plot(
fpr_clf,
tpr_clf,
color='r',
lw=2,
label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='-')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
# plot_PrecisionRecall
plt.subplot(222)
plt.plot(
recall, precision, color='r', lw=2, label='Precision-Recall curve')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precison-Recall curve')
plt.legend(loc="lower right")
plt.show()
return {"roc_auc": roc_auc, "t_best_f1": t_best_f1}
def plot_ROC_PR_test_train(y_train, y_test, y_test_pred, y_train_pred):
    """Plot ROC and Precision-Recall curves for test and train.
    Return the auc for test and train."""
roc_auc_test = plot_ROC_PrecisionRecall(y_test, y_test_pred)
roc_auc_train = plot_ROC_PrecisionRecall(y_train, y_train_pred)
return roc_auc_test, roc_auc_train
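# Self-contained usage sketch with synthetic scores (purely illustrative);
# in practice y_test / y_pred would come from a fitted classifier.
y_true = np.random.randint(0, 2, 500)
y_score = np.clip(0.3 * y_true + 0.7 * np.random.rand(500), 0, 1)
metrics = plot_ROC_PrecisionRecall(y_true, y_score)
print(metrics)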
### Bayesian Optimization
# https://github.com/fmfn/BayesianOptimization
from bayes_opt import BayesianOptimization
def xgb_evaluate_gen(xg_train, xg_test, watchlist, num_rounds):
    """Create the function to be optimized (example for xgboost)."""
    params = {'eta': 0.1, 'objective': 'binary:logistic', 'silent': 1, 'eval_metric': 'auc'}
    def xgb_evaluate(min_child_weight, colsample_bytree, max_depth, subsample, gamma, alpha):
        """Return the function to be maximized by the Bayesian Optimization,
        where the inputs are the parameters to be optimized and the output the
        evaluation metric on the test set."""
params['min_child_weight'] = int(round(min_child_weight))
        params['colsample_bytree'] = max(min(colsample_bytree, 1), 0)
params['max_depth'] = int(round(max_depth))
params['subsample'] = max(min(subsample, 1), 0)
params['gamma'] = max(gamma, 0)
params['alpha'] = max(alpha, 0)
#cv_result = xgb.cv(params, xg_train, num_boost_round=num_rounds, nfold=5,
# seed=random_state, callbacks=[xgb.callback.early_stop(25)]
model_temp = xgb.train(params, dtrain=xg_train, num_boost_round=num_rounds,
evals=watchlist, early_stopping_rounds=15, verbose_eval=False)
# return -cv_result['test-merror-mean'].values[-1]
return float(str(model_temp.eval(xg_test)).split(":")[1][0:-1])
return xgb_evaluate
def go_with_BayesianOptimization(xg_train, xg_test, watchlist, num_rounds=1,
                                 num_iter=10, init_points=10, acq='ucb'):
    """Run the Bayesian Optimization for xgboost. acq = 'ucb', 'ei', 'poi'."""
xgb_func = xgb_evaluate_gen(xg_train, xg_test, watchlist, num_rounds)
xgbBO = BayesianOptimization(xgb_func, {'min_child_weight': (1, 50),
'colsample_bytree': (0.5, 1),
'max_depth': (5, 15),
'subsample': (0.5, 1),
'gamma': (0, 2),
'alpha': (0, 2),
})
xgbBO.maximize(init_points=init_points, n_iter=num_iter, acq=acq) # poi, ei, ucb
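# Usage sketch (left commented, since the DMatrices depend on data defined elsewhere):
# xg_train = xgb.DMatrix(X_train, label=y_train)
# xg_test = xgb.DMatrix(X_test, label=y_test)
# watchlist = [(xg_train, 'train'), (xg_test, 'eval')]
# go_with_BayesianOptimization(xg_train, xg_test, watchlist, num_rounds=100, num_iter=20, acq='ei')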
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import math
prime = []

def simpleSieve(limit):
    # classic Sieve of Eratosthenes: print primes below `limit`
    # and store them in the global `prime` list
    mark = [True for i in range(limit + 1)]
    p = 2
    while p * p <= limit:
        if mark[p]:
            for i in range(p * p, limit + 1, p):
                mark[i] = False
        p += 1
    for p in range(2, limit):
        if mark[p]:
            prime.append(p)
            print(p, end=" ")

def segmentedSieve(n):
    # sieve the range [limit, n) in blocks of size `limit`,
    # reusing the small primes found by simpleSieve
    limit = int(math.floor(math.sqrt(n)) + 1)
    simpleSieve(limit)
    low = limit
    high = limit * 2
    while low < n:
        if high >= n:
            high = n
        mark = [True for i in range(limit + 1)]
        for i in range(len(prime)):
            loLim = int(math.floor(low / prime[i]) * prime[i])
            if loLim < low:
                loLim += prime[i]
            for j in range(loLim, high, prime[i]):
                mark[j - low] = False
        for i in range(low, high):
            if mark[i - low]:
                print(i, end=" ")
        low = low + limit
        high = high + limit

n = 100
print("Primes smaller than", n, ":")
segmentedSieve(n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Palettable API
Step2: Setting the matplotlib Color Cycle
Step3: Using a Continuous Palette
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from palettable.colorbrewer.qualitative import Set1_9
Set1_9.name
Set1_9.type
Set1_9.number
Set1_9.colors
Set1_9.hex_colors
Set1_9.mpl_colors
Set1_9.mpl_colormap
# requires ipythonblocks
Set1_9.show_as_blocks()
Set1_9.show_continuous_image()
Set1_9.show_discrete_image()
from palettable.wesanderson import Aquatic1_5, Moonrise4_5
x = np.linspace(0, 2 * np.pi)
offsets = np.linspace(0, 2*np.pi, 4, endpoint=False)
# Create array with shifted-sine curve along each column
yy = np.transpose([np.sin(x + phi) for phi in offsets])
plt.rc('lines', linewidth=4)
plt.rc('axes', color_cycle=Aquatic1_5.mpl_colors)
fig, (ax0, ax1) = plt.subplots(nrows=2)
ax0.plot(yy)
ax0.set_title('Set default color cycle to Aquatic1_5')
ax1.set_color_cycle(Moonrise4_5.mpl_colors)
ax1.plot(yy)
ax1.set_title('Set axes color cycle to Moonrise4_5')
# Tweak spacing between subplots to prevent labels from overlapping
plt.subplots_adjust(hspace=0.3)
from palettable.colorbrewer.sequential import YlGnBu_9
from matplotlib.colors import LogNorm
#normal distribution center at x=0 and y=5
x = np.random.randn(100000)
y = np.random.randn(100000)+5
plt.hist2d(x, y, bins=40, norm=LogNorm(), cmap=YlGnBu_9.mpl_colormap)
plt.colorbar()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building sequences
Step2: Building sequences from pulses
Step3: As you can see, wait times can be added with the add_wait command. In this case the wait time is given by a lambda function. This enables the implementation of variable time steps. This can also be used to add pulses with variable length and amplitude. The variable name t is later used to set values for this wait time. <br>
Step4: Single channel virtual AWG
Step5: Please note, that this plot always displays the amplitude of your signal (not I or Q).
Step6: If you are satisfied with the results, load the sequences onto your physical device with
|
<ASSISTANT_TASK:>
Python Code:
testsample = sample.Sample()
testsample.readout_tone_length = 200e-9 # length of the readout tone
testsample.clock = 1e9 # sample rate of your physical awg/pulse generator
testsample.tpi = 100e-9 # duration of a pi-pulse
testsample.tpi2 = 50e-9 # duration of a pi/2-pulse
testsample.iq_frequency = 20e6 # iq_frequency for iq mixing (set to 0 for homodyne measurements)
#testsample.awg = my_awg #<- qkit instrument (your actual awg)
#example:
pi = ps.Pulse(50e-9, name = "pi-pulse", shape = ps.ShapeLib.gauss, iq_frequency=50e6)
#this creates a 50ns gaussian pulse with name "pi-pulse" at an iq_frequency of 50MHz.
my_sequence = ps.PulseSequence(testsample) # create sequence object
my_sequence.add(pi) # add pi pulse, as defined in the example above
my_sequence.add_wait(lambda t: t) # add a variable wait time with length t
my_sequence.add_readout() # add the readout
my_sequence.plot() # show SCHEMATIC plot of the pulse sequence
spinecho = sl.spinecho(testsample, n_pi = 2) # spinecho with 2 pi-pulses
spinecho.plot()
vawg = VirtAWG.VirtualAWG(testsample) # by default, the virtual awg is initialized with a single channel
time = np.arange(0, 500e-9, 50e-9) # time t for the sequence
vawg.set_sequence(my_sequence, t=time) # set_sequence deletes all previously stored sequences in a channel
vawg.add_sequence(spinecho, t=time*2) # add_sequence appends the next sequence to the sequences stored in the channel
# Note, this enables you to run multiple experiments, such as T1-measurement and spin-echo, in parallel!
vawg.plot()
# In the plot, the time starts at 0 together with the readout.
# The position of the readout is also used as a phase reference for all pulses.
# If you do not want the experiments to run consecutively, but to interleave them instead:
vawg.set_interleave(True)
# This also works for more than 2 sequences.
vawg.plot()
vawg = VirtAWG.VirtualAWG(testsample, channels=2) #Initialize with two channels channel (number is arbitrary)
vawg.set_sequence(my_sequence, channel=1, t=time) # set my_sequence (T1 measurement) on channel 1
vawg.set_sequence(spinecho, channel=2, t=time) # set spinecho on channel 2
vawg.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Define data loader, using cuDF
Step4: Define our training routine.
Step5: Implement our MLFlow training loop, and save our best model to the tracking server.
Step6: Begin serving our trained model using MLFlow
|
<ASSISTANT_TASK:>
Python Code:
#!wget -N https://rapidsai-cloud-ml-sample-data.s3-us-west-2.amazonaws.com/airline_small.parquet
# Imports for the helpers below. Hedged assumption: the original notebook likely
# pulled these in an earlier (omitted) cell; cuML module paths can vary by RAPIDS version.
import time
import mlflow
import mlflow.sklearn
from mlflow.models.signature import infer_signature
from cuml.ensemble import RandomForestClassifier
from cuml.model_selection import train_test_split
from cuml.metrics import accuracy_score

def load_data(fpath):
    """Simple helper function for loading data to be used by CPU/GPU models.

    :param fpath: Path to the data to be ingested
    :return: DataFrame wrapping the data at [fpath]. Data will be in either a Pandas or RAPIDS (cuDF) DataFrame
    """
import cudf
df = cudf.read_parquet(fpath)
X = df.drop(["ArrDelayBinary"], axis=1)
y = df["ArrDelayBinary"].astype('int32')
return train_test_split(X, y, test_size=0.2)
def train(fpath, max_depth, max_features, n_estimators):
    """Train a RandomForest classifier and log parameters/metrics to MLFlow.

    :param fpath: Path or URL for the training data used with the model.
    :max_depth: int Max tree depth
    :max_features: float percentage of features to use in classification
    :n_estimators: int number of trees to create
    :return: Trained Model
    """
X_train, X_test, y_train, y_test = load_data(fpath)
mod = RandomForestClassifier(max_depth=max_depth, max_features=max_features, n_estimators=n_estimators)
acc_scorer = accuracy_score
mod.fit(X_train, y_train)
preds = mod.predict(X_test)
acc = acc_scorer(y_test, preds)
mlparams = {"max_depth": str(max_depth),
"max_features": str(max_features),
"n_estimators": str(n_estimators),
}
mlflow.log_params(mlparams)
mlmetrics = {"accuracy": acc}
mlflow.log_metrics(mlmetrics)
return mod, infer_signature(X_train.to_pandas(), y_train.to_pandas())
conda_env = f'conda.yaml'
fpath = f'airline_small.parquet'
max_depth = 10
max_features = 0.75
n_estimators = 500
artifact_path = "Airline-Demo"
artifact_uri = None
experiment_name = "RAPIDS-Notebook"
experiment_id = None
mlflow.set_tracking_uri(uri='sqlite:////tmp/mlflow-db.sqlite')
mlflow.set_experiment(experiment_name)
with mlflow.start_run(run_name="(Notebook) RAPIDS-MLFlow"):
model, signature = train(fpath, max_depth, max_features, n_estimators)
mlflow.sklearn.log_model(model,
signature=signature,
artifact_path=artifact_path,
registered_model_name="rapids-mlflow-notebook",
conda_env='conda.yaml')
artifact_uri = mlflow.get_artifact_uri(artifact_path=artifact_path)
print(artifact_uri)
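# Before running the scoring cell below, the model server is assumed to have been
# started separately (e.g. in another shell), pointing at the URI printed above:
#   mlflow models serve -m <artifact_uri> -p 55755
# The port must match the `port` variable used in the request below.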
import json
import requests
host='localhost'
port='55755'
headers = {
"Content-Type": "application/json",
"format": "pandas-split"
}
data = {
"columns": ["Year", "Month", "DayofMonth", "DayofWeek", "CRSDepTime", "CRSArrTime", "UniqueCarrier",
"FlightNum", "ActualElapsedTime", "Origin", "Dest", "Distance", "Diverted"],
"data": [[1987, 10, 1, 4, 1, 556, 0, 190, 247, 202, 162, 1846, 0]]
}
## Pause to let server start
time.sleep(5)
while (True):
try:
resp = requests.post(url="http://%s:%s/invocations" % (host, port), data=json.dumps(data), headers=headers)
print('Classification: %s' % ("ON-Time" if resp.text == "[0.0]" else "LATE"))
break
except Exception as e:
errmsg = "Caught exception attempting to call model endpoint: %s" % e
print(errmsg, end='')
print("Sleeping")
time.sleep(20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
            #### Implement the forward pass here ####
            ### Forward pass ###
            # Hidden layer
            hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
            hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
            # Output layer: this is a regression problem, so the output unit uses
            # the identity activation f(x) = x (which the unit tests below expect).
            final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
            final_outputs = final_inputs # signals from final output layer
            #### Implement the backward pass here ####
            ### Backward pass ###
            # Output error is the difference between desired target and actual output.
            error = y - final_outputs
            # Identity output activation means f'(x) = 1, so the error term is the error itself.
            output_error_term = error
            # Hidden layer's contribution to the error
            hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
            # Weight step (input to hidden)
            delta_weights_i_h += hidden_error_term * X[:, None]
            # Weight step (hidden to output)
            delta_weights_h_o += output_error_term * hidden_outputs[:, None]
        # Apply the accumulated steps once per batch, scaled by learning rate and batch size
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
        #### Implement the forward pass here ####
        # Hidden layer
        hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
        # Output layer - identity activation, matching train()
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
        final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
# Ok, let's see if this part works as planned.
bob = NeuralNetwork(3, 2, 1, 0.5)
bob.activation_function(0.5)
1/(1+np.exp(-0.5))
# Cool. Everything works there. Now, to figure out what the hell is going on with the train function.
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
        # Output layer - identity activation for the regression output (matches the class above)
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
        final_outputs = final_inputs # signals from final output layer
        #### Implement the backward pass here ####
        ### Backward pass ###
        # Output error is the difference between desired target and actual output.
        error = y - final_outputs
        # Identity output activation -> f'(x) = 1, so the error term is just the error
        output_error_term = error
        # Hidden layer's contribution to the error
        hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
        hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
        # Weight step (input to hidden)
        delta_weights_i_h += hidden_error_term * X[:, None]
        # Weight step (hidden to output)
        delta_weights_h_o += output_error_term * hidden_outputs[:, None]
    # Update the weights once per batch
    self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
    self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
features = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
n_records = features.shape[0]
bob.weights_input_to_hidden
bob.weights_input_to_hidden.shape
delta_weights_i_h = np.zeros(bob.weights_input_to_hidden.shape)
delta_weights_i_h
bob.weights_hidden_to_output
bob.weights_hidden_to_output.shape
delta_weights_h_o = np.zeros(bob.weights_hidden_to_output.shape)
delta_weights_h_o
jim = zip(features, targets)
features
targets
X = features
y = targets
#for X, y in zip(features, targets):
hidden_inputs = np.dot(X, bob.weights_input_to_hidden) # signals into hidden layer
X
bob.weights_input_to_hidden
hidden_inputs
hidden_outputs = bob.activation_function(hidden_inputs) # signals from hidden layer
hidden_outputs
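# Continuing the manual walkthrough: finish the forward pass by hand.
# This mirrors what train() computes internally (identity output activation).
final_inputs = np.dot(hidden_outputs, bob.weights_hidden_to_output)
final_outputs = final_inputs
final_outputs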
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Security Dataset
Step2: Analytic I
|
<ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/credential_access/host/empire_mimikatz_sam_access.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
df = spark.sql(
'''
SELECT `@timestamp`, ProcessName, ObjectName, AccessMask, EventID
FROM sdTable
WHERE LOWER(Channel) = "security"
AND (EventID = 4656 OR EventID = 4663)
AND ObjectType = "Key"
AND (
lower(ObjectName) LIKE "%jd"
OR lower(ObjectName) LIKE "%gbg"
OR lower(ObjectName) LIKE "%data"
OR lower(ObjectName) LIKE "%skew1"
)
'''
)
df.show(10,False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Construct Meta-KG from SmartAPI
Step2: Filter
Step3: Find Meta-KG operations that converys Gene->Metabolize->ChemicalSubstance
Step4: Filter for Knowledge Graph Operations supported by MyChem.info as API source
Step5: Filter for API operations with drugbank as data source
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/biothings/biothings_explorer.git
from biothings_explorer.smartapi_kg import MetaKG
kg = MetaKG()
kg.constructMetaKG(source="remote")
kg.filter({"input_type": "Gene", "output_type": "ChemicalSubstance"})
kg.filter({"input_type": "Gene", "output_type": "ChemicalSubstance", "predicate": "metabolize"})
kg.filter({"api_name": "MyChem.info API"})
kg.filter({"source": "drugbank"})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have three teams, one of which is much better than the others. Let's make simulated seasons between these teams.
Step2: Prior on each team is a normal distribution with mean of 0 and standard deviation of 1.
Step3: Hmm, something looks odd here. The posterior pdf for these two teams has significant overlap. Does this mean that our model is not sure about which team is better?
Step4: Ah, so the posterior pdf is actually quite clear
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import os
import numpy as np
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
true_rating = {
'All Stars': 2.0,
'Average': 0.0,
'Just Having Fun': -1.2,
}
true_index = {
0: 'All Stars',
1: 'Average',
2: 'Just Having Fun',
}
n_teams = len(true_rating)
team_numbers = range(n_teams)
team_names = [true_index[i] for i in team_numbers]
true_rating
team_names
season_length = [5, 20, 100]
traces = []
simulatedSeasons = []
for n_games in season_length:
games = range(n_games)
database = []
for game in games:
game_row = {}
matchup = np.random.choice(team_numbers, size=2, replace=False)
team0 = true_index[matchup[0]]
team1 = true_index[matchup[1]]
game_row['Team A'] = team0
game_row['Team B'] = team1
game_row['Index A'] = matchup[0]
game_row['Index B'] = matchup[1]
deltaRating = true_rating[team0] - true_rating[team1]
p = 1 / (1 + np.exp(-deltaRating))
randomNumber = np.random.random()
outcome_A = p > randomNumber
game_row['Team A Wins'] = outcome_A
database.append(game_row)
simulatedSeason = pd.DataFrame(database)
simulatedSeasons.append(simulatedSeason)
with pm.Model() as model:
rating = pm.Normal('rating', mu=0, sd=1, shape=n_teams)
deltaRating = rating[simulatedSeason['Index A'].values] - rating[simulatedSeason['Index B'].values]
p = 1 / (1 + np.exp(-deltaRating))
win = pm.Bernoulli('win', p, observed=simulatedSeason['Team A Wins'].values)
trace = pm.sample(1000)
traces.append(trace)
simulatedSeasons[1].groupby('Team A').sum()
1 / (1 + np.exp(-2))
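# Small helper making the logistic win-probability explicit (sketch),
# evaluated at the true rating gaps defined above.
def win_prob(delta_rating):
    return 1 / (1 + np.exp(-delta_rating))

print(win_prob(2.0 - 0.0))    # All Stars vs Average
print(win_prob(0.0 - (-1.2))) # Average vs Just Having Fun
print(win_prob(2.0 - (-1.2))) # All Stars vs Just Having Fun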
sns.set_context('poster')
f, axes = plt.subplots(nrows=3, ncols=1, figsize=(10, 15))
# plt.figure(figsize=(10, 5))
for ax_index, n_games in enumerate(season_length):
ax = axes[ax_index]
for team_number in team_numbers:
rating_posterior = traces[ax_index]['rating'][:, team_number]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label=team_name, ax=ax)
ax.legend()
ax.set_xlabel('Rating')
ax.set_ylabel('Density')
ax.set_title("Season length: {} games".format(n_games))
plt.tight_layout()
simulatedSeason = pd.DataFrame(database)
simulatedSeason
project_dir = '/Users/rbussman/Projects/BUDA/buda-ratings'
scores_dir = os.path.join(project_dir, 'data', 'raw', 'game_scores')
simulatedSeason.to_csv(os.path.join(scores_dir, 'artificial_scores_big.csv'))
simulatedSeason.shape
with pm.Model() as model:
rating = pm.Normal('rating', mu=0, sd=1, shape=n_teams)
deltaRating = rating[simulatedSeason['Index A'].values] - rating[simulatedSeason['Index B'].values]
p = 1 / (1 + np.exp(-deltaRating))
win = pm.Bernoulli('win', p, observed=simulatedSeason['Team A Wins'].values)
with model:
trace = pm.sample(1000)
sns.set_context('poster')
plt.figure(figsize=(10, 5))
for team_number in team_numbers:
rating_posterior = trace['rating'][:, team_number]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label=team_name)
plt.legend()
plt.xlabel('Rating')
plt.ylabel('Density')
sns.set_context('poster')
plt.figure(figsize=(10, 5))
for team_number in team_numbers[:-1]:
rating_posterior = trace['rating'][:, team_number] - trace['rating'][:, -1]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label="{} - {}".format(team_name, true_index[team_numbers[-1]]))
plt.legend()
plt.xlabel('Rating')
plt.ylabel('Density')
gt0 = rating_posterior > 0
print("Percentage of samples where 'All Stars' have a higher rating than 'Just Having Fun': {:.2f}%".format(
100. * rating_posterior[gt0].size / rating_posterior.size))
rating_posterior
.75 ** 14
estimatedratings = trace['rating'].mean(axis=0)
estimatedratings
for number, name in true_index.items():
    print("True: {:.2f}; Estimated: {:.2f}".format(true_rating[name], estimatedratings[number]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Convenience Function
Step2: The Assignment
Step3: Create your linear regression model here and store it in a variable called model. Don't actually train or do anything else with it yet
Step4: Slice out your data manually (e.g. don't use train_test_split, but actually do the indexing yourself. Set X_train to be year values LESS than 1986, and y_train to be corresponding 'WhiteMale' age values. You might also want to read the note about slicing on the bottom of this document before proceeding
Step5: Train your model then pass it into drawLine with your training set and labels. You can title it 'WhiteMale'. drawLine will output to the console a 2014 extrapolation / approximation for what it believes the WhiteMale's life expectancy in the U.S. will be... given the pre-1986 data you trained it with. It'll also produce a 2030 and 2045 extrapolation
Step6: Print the actual 2014 'WhiteMale' life expectancy from your loaded dataset
Step7: Repeat the process, but instead of for WhiteMale, this time select BlackFemale. Create a slice for BlackFemales, fit your model, and then call drawLine. Lastly, print out the actual 2014 BlackFemale life expectancy
Step8: Lastly, print out a correlation matrix for your entire dataset, and display a visualization of the correlation matrix, just as we described in the visualization section of the course
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot') # Look Pretty
def drawLine(model, X_test, y_test, title):
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(X_test, y_test, c='g', marker='o')
ax.plot(X_test, model.predict(X_test), color='orange', linewidth=1, alpha=0.7)
print("Est 2014 " + title + " Life Expectancy: ", model.predict([[2014]])[0])
print("Est 2030 " + title + " Life Expectancy: ", model.predict([[2030]])[0])
print("Est 2045 " + title + " Life Expectancy: ", model.predict([[2045]])[0])
score = model.score(X_test, y_test)
title += " R2: " + str(score)
ax.set_title(title)
plt.show()
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
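# One hedged sketch of the steps above (column names and CSV path are assumptions
# taken from the assignment text; adjust them to your actual dataset):
# from sklearn import linear_model
# X = pd.read_csv('Datasets/life_expectancy.csv', sep='\t')
# model = linear_model.LinearRegression()
# wm = X[X.Year < 1986]
# model.fit(wm[['Year']], wm['WhiteMale'])
# drawLine(model, wm[['Year']], wm['WhiteMale'], 'WhiteMale')
# print(X[X.Year == 2014]['WhiteMale'])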
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Declaring a pre-processing configuration
Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Temporal Event Ordering task
Step3: Pre-processing the input data
Step4: Computing the CrowdTruth metrics
Step5: results is a dict object that contains the quality metrics for the sentences, annotations and crowd workers.
Step6: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentences. Here we plot its histogram
Step7: Plot the change in unit quality score at the beginning of the process and at the end
Step8: The unit_annotation_score column in results["units"] contains the sentence-annotation scores, capturing the likelihood that an annotation is expressed in a sentence. For each sentence, we store a dictionary mapping each annotation to its sentence-relation score.
Step9: Save unit metrics
Step10: The worker metrics are stored in results["workers"]
Step11: The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
Step12: Save the worker metrics
Step13: The annotation metrics are stored in results["annotations"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one relation.
Step14: Example of a very clear unit
Step15: Example of an unclear unit
Step16: MACE for Temporal Event Ordering Annotation
Step17: CrowdTruth vs. MACE on Worker Quality
Step18: CrowdTruth vs. MACE vs. Majority Vote on Annotation Performance
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
test_data = pd.read_csv("../data/temp.standardized.csv")
test_data.head()
import crowdtruth
from crowdtruth.configuration import DefaultConfig
class TestConfig(DefaultConfig):
inputColumns = ["gold", "event1", "event2", "text"]
outputColumns = ["response"]
customPlatformColumns = ["!amt_annotation_ids", "orig_id", "!amt_worker_ids", "start", "end"]
# processing of a closed task
open_ended_task = False
annotation_vector = ["before", "after"]
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
return judgments
data, config = crowdtruth.load(
file = "../data/temp.standardized.csv",
config = TestConfig()
)
data['judgments'].head()
results = crowdtruth.run(data, config)
results["units"].head()
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 15, 5
plt.subplot(1, 2, 1)
plt.hist(results["units"]["uqs"])
plt.ylim(0,200)
plt.xlabel("Sentence Quality Score")
plt.ylabel("#Sentences")
plt.subplot(1, 2, 2)
plt.hist(results["units"]["uqs_initial"])
plt.ylim(0,200)
plt.xlabel("Sentence Quality Score Initial")
plt.ylabel("# Units")
import numpy as np
sortUQS = results["units"].sort_values(['uqs'], ascending=[1])
sortUQS = sortUQS.reset_index()
plt.rcParams['figure.figsize'] = 15, 5
plt.plot(np.arange(sortUQS.shape[0]), sortUQS["uqs_initial"], 'ro', lw = 1, label = "Initial UQS")
plt.plot(np.arange(sortUQS.shape[0]), sortUQS["uqs"], 'go', lw = 1, label = "Final UQS")
plt.ylabel('Sentence Quality Score')
plt.xlabel('Sentence Index')
results["units"]["unit_annotation_score"].head()
rows = []
header = ["orig_id", "gold", "text", "event1", "event2", "uqs", "uqs_initial", "before", "after", "before_initial", "after_initial"]
units = results["units"].reset_index()
for i in range(len(units.index)):
row = [units["unit"].iloc[i], units["input.gold"].iloc[i], units["input.text"].iloc[i], units["input.event1"].iloc[i],\
units["input.event2"].iloc[i], units["uqs"].iloc[i], units["uqs_initial"].iloc[i], \
units["unit_annotation_score"].iloc[i]["before"], units["unit_annotation_score"].iloc[i]["after"], \
units["unit_annotation_score_initial"].iloc[i]["before"], units["unit_annotation_score_initial"].iloc[i]["after"]]
rows.append(row)
rows = pd.DataFrame(rows, columns=header)
rows.to_csv("../data/results/crowdtruth_units_temp.csv", index=False)
results["workers"].head()
plt.rcParams['figure.figsize'] = 15, 5
plt.subplot(1, 2, 1)
plt.hist(results["workers"]["wqs"])
plt.ylim(0,30)
plt.xlabel("Worker Quality Score")
plt.ylabel("#Workers")
plt.subplot(1, 2, 2)
plt.hist(results["workers"]["wqs_initial"])
plt.ylim(0,30)
plt.xlabel("Worker Quality Score Initial")
plt.ylabel("#Workers")
results["workers"].to_csv("../data/results/crowdtruth_workers_temp.csv", index=True)
results["annotations"]
import numpy as np
sortedUQS = results["units"].sort_values(["uqs"])
# remove the units for which we don't have the events and the text
sortedUQS = sortedUQS.dropna()
sortedUQS.tail(1)
print("Text: %s" % sortedUQS["input.text"].iloc[len(sortedUQS.index)-1])
print("\n Event1: %s" % sortedUQS["input.event1"].iloc[len(sortedUQS.index)-1])
print("\n Event2: %s" % sortedUQS["input.event2"].iloc[len(sortedUQS.index)-1])
print("\n Expert Answer: %s" % sortedUQS["input.gold"].iloc[len(sortedUQS.index)-1])
print("\n Crowd Answer with CrowdTruth: %s" % sortedUQS["unit_annotation_score"].iloc[len(sortedUQS.index)-1])
print("\n Crowd Answer without CrowdTruth: %s" % sortedUQS["unit_annotation_score_initial"].iloc[len(sortedUQS.index)-1])
sortedUQS.head(1)
print("Text: %s" % sortedUQS["input.text"].iloc[0])
print("\n Event1: %s" % sortedUQS["input.event1"].iloc[0])
print("\n Event2: %s" % sortedUQS["input.event2"].iloc[0])
print("\n Expert Answer: %s" % sortedUQS["input.gold"].iloc[0])
print("\n Crowd Answer with CrowdTruth: %s" % sortedUQS["unit_annotation_score"].iloc[0])
print("\n Crowd Answer without CrowdTruth: %s" % sortedUQS["unit_annotation_score_initial"].iloc[0])
import numpy as np
test_data = pd.read_csv("../data/mace_temp.standardized.csv", header=None)
test_data = test_data.replace(np.nan, '', regex=True)
test_data.head()
import pandas as pd
mace_data = pd.read_csv("../data/results/mace_units_temp.csv")
mace_data.head()
mace_workers = pd.read_csv("../data/results/mace_workers_temp.csv")
mace_workers.head()
mace_workers = pd.read_csv("../data/results/mace_workers_temp.csv")
crowdtruth_workers = pd.read_csv("../data/results/crowdtruth_workers_temp.csv")
mace_workers = mace_workers.sort_values(["worker"])
crowdtruth_workers = crowdtruth_workers.sort_values(["worker"])
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(
mace_workers["competence"],
crowdtruth_workers["wqs"],
)
plt.title("Worker Quality Score")
plt.xlabel("MACE")
plt.ylabel("CrowdTruth")
sortWQS = crowdtruth_workers.sort_values(['wqs'], ascending=[1])
sortWQS = sortWQS.reset_index()
worker_ids = list(sortWQS["worker"])
mace_workers = mace_workers.set_index('worker')
mace_workers = mace_workers.loc[worker_ids]  # reorder MACE rows to match the CrowdTruth worker order
plt.rcParams['figure.figsize'] = 15, 5
plt.plot(np.arange(sortWQS.shape[0]), sortWQS["wqs"], 'bo', lw = 1, label = "CrowdTruth Worker Score")
plt.plot(np.arange(mace_workers.shape[0]), mace_workers["competence"], 'go', lw = 1, label = "MACE Worker Score")
plt.ylabel('Worker Quality Score')
plt.xlabel('Worker Index')
plt.legend()
mace = pd.read_csv("../data/results/mace_units_temp.csv")
crowdtruth = pd.read_csv("../data/results/crowdtruth_units_temp.csv")
def compute_F1_score(dataset):
nyt_f1 = np.zeros(shape=(100, 2))
    for idx in range(0, 100):
thresh = (idx + 1) / 100.0
tp = 0
fp = 0
tn = 0
fn = 0
for gt_idx in range(0, len(dataset.index)):
if dataset['after'].iloc[gt_idx] >= thresh:
if dataset['gold'].iloc[gt_idx] == 'after':
tp = tp + 1.0
else:
fp = fp + 1.0
else:
if dataset['gold'].iloc[gt_idx] == 'after':
fn = fn + 1.0
else:
tn = tn + 1.0
nyt_f1[idx, 0] = thresh
if tp != 0:
nyt_f1[idx, 1] = 2.0 * tp / (2.0 * tp + fp + fn)
else:
nyt_f1[idx, 1] = 0
return nyt_f1
def compute_majority_vote(dataset, crowd_column):
tp = 0
fp = 0
tn = 0
fn = 0
for j in range(len(dataset.index)):
        if dataset[crowd_column].iloc[j] >= 0.5:
if dataset['gold'].iloc[j] == 'after':
tp = tp + 1.0
else:
fp = fp + 1.0
else:
if dataset['gold'].iloc[j] == 'after':
fn = fn + 1.0
else:
tn = tn + 1.0
return 2.0 * tp / (2.0 * tp + fp + fn)
F1_crowdtruth = compute_F1_score(crowdtruth)
print(F1_crowdtruth[F1_crowdtruth[:,1].argsort()][-10:])
F1_mace = compute_F1_score(mace)
print(F1_mace[F1_mace[:,1].argsort()][-10:])
F1_majority_vote = compute_majority_vote(crowdtruth, 'after_initial')
F1_majority_vote
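# Side-by-side summary of the best F1 per method (sketch):
print("CrowdTruth best F1: {:.3f}".format(F1_crowdtruth[:, 1].max()))
print("MACE best F1: {:.3f}".format(F1_mace[:, 1].max()))
print("Majority vote F1: {:.3f}".format(F1_majority_vote))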
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The plate lies in the $xy$-plane with the surface at $z = 0$. The atoms lie in the $xz$-plane with $z>0$.
Step2: Next we define the state that we are interested in using pairinteraction's StateOne class . As shown in Figures 4 and 5 of Phys. Rev. A 96, 062509 (2017) we expect large changes for the $C_6$ coefficient of the $|69s_{1/2},m_j=1/2;72s_{1/2},m_j=1/2\rangle$ pair state, so this provides a good example.
Step3: The pair state state_two is created from the one atom states state_one1 and state_one2 using the StateTwo class.
Step4: Next, we diagonalize the system for the given interatomic distances in distance_atom and compare the free space system to a system at distance_surface away from the perfect mirror. The energy is calculated with respect to a Rubidium $|70p_{3/2},m_j=3/2;70p_{3/2},m_j=3/2\rangle$ two atom state, defined in energyzero.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Arrays
import numpy as np
# Plotting
import matplotlib.pyplot as plt
from itertools import product
# Operating system interfaces
import os, sys
# Parallel computing
from multiprocessing import Pool
# pairinteraction :-)
from pairinteraction import pireal as pi
# Create cache for matrix elements
if not os.path.exists("./cache"):
os.makedirs("./cache")
cache = pi.MatrixElementCache("./cache")
theta = np.pi/2 # rad
distance_atom = np.linspace(6, 1.5, 50) # µm
distance_surface = 1 # µm
state_one1 = pi.StateOne("Rb", 69, 0, 0.5, 0.5)
state_one2 = pi.StateOne("Rb", 72, 0, 0.5, 0.5)
# Set up one-atom system
system_one = pi.SystemOne(state_one1.getSpecies(), cache)
system_one.restrictEnergy(min(state_one1.getEnergy(),state_one2.getEnergy()) - 50, \
max(state_one1.getEnergy(),state_one2.getEnergy()) + 50)
system_one.restrictN(min(state_one1.getN(),state_one2.getN()) - 2, \
max(state_one1.getN(),state_one2.getN()) + 2)
system_one.restrictL(min(state_one1.getL(),state_one2.getL()) - 2, \
max(state_one1.getL(),state_one2.getL()) + 2)
# Set up pair state
state_two = pi.StateTwo(state_one1, state_one2)
# Set up two-atom system
system_two = pi.SystemTwo(system_one, system_one, cache)
system_two.restrictEnergy(state_two.getEnergy() - 5, state_two.getEnergy() + 5)
system_two.setAngle(theta)
system_two.enableGreenTensor(True)
system_two.setDistance(distance_atom[0])
system_two.setSurfaceDistance(distance_surface)
system_two.buildInteraction()
# Diagonalize the two-atom system for different surface and interatomic distances
def getDiagonalizedSystems(distances):
system_two.setSurfaceDistance(distances[0])
system_two.setDistance(distances[1])
system_two.diagonalize(1e-3)
return system_two.getHamiltonian().diagonal()
if sys.platform != "win32":
with Pool() as pool:
energies = pool.map(getDiagonalizedSystems, product([1e12, distance_surface], distance_atom))
else:
energies = list(map(getDiagonalizedSystems, product([1e12, distance_surface], distance_atom)))
energyzero = pi.StateTwo(["Rb", "Rb"], [70, 70], [1, 1], [1.5, 1.5], [1.5, 1.5]).getEnergy()
y = np.array(energies).reshape(2, -1)-energyzero
x = np.repeat(distance_atom, system_two.getNumBasisvectors())
# Plot pair potentials
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_xlabel(r"Distance ($\mu m$)")
ax.set_ylabel(r"Energy (GHz)")
ax.set_xlim(np.min(distance_atom),np.max(distance_atom))
ax.set_ylim(-3, -1.6)
ax.plot(x, y[0], 'ko', ms=3, label = 'free space')
ax.plot(x, y[1], 'ro', ms=3, label = 'perfect mirror')
ax.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The site_specific option accepts True, False or a string. In the latter case, the site key is recognized only if the value matches the string, while all other sites are treated as identical.
Step2: references is a required parameter, that should contain gas phase references. If a gas phase reference is dependent on another, order the dependent one after the latter.
Step3: How to import frequencies.
Step4: How to import transition states and pathways.
|
<ASSISTANT_TASK:>
Python Code:
# Import and instantiate energy_landscape object.
from catmap.api.ase_data import EnergyLandscape
energy_landscape = EnergyLandscape()
# Import all gas phase species from db.
search_filter_gas = []
energy_landscape.get_molecules('molecules.db', selection=search_filter_gas)
# Import all adsorbates and slabs from db.
search_filter_slab = []
energy_landscape.get_surfaces('surfaces.db', selection=search_filter_slab, site_specific=False)
references = (('H', 'H2_gas'), ('O', 'H2O_gas'), ('C', 'CH4_gas'),)
energy_landscape.calc_formation_energies(references)
file_name = 'my_input.txt'
energy_landscape.make_input_file(file_name)
# Take a peak at the file.
with open(file_name) as fp:
for line in fp.readlines()[:5]:
print(line)
energy_landscape.get_molecules('molecules.db', frequency_db='frequencies.db', selection=search_filter_gas)
energy_landscape.get_transition_states('neb.db')
energy_landscape.calc_formation_energies(references)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BCM is the numbering that is engraved on the Raspberry Pi case we use and that you can also find back on the printed Pinout schema (BCM stands for Broadcom, the company that produces the Raspberry Pi chip).
Step2: With all GPIO settings done, we can put pin GPIO18 to work.
Step3: Note
|
<ASSISTANT_TASK:>
Python Code:
#load GPIO library
import RPi.GPIO as GPIO
#Set BCM (Broadcom) mode for the pin numbering
GPIO.setmode(GPIO.BCM)
# If we assign the name 'PIN' to the pin number we intend to use, we can reuse it later
# yet still change easily in one place
PIN = 18
# set pin as output
GPIO.setup(PIN, GPIO.OUT)
import time
# Repeat forever
while True:
# turn off pin 18
GPIO.output(PIN, 0)
# wait for half a second
time.sleep(.5)
# turn on pin 18
GPIO.output(PIN, 1)
# wait for half a second
time.sleep(.5)
#... and again ...
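# A safer variant of the blink loop above: wrapping it in try/except lets
# Ctrl-C stop the loop and release the pin cleanly (sketch, same behaviour).
try:
    while True:
        GPIO.output(PIN, 0)
        time.sleep(.5)
        GPIO.output(PIN, 1)
        time.sleep(.5)
except KeyboardInterrupt:
    GPIO.cleanup()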
#reset the GPIO
PIN = 18
GPIO.cleanup()
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.OUT)
# Create PWM object and set its frequency in Hz (cycles per second)
led = GPIO.PWM(PIN, 60)
# Start PWM signal
led.start(0)
try:
while True:
# increase duty cycle by 1%
for dc in range(0, 101, 1):
led.ChangeDutyCycle(dc)
time.sleep(0.05)
# and down again ...
for dc in range(100, -1, -1):
led.ChangeDutyCycle(dc)
time.sleep(0.05)
except KeyboardInterrupt:
pass
led.stop()
GPIO.cleanup()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Volatility-weighted Portfolio (using just np.std as weighting metric)
Step2: Volatility-weighted Portfolio (with the constraint that no asset weight may be greater than 2x any other asset weight. The function min_max_vol_bounds defines the constraint)
Step3: Quantized bucket Volatility-weighted Portfolio (using custom function bucket_std() as weighting metric)
|
<ASSISTANT_TASK:>
Python Code:
# USAGE: Equal-Weight Portfolio.
# 1) if 'exclude_non_overlapping=True' below, the portfolio will only contains
# days which are available across all of the algo return timeseries.
#
# if 'exclude_non_overlapping=False' then the portfolio returned will span from the
# earliest startdate of any algo, thru the latest enddate of any algo.
#
# 2) Weight of each algo will always be 1/N where N is the total number of algos passed to the function
portfolio_rets_ts, data_df = pf.timeseries.portfolio_returns_metric_weighted([SPY, FXE, GLD],
exclude_non_overlapping=True
)
to_plot = ['SPY', 'GLD', 'FXE'] + ["port_ret"]
data_df[to_plot].apply(pf.timeseries.cum_returns).plot()
pf.timeseries.perf_stats(data_df['port_ret'])
# USAGE: Portfolio based on volatility weighting.
# The higher the volatility the _less_ weight the algo gets in the portfolio
# The portfolio is rebalanced monthly. For quarterly reblancing, set portfolio_rebalance_rule='Q'
stocks_port, data_df = pf.timeseries.portfolio_returns_metric_weighted([SPY, FXE, GLD],
weight_function=np.std,
weight_function_window=126,
inverse_weight=True
)
to_plot = ['SPY', 'GLD', 'FXE'] + ["port_ret"]
data_df[to_plot].apply(pf.timeseries.cum_returns).plot()
pf.timeseries.perf_stats(data_df['port_ret'])
stocks_port, data_df = pf.timeseries.portfolio_returns_metric_weighted([SPY, FXE, GLD],
weight_function=np.std,
weight_func_transform=pf.timeseries.min_max_vol_bounds,
weight_function_window=126,
inverse_weight=True)
to_plot = ['SPY', 'GLD', 'FXE'] + ["port_ret"]
data_df[to_plot].apply(pf.timeseries.cum_returns).plot()
pf.timeseries.perf_stats(data_df['port_ret'])
stocks_port, data_df = pf.timeseries.portfolio_returns_metric_weighted([SPY, FXE, GLD],
weight_function=np.std,
weight_func_transform=pf.timeseries.bucket_std,
weight_function_window=126,
inverse_weight=True)
to_plot = ['SPY', 'GLD', 'FXE'] + ["port_ret"]
data_df[to_plot].apply(pf.timeseries.cum_returns).plot()
pf.timeseries.perf_stats(data_df['port_ret'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3. Dimensionality reduction
Step2: Correlation between variables
Step3: Revenue, employees and assets are highly correlated.
Step4: 3.1 Combining variables
Step5: 3.2 PCA
Step6: 3.3 Factor analysis
Step7: Difference between FA and PCA
Step8: Linear regression (to compare)
Step9: SVR
Step10: Lasso
Step11: Summary
|
<ASSISTANT_TASK:>
Python Code:
##Some code to run at the beginning of the file, to be able to show images in the notebook
##Don't worry about this cell
#Print the plots in this screen
%matplotlib inline
#Be able to plot images saved in the hard drive
from IPython.display import Image
#Make the notebook wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
import seaborn as sns
import pylab as plt
import pandas as pd
import numpy as np
import scipy.stats
import statsmodels.formula.api as smf
import sklearn
from sklearn.model_selection import train_test_split
#Read data
df_companies = pd.read_csv("data/big3_position.csv",sep="\t")
df_companies["log_revenue"] = np.log10(df_companies["Revenue"])
df_companies["log_assets"] = np.log10(df_companies["Assets"])
df_companies["log_employees"] = np.log10(df_companies["Employees"])
df_companies["log_marketcap"] = np.log10(df_companies["MarketCap"])
#Keep only industrial companies
df_companies = df_companies.loc[:,["log_revenue","log_assets","log_employees","log_marketcap","Company_name","TypeEnt"]]
df_companies = df_companies.loc[df_companies["TypeEnt"]=="Industrial company"]
#Dropnans
df_companies = df_companies.replace([np.inf,-np.inf],np.nan)
df_companies = df_companies.dropna()
df_companies.head()
# Compute the correlation matrix
corr = df_companies.corr()
# Generate a mask for the upper triangle (hide the upper triangle)
mask = np.zeros_like(corr, dtype=bool)  # np.bool is deprecated/removed in recent numpy
mask[np.triu_indices_from(mask)] = True
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, square=True,linewidths=.5,cmap="YlOrRd",vmin=0,vmax=1)
plt.show()
mod = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_companies)
res = mod.fit()
print(res.summary())
#The residuals are fine
plt.figure(figsize=(4,3))
sns.regplot(res.predict(),df_companies["log_marketcap"] -res.predict())
#Get many models to see hwo coefficient changes
from statsmodels.iolib.summary2 import summary_col
mod1 = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_companies).fit()
mod2 = smf.ols(formula='log_marketcap ~ log_revenue + log_assets', data=df_companies).fit()
mod3 = smf.ols(formula='log_marketcap ~ log_employees + log_assets', data=df_companies).fit()
mod4 = smf.ols(formula='log_marketcap ~ log_assets', data=df_companies).fit()
mod5 = smf.ols(formula='log_marketcap ~ log_revenue + log_employees ', data=df_companies).fit()
mod6 = smf.ols(formula='log_marketcap ~ log_revenue ', data=df_companies).fit()
mod7 = smf.ols(formula='log_marketcap ~ log_employees ', data=df_companies).fit()
output = summary_col([mod1,mod2,mod3,mod4,mod5,mod6,mod7],stars=True)
print(mod1.rsquared_adj,mod2.rsquared_adj,mod3.rsquared_adj,mod4.rsquared_adj,mod5.rsquared_adj,mod6.rsquared_adj,mod7.rsquared_adj)
output
X = df_companies.loc[:,["log_revenue","log_employees","log_assets"]]
X.head(2)
#Let's scale all the columns to have mean 0 and std 1
from sklearn.preprocessing import scale
X_to_combine = scale(X)
X_to_combine
#In this case we sum them together
X_combined = np.sum(X_to_combine,axis=1)
X_combined
#Add a new column with our combined variable and run regression
df_companies["combined"] = X_combined
print(smf.ols(formula='log_marketcap ~ combined ', data=df_companies).fit().summary())
#Do the fitting
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
new_X = pca.fit_transform(X)
print("Explained variance")
print(pca.explained_variance_ratio_)
print()
print("Weight of components")
print(["log_revenue","log_employees","log_assets"])
print(pca.components_)
print()
new_X
#Create our new variables (2 components, so 2 variables)
df_companies["pca_x1"] = new_X[:,0]
df_companies["pca_x2"] = new_X[:,1]
print(smf.ols(formula='log_marketcap ~ pca_x1 + pca_x2 ', data=df_companies).fit().summary())
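# Sanity check (an added sketch, not part of the original analysis): the PCA
# explained-variance ratios can be reproduced directly from the eigenvalues of
# the covariance matrix of X. The first two ratios should match
# pca.explained_variance_ratio_ (the PCA above kept only 2 components).
import numpy as np
cov = np.cov(np.asarray(X), rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues sorted descending
print(eigvals / eigvals.sum())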
print("Before")
sns.lmplot("log_revenue","log_assets",data=df_companies,fit_reg=False)
print("After")
sns.lmplot("pca_x1","pca_x2",data=df_companies,fit_reg=False)
from sklearn.decomposition import FactorAnalysis
fa = FactorAnalysis(n_components=2)
new_X = fa.fit_transform(X)
print("Weight of components")
print(["log_revenue","log_employees","log_assets"])
print(fa.components_)
print()
new_X
#New variables
df_companies["fa_x1"] = new_X[:,0]
df_companies["fa_x2"] = new_X[:,1]
print(smf.ols(formula='log_marketcap ~ fa_x1 + fa_x2 ', data=df_companies).fit().summary())
print("After")
sns.lmplot("fa_x1","fa_x2",data=df_companies,fit_reg=False)
Image(url="http://www.holehouse.org/mlclass/07_Regularization_files/Image.png")
from sklearn.model_selection import train_test_split
y = df_companies["log_marketcap"]
X = df_companies.loc[:,["log_revenue","log_employees","log_assets"]]
X.head(2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
X_train.head()
df_train = X_train.copy()
df_train["log_marketcap"] = y_train
df_train.head()
mod = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_train).fit()
print("log_revenue log_employees log_assets ")
print(mod.params.values[1:])
from sklearn.svm import SVR
clf = SVR(C=0.1, epsilon=0.2,kernel="linear")
clf.fit(X_train, y_train)
print("log_revenue log_employees log_assets ")
print(clf.coef_)
from sklearn import linear_model
reg = linear_model.Lasso(alpha = 0.01)
reg.fit(X_train,y_train)
print("log_revenue log_employees log_assets ")
print(reg.coef_)
print(["SVR","Lasso","Linear regression"])
err1,err2,err3 = sklearn.metrics.mean_squared_error(clf.predict(X_test),y_test),sklearn.metrics.mean_squared_error(reg.predict(X_test),y_test),sklearn.metrics.mean_squared_error(mod.predict(X_test),y_test)
print(err1,err2,err3)
print(["SVR","Lasso","Linear regression"])
err1,err2,err3 = sklearn.metrics.r2_score(clf.predict(X_test),y_test),sklearn.metrics.r2_score(reg.predict(X_test),y_test),sklearn.metrics.r2_score(mod.predict(X_test),y_test)
print(err1,err2,err3)
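# The regularisation strength above (alpha = 0.01) was fixed by hand; here is a
# sketch of choosing it by cross-validation instead (the grid values are
# illustrative, not from the original analysis):
from sklearn.model_selection import GridSearchCV
from sklearn import linear_model
lasso_cv = GridSearchCV(linear_model.Lasso(), {"alpha": [0.001, 0.01, 0.1, 1.0]}, cv=5)
lasso_cv.fit(X_train, y_train)
print(lasso_cv.best_params_, lasso_cv.best_estimator_.coef_)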
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next Character Prediction with RNN's
Step2: In the above code, we reformatted X_train, X_val and X_test into a time-windowed format so that they are suitable for use in RNNs.
Step3: I simply modified the code for CaptioningRNN to get rid of the initial hidden state that was fed from the CNN (all zeros are used instead), and also removed the word-embedding layer, since with characters we can simply use their ASCII codes.
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.rnn_layers import *
from cs231n.captioning_solver import *
from cs231n.classifiers.rnn import *
from cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions
from cs231n.image_utils import image_from_url
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
from metu.data_utils import load_nextchar_dataset, plain_text_file_to_dataset
# Load the TEXT data
# If your memory turns out to be sufficient, try the following:
#def get_nextchar_data(training_ratio=0.6, val_ratio=0.1):
def get_nextchar_data(training_ratio=0.1, test_ratio=0.06, val_ratio=0.01):
# Load the nextchar training data
X, y = load_nextchar_dataset(nextchar_datafile)
# Subsample the data
length=len(y)
num_training=int(length*training_ratio)
num_val = int(length*val_ratio)
num_test = min((length-num_training-num_val), int(length*test_ratio))
mask = range(num_training-1)
X_train = X[mask]
y_train = y[mask]
mask = range(num_training, num_training+num_test)
X_test = X[mask]
y_test = y[mask]
mask = range(num_training+num_test, num_training+num_test+num_val)
X_val = X[mask]
y_val = y[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
nextchar_datafile = 'metu/dataset/nextchar_data.pkl'
input_size = 5 # Size of the input of the network
#plain_text_file_to_dataset("metu/dataset/ince_memed_1.txt", nextchar_datafile, input_size)
plain_text_file_to_dataset("metu/dataset/shakespeare.txt", nextchar_datafile, input_size)
X_train, y_train, X_val, y_val, X_test, y_test = get_nextchar_data()
NX_train = np.zeros((X_train.shape[0], input_size+1, 1))
for i in xrange(X_train.shape[0]):
for j in xrange(input_size):
NX_train[i,j,0] = X_train[i,j]
NX_train[i,input_size,0] = y_train[i]
NX_test = np.zeros((X_test.shape[0], input_size+1, 1))
for i in xrange(X_test.shape[0]):
for j in xrange(input_size):
NX_test[i,j,0] = X_test[i,j]
NX_test[i,input_size,0] = y_test[i]
NX_val = np.zeros((X_val.shape[0], input_size+1, 1))
for i in xrange(X_val.shape[0]):
for j in xrange(input_size):
NX_val[i,j,0] = X_val[i,j]
NX_val[i,input_size,0] = y_val[i]
X_train, X_val, X_test = NX_train, NX_val, NX_test
print "Number of instances in the training set: ", len(X_train)
print "Number of instances in the validation set: ", len(X_val)
print "Number of instances in the testing set: ", len(X_test)
# We have loaded the dataset. That wasn't difficult, was it? :)
# Let's look at a few samples
#
from metu.data_utils import int_list_to_string, int_to_charstr
print "Input - Next char to be predicted"
for i in range(1,10):
print int_list_to_string(X_train[i]) + " - " + int_list_to_string(y_train[i])
small_rnn_model = NextCharRNN(
cell_type='rnn',
input_dim=input_size,
hidden_dim=512,
charvec_dim=1,
)
small_rnn_solver = NextCharSolver(small_rnn_model, X_train,
update_rule='adam',
num_epochs=50,
batch_size=100,
optim_config={
'learning_rate': 1e-2,
},
lr_decay=0.95,
verbose=True, print_every=100,
)
small_rnn_solver.train()
# Plot the training losses
plt.plot(small_rnn_solver.loss_history)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Training loss history')
plt.show()
mbs=10
idx = np.random.choice(len(X_train), mbs)
minibatch = X_train[idx]
next_chars = small_rnn_model.sample(minibatch, 6)
print 'Training Data:'
for i in xrange(mbs):
print 'Predicted string:', int_list_to_string(next_chars[i,:])
print 'Real string:', int_list_to_string(minibatch[i,:])
print
idx = np.random.choice(len(X_val), mbs)
minibatch = X_val[idx]
next_chars = small_rnn_model.sample(minibatch, 6)
print 'Validation Data:'
for i in xrange(mbs):
print 'Predicted string:', int_list_to_string(next_chars[i,:])
print 'Real string:', int_list_to_string(minibatch[i,:])
print
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color='blue'>Exercise 1</font>
Step2: B
Step3: B
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
# Necessary libraries
from numpy.random import shuffle, randint, choice
lista = []
for i in range (1,1001):
numero = randint (1,7)
lista.append(numero)
plt.hist(lista, 6, density=True)  # 'normed' was removed in newer matplotlib
plt.axis([1,6,0,0.25])
plt.xlabel('Die value')
plt.ylabel('Frequency')
plt.show()
#a
soma = 0        # number of rolls whose faces sum to 7
n_trials = 1000
for i in range(n_trials):
    p1 = randint(1, 7)
    p2 = randint(1, 7)
    if p1 + p2 == 7:
        soma += 1
# Note: the original loop incremented the counter a second time whenever a 7
# occurred, which silently skipped a trial and biased the estimate.
print(soma / n_trials)
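# Analytic check (added for reference): 6 of the 36 equally likely outcomes
# sum to 7, so the estimate above should be close to 1/6.
print(6 / 36)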
cont = 0   # wins if the player stays with door 0
b = 0      # wins if the player switches
n_trials = 10000
for i in range(n_trials):
    lista = ['g', 'g', 'c']
    shuffle(lista)
    # The player always picks door 0; the host opens a goat door among doors 1 and 2
    if lista[1] == 'c':
        del lista[2]
    elif lista[2] == 'c':
        del lista[1]
    else:
        # the car is behind door 0, so the host opens door 1 or 2 at random
        # (numpy's randint excludes the upper bound, so randint(1, 3) gives 1 or 2)
        x = randint(1, 3)
        del lista[x]
    if lista[0] == 'c':
        cont += 1
    else:
        b += 1
print(cont / n_trials)   # P(win by staying)
print(b / n_trials)      # P(win by switching)
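# Analytic check (added for reference): staying wins only when the initial pick
# was the car (probability 1/3); switching wins otherwise (probability 2/3).
print(1 / 3, 2 / 3)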
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define BKE Function
Step2: Define Symbolic Variables
Step3: Dynamics Analysis
Step4: Define Points of Interest
Step5: Creation of the Bodies
Step6: Pendulum Dynamics
Step7: Below is the equation describing the rotational motion of the pendulum body. Note that the angular momentum and torques are about the body's center of mass. It is also important to note that $J_p$ is about the center of mass of the pendulum.
Step8: Wheel Dynamics
Step9: Motor Dynamics
Step10: Constraints on the System
Step11: Gather All EOM's and Solve the System
Step12: Below the system of equations is solved for the following variables
Step13: Now we have solved for each of the unknown variables in terms of system properties and theta,
Step14: Next, to make things look a little nicer, create a dict that can replace f(t) with a new variable that doesn't have the (t)
Step15: Create state vector X, input vector U, derivative of state vector Xdot
Step16: Define the state space matrices A, B, C, and D. They are defined by Jacobians; a Jacobian is the matrix of partial derivatives of each function with respect to all independent variables
Step17: These matrices are still in terms of trig functions and angular rates squared and are thus non-linear. However, we can assume the system behaves linearly about the operating point, and so evaluate the elements of the state space matrices at the operating point.
Step18: Figure out pole zero cancellation in python control toolbox.
Step19: System Identification - Pendulum
|
<ASSISTANT_TASK:>
Python Code:
import sympy #symbolic algebra library
import sympy.physics.mechanics as mech
import control #control analysis library
sympy.init_printing(use_latex='mathjax')
from IPython.display import display
%matplotlib inline
%load_ext autoreload
%autoreload 2
import px4_logutil
import pylab as pl
def bke(vector, frame_i, frame_b, t):
return (vector.diff(t, frame_b) + frame_b.ang_vel_in(frame_i).cross(vector))
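# The function above implements the transport theorem (the "basic kinematic
# equation"): for any vector v, an inertial frame i and a rotating frame b,
#
#     (d/dt v)_i = (d/dt v)_b + omega_{b/i} x v
#
# i.e. the time derivative seen in the inertial frame equals the derivative
# taken in the body frame plus the cross product of the body frame's angular
# velocity (relative to the inertial frame) with the vector itself.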
T, r, m_w, m_p, l, F, x, g, alpha, theta, t, R_x, R_z, N, J_p, J_w, v_x, omega, k_emf, b_damp, V, J_motor, a = \
sympy.symbols('T r m_w m_p l F x g alpha theta t R_x R_z N J_p J_w v_x omega k_emf b_damp V J_motor a')
frame_i = mech.ReferenceFrame('i') #inertial frame
frame_b = frame_i.orientnew('b', 'Axis', [theta(t), frame_i.y]) #fixed in pendulum
frame_w = frame_b.orientnew('w', 'Axis', [-alpha(t), frame_i.y]) #fixed in wheel
point_o = mech.Point('o')
point_o.set_vel(frame_i, 0) #point o is inertially fixed
point_W = point_o.locatenew('W', frame_i.x*x(t)) #wheel c.m.
point_W.set_vel(frame_b, 0) #point W is fixed in pendulum frame, too
point_W.set_vel(frame_i, point_W.pos_from(point_o).diff(t, frame_i))
point_P = point_W.locatenew('P', frame_b.z*(-l)) #pendulum c.m.
point_P.set_vel(frame_b, 0)
point_P.v2pt_theory(point_W, frame_i, frame_b);
# Wheel Creation
J_wheel = mech.inertia(frame_w, 0, J_w, 0)
wheel = mech.RigidBody('wheel', point_W, frame_w, m_w, (J_wheel, point_W))
# Pendulum Creation
J_pend = mech.inertia(frame_b, 0, J_p, 0)
pend = mech.RigidBody('pend', point_P, frame_b, m_p, (J_pend, point_P)) #change inertia point to point_p
# Pendulum F=ma equation of motion
eom_pend_Newt = bke(pend.linear_momentum(frame_i), frame_i, frame_b, t) \
- (R_x(t)*frame_i.x) \
- (-R_z(t)*frame_i.z + m_p*g*frame_i.z)
eom_pend_Newt = eom_pend_Newt.simplify()
#Pendulum Euler's Law
eom_pend_Euler = bke(pend.angular_momentum(point_P, frame_i), frame_i, frame_b, t) \
- R_x(t)*sympy.cos(theta(t))*(l)*frame_b.y \
- R_z(t)*sympy.sin(theta(t))*(l)*frame_b.y - (T(t)*frame_b.y)
eom_pend_Newt
eom_pend_Euler
# Wheel F=ma equation of motion, with reaction force at pin included
eom_wheel_Newt = wheel.linear_momentum(frame_i).diff(t, frame_i) \
- (F(t)*frame_i.x) - (-R_x(t)*frame_i.x) \
- R_z(t)*frame_i.z - (-N(t)*frame_i.z) - m_w*g*frame_i.z
#Wheel Euler's Law
eom_wheel_Euler = bke(wheel.angular_momentum(point_W, frame_i), frame_i, frame_w, t) \
- (-T(t)*frame_w.y) - (F(t)*r*frame_w.y)
eom_wheel_Newt
eom_wheel_Euler
eom_motor = T(t) + k_emf*V(t) - b_damp*alpha(t).diff(t) - J_motor*alpha(t).diff(t,2)
eom_motor
no_slip = r*(alpha(t) - theta(t)) - x(t) #no slip of the wheels
no_slip
eoms = sympy.Matrix([eom_pend_Newt.dot(frame_i.x), eom_wheel_Newt.dot(frame_i.x), #in the x direction
eom_pend_Euler.dot(frame_i.y), eom_wheel_Euler.dot(frame_i.y), #in the y direction (moments)
eom_pend_Newt.dot(frame_i.z), eom_wheel_Newt.dot(frame_i.z), #in the z direction
eom_motor, #the equation of motion for the motor
no_slip,
no_slip.diff(t),
no_slip.diff(t,2)]) #the constraint equation
eoms #display all 10 eoms
eom_sol = sympy.solve(eoms, [T(t), N(t), R_x(t), R_z(t), F(t), theta(t).diff(t,2), alpha(t).diff(t,2), \
x(t).diff(t,2), alpha(t).diff(t), alpha(t)], simplify=False)
#eom_sol
#simp_assump = {J_motor:0, J_w:0, V(t):0, x(t).diff(t):0, theta(t).diff(t):a, theta(t):0, a:theta(t).diff(t)}
simp_assump = {J_motor:0, J_w:0, V(t):0, b_damp:0}  # the input voltage is V(t); V(x) was a typo
theta_ddot = eom_sol[theta(t).diff(t,2)].expand().ratsimp().collect([theta(t), x(t), V(t), theta(t).diff(t), x(t).diff(t)], sympy.factor)
theta_ddot = theta_ddot.subs(simp_assump)
theta_ddot = theta_ddot.simplify()
theta_ddot
x_ddot = eom_sol[x(t).diff(t,2)].expand().ratsimp().collect([theta(t), x(t), V(t), theta(t).diff(t), x(t).diff(t)], sympy.factor)
x_ddot = x_ddot.subs(simp_assump)
x_ddot
remove_t = {x(t).diff(t): v_x, x(t): x, theta(t).diff(t): omega, theta(t): theta, alpha(t): alpha, V(t): V, T(t): T} #defines the dict
X = sympy.Matrix([x(t), x(t).diff(t), theta(t), theta(t).diff(t)]).subs(remove_t) #state vector
U = sympy.Matrix([V(t)]).subs(remove_t) #Input torque
Xdot = sympy.Matrix([x(t).diff(t), x_ddot, theta(t).diff(t), theta_ddot]).subs(remove_t)
X, U
A = Xdot.jacobian(X)
B = Xdot.jacobian(U)
C = X.jacobian(X)
D = X.jacobian(U)
ss = [A, B, C, D]
stop_eq_point = {T: 0, omega: 0, theta: 0, v_x: 0} #a dict of the equilibrium points when the segway is not moving
ss0 = [A.subs(stop_eq_point), B.subs(stop_eq_point), C, D]
ss0
sub_const = {
J_p: 2,
b_damp: 0,
k_emf: 1,
g: 9.8,
l: 1,
r: 0.1,
m_p: 1,
}
import pylab as pl
ss0
sys0 = control.ss(*[pl.array(mat_i.subs(sub_const)).astype(float) for mat_i in ss0])
sys0
def tf_clean(tf, tol=1e-3):
import copy
num = copy.deepcopy(tf.num)
den = copy.deepcopy(tf.den)
for i_u in range(tf.inputs):
for i_y in range(tf.outputs):
num[i_y][i_u] = pl.where(abs(num[i_y][i_u]) < tol, pl.zeros(num[i_y][i_u].shape), num[i_y][i_u])
den[i_y][i_u] = pl.where(abs(den[i_y][i_u]) < tol, pl.zeros(den[i_y][i_u].shape), den[i_y][i_u])
return control.tf(num,den)
tf_20 = tf_clean(control.ss2tf(sys0[2,0]))
tf_20
tf_20 = control.tf([-11],[1,0,-9.8])
tf_20
control.bode(tf_20, omega=pl.logspace(-2,4));
control.rlocus(tf_20);
pl.axis(20*pl.array([-1,1,-1,1]))
K, S, E = control.lqr(sys0.A, sys0.B, pl.eye(sys0.A.shape[0]), pl.eye(sys0.B.shape[1]))
K, S, E
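# Quick sanity check (an added sketch, not in the original notebook): the
# closed-loop matrix A - B*K should have all its eigenvalues in the left half
# plane, and they should match the E returned by control.lqr above.
print pl.linalg.eig(sys0.A - sys0.B.dot(K))[0]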
eoms
import scipy.integrate
# sim = scipy.integrate.ode(...)  # the ODE solver is constructed below, once f_eval is defined
print Xdot
print X
print U
x_vect = sympy.DeferredVector('x')
u_vect = sympy.DeferredVector('u')
ss_sub = {X[i]:x_vect[i] for i in range(len(X))}
ss_sub.update({U[i]:u_vect[i] for i in range(len(U))})
ss_sub
Xdot.subs(sub_const).subs(ss_sub)
print x
print Xdot.subs(sub_const)
import numpy
f_eval = sympy.lambdify([t, x_vect, u_vect], Xdot.subs(sub_const).subs(ss_sub),
[{'ImmutableMatrix': numpy.array}, 'numpy'])
f_eval(0, pl.array([0,0,0,0]), pl.array([0]))
sim = scipy.integrate.ode(f_eval)
x0 = [0.5,0,0.5,0]
u = 0
sim.set_initial_value(x0)
sim.set_f_params(u)
dt = 0.01
tf = 2
data = {
't': [],
'u': [],
'y': [],
'x': [],
}
while sim.t + dt < tf:
x = sim.y
y = x # fix
u = -K.dot(x)
sim.set_f_params(u)
sim.integrate(sim.t + dt)
data['t'].append(sim.t)
data['u'].append(u)
data['x'].append(x)
data['y'].append(y)
h_nl = pl.plot(data['t'], data['y'], 'r-');
sysc = sys0.feedback(K)
t, y, x = control.forced_response(sysc, X0=x0, T=pl.linspace(0,tf), transpose=True)
h_l = pl.plot(t, y, 'k--');
pl.legend([h_nl[0], h_l[0]], ['non-linear', 'linear'], loc='best')
pl.grid()
eom_pend_Euler
pend_sysID_const = theta(t) + alpha(t)
pend_sysID_const
sol_pend_sysID = sympy.solve(
[pend_sysID_const,pend_sysID_const.diff(t),pend_sysID_const.diff(t, 2)] +
list(eom_pend_Newt.to_matrix(frame_i)) +
list(eom_pend_Euler.to_matrix(frame_i)) + [eom_motor],
[R_x(t), R_z(t), theta(t).diff(t,2), T(t), alpha(t).diff(t, 2), alpha(t).diff(t)])
sol_pend_sysID
pend_sysID = (sol_pend_sysID[theta(t).diff(t,2)].subs({x(t).diff(t,2):0, V(t):0}) - theta(t).diff(t,2))
pend_sysID
eom_motor
motor_sysID = eom_motor.subs({T(t):0})
motor_sysID
with open('data/segway/motor_sysid/sess001/log001.csv', 'r') as loadfile:
data1 = px4_logutil.px4_log_to_namedtuple(loadfile)
data = data1
def do_plotting(data):
i_start = 400
i_end = -1
t = (data.TIME.StartTime[i_start:i_end] - data1.TIME.StartTime[i_start])/1e6
V = data.BATT.V[i_start:i_end]
#pl.plot(t, data.ENCD.cnt0[i_start:i_end])
#pl.plot(t, data.ENCD.vel0[i_start:i_end])
pl.plot(t, V*(data.OUT1.Out0[i_start:i_end] - 1500)/1500)
do_plotting(data1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run CHILD in PyMT
Step2: You can now see the help information for Child. This time, have a look under the Parameters section (you may have to scroll down - it's the section after the citations). The Parameters section describes optional keywords that you can pass the the setup method. In the previous example we just used defaults. Below we'll see how to set input file parameters programmatically through keywords.
Step3: We can change input file parameters through setup keywords. The help description above gives a brief description of each of these. For this example we'll change the grid spacing, the size of the domain, and the duration of the simulation.
Step4: The setup folder now only contains the child input file.
Step5: Again, initialize and run the model for 10 time steps.
Step6: This time around it's not quite as clear what the units of time are. We can check in the same way as before.
Step7: Update until some time in the future. Notice that, in this case, we update to a partial time step. Child is fine with this however some other models may not be. For models that can not update to times that are not full time steps, PyMT will advance to the next time step and interpolate values to the requested time.
Step8: Child offers different output variables but we get them in the same way as before.
Step9: We can query each input and output variable. PyMT attaches a dictionary to each component called var that provides information about each variable. For instance we can see that "land_surface__elevation" has units of meters, is an input and output variable, and is defined on the nodes of grid with id 0.
Step10: If we plot this variable, we can visually see the unstructured triangular grid that Child has decomposed its domain into.
Step11: As with the var attribute, PyMT adds a dictionary, called grid, to components that provides a description of each of the model's grids. Here we can see how the x and y positions of each grid node, and how nodes connect to one another to form faces (the triangles in this case). Grids are described using the ugrid conventions.
Step12: Child initializes its elevations with random noise centered around 0. We would like instead to give it elevations that have some land and some sea. First we'll get the x and y coordinates for each node along with their elevations.
Step13: All nodes above y=y_shore will be land, and all nodes below y=y_shore will be sea.
Step14: Just to verify we set things up correctly, we'll create a plot.
Step15: To get things going, we'll run the model for 5000 years and see what things look like.
Step16: We'll have some fun now by adding a simple uplift component. We'll run the component for another 5000 years but this time uplifting a corner of the grid by dz_dt.
Step17: A portion of the grid was uplifted and channels have begun eroding into it.
Step18: We now stop the uplift and run it for an additional 5000 years.
|
<ASSISTANT_TASK:>
Python Code:
# Some magic to make plots appear within the notebook
%matplotlib inline
import numpy as np # In case we need to use numpy
import pymt.models
model = pymt.models.Child()
help(model)
rm -rf _model # Clean up for the next step
config_file, initdir = model.setup('_model',
grid_node_spacing=750.,
grid_x_size=20000.,
grid_y_size=40000.,
run_duration=1e6)
ls _model
model.initialize(config_file, initdir)
for t in range(10):
model.update()
print(model.time)
model.time_units
model.update_until(201.5, units='year')
print(model.time)
model.output_var_names
model.get_value('land_surface__elevation')
model.var['land_surface__elevation']
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
model.grid[0]
x, y = model.get_grid_x(0), model.get_grid_y(0)
z = model.get_value('land_surface__elevation')
y_shore = 15000.
z[y < y_shore] -= 100
z[y >= y_shore] += 100
model.set_value('land_surface__elevation', z)
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
model.update_until(5000.)
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
dz_dt = .02
now = model.time
times, dt = np.linspace(now, now + 5000., 50, retstep=True)
for time in times:
model.update_until(time)
z = model.get_value('land_surface__elevation')
z[(y > 15000.) & (x > 10000.)] += dz_dt * dt
model.set_value('land_surface__elevation', z)
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
model.update_until(model.time + 5000.)
model.quick_plot('land_surface__elevation', edgecolors='k', vmin=-200, vmax=200, cmap='BrBG_r')
model.get_value('channel_water_sediment~bedload__mass_flow_rate')
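# An added sketch (mirroring the elevation plots above): the bedload flux is
# defined on the same grid, so it can be visualised with the same helper.
model.quick_plot('channel_water_sediment~bedload__mass_flow_rate')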
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Short documentation
Step2: Sign
Step3: Width
Step4: Precision
Step5: Width + Precision
Step6: Type
Step7: Float
Step8: Comparison between {
Step9: Comparison between {
Step10: More examples with {
|
<ASSISTANT_TASK:>
Python Code:
import math
"{:03}".format(1)
"{:.<9}".format(3)
"{:.<9}".format(11)
"{:.>9}".format(3)
"{:.>9}".format(11)
"{:.=9}".format(3)
"{:.=9}".format(11)
"{:.^9}".format(3)
"{:.^9}".format(11)
"{:+}".format(3)
"{:+}".format(-3)
"{:-}".format(3)
"{:-}".format(-3)
"{: }".format(3)
"{: }".format(-3)
"{:3}".format(3)
"{:3}".format(11)
"{}".format(math.pi)
"{:.2f}".format(math.pi)
"{:9.4f}".format(math.pi)
"{:9.4f}".format(12.123456789)
"{:}".format(21)
"{:b}".format(21)
"{:#b}".format(21)
"{:c}".format(21)
"{:d}".format(21)
"{:o}".format(21)
"{:#o}".format(21)
"{:x}".format(21)
"{:X}".format(21)
"{:#x}".format(21)
"{:#X}".format(21)
"{:n}".format(21)
"{}".format(math.pi)
"{:e}".format(math.pi)
"{:E}".format(math.pi)
"{:f}".format(math.pi)
"{:F}".format(math.pi)
"{:g}".format(math.pi)
"{:G}".format(math.pi)
"{:n}".format(math.pi)
"{:%}".format(math.pi)
numbers = [1000000, 100000, 10000, 1000, 100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
for number in numbers:
print("{:f}".format(number), end="\t")
print("{:e}".format(number), end="\t")
print("{:g}".format(number))
numbers = [1000, 100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
for number in numbers:
print("{:.2f}".format(number), end="\t\t")
print("{:.2e}".format(number), end="\t")
print("{:.2g}".format(number))
numbers = [1000, 100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
for number in numbers:
print("{:g}".format(number), end="\t")
print("{:.3g}".format(number), end="\t")
print("{:.2g}".format(number), end="\t")
print("{:.1g}".format(number), end="\t")
print("{:.0g}".format(number))
numbers = [1234000, 123400, 12340, 1234, 123.4, 12.34, 1.234, 0.1234, 0.01234, 0.001234, 0.0001234, 0.00001234]
for number in numbers:
print("{:<10g}".format(number), end="\t")
print("{:<10.6g}".format(number), end="\t")
print("{:<10.5g}".format(number), end="\t")
print("{:<10.4g}".format(number), end="\t")
print("{:<10.3g}".format(number), end="\t")
print("{:<10.2g}".format(number), end="\t")
print("{:<10.1g}".format(number))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This converts a json object into a MARC class so the existing methods will work
Step2: This class has the base names of the files and my directory structure hard-coded in
Step3: Here's where we write it out.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import pymarc
import random
from bookwormMARC.bookwormMARC import BRecord
from bookwormMARC.bookwormMARC import parse_record
from bookwormMARC.hathi_methods import All_Hathi
from bookwormMARC.bookwormMARC import LCCallNumber
import bz2
import bookwormMARC
import sys
import os
from collections import defaultdict
#all_files = hathi_record_yielder()
import pymarc
import ujson as json
import gzip
all_hathi = All_Hathi("/home/bschmidt/data/hathi_metadata/")
demo = []
for i,entry in enumerate(all_hathi):
demo.append(entry)
if i > 10:
break
print demo
if __name__=="__main__":
all_hathi = All_Hathi("/home/bschmidt/data/hathi_metadata/")
    dump = gzip.open(os.path.expanduser("~/hathi_metadata/jsoncatalog_full.txt.gz"), "w")  # gzip.open does not expand '~' by itself
for i,vol in enumerate(all_hathi):
if i % 250000 == 0:
sys.stdout.write("Reading item no. " + str(i) + "\n")
        dump.write(json.dumps(vol) + "\n")
    dump.close()
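    # Quick sanity check (an added sketch): read the first record back from the
    # gzipped dump to confirm it round-trips through json.
    with gzip.open(os.path.expanduser("~/hathi_metadata/jsoncatalog_full.txt.gz")) as check:
        print json.loads(check.readline())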
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Split data into training and testing.
Step3: Learning a multiple regression model
Step4: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows
Step5: Making Predictions
Step6: Compute RSS
Step7: Test your function by computing the RSS on TEST data for the example model
Step8: Create some new features
Step9: Next create the following 4 new features as columns in both TEST and TRAIN data
Step10: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
Step11: Learning Multiple Models
Step12: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients
Step13: Quiz Question
Step14: Quiz Question
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
train_data,test_data = sales.random_split(.8,seed=0)
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
predictions = model.predict(data)
# Then compute the residuals/errors
residuals = outcome - predictions
# Then square and add them up
RSS = sum(residuals*residuals)
return(RSS)
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
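# For scale intuition (an added aside): convert the RSS above into an RMSE,
# i.e. a typical per-house dollar error on the test set.
import math
rmse = math.sqrt(rss_example_train / test_data.num_rows())
print rmse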
from math import log
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
# create the remaining 3 features in both TEST and TRAIN data
train_data['bed_bath_rooms'] = train_data['bedrooms']*train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms']*test_data['bathrooms']
train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x: log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x: log(x))
train_data['lat_plus_long'] = train_data['lat'] + train_data['long']
test_data['lat_plus_long'] = test_data['lat'] + test_data['long']
print 'bedrooms_squared %f' % round(sum(test_data['bedrooms_squared'])/len(test_data['bedrooms_squared']),2)
print 'bed_bath_rooms %f' % round(sum(test_data['bed_bath_rooms'])/len(test_data['bed_bath_rooms']),2)
print 'log_sqft_living %f' % round(sum(test_data['log_sqft_living'])/len(test_data['log_sqft_living']),2)
print 'lat_plus_long %f' % round(sum(test_data['lat_plus_long'])/len(test_data['lat_plus_long']),2)
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
# Learn the three models: (don't forget to set validation_set = None)
model_1 = graphlab.linear_regression.create(train_data, target = 'price', features = model_1_features,
validation_set = None)
model_2 = graphlab.linear_regression.create(train_data, target = 'price', features = model_2_features,
validation_set = None)
model_3 = graphlab.linear_regression.create(train_data, target = 'price', features = model_3_features,
validation_set = None)
# Examine/extract each model's coefficients:
print 'model 1'
model_1.get("coefficients")
print 'model 2'
model_2.get("coefficients")
print 'model 3'
model_3.get("coefficients")
# Compute the RSS on TRAINING data for each of the three models and record the values:
print 'model 1: %.9f model 2: %.9f model 3: %.9f' % (get_residual_sum_of_squares(model_1, train_data, train_data['price']),
                                                     get_residual_sum_of_squares(model_2, train_data, train_data['price']),
                                                     get_residual_sum_of_squares(model_3, train_data, train_data['price']))
# Compute the RSS on TESTING data for each of the three models and record the values:
print 'model 1: %.9f model 2: %.9f model 3: %.9f' % (get_residual_sum_of_squares(model_1, test_data, test_data['price']),
get_residual_sum_of_squares(model_2, test_data, test_data['price']),
get_residual_sum_of_squares(model_3, test_data, test_data['price']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Setting Parameters
Step3: Let's set all the values of the sun based on the nominal solar values provided in the units package.
Step4: And so that we can compare with measured/expected values, we'll observe the sun from the earth - with an inclination of 23.5 degrees and at a distance of 1 AU.
Step5: Checking on the set values, we can see the values were converted correctly to PHOEBE's internal units.
Step6: Running Compute
Step7: Now we run our model and store the mesh so that we can plot the temperature distributions and test the size of the sun versus known values.
Step8: Comparing to Expected Values
Step9: For a rotating sphere, the minimum radius should occur at the pole and the maximum should occur at the equator.
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_star(starA='sun')
print b['sun']
b.set_value('teff', 1.0*u.solTeff)
b.set_value('rpole', 1.0*u.solRad)
b.set_value('mass', 1.0*u.solMass)
b.set_value('period', 24.47*u.d)
b.set_value('incl', 23.5*u.deg)
b.set_value('distance', 1.0*u.AU)
print b.get_quantity('teff')
print b.get_quantity('rpole')
print b.get_quantity('mass')
print b.get_quantity('period')
print b.get_quantity('incl')
print b.get_quantity('distance')
b.add_dataset('lc', pblum=1*u.solLum)
b.run_compute(protomesh=True, pbmesh=True, irrad_method='none', distortion_method='rotstar')
axs, artists = b['protomesh'].plot(facecolor='teffs')
axs, artists = b['pbmesh'].plot(facecolor='teffs')
print "teff: {} ({})".format(b.get_value('teffs', dataset='pbmesh').mean(),
b.get_value('teff', context='component'))
print "rpole: {} ({})".format(b.get_value('rpole', dataset='pbmesh'),
b.get_value('rpole', context='component'))
print "rmin (pole): {} ({})".format(b.get_value('rs', dataset='pbmesh').min(),
b.get_value('rpole', context='component'))
print "rmax (equator): {} (>{})".format(b.get_value('rs', dataset='pbmesh').max(),
b.get_value('rpole', context='component'))
print "logg: {}".format(b.get_value('loggs', dataset='pbmesh').mean())
print "flux: {}".format(b.get_quantity('fluxes@model')[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Documentation
Step2: <a href="http
Step3: <i>"NumPy is an extension to the Python programming language, adding <b>support for large, multi-dimensional arrays and matrices</b>, along with a large library of <b>high-level mathematical functions</b> to operate on these arrays.
Step4: <a href="http
Step5: <i>matplotlib is a <b>plotting library</b> for the Python programming language and its numerical mathematics extension NumPy.
Step6: <a href='http
Step7: <i>"SciPy contains modules for <b>optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers</b> and other tasks common in science and engineering."</i>, Wikipedia
Step8: "SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible." SymPy
Step9: <a href='http
Step10: <i>"HDF5 is a <b>data model, library, and file format</b> for storing and managing data. It supports an <b>unlimited variety of datatypes</b>, and is designed for <b>flexible and efficient I/O and for high volume and complex data</b>. HDF5 is <b>portable and is extensible</b>, allowing applications to evolve in their use of HDF5. The HDF5 Technology suite includes tools and applications for managing, manipulating, viewing, and analyzing data in the HDF5 format."</i>, HDF Group
Step11: <a href='http
Step12: <i>"Cython is a compiled language that generates <b>CPython extension modules</b>. These extension modules can then be loaded and used by regular Python code using the import statement.
Step13: Examples
Step14: Simple Plot
Step15: Interactive
Step16: Subplots
Step17: Fitting
|
<ASSISTANT_TASK:>
Python Code:
import IPython
IPython.__version__
from IPython.display import YouTubeVideo
YouTubeVideo("05fA_DXgW-Y")
import numpy
numpy.__version__
from IPython.display import YouTubeVideo
YouTubeVideo("1zmV8lZsHF4")
import matplotlib
matplotlib.__version__
from IPython.display import YouTubeVideo
YouTubeVideo("MKucn8NtVeI")
import scipy
scipy.__version__
from IPython.display import YouTubeVideo
YouTubeVideo('0CFFTJUZ2dc')
from IPython.display import YouTubeVideo
YouTubeVideo('Lgp442bibDM')
import h5py
h5py.__version__
from IPython.display import YouTubeVideo
YouTubeVideo('nddj5OA8LJo')
import Cython
Cython.__version__
from IPython.display import YouTubeVideo
YouTubeVideo('gMvkiQ-gOW8')
# Import necassary packages
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# Set up matpltolib for nice plotting in the notebook
%matplotlib notebook
plt.style.use('seaborn-notebook')
x = np.linspace(0, 4 * np.pi, 200)
rad = x / np.pi
plt.figure(figsize=(12, 3))
line, = plt.plot(rad, 2 * np.sin(x))
plt.ylim(-5, 5)
plt.grid()
plt.tight_layout()
from ipywidgets import interact
@interact(a=(1, 4), f=(0.1, 2, 0.1), phi=(0, 2, 0.1))
def update(a=2, f = 1, phi=0):
line.set_ydata(a * np.sin((x + phi * np.pi) / f))
x = np.linspace(0, 4 * np.pi, 200)
rad = x / np.pi
#Create some noise
noise = 0.75 * np.random.randn(x.size)
# Define differnt harmonic functions
y0 = 1.0 * np.sin(x + 0) + noise
y1 = 1.5 * np.sin(x + np.pi / 2) + noise
y2 = 2.5 * np.sin(x + np.pi) + noise
# Plot everything
fig, axs = plt.subplots(3 , 1, figsize=(12, 6))
axs[0].plot(rad, y0, 'b.')
axs[0].set_xticks([])
axs[1].plot(rad, y1, 'g.')
axs[1].set_xticks([])
axs[2].plot(rad, y2, 'k.')
axs[2].set_xlabel('x / 2$\pi$')
for ax in axs:
ax.set_ylim(-5.5, 5.5)
plt.tight_layout(h_pad=0)
# Define the fit function
def sin(x, a, phi):
return a * np.sin(x + phi)
# Find the fit parameters
(a0, phi0), *err = curve_fit(sin, x, y0)
(a1, phi1), *err = curve_fit(sin, x, y1)
(a2, phi2), *err = curve_fit(sin, x, y2)
# Plot fits into subplots
axs[0].plot(rad, sin(x, a0, phi0), 'r--', lw=3, label='${:.2f} \cdot Sin(x + {:.2f}\pi$)'.format(a0, phi0 / np.pi))
axs[1].plot(rad, sin(x, a1, phi1), 'r--', lw=3, label='${:.2f} \cdot Sin(x + {:.2f}\pi$)'.format(a1, phi1 / np.pi))
axs[2].plot(rad, sin(x, a2, phi2), 'r--', lw=3, label='${:.2f} \cdot Sin(x + {:.2f}\pi$)'.format(a2, phi2 / np.pi))
for ax in axs:
ax.legend(loc=4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: An Introduction to Machine Learning with Scikit-learn
Step2: Generating data
Step3: To avoid overfitting, we split the data into a training set, used to train the algorithm, and a test set, used to evaluate its performance. There's no hard and fast rule about how big your training set should be, as this is highly problem-dependent. Here, we'll use 70% of the data as training data.
Step4: You should always rescale your features as many algorithms (including SVM and many neural network implementations) assume the features have zero mean and unit variance. They will likely underperform without scaling. In this example, the generated data are already scaled so it's unnecessary, but I leave this in to show you how it's done.
Step5: Now we can have a look at our training data, where I've coloured the points by the class they belong to.
Step6: Classification
Step7: This is the function that actually trains the classifier with our training data.
Step8: Now that the classifier is trained, we can use it to predict the classes of our test data and have a look at the accuracy.
Step9: But since accuracy is only part of the story, let's extract the probability of belonging to each class so that we can generate the ROC curve.
Step10: Optimising hyperparameters
Step11: Let's see if the accuracy has improved
Step12: The accuracy is more or less unchanged, but we do get a slightly better ROC curve
Step13: Using a different algorithm
Step14: You can do cross validation to tune the hyperparameters for SVM, but it takes a bit longer than for KNN.
Step15: You can see the more complicated decision tree learns the behaviour of the spurious outliers. It's easy to check for over-fitting, whatever metric you're using (in this case mean squared error) will show much higher performance on the training than on the test set.
Step16: We'll now use cross validation to automatically choose the hyperparameters and avoid over-fitting
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
from sklearn.datasets import make_circles
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeRegressor
import numpy as np
import time
import matplotlib.pyplot as plt
%matplotlib nbagg
def plot_roc(fpr, tpr):
    """Simple ROC curve plotting function.

    Parameters
    ----------
    fpr : array
        False positive rate
    tpr : array
        True positive rate
    """
plt.plot(fpr, tpr, lw=1.5)
plt.xlim([-0.05,1.05])
plt.ylim([-0.05,1.05])
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
# X is the array of features, y is the array of corresponding class labels
X, y = make_circles(n_samples=1000, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=0)
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
plt.figure()
plt.plot(X_train[y_train==0,0], X_train[y_train==0,1],'.')
plt.plot(X_train[y_train==1,0], X_train[y_train==1,1],'.')
plt.legend(('Class 0', 'Class 1'))
clf = KNeighborsClassifier()
print(clf)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
probs = clf.predict_proba(X_test)
fpr,tpr, thresh = roc_curve(y_test, probs[:,1], pos_label=1)
auc = roc_auc_score(y_test, probs[:,1])
print('Area under curve', auc)
plt.figure()
plot_roc(fpr, tpr)
t1 = time.time()
clf = KNeighborsClassifier()
# Define a grid of parameters over which to search, as a dictionary
params = {'n_neighbors':np.arange(1, 30, 1), 'weights':['distance', 'uniform']}
# cv=5 means we're doing 5-fold cross validation.
clf = GridSearchCV(clf, params, cv=5)
clf.fit(X_train, y_train)
print('Time taken',time.time()-t1,'seconds')
# We can see what were the best combination of parameters
print(clf.best_params_)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
probs = clf.predict_proba(X_test)
fpr_knn, tpr_knn, thresh = roc_curve(y_test, probs[:,1], pos_label=1)
auc_knn = roc_auc_score(y_test, probs[:,1])
print('Area under curve', auc_knn)
plt.figure()
plot_roc(fpr_knn, tpr_knn)
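# To visualise what the tuned classifier has learned (an added sketch),
# evaluate it on a grid over the feature plane and contour the predicted class.
import numpy as np
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure()
plt.contourf(xx, yy, Z, alpha=0.3)
plt.plot(X_train[y_train==0,0], X_train[y_train==0,1], '.')
plt.plot(X_train[y_train==1,0], X_train[y_train==1,1], '.')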
clf = SVC(kernel='rbf', probability=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
probs = clf.predict_proba(X_test)
fpr_svm, tpr_svm, thresh = roc_curve(y_test, probs[:,1], pos_label=1)
auc_svm = roc_auc_score(y_test, probs[:,1])
print('Area under curve', auc_svm)
plt.figure()
plot_roc(fpr_knn, tpr_knn)
plot_roc(fpr_svm, tpr_svm)
plt.legend(('KNN (%.3f)' %auc_knn, 'SVM (%.3f)' %auc_svm), loc='lower right')
np.random.seed(42)
x = np.linspace(-3,3, 100)
y = np.sin(x) + np.random.randn(len(x))*0.05
N = 25
outlier_ints = np.random.randint(0, len(x), N)
y[outlier_ints] += np.random.randn(N)*1
plt.figure()
plt.plot(x,y,'.')
plt.xlabel('x');
plt.ylabel('y');
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=42)
y_train = y_train[np.argsort(X_train)]
X_train.sort()
y_test = y_test[np.argsort(X_test)]
X_test.sort()
X_train = X_train[:, None] # sklearn doesn't like 1d X arrays
X_test = X_test[:, None]
dt1 = DecisionTreeRegressor(max_depth=10) # An overly complicated classifier
dt2 = DecisionTreeRegressor(max_depth=3) # A simpler classifier
dt1.fit(X_train, y_train)
dt2.fit(X_train, y_train)
y_train_1 = dt1.predict(X_train)
y_train_2 = dt2.predict(X_train)
y_test_1 = dt1.predict(X_test)
y_test_2 = dt2.predict(X_test)
plt.figure()
plt.plot(x,y,'.')
plt.plot(X_test, y_test_1, lw=1.5, alpha=0.5)
plt.plot(X_test,y_test_2, lw=1.5, alpha=0.5)
plt.xlabel('x')
plt.ylabel('y')
plt.legend(('Data', 'Max depth 10', 'Max depth 3'));
mse_train = np.mean((y_train-y_train_1)**2)
mse_test = np.mean((y_test-y_test_1)**2)
mse_train, mse_test
mse_train = np.mean((y_train-y_train_2)**2)
mse_test = np.mean((y_test-y_test_2)**2)
mse_train, mse_test
dt3 = GridSearchCV(DecisionTreeRegressor(), param_grid={'max_depth': np.arange(2,12)}, cv=5)
dt3.fit(X_train, y_train)
y_train_3 = dt3.predict(X_train)
y_test_3 = dt3.predict(X_test)
mse_train = np.mean((y_train-y_train_3)**2)
mse_test = np.mean((y_test-y_test_3)**2)
mse_train, mse_test
print(dt3.best_params_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in OpSim output for modern versions
Step2: Read in the OpSim DataBase into a pandas dataFrame
Step3: The opsim database is a large file (approx 4.0 GB), but still possible to read into memory on new computers. You usually only need the Summary Table, which is about 900 MB. If you are only interested in the Deep Drilling Fields, you can use the read_sql_query to only select information pertaining to Deep Drilling Observations. This has a memory footprint of about 40 MB.
Step4: Some properties of the OpSim Outputs
Step5: Construct our Summary
Step6: First Season
Step7: Example to obtain the observations in a 100 day period in a field
Step8: Plots
Step9: This is a DDF.
Step10: WFD field
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Required packages sqlachemy, pandas (both are part of anaconda distribution, or can be installed with a python installer)
# One step requires the LSST stack, can be skipped for a particular OPSIM database in question
import OpSimSummary.summarize_opsim as so
from sqlalchemy import create_engine
import pandas as pd
print so.__file__
# This step requires LSST SIMS package MAF. The main goal of this step is to set DD and WFD to integer keys that
# label an observation as Deep Drilling or for Wide Fast Deep.
# If you want to skip this step, you can use the next cell by uncommenting it, and commenting out this cell, if all you
# care about is the database used in this example. But there is no guarantee that the numbers in the cell below will work
# on other versions of opsim database outputs
#from lsst.sims.maf import db
#from lsst.sims.maf.utils import opsimUtils
# DD = 366
# WFD = 364
# Change dbname to point at your own location of the opsim output
dbname = '/Users/rbiswas/data/LSST/OpSimData/enigma_1189_sqlite.db'
#opsdb = db.OpsimDatabase(dbname)
#propID, propTags = opsdb.fetchPropInfo()
#DD = propTags['DD'][0]
#WFD = propTags['WFD'][0]
engine = create_engine('sqlite:///' + dbname)
# Load to a dataframe
# Summary = pd.read_hdf('storage.h5', 'table')
Summary = pd.read_sql_table('Summary', engine, index_col='obsHistID')
# EnigmaDeep = pd.read_sql_query('SELECT * FROM SUMMARY WHERE PROPID is 366', engine)
# EnigmaD = pd.read_sql_query('SELECT * FROM SUMMARY WHERE PROPID is 366', engine)
EnigmaCombined = Summary.query('propID == [364, 366]')# & (fieldID == list(EnigmaDeep.fieldID.unique().values)')
EnigmaCombined.propID.unique()
EnigmaCombined.fieldID.unique().size
Full = so.SummaryOpsim(EnigmaCombined)
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(111, projection='mollweide');
fig = Full.showFields(ax=fig.axes[0], marker='o', s=1)
fieldList = Full.fieldIds
len(fieldList)
selected = Full.df.query('fieldID == 290 and expMJD > 49490 and expMJD < 49590')
selected.head()
# write to disk in ascii file
selected.to_csv('selected_obs.csv', index='obsHistID')
# write to disk in ascii file with selected columns
selected[['expMJD', 'night', 'filter', 'fiveSigmaDepth', 'filtSkyBrightness', 'finSeeing']].to_csv('selected_cols.csv', index='obsHistID')
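# A quick pandas summary of the selection (an added illustration): the number
# of visits per filter and the median five-sigma depth in each.
print selected.groupby('filter')['fiveSigmaDepth'].agg(['size', 'median'])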
fig_firstSeason, firstSeasonCadence = Full.cadence_plot(fieldList[0], observedOnly=False, sql_query='night < 366')
fig_firstSeason_1, firstSeasonCadence_1 = Full.cadence_plot(fieldList[0], observedOnly=True, sql_query='night < 366')
fig_firstSeason_main, firstSeasonCadence_main = Full.cadence_plot(fieldList[1], observedOnly=False, sql_query='night < 366')
fig_long, figCadence_long = Full.cadence_plot(fieldList[0], observedOnly=False, sql_query='night < 3655', nightMax=3655)
fig_2, figCadence_2 = Full.cadence_plot(fieldList[0], observedOnly=False,
sql_query='night < 720', nightMax=720, nightMin=365)
fig_SN, SN_matrix = Full.cadence_plot(fieldList[0], observedOnly=False, mjd_center=49540., mjd_range=[-30., 50.])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the Data
Step2: Formatting & Identifying the Data
Step3: Exploring the Data
Step4: Because the TxDot Estimates are so highly correlated with the winning bid, we can use it as a baseline for the model predictions
Step5: We now need to build and train a model that can classify a bid as over or under the TxDOT estimate
Step6: 1) Splitting the Data into Training and Testing Sets
Step7: ### This will split the training and testing sets into a more useful format for loading into models
Step8: Classifying using Decision Tree
Step9: Model given Winning Bid is within 10% of TxDOT Estimate
Step10: Predicting
Step11: Model given Winning Bid is More Than 10% of TxDOT Estimate
Step12: Predicting
Step13: Model given Winning Bid is Less Than 10% of TxDOT Estimate
Step14: Predicting
Step15: Our model's prediction of the winning bid will be a weighted average of the three bid predictions, with the weights being the class probabilities obtained from logistic regression
Step16: df_test['Hyp_More'] = 1
Step17: $E[\mathrm{Bid}] = P(\mathrm{within10})\,E[\mathrm{Bid} \mid \mathrm{within10}] + P(\mathrm{above10})\,E[\mathrm{Bid} \mid \mathrm{above10}] + P(\mathrm{below10})\,E[\mathrm{Bid} \mid \mathrm{below10}]$
Step18: Bid Model Comparison by Graph
Step19: Model Predictions
|
<ASSISTANT_TASK:>
Python Code:
# Scratch cell from an earlier (iris) exercise; it expects X_train/y_train
# from a previous session and needs these imports to run:
from sklearn import tree, metrics, cross_validation  # cross_validation lives in sklearn.model_selection in newer versions
tree1 = tree.DecisionTreeClassifier(criterion='entropy', random_state=0)
fittedtree = tree1.fit(X_train, y_train)
#print metrics.confusion_matrix(y_train, fittedtree.predict(X_train))
#print metrics.accuracy_score(y_train, fittedtree.predict(X_train))
cross_validation.cross_val_score(tree1, X_train, y_train, cv=10).mean()
#tree.export_graphviz(fittedtree, out_file =' tree.dot', feature_names =['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width'])
import os
import math
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import csv
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn import feature_selection, linear_model
df = pd.read_csv('C:/Users/Collin/Documents/Python_Projects/TxDOT/data/Bid_Data.csv')
len(df)
df = df[df['Rank'] == 1]
df = df[df['Type'] == 'Construction']
df = df.drop(' From', 1)
df = df.drop('To', 1)
df = df.drop('Contract Description', 1)
df = df.drop('Contractor', 1)
df = df.drop('Contract Category', 1)
df = df.drop('LNG MON', 1)
df = df.drop('MONTH', 1)
df['Award Amount'] = df['Award Amount'].str.lstrip('$')
df['Engineers Estimate'] = df['Engineers Estimate'].str.lstrip('$')
df['Award Amount'] = df['Award Amount'].str.replace(',','').astype(float)
df['Engineers Estimate'] = df['Engineers Estimate'].str.replace(',','').astype(float)
#Renaming Variables
df['EngEst'] = df['Engineers Estimate']
df['NBidders'] = df['Number of Bidders']
df['Date'] = pd.to_datetime(df['Letting Date'])
df.set_index('Date' , inplace=True)
df['Year'] = df.index.year
df['Month'] = df.index.month
df['WinBid'] = df['Award Amount']
# Creating New Varialbes
df['Diff'] = df['EngEst'] - df['WinBid']
df['lnWinBid'] = np.log(df['WinBid'])
df['lnEngEst'] = np.log(df['EngEst'])
df['DiffLn'] = df['lnWinBid'] - df['lnEngEst']
df['Within10Percent'] = 1
df['PercentOff'] = df['Diff'] / df['EngEst']
df['MoreOrLessThan10'] = 0
df['LessThan10'] = 0
df['MoreThan10'] = 0
df.loc[(df.PercentOff > .10) , 'Within10Percent'] = 0
df.loc[(df.PercentOff < -.10) , 'Within10Percent'] = 0
df.loc[(df.PercentOff > .10) , 'MoreOrLessThan10'] = 1
df.loc[(df.PercentOff < -.10) , 'MoreOrLessThan10'] = 2
df.loc[(df.PercentOff > .10) , 'MoreThan10'] = 1
df.loc[(df.PercentOff < -.10) , 'LessThan10'] = 1
print len(df)
sns.jointplot(x="EngEst", y="WinBid", data=df, kind="reg"); sns.jointplot(x="lnEngEst", y="lnWinBid", data=df, kind="reg");
#Using ALL the Data
Percent = float(df.Within10Percent.sum()) / len(df)
print (Percent)*100 , '% of All the TxDOT estimates were within 10% of actual bid'
df_april = df[(df.Year == 2016) & (df.Month == 4)]  # df_test is not defined until later, so build the April subset here
Percent_April_2016 = float(df_april.Within10Percent.sum()) / len(df_april)
print (Percent_April_2016)*100 , '% of the April 2016 TxDOT estimates were within 10% of actual bid'
cmap = {'0': 'g', '1': 'r', '2': 'b' }
df['cMoreOrLessThan10'] = df.MoreOrLessThan10.apply(lambda x: cmap[str(x)])
print df.plot('EngEst', 'WinBid', kind='scatter', c=df.cMoreOrLessThan10)
cmap = {'0': 'g', '1': 'r', '2': 'b' }
df['cMoreOrLessThan10'] = df.MoreOrLessThan10.apply(lambda x: cmap[str(x)])
print df.plot('lnEngEst', 'lnWinBid', kind='scatter', c=df.cMoreOrLessThan10)
df_test = df[(df.Year == 2016) & (df.Month == 4)]
print len(df_test) , 'projects in April 2016'
df_train = df[(df.Year != 2016) | (df.Month != 4)]
print len(df_train) ,'projects from Jan 2010 to April 2016'
#df_train[['Year','Month']].tail()
#df_test.columns
#names_x = ['Length','Year','Month','lnEngEst','Time', 'Highway', 'District', 'County' ]
names_x = ['Length','Year','Month','lnEngEst','Time', 'NBidders']
names_y = ['MoreOrLessThan10']
df_Train_x = df_train[ names_x ]
df_Train_y = df_train[ names_y ]
df_Test_x = df_test[ names_x ]
df_Test_y = df_test[ names_y ]
df_Train_x.head()
# test_X feeds the statsmodels predictions below; the cell defining it is not
# shown, so we assume it is the April 2016 hold-out frame.
test_X = df_test
test_X.head()
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf = clf.fit(df_Train_x, df_Train_y)
clf.predict(df_Test_x)
clf.score(df_Test_x, df_Test_y)
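# The weighted-average Bid prediction below needs per-class probabilities
# (p_within, p_more, p_less). The cell that produced them is not shown -- the
# description mentions a logistic regression -- so as a stand-in sketch we
# take them from the fitted tree's predict_proba. Class order follows
# MoreOrLessThan10: 0 = within 10%, 1 = more than 10%, 2 = less than 10%.
probs = clf.predict_proba(df_Test_x)
df_test.loc[:, 'p_within'] = probs[:, 0]
df_test.loc[:, 'p_more'] = probs[:, 1]
df_test.loc[:, 'p_less'] = probs[:, 2]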
# subset the training data to just the projects within 10% of the TxDOT estimate
df_train_within = df_train[df_train.Within10Percent == 1]
model_3 = smf.ols(formula = 'lnWinBid ~ lnEngEst+Year+Month+Year*Month+NBidders+NBidders*Year+Time', data = df_train_within).fit()
model_3.summary()
df_test.loc[:,'Lnprediction_within'] = model_3.predict(test_X)
df_test.Lnprediction_within.head()
# subset the training data to just the MoreThan10 class
df_train_more = df_train[df_train.MoreThan10 == 1]
model_4 = smf.ols(formula = 'lnWinBid ~ lnEngEst+Year+Month+Year*Month+NBidders+NBidders*Year+Time', data = df_train_more).fit()
model_4.summary()
df_test.loc[:,'Lnprediction_more'] = model_4.predict(test_X)
# subset the training data to just the LessThan10 class
df_train_less = df_train[df_train.LessThan10 == 1]
model_5 = smf.ols(formula = 'lnWinBid ~ lnEngEst+Year+Month+Year*Month+NBidders+NBidders*Year+Time', data = df_train_less).fit()
model_5.summary()
df_test.loc[:,'Lnprediction_less'] = model_5.predict(test_X)
#df_test.columns
df_test[['p_more', 'p_less', 'p_within', 'Lnprediction_within', 'Lnprediction_more', 'Lnprediction_less']].head()
df_test.loc[:,'lnpred'] = df_test.p_within*df_test.Lnprediction_within + df_test.p_more*df_test.Lnprediction_more + df_test.p_less*df_test.Lnprediction_less
df_test.loc[:,'BidPrediction'] = np.exp(df_test.loc[:,'lnpred'])
df_test.loc[:,'PredDiff'] = df_test.loc[:,'BidPrediction'] - df_test.loc[:,'WinBid']
df_test.loc[:,'PredPercentOff'] = df_test.loc[:,'PredDiff'] / df_test.loc[:,'BidPrediction']
df_test.loc[:,'PredWithin10Percent'] = 1
df_test.loc[(df_test.PredPercentOff > .10) , 'PredWithin10Percent'] = 0
df_test.loc[(df_test.PredPercentOff < -.10) , 'PredWithin10Percent'] = 0
ModelPercent = float(df_test.PredWithin10Percent.sum()) / len(df_test)
PercentIncrease = (ModelPercent)*100 - (Percent_April_2016)*100
NumberCorrectIncrease = (PercentIncrease/100)*len(df_test)
print (Percent_April_2016)*100 , '% of the TxDOT estimates were within 10% of actual bid'
print (ModelPercent)*100 , '% of the Model predictions were within 10% of actual bid'
print
print 'this is an increase of :', PercentIncrease, '%'
print 'or', NumberCorrectIncrease, 'more estimates within the 10% threshold'
print 'In April 2016 TxDOT under estimated bids by: ' , df_test.Diff.sum()
print
print 'In April 2016 the Model under estimated bids by: ' ,df_test.PredDiff.sum()
print
print 'In April 2016 the model was ' , df_test.Diff.sum() - df_test.PredDiff.sum() , 'closer to the winning bids than TxDOT'
print
print 'The model predicted a sum of' ,df_test.BidPrediction.sum() ,'for all the projects in April 2016'
print
print 'TxDOT predicted a sum of' ,df_test.EngEst.sum() ,'for all the projects in April 2016'
df_test[['Diff','PredDiff']].std()
df_test[['Diff','PredDiff']].describe()
cmap = {'0': 'r', '1': 'g' }
df_test.loc[:,'cWithin10Percent'] = df_test.Within10Percent.apply(lambda x: cmap[str(x)])
print df_test.plot('lnEngEst', 'lnWinBid', kind='scatter', c=df_test.cWithin10Percent)
predcmap = {'0': 'r', '1': 'g' }
df_test.loc[:,'cPredWithin10Percent'] = df_test.PredWithin10Percent.apply(lambda x: predcmap[str(x)])
print df_test.plot('lnpred', 'lnWinBid', kind='scatter', c=df_test.cPredWithin10Percent)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Authenticate your GCP account
Step2: Create a Cloud Storage bucket
Step3: Only if your bucket doesn't already exist
Step4: Finally, validate access to your Cloud Storage bucket by examining its contents
Step5: Building and training a scikit-learn model
Step6: Write your preprocessor
Step7: Notice that an instance of MySimpleScaler saves the means and standard deviations of each feature column on first use. Then it uses these summary statistics to scale data it encounters afterward.
Step8: Deploying a custom prediction routine
Step9: Notice that, in addition to using the preprocessor that you defined during training, this predictor performs a postprocessing step that converts the prediction output from class indexes (0, 1, or 2) into label strings (the name of the flower type).
Step10: Then run the following command to create dist/my_custom_code-0.1.tar.gz
Step11: Upload model artifacts and custom code to Cloud Storage
Step12: Deploy your custom prediction routine
Step13: Then create your model
Step14: Next, create a version. In this step, provide paths to the artifacts and custom code you uploaded to Cloud Storage
Step15: Learn more about the options you must specify when you deploy a custom prediction routine.
Step16: Then send two instances of iris data to your deployed version
Step17: Note
Step18: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
PROJECT_ID = "<your-project-id>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS '<path-to-your-service-account-key.json>'
BUCKET_NAME = "<your-bucket-name>" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
! gsutil mb -l $REGION gs://$BUCKET_NAME
! gsutil ls -al gs://$BUCKET_NAME
# quote the >= spec so the shell does not treat it as an output redirect
! pip install "numpy>=1.16.0" scikit-learn==0.20.2
%%writefile preprocess.py
import numpy as np
class MySimpleScaler(object):
def __init__(self):
self._means = None
self._stds = None
def preprocess(self, data):
if self._means is None: # during training only
self._means = np.mean(data, axis=0)
if self._stds is None: # during training only
self._stds = np.std(data, axis=0)
if not self._stds.all():
raise ValueError('At least one column has standard deviation of 0.')
return (data - self._means) / self._stds
import pickle
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from preprocess import MySimpleScaler
iris = load_iris()
scaler = MySimpleScaler()
X = scaler.preprocess(iris.data)
y = iris.target
model = RandomForestClassifier()
model.fit(X, y)
joblib.dump(model, 'model.joblib')
with open ('preprocessor.pkl', 'wb') as f:
pickle.dump(scaler, f)
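# Quick local sanity check (a sketch using the in-memory model and scaler,
# not the pickled artifacts): the first two iris rows are setosa.
sample = scaler.preprocess(iris.data[:2])
print(model.predict(sample))  # class indexes, e.g. [0 0]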
%%writefile predictor.py
import os
import pickle
import numpy as np
from sklearn.datasets import load_iris
from sklearn.externals import joblib
class MyPredictor(object):
def __init__(self, model, preprocessor):
self._model = model
self._preprocessor = preprocessor
self._class_names = load_iris().target_names
def predict(self, instances, **kwargs):
inputs = np.asarray(instances)
preprocessed_inputs = self._preprocessor.preprocess(inputs)
if kwargs.get('probabilities'):
probabilities = self._model.predict_proba(preprocessed_inputs)
return probabilities.tolist()
else:
outputs = self._model.predict(preprocessed_inputs)
return [self._class_names[class_num] for class_num in outputs]
@classmethod
def from_path(cls, model_dir):
model_path = os.path.join(model_dir, 'model.joblib')
model = joblib.load(model_path)
preprocessor_path = os.path.join(model_dir, 'preprocessor.pkl')
with open(preprocessor_path, 'rb') as f:
preprocessor = pickle.load(f)
return cls(model, preprocessor)
%%writefile setup.py
from setuptools import setup
setup(
name='my_custom_code',
version='0.1',
scripts=['predictor.py', 'preprocess.py'])
! python setup.py sdist --formats=gztar
! gsutil cp ./dist/my_custom_code-0.1.tar.gz gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz
! gsutil cp model.joblib preprocessor.pkl gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/
MODEL_NAME = 'IrisPredictor'
VERSION_NAME = 'v1'
! gcloud ai-platform models create $MODEL_NAME \
--regions $REGION
# --quiet automatically installs the beta component if it isn't already installed
! gcloud --quiet beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.13 \
--python-version 3.5 \
--origin gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/ \
--package-uris gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz \
--prediction-class predictor.MyPredictor
! pip install --upgrade google-api-python-client
import googleapiclient.discovery
instances = [
[6.7, 3.1, 4.7, 1.5],
[4.6, 3.1, 1.5, 0.2],
]
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)
response = service.projects().predict(
name=name,
body={'instances': instances}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'])
response = service.projects().predict(
name=name,
body={'instances': instances, 'probabilities': True}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'])
# Delete version resource
! gcloud ai-platform versions delete $VERSION_NAME --quiet --model $MODEL_NAME
# Delete model resource
! gcloud ai-platform models delete $MODEL_NAME --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r gs://$BUCKET_NAME/custom_prediction_routine_tutorial
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: http
Step3: We now need some code to read pgm files.
Step4: Let's import it to H2O
Step5: Reconstructing the hidden space
Step6: Then we import this data inside H2O. We have to first map the columns to the gaussian data.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import pandas as pd
import scipy.io
import matplotlib.pyplot as plt
from IPython.display import Image, display
import h2o
from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
h2o.init()
!wget -c http://www.cl.cam.ac.uk/Research/DTG/attarchive/pub/data/att_faces.tar.Z
!tar xzvf att_faces.tar.Z;rm att_faces.tar.Z;
import re
def read_pgm(filename, byteorder='>'):
    """Return image data from a raw PGM file as numpy array.

    Format specification: http://netpbm.sourceforge.net/doc/pgm.html
    """
with open(filename, 'rb') as f:
buffer = f.read()
try:
header, width, height, maxval = re.search(
b"(^P5\s(?:\s*#.*[\r\n])*"
b"(\d+)\s(?:\s*#.*[\r\n])*"
b"(\d+)\s(?:\s*#.*[\r\n])*"
b"(\d+)\s(?:\s*#.*[\r\n]\s)*)", buffer).groups()
except AttributeError:
raise ValueError("Not a raw PGM file: '%s'" % filename)
return np.frombuffer(buffer,
dtype='u1' if int(maxval) < 256 else byteorder+'u2',
count=int(width)*int(height),
offset=len(header)
).reshape((int(height), int(width)))
image = read_pgm("orl_faces/s12/6.pgm", byteorder='<')
image.shape
plt.imshow(image, plt.cm.gray)
plt.show()
import glob
import os
from collections import defaultdict
images = glob.glob("orl_faces/**/*.pgm")
data = defaultdict(list)
image_data = []
for img in images:
_,label,_ = img.split(os.path.sep)
imgdata = read_pgm(img, byteorder='<').flatten().tolist()
data[label].append(imgdata)
image_data.append(imgdata)
faces = h2o.H2OFrame(image_data)
faces.shape
from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
model = H2OAutoEncoderEstimator(
activation="Tanh",
hidden=[50],
l1=1e-4,
epochs=10
)
model.train(x=faces.names, training_frame=faces)
model
import pandas as pd
gaussian_noise = np.random.randn(10304)
plt.imshow(gaussian_noise.reshape(112, 92), plt.cm.gray);
gaussian_noise_pre = dict(zip(faces.names,gaussian_noise))
gaussian_noise_hf = h2o.H2OFrame.from_python(gaussian_noise_pre)
result = model.predict(gaussian_noise_hf)
result.shape
img = result.as_data_frame()
img_data = img.T.values.reshape(112, 92)
plt.imshow(img_data, plt.cm.gray);
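# For comparison, reconstruct an actual face through the autoencoder
# (sketch: row 0 of the faces frame, using the same reshape as above):
recon = model.predict(faces[0, :]).as_data_frame()
plt.imshow(recon.T.values.reshape(112, 92), plt.cm.gray);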
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation Notes
Step2: Now, Let's implement the algorithm
Step3: Note how the implementation looks similar to the algorithm, except for the if block, which is used to yield the edges.
|
<ASSISTANT_TASK:>
Python Code:
import openanalysis.tree_growth as TreeGrowth
from openanalysis.base_data_structures import PriorityQueue
def dijkstra(G, source=None): # This signature is must
if source is None: source = G.nodes()[0] # selecting root as source
V = G.nodes()
dist, prev = {}, {}
Q = PriorityQueue()
for v in V:
dist[v] = float("inf")
prev[v] = None
Q.add_task(task=v, priority=dist[v])
dist[source] = 0
Q.update_task(task=source, new_priority=dist[source])
visited = set()
for i in range(0, len(G.nodes())):
u_star = Q.remove_min()
if prev[u_star] is not None:
yield (u_star, prev[u_star]) # yield the edge as soon as we visit the nodes
visited.add(u_star)
for u in G.neighbors(u_star):
if u not in visited and dist[u_star] + G.edge[u][u_star]['weight'] < dist[u]:
dist[u] = dist[u_star] + G.edge[u][u_star]['weight']
prev[u] = u_star
Q.update_task(u, dist[u])
TreeGrowth.apply_to_graph(dijkstra)
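# The generator can also be consumed directly, without the visualizer.
# A minimal sketch on a small weighted graph (assumes the same networkx 1.x
# API used above, i.e. G.edge[u][v]['weight']):
import networkx as nx
G = nx.Graph()
G.add_edge(0, 1, weight=4)
G.add_edge(0, 2, weight=1)
G.add_edge(2, 1, weight=2)
for edge in dijkstra(G, source=0):
    print(edge)  # edges of the shortest-path tree, e.g. (2, 0) then (1, 2)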
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Import and pre-processing
Step2: Data Pre-processing
Step3: Analysis I
Step4: Some PSSM features deviate from the expected normal distribution -- which should hold in a neutral-information setting due to the Central Limit Theorem (CLT).
Step5: With normalization
Step6: Normalization over each feature space reduces the complexity of the problem, which in turn improves the result.
Step7: Analysis III
Step8: Decision Tree
Step9: Random Forest
Step10: Extremely Randomized Tree
Step11: Random Forest on selected features
Step12: While there is a significant accuracy improvement going from a Decision Tree to a Random Forest, the prediction from Extremely Randomized Trees only improves the accuracy marginally. Likewise, manually handpicking the features does not seem to improve the accuracy.
|
<ASSISTANT_TASK:>
Python Code:
## matrix and vector tools
import pandas as pd
from pandas import DataFrame as df
from pandas import Series
import numpy as np
## sklearn
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import VarianceThreshold
# matplotlib et al.
from matplotlib import pyplot as plt
%matplotlib inline
dna = df.from_csv('../../data/training_data_binding_site_prediction/dna_big.csv')
## embed class
dna = dna.reset_index(drop=False)
dna['class_bool'] = dna['class'] == '+'
dna['class_num'] = dna.class_bool.apply(lambda x: 1 if x else 0)
## added protein ID and corresponding position
dna['ID'] = dna.ID_pos.apply(lambda x: ''.join(x.split('_')[:-1]))
dna['pos'] = dna.ID_pos.apply(lambda x: x.split('_')[-1])
## data columns
dna.columns
## print available features
for feature in dna.columns[:-6]:
print feature
dna
## create column-wise normalized data-set
dna_norm = dna.copy()
for col in dna_norm[dna_norm.columns[1:][:-6]].columns:
dna_norm[col] = (dna_norm[col] - dna_norm[col].mean()) / (dna_norm[col].std() + .00001)
dna_norm
# extract dataset and prediction
X = dna[dna.columns[1:][:-6]]
X = X[[x for x in X.columns.tolist() if 'pssm' in x]]
X = X.iloc[range(1000)]
y = dna['class_bool']
y = y[range(1000)]
# apply RFE on linear c-SVM
estimator = SVC(kernel="linear")
selector = RFE(estimator, 5, step=1)
selector = selector.fit(X, y)
print selector.ranking_
# re-run the previous routine on the whole data
pssm_rank = pd.DataFrame()
cat = dna['class']
for i in range(dna.index.size / 1000):
this_cat = cat[range(i * 1000, (i + 1) * 1000)]
if this_cat.unique().size > 1:
X = dna[dna.columns[1:][:-6]]
X = X[[c for c in X.columns.tolist() if 'pssm' in c]]
X = X.iloc[range(i * 1000, (i + 1) * 1000)]
y = dna['class_bool']
y = y[range(i * 1000, (i + 1) * 1000)]
estimator = SVC(kernel="linear")
selector = RFE(estimator, 5, step=1)
selector = selector.fit(X, y)
print selector.ranking_
pssm_rank[str(i)] = selector.ranking_
pssm_rank.index = [c for c in X.columns.tolist() if 'pssm' in c]
##sort PSSM features by its predictive power
rank_av = [np.mean(pssm_rank.ix[i]) for i in pssm_rank.index]
arg_rank_av = np.argsort(rank_av)
pssm_rank_sorted = pssm_rank.ix[pssm_rank.index[arg_rank_av]]
pssm_rank_sorted['RANK_AV'] = np.sort(rank_av)
pssm_rank_sorted
# plot average rank of all HSSP values
plt.hist([np.mean(pssm_rank.ix[i]) for i in pssm_rank.index], bins=60, alpha=.5)
plt.title("Histogram of Average HSSP Features Rank (RFE on linear SVM)")
fig = plt.gcf()
fig.set_size_inches(10, 6)
X = dna[dna.columns[1:][:-6]]
y = dna.class_num
## train c-SVM
clf_svm1 = SVC(kernel='rbf', C=0.7)
clf_svm1.fit(X[dna.fold == 0], y[dna.fold == 0])
## predict class
pred = clf_svm1.predict(dna[dna.fold == 1][dna.columns[1:][:-6]])
truth = dna[dna.fold == 1]['class_num']
tp = pred[(np.array(pred) == 1) & (np.array(truth) == 1)].size
tn = pred[(np.array(pred) == 0) & (np.array(truth) == 0)].size
fp = pred[(np.array(pred) == 1) & (np.array(truth) == 0)].size
fn = pred[(np.array(pred) == 0) & (np.array(truth) == 1)].size
cm = "Confusion Matrix:\n\tX\t\t(+)-pred\t(-)-pred\n" +\
"\t(+)-truth\t%d\t\t%d\n" +\
"\t(-)-truth\t%d\t\t%d"
print cm % (tp, fn, fp, tn)
print "Size of (-)- and (+)-sets:\n\t(+)\t %d\n\t(-)\t%d" % (truth[truth == 1].index.size, truth[truth == 0].index.size)
X_norm = dna_norm[dna_norm.columns[1:][:-6]]
y = dna_norm.class_num
## train c-SVM
clf_svm2 = SVC(kernel='rbf', C=0.7)
clf_svm2.fit(X_norm[dna_norm.fold == 0], y[dna_norm.fold == 0])
## predict class
pred2 = clf_svm2.predict(dna_norm[dna_norm.fold == 1][dna_norm.columns[1:][:-6]])
truth = dna_norm[dna_norm.fold == 1]['class_num']
tp = pred2[(np.array(pred2) == 1) & (np.array(truth) == 1)].size
tn = pred2[(np.array(pred2) == 0) & (np.array(truth) == 0)].size
fp = pred2[(np.array(pred2) == 1) & (np.array(truth) == 0)].size
fn = pred2[(np.array(pred2) == 0) & (np.array(truth) == 1)].size
cm = "Confusion Matrix:\n\tX\t\t(+)-pred\t(-)-pred\n" +\
"\t(+)-truth\t%d\t\t%d\n" +\
"\t(-)-truth\t%d\t\t%d"
print cm % (tp, fn, fp, tn)
## hand-pick features
features = [x for x in dna.columns[1:][:-6] if 'pssm' in x] +\
[x for x in dna.columns[1:][:-6] if 'glbl_aa_comp' in x] +\
[x for x in dna.columns[1:][:-6] if 'glbl_sec' in x] +\
[x for x in dna.columns[1:][:-6] if 'glbl_acc' in x] +\
[x for x in dna.columns[1:][:-6] if 'chemprop_mass' in x] +\
[x for x in dna.columns[1:][:-6] if 'chemprop_hyd' in x] +\
[x for x in dna.columns[1:][:-6] if 'chemprop_cbeta' in x] +\
[x for x in dna.columns[1:][:-6] if 'chemprop_charge' in x] +\
[x for x in dna.columns[1:][:-6] if 'inf_PP' in x] +\
[x for x in dna.columns[1:][:-6] if 'isis_bin' in x] +\
[x for x in dna.columns[1:][:-6] if 'isis_raw' in x] +\
[x for x in dna.columns[1:][:-6] if 'profbval_raw' in x] +\
[x for x in dna.columns[1:][:-6] if 'profphd_sec_raw' in x] +\
[x for x in dna.columns[1:][:-6] if 'profphd_sec_bin' in x] +\
[x for x in dna.columns[1:][:-6] if 'profphd_acc_bin' in x] +\
[x for x in dna.columns[1:][:-6] if 'profphd_normalize' in x] +\
[x for x in dna.columns[1:][:-6] if 'pfam_within_domain' in x] +\
[x for x in dna.columns[1:][:-6] if 'pfam_dom_cons' in x]
X_norm = dna_norm[features]
y = dna_norm.class_num
## train c-SVM
clf_svm3 = SVC(kernel='rbf', C=0.7)
clf_svm3.fit(X_norm[dna_norm.fold == 0], y[dna_norm.fold == 0])
## predict class
pred3 = clf_svm3.predict(X_norm[dna_norm.fold == 1])
truth = dna_norm[dna_norm.fold == 1]['class_num']
tp = pred3[(np.array(pred3) == 1) & (np.array(truth) == 1)].size
tn = pred3[(np.array(pred3) == 0) & (np.array(truth) == 0)].size
fp = pred3[(np.array(pred3) == 1) & (np.array(truth) == 0)].size
fn = pred3[(np.array(pred3) == 0) & (np.array(truth) == 1)].size
cm = "Confusion Matrix:\n\tX\t\t(+)-pred\t(-)-pred\n" +\
"\t(+)-truth\t%d\t\t%d\n" +\
"\t(-)-truth\t%d\t\t%d"
print cm % (tp, fn, fp, tn)
X = dna[dna.columns[1:][:-6]]
y = dna.class_num
# compute cross validated accuracy of the model
clf_t1 = DecisionTreeClassifier(max_depth=None, min_samples_split=2,
random_state=0)
scores = cross_val_score(clf_t1, X, y, cv=5)
print scores
print scores.mean()
clf_t1.fit(X[dna.fold == 0], y[dna.fold == 0])
pred_t1 = clf_t1.predict(X[dna.fold == 1])
truth = dna[dna.fold == 1]['class_num']
tp = pred_t1[(np.array(pred_t1) == 1) & (np.array(truth) == 1)].size
tn = pred_t1[(np.array(pred_t1) == 0) & (np.array(truth) == 0)].size
fp = pred_t1[(np.array(pred_t1) == 1) & (np.array(truth) == 0)].size
fn = pred_t1[(np.array(pred_t1) == 0) & (np.array(truth) == 1)].size
cm = "Confusion Matrix:\n\tX\t\t(+)-pred\t(-)-pred\n" +\
"\t(+)-truth\t%d\t\t%d\n" +\
"\t(-)-truth\t%d\t\t%d"
print cm % (tp, fn, fp, tn)
# compute cross validated accuracy of the model
clf_t2 = RandomForestClassifier(n_estimators=10, max_depth=None,
min_samples_split=2, random_state=0)
scores = cross_val_score(clf_t2, X, y, cv=5)
print scores
print scores.mean()
clf_t2.fit(X[dna.fold == 0], y[dna.fold == 0])
pred_t2 = clf_t2.predict(X[dna.fold == 1])
truth = dna[dna.fold == 1]['class_num']
tp = pred_t2[(np.array(pred_t2) == 1) & (np.array(truth) == 1)].size
tn = pred_t2[(np.array(pred_t2) == 0) & (np.array(truth) == 0)].size
fp = pred_t2[(np.array(pred_t2) == 1) & (np.array(truth) == 0)].size
fn = pred_t2[(np.array(pred_t2) == 0) & (np.array(truth) == 1)].size
cm = "Confusion Matrix:\n\tX\t\t(+)-pred\t(-)-pred\n" +\
"\t(+)-truth\t%d\t\t%d\n" +\
"\t(-)-truth\t%d\t\t%d"
print cm % (tp, fn, fp, tn)
# compute cross validated accuracy of the model
clf_t3 = ExtraTreesClassifier(n_estimators=10, max_depth=None,
min_samples_split=2, random_state=0)
scores = cross_val_score(clf_t3, X, y, cv=5)
print scores
print scores.mean()
clf_t3.fit(X[dna.fold == 0], y[dna.fold == 0])
pred_t3 = clf_t3.predict(X[dna.fold == 1])
truth = dna[dna.fold == 1]['class_num']
tp = pred_t3[(np.array(pred_t3) == 1) & (np.array(truth) == 1)].size
tn = pred_t3[(np.array(pred_t3) == 0) & (np.array(truth) == 0)].size
fp = pred_t3[(np.array(pred_t3) == 1) & (np.array(truth) == 0)].size
fn = pred_t3[(np.array(pred_t3) == 0) & (np.array(truth) == 1)].size
cm = "Confusion Matrix:\n\tX\t\t(+)-pred\t(-)-pred\n" +\
"\t(+)-truth\t%d\t\t%d\n" +\
"\t(-)-truth\t%d\t\t%d"
print cm % (tp, fn, fp, tn)
features = [x for x in dna.columns[1:][:-6] if 'pssm' in x] +\
[x for x in dna.columns[1:][:-6] if 'glbl_aa_comp' in x] +\
[x for x in dna.columns[1:][:-6] if 'glbl_sec' in x] +\
[x for x in dna.columns[1:][:-6] if 'glbl_acc' in x] +\
[x for x in dna.columns[1:][:-6] if 'chemprop_mass' in x] +\
[x for x in dna.columns[1:][:-6] if 'chemprop_hyd' in x] +\
[x for x in dna.columns[1:][:-6] if 'chemprop_cbeta' in x] +\
[x for x in dna.columns[1:][:-6] if 'chemprop_charge' in x] +\
[x for x in dna.columns[1:][:-6] if 'inf_PP' in x] +\
[x for x in dna.columns[1:][:-6] if 'isis_bin' in x] +\
[x for x in dna.columns[1:][:-6] if 'isis_raw' in x] +\
[x for x in dna.columns[1:][:-6] if 'profbval_raw' in x] +\
[x for x in dna.columns[1:][:-6] if 'profphd_sec_raw' in x] +\
[x for x in dna.columns[1:][:-6] if 'profphd_sec_bin' in x] +\
[x for x in dna.columns[1:][:-6] if 'profphd_acc_bin' in x] +\
[x for x in dna.columns[1:][:-6] if 'profphd_normalize' in x] +\
[x for x in dna.columns[1:][:-6] if 'pfam_within_domain' in x] +\
[x for x in dna.columns[1:][:-6] if 'pfam_dom_cons' in x]
X = dna[features]
y = dna.class_num
# compute cross validated accuracy of the model
clf_t4 = RandomForestClassifier(n_estimators=10, max_depth=None,
min_samples_split=2, random_state=0)
scores = cross_val_score(clf_t4, X, y, cv=5)
print scores
print scores.mean()
clf_t4.fit(X[dna.fold == 0], y[dna.fold == 0])
pred_t4 = clf_t4.predict(X[dna.fold == 1])
truth = dna[dna.fold == 1]['class_num']
tp = pred_t4[(np.array(pred_t4) == 1) & (np.array(truth) == 1)].size
tn = pred_t4[(np.array(pred_t4) == 0) & (np.array(truth) == 0)].size
fp = pred_t4[(np.array(pred_t4) == 1) & (np.array(truth) == 0)].size
fn = pred_t4[(np.array(pred_t4) == 0) & (np.array(truth) == 1)].size
cm = "Confusion Matrix:\n\tX\t\t(+)-pred\t(-)-pred\n" +\
"\t(+)-truth\t%d\t\t%d\n" +\
"\t(-)-truth\t%d\t\t%d"
print cm % (tp, fn, fp, tn)
X = dna[dna.columns[1:][:-6]]
y = dna.class_num
# compute cross validated accuracy of the model
ada = AdaBoostClassifier(n_estimators=100)
scores = cross_val_score(ada, X, y, cv=5)
print scores
print scores.mean()
ada.fit(X[dna.fold == 0], y[dna.fold == 0])
pred_ada = ada.predict(X[dna.fold == 1])
truth = dna[dna.fold == 1]['class_num']
tp = pred_ada[(np.array(pred_ada) == 1) & (np.array(truth) == 1)].size
tn = pred_ada[(np.array(pred_ada) == 0) & (np.array(truth) == 0)].size
fp = pred_ada[(np.array(pred_ada) == 1) & (np.array(truth) == 0)].size
fn = pred_ada[(np.array(pred_ada) == 0) & (np.array(truth) == 1)].size
cm = "Confusion Matrix:\n\tX\t\t(+)-pred\t(-)-pred\n" +\
"\t(+)-truth\t%d\t\t%d\n" +\
"\t(-)-truth\t%d\t\t%d"
print cm % (tp, fn, fp, tn)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot the source estimate
Step2: Plot the activation in the direction of maximal power for this data
Step3: The normal is very similar
Step4: You can also do this with a fixed-orientation inverse. It looks a lot like the result above because the loose orientation constraint already keeps the sources close to the surface normals.
|
<ASSISTANT_TASK:>
Python Code:
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
smoothing_steps = 7
# Read evoked data
meg_path = data_path / 'MEG' / 'sample'
fname_evoked = meg_path / 'sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
# Read inverse solution
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
# Apply inverse solution, set pick_ori='vector' to obtain a
# :class:`mne.VectorSourceEstimate` object
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
# Use peak getter to move visualization to the time point of the peak magnitude
_, peak_time = stc.magnitude().get_peak(hemi='lh')
brain = stc.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
smoothing_steps=smoothing_steps)
# You can save a brain movie with:
# brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16, framerate=10,
# interpolation='linear', time_viewer=True)
stc_max, directions = stc.project('pca', src=inv['src'])
# These directions must by design be close to the normals because this
# inverse was computed with loose=0.2
print('Absolute cosine similarity between source normals and directions: '
f'{np.abs(np.sum(directions * inv["source_nn"][2::3], axis=-1)).mean()}')
brain_max = stc_max.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
time_label='Max power', smoothing_steps=smoothing_steps)
brain_normal = stc.project('normal', inv['src'])[0].plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
time_label='Normal', smoothing_steps=smoothing_steps)
fname_inv_fixed = (
meg_path / 'sample_audvis-meg-oct-6-meg-fixed-inv.fif')
inv_fixed = read_inverse_operator(fname_inv_fixed)
stc_fixed = apply_inverse(
evoked, inv_fixed, lambda2, 'dSPM', pick_ori='vector')
brain_fixed = stc_fixed.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
smoothing_steps=smoothing_steps)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Slack Tide
Step2: Flood Tide
Step3: Tidal Currents Exploration
Step4: In the following interactive plot you can calculate the velocity of the current between the ocean and the estuary, and know the stage of the tidal current. The following parameters determine the value of the velocity and its stage.
Step5: Tidal Currents in Admiralty Inlet
Step6: This takes a long time...be patient!
Step7: Now let's take a quiz on Tidal Currents
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image("Figures/EbbTideCurrent.jpg")
Image("Figures/SlackTide.jpg")
Image("Figures/FloodTideCurrent.jpg")
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed
from tydal.module3_utils import plot_currents
%matplotlib inline
interact(plot_currents,T=fixed(12.42),a1=[0,4],a2=[0,4],alpha=(0,90),N=(0,399))
import tydal.module3_utils as m3
import tydal.module2_utils as tu
URL1='http://107.170.217.21:8080/thredds/dodsC/Salish_L1_STA/Salish_L1_STA.ncml'
[ferry, ferry_download, message]=m3.ferry_data_download(URL1)
ferryQC= m3.ferry_data_QC(ferry,6.5,4,4)
ferryQC = m3.count_route_num(ferryQC[0])
#import tides
pt_tide = tu.load_Port_Townsend('Data/')
pt_tide = pt_tide['Water Level']
start_date = '2016-10-01'
end_date = '2016-11-01'
#plt.style.use('ggplot')
%matplotlib inline
interact(m3.plt_ferry_and_tide, ferryQc=fixed(ferryQC),
pt_tide=fixed(pt_tide), crossing_index = (0,280),
start_date = fixed(start_date), end_date = fixed(end_date))
import tydal.quiz3
tydal.quiz3
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.2 Loading Bokeh and adding QLF plotting code to the Python path
Step2: 2. Wedge Plot
Step4: 2.2 Creating tooltip format
Step5: 2.3 Showing a Wedge Plot
Step6: 3. Simple Histogram
Step7: 4. Histogram with labels
Step9: 4.2 Creating new tooltip format
Step10: 4.3 Plotting a simple histogram
Step11: 4.4 Creating normal and warning divisors/labels
Step12: 5. Patch plot
|
<ASSISTANT_TASK:>
Python Code:
import urllib.request
import json
job = urllib.request.urlopen("http://ql.linea.gov.br/dashboard/api/job/?process=1").read()
api = json.loads(job)
mergedqa = api['results'][0]['output']
print('mergedQA loaded!')
import sys
sys.path.append('/app/qlf/backend/framework/qlf')
from bokeh.io import show, output_notebook
output_notebook()
from bokeh.models import ColumnDataSource, Range1d
import numpy as np
from dashboard.bokeh.helper import get_palette, sort_obj
from bokeh.models import LinearColorMapper
check_ccds = mergedqa['TASKS']['CHECK_CCDs']
gen_info = mergedqa['GENERAL_INFO']
flavor= mergedqa['FLAVOR']
ra = gen_info['RA']
dec = gen_info['DEC']
xw_fib = check_ccds['METRICS']['XWSIGMA_FIB']
xsigma = xw_fib[0]
xfiber = np.arange(len(xsigma))
obj_type = sort_obj(gen_info)
source = ColumnDataSource(data={
'x1': ra,
'y1': dec,
'xsigma': xsigma,
'xfiber': xfiber,
'OBJ_TYPE': obj_type,
'left': np.arange(0, 500)-0.4,
'right': np.arange(0, 500)+0.4,
'bottom': [0]*500
})
# centralize wedges in plots:
ra_center=0.5*(max(ra)+min(ra))
dec_center=0.5*(max(dec)+min(dec))
xrange_wedge = Range1d(start=ra_center + .95, end=ra_center-.95)
yrange_wedge = Range1d(start=dec_center+.82, end=dec_center-.82)
my_palette = get_palette("viridis")
xmapper = LinearColorMapper(palette=my_palette,
low=0.98*np.min(xsigma),
high=1.02*np.max(xsigma))
print('Data Ready!')
xsigma_tooltip = """
<div>
<div>
<span style="font-size: 1vw; font-weight: bold; color: #303030;">XSigma: </span>
<span style="font-size: 1vw; color: #515151">@xsigma</span>
</div>
<div>
<span style="font-size: 1vw; font-weight: bold; color: #303030;">Obj Type: </span>
<span style="font-size: 1vw; color: #515151;">@OBJ_TYPE</span>
</div>
<div>
<span style="font-size: 1vw; font-weight: bold; color: #303030;">RA: </span>
<span style="font-size: 1vw; color: #515151;">@x1</span>
</div>
<div>
<span style="font-size: 1vw; font-weight: bold; color: #303030;">DEC: </span>
<span style="font-size: 1vw; color: #515151;">@y1</span>
</div>
<div>
<span style="font-size: 1vw; font-weight: bold; color: #303030;">FIBER ID: </span>
<span style="font-size: 1vw; color: #515151;">@xfiber</span>
</div>
</div>
"""
print('Tooltip Created!')
from dashboard.bokeh.plots.plot2d.main import Plot2d
wedge_plot_x = Plot2d(
x_range=xrange_wedge,
y_range=yrange_wedge,
x_label="RA",
y_label="DEC",
tooltip=xsigma_tooltip,
title="XSIGMA",
width=500,
height=380,
yscale="auto"
).wedge(
source,
x='x1',
y='y1',
field='xsigma',
mapper=xmapper,
colorbar_title='xsigma'
).plot
show(wedge_plot_x)
d_yplt = (max(xsigma) - min(xsigma))*0.1
yrange = [0, max(xsigma) + d_yplt]
xhist = Plot2d(
yrange,
x_label="Fiber number",
y_label="X std dev (number of pixels)",
tooltip=xsigma_tooltip,
title="Histogram",
width=600,
height=300,
yscale="auto",
hover_mode="vline",
).quad(
source,
top='xsigma',
)
show(xhist)
wrg = check_ccds['PARAMS']['XWSIGMA_WARN_RANGE']
delta_rg = wrg[1] - wrg[0]
hist_rg = (wrg[0] - 0.1*delta_rg, wrg[1]+0.1*delta_rg)
if mergedqa['FLAVOR'].upper() == 'SCIENCE':
program = mergedqa['GENERAL_INFO']['PROGRAM'].upper()
program_prefix = '_'+program
else:
program_prefix = ''
xw_ref = check_ccds['PARAMS']['XWSIGMA'+program_prefix+'_REF']
hist, edges = np.histogram(xsigma, 'sqrt')
source_hist = ColumnDataSource(data={
'hist': hist,
'bottom': [0] * len(hist),
'left': edges[:-1],
'right': edges[1:]
})
print('Done!')
hist_tooltip_x = """
<div>
<div>
<span style="font-size: 1vw; font-weight: bold; color: #303030;">Frequency: </span>
<span style="font-size: 1vw; color: #515151">@hist</span>
</div>
<div>
<span style="font-size: 1vw; font-weight: bold; color: #303030;">XSIGMA: </span>
<span style="font-size: 1vw; color: #515151;">[@left, @right]</span>
</div>
</div>
"""
print('Tooltip Created!')
p_hist_x = Plot2d(
x_label="XSIGMA",
y_label="Frequency",
tooltip=hist_tooltip_x,
title="Histogram",
width=600,
height=300,
yscale="auto",
y_range=(0.0*max(hist), 1.1*max(hist)),
x_range=(hist_rg[0]+xw_ref[0],
hist_rg[1]+xw_ref[0]),
hover_mode="vline",
).quad(
source_hist,
top='hist',
bottom='bottom',
line_width=0.4,
)
show(p_hist_x)
nrg = check_ccds['PARAMS']['XWSIGMA_NORMAL_RANGE']
from bokeh.models import Span, Label
for ialert in nrg:
normal_divisors = Span(location=ialert+xw_ref[0],
dimension='height',
line_color='green',
line_dash='dashed', line_width=2)
p_hist_x.add_layout(normal_divisors)
normal_labels = Label(x=ialert+xw_ref[0],
y= yrange[-1]/2.2,
y_units='data',
text='Normal Range',
text_color='green', angle=np.pi/2.)
p_hist_x.add_layout(normal_labels)
for ialert in wrg:
warning_divisors = Span(location=ialert+xw_ref[0], dimension='height', line_color='tomato',
line_dash='dotdash', line_width=2)
p_hist_x.add_layout(warning_divisors)
warning_labels = Label(x=ialert+xw_ref[0], y=yrange[-1]/2.2, y_units='data',
text='Warning Range', text_color='tomato', angle=np.pi/2.)
p_hist_x.add_layout(warning_labels)
p_hist_x.title.text = "Histogram with labels"
show(p_hist_x)
from dashboard.bokeh.plots.patch.main import Patch
xw_amp = check_ccds['METRICS']['XWSIGMA_AMP']
xamp = Patch().plot_amp(
dz=xw_amp[0],
refexp=[xw_ref[0]]*4,
name="XSIGMA AMP",
description="X std deviation per Amp (number of pixels)",
wrg=wrg
)
show(xamp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nicely formatted results
Step2: Creating cells
Step3: Once you've run all three cells, try modifying the first one to set class_name to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.
|
<ASSISTANT_TASK:>
Python Code:
# Hit shift + enter or use the run button to run this cell and see the results
print 'hello world'
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
class_name = "Intro to Data Analysis"
message = class_name + " is awesome!"
message
import unicodecsv
## Longer version of code (replaced with shorter, equivalent version below)
# enrollments = []
# f = open(enrollments_filename, 'rb')
# reader = unicodecsv.DictReader(f)
# for row in reader:
# enrollments.append(row)
# f.close()
def read_csv(filename):
with open(filename, 'rb') as f:
reader = unicodecsv.DictReader(f)
lines = list(reader)
return lines
### Write code similar to the above to load the engagement
### and submission data. The data is stored in files with
### the given filenames. Then print the first row of each
### table to make sure that your code works. You can use the
### "Test Run" button to see the output of your code.
enrollments_filename = 'enrollments.csv'
engagement_filename = 'daily_engagement.csv'
submissions_filename = 'project_submissions.csv'
enrollments = read_csv(enrollments_filename)
daily_engagement = read_csv(engagement_filename)
project_submissions = read_csv(submissions_filename)
enrollments[0]
daily_engagement[0]
project_submissions[0]
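# Quick sanity check: row counts for the three tables
print len(enrollments), len(daily_engagement), len(project_submissions)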
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Time Critical Path
Step2: Plot First and Last Frames
Step3: Reshape the Data for Clustering
Step4: Begin Heavy Lifting
Step5: Analysis
Step6: Visualization of the Data
|
<ASSISTANT_TASK:>
Python Code:
from read_video import *
import numpy as np
import matplotlib.pyplot as plt
import cv2
video_to_read = "/Users/cody/test.mov"
max_buf_size_mb = 500;
%time frame_buffer = ReadVideo(video_to_read, max_buf_size_mb)
frame_buffer.nbytes
print("Matrix shape: {}".format(frame_buffer.shape))
%matplotlib inline
#If you try to imshow doubles, it will look messed up.
plt.imshow(frame_buffer[0, :, :, :]); # Plot first frame
plt.show()
plt.imshow(frame_buffer[-1, :, :, :]); # Plot last frame
plt.show()
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn import cluster
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
buf_s = frame_buffer.shape
K = buf_s[0] # Number of frames
M = buf_s[1]
N = buf_s[2]
chan = buf_s[3] # Color channel
%time scikit_buffer = frame_buffer.reshape([K, M*N*chan])
scikit_buffer.shape
k_means = cluster.KMeans(n_clusters=7, n_init=1, copy_x=False)
%time k_means.fit(scikit_buffer)
labels = k_means.labels_
values = k_means.cluster_centers_.squeeze()
labels
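# A quick look at how many frames landed in each cluster (sketch):
import collections
print(collections.Counter(labels))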
prev = labels[0]
plt_count = 0
for i in range(1, labels.size):
if (plt_count == 5):
break;
if (prev != labels[i]):
plt.subplot(1,2,1);
plt.title(i)
plt.imshow(frame_buffer[i, :, :, :])
plt.subplot(1,2,2);
plt.title(i-1)
plt.imshow(frame_buffer[i-1, :, :, :])
plt.show()
plt_count = plt_count + 1
prev = labels[i]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: People per structure used for sanity check on parcel counts by department
Step3: Get owned census-tracts for department and count parcels
Step5: Structure count by hazard category by owned census tract geometries
Step8: Export of parcels by owned census tracts
|
<ASSISTANT_TASK:>
Python Code:
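# Imports and database handles used throughout this section. The cells that
# created the connections are not shown, so `conn` (Firecares DB) and `nfirs`
# (NFIRS/parcels DB) below are assumptions -- e.g. SQLAlchemy engines -- and
# `display_geom` is assumed to be a notebook helper for rendering geometries.
import pandas as pd
from tabulate import tabulate
from shapely import wkb
from IPython.display import display
# from sqlalchemy import create_engine
# conn = create_engine('postgresql://...')   # assumed
# nfirs = create_engine('postgresql://...')  # assumed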
q = """select fd.id, fd.name, fd.state,
COALESCE(fd.population, 0) as population,
sum(rm.structure_count) as structure_count,
fd.population / sum(rm.structure_count)::float as people_per_structure
from firestation_firedepartment fd
inner join firestation_firedepartmentriskmodels rm
on rm.department_id = fd.id
where rm.level != 0
group by fd.id, COALESCE(fd.population, 0)
order by COALESCE(population / sum(rm.structure_count)::float, 0) desc"""
df = pd.read_sql_query(q, conn)
df.to_csv('/tmp/people_per_structure_by_department.csv')
filtered = df[df['population'] > 100000][:30]
res = tabulate(filtered, headers='keys', tablefmt='pipe')
open('/tmp/outf.md', 'w').write(res)
! cat /tmp/outf.md | pbcopy
display(filtered)
q = """SELECT ST_Multi(ST_Union(bg.geom))
FROM nist.tract_years ty
INNER JOIN census_block_groups_2010 bg
ON ty.tr10_fid = ('14000US'::text || "substring"((bg.geoid10)::text, 0, 12))
WHERE ty.fc_dept_id = %(id)s
GROUP BY ty.fc_dept_id"""
geom = pd.read_sql_query(q, nfirs, params={'id': 96649})['st_multi'][0]
display_geom(wkb.loads(geom, hex=True))
q = """select count(1), risk_category
from parcel_risk_category_local p
where ST_Intersects(p.wkb_geometry, ST_SetSRID(%(owned_geom)s::geometry, 4326))
group by risk_category"""
df = pd.read_sql_query(q, nfirs, params={'owned_geom': geom})
display(df)
q = """select parcel_id, risk_category
from parcel_risk_category_local l
where ST_Intersects(l.wkb_geometry, ST_SetSRID(%(owned_geom)s::geometry, 4326))"""
owned_parcels = pd.read_sql_query(q, nfirs, params={'owned_geom': geom})
import geopandas
q = """select p.*, rc.risk_category as hazard_level from parcels p
inner join parcel_risk_category_local rc using (parcel_id)
where p.parcel_id in %(ids)s"""
res = map(lambda x: x[0], owned_parcels.values)
gdf = geopandas.read_postgis(q, nfirs, geom_col='wkb_geometry', params={'ids': tuple(res)})
gdf.drop('risk_category', 1)
gdf.crs = {'init': 'epsg:4326'}
gdf.to_file('/tmp/tamarac-parcels.shp', driver='ESRI Shapefile')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Each Util2d instance now has a .format attribute, which is an ArrayFormat instance
Step2: The ArrayFormat class exposes each of the attributes seen in the ArrayFormat.__str__() call. ArrayFormat also exposes .fortran, .py and .numpy attributes, which are the respective format descriptors
Step3: (re)-setting .format
Step4: Let's load the model we just wrote and check that the desired botm[0].format was used
Step5: We can also reset individual format components (we can also generate some warnings)
Step6: We can also select free format. Note that setting to free format resets the format attributes to the default, max precision
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
import os
import platform
import numpy as np
import matplotlib.pyplot as plt
import flopy
#Set name of MODFLOW exe
# assumes executable is in users path statement
version = 'mf2005'
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
mfexe = exe_name
#Set the paths
loadpth = os.path.join('..', 'data', 'freyberg')
modelpth = os.path.join('data')
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
ml = flopy.modflow.Modflow.load('freyberg.nam', model_ws=loadpth,
exe_name=exe_name, version=version)
ml.model_ws = modelpth
ml.write_input()
success, buff = ml.run_model()
if not success:
print ('Something bad happened.')
files = ['freyberg.hds', 'freyberg.cbc']
for f in files:
if os.path.isfile(os.path.join(modelpth, f)):
msg = 'Output file located: {}'.format(f)
print (msg)
else:
errmsg = 'Error. Output file cannot be found: {}'.format(f)
print (errmsg)
print(ml.lpf.hk[0].format)
print(ml.dis.botm[0].format.fortran)
print(ml.dis.botm[0].format.py)
print(ml.dis.botm[0].format.numpy)
ml.dis.botm[0].format.fortran = "(6f10.4)"
print(ml.dis.botm[0].format.fortran)
print(ml.dis.botm[0].format.py)
print(ml.dis.botm[0].format.numpy)
ml.write_input()
success, buff = ml.run_model()
ml1 = flopy.modflow.Modflow.load("freyberg.nam",model_ws=modelpth)
print(ml1.dis.botm[0].format)
ml.dis.botm[0].format.width = 9
ml.dis.botm[0].format.decimal = 1
print(ml.dis.botm[0].format)
ml.dis.botm[0].format.free = True
print(ml.dis.botm[0].format)
ml.write_input()
success, buff = ml.run_model()
ml1 = flopy.modflow.Modflow.load("freyberg.nam",model_ws=modelpth)
print(ml1.dis.botm[0].format)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.keras.losses import CategoricalCrossentropy
y_true = [[0, 1, 0], [1, 0, 0]]
y_pred = [[0.15, 0.75, 0.1], [0.75, 0.15, 0.1]]
cross_entropy_loss = CategoricalCrossentropy()
print(cross_entropy_loss(y_true, y_pred).numpy())
import tensorflow as tf
from tensorflow.keras.losses import SparseCategoricalCrossentropy
y_true = [1, 0]
y_pred = [[0.15, 0.75, 0.1], [0.75, 0.15, 0.1]]
cross_entropy_loss = SparseCategoricalCrossentropy()
loss = cross_entropy_loss(y_true, y_pred).numpy()
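# Manual check (a sketch): categorical cross-entropy is the batch mean of
# -log(p_true). In both examples above the true class gets probability 0.75.
import numpy as np
print(-np.mean([np.log(0.75), np.log(0.75)]))  # ~0.2877, matching both losses
print(loss)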
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Predict Shakespeare with Cloud TPUs and Keras
Step2: Data, model, and training
Step4: Build the tf.data.Dataset
Step6: Build the model
Step7: Train the model
Step8: Make predictions with the model
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import tensorflow as tf
SEQ_LEN = 100
BATCH_SIZE = 128
!gsutil cat gs://cloud-training-demos/tpudemo/shakespeare.txt | head -1000 | tail -10
import numpy as np
import six
import tensorflow as tf
import time
import os
SHAKESPEARE_TXT = 'gs://cloud-training-demos/tpudemo/shakespeare.txt'
tf.logging.set_verbosity(tf.logging.INFO)
def transform(txt, pad_to=None):
# drop any non-ascii characters
output = np.asarray([ord(c) for c in txt if ord(c) < 255], dtype=np.int32)
if pad_to is not None:
output = output[:pad_to]
output = np.concatenate([
np.zeros([pad_to - len(txt)], dtype=np.int32),
output,
])
return output
def training_generator(seq_len=SEQ_LEN, batch_size=BATCH_SIZE):
  """A generator yields (source, target) arrays for training."""
with tf.gfile.GFile(SHAKESPEARE_TXT, 'r') as f:
txt = f.read()
tf.logging.info('Input text [%d] %s', len(txt), txt[:50])
source = transform(txt)
while True:
offsets = np.random.randint(0, len(source) - seq_len, batch_size)
# Our model uses sparse crossentropy loss, but Keras requires labels
# to have the same rank as the input logits. We add an empty final
# dimension to account for this.
yield (
np.stack([source[idx:idx + seq_len] for idx in offsets]),
np.expand_dims(
np.stack([source[idx + 1:idx + seq_len + 1] for idx in offsets]),
-1),
)
a = six.next(training_generator(seq_len=10, batch_size=1))
print(a)
#print(tf.convert_to_tensor(a[1]))
def create_dataset():
return tf.data.Dataset.from_generator(training_generator,
(tf.int32, tf.int32),
(tf.TensorShape([BATCH_SIZE, SEQ_LEN]),
tf.TensorShape([BATCH_SIZE, SEQ_LEN, 1]))
)
EMBEDDING_DIM = 512
def lstm_model(seq_len, batch_size, stateful):
  """Language model: predict the next word given the current word."""
source = tf.keras.Input(
name='seed', shape=(seq_len,), batch_size=batch_size, dtype=tf.int32)
embedding = tf.keras.layers.Embedding(input_dim=256, output_dim=EMBEDDING_DIM)(source)
lstm_1 = tf.keras.layers.LSTM(EMBEDDING_DIM, stateful=stateful, return_sequences=True)(embedding)
lstm_2 = tf.keras.layers.LSTM(EMBEDDING_DIM, stateful=stateful, return_sequences=True)(lstm_1)
predicted_char = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(256, activation='softmax'))(lstm_2)
model = tf.keras.Model(inputs=[source], outputs=[predicted_char])
model.compile(
optimizer=tf.train.RMSPropOptimizer(learning_rate=0.01),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
return model
tf.keras.backend.clear_session()
training_model = lstm_model(seq_len=SEQ_LEN, batch_size=BATCH_SIZE, stateful=False)
# Use TPU if it exists, else fall back to GPU
try: # TPU detection
tpu = tf.contrib.cluster_resolver.TPUClusterResolver()
training_model = tf.contrib.tpu.keras_to_tpu_model(
training_model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(tpu))
training_input = create_dataset # Function that returns a dataset
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
tpu = None
training_input = create_dataset() # The dataset itself
print("Running on GPU or CPU")
# Run fit()
training_model.fit(
training_input,
steps_per_epoch=100,
epochs=10,
)
training_model.save_weights('/tmp/bard.h5', overwrite=True)
BATCH_SIZE = 5
PREDICT_LEN = 250
# Keras requires the batch size be specified ahead of time for stateful models.
# We use a sequence length of 1, as we will be feeding in one character at a
# time and predicting the next character.
prediction_model = lstm_model(seq_len=1, batch_size=BATCH_SIZE, stateful=True)
prediction_model.load_weights('/tmp/bard.h5')
# We seed the model with our initial string, copied BATCH_SIZE times
seed_txt = 'Looks it not like the king? Verily, we must go! '
seed = transform(seed_txt)
seed = np.repeat(np.expand_dims(seed, 0), BATCH_SIZE, axis=0)
# First, run the seed forward to prime the state of the model.
prediction_model.reset_states()
for i in range(len(seed_txt) - 1):
prediction_model.predict(seed[:, i:i + 1])
# Now we can accumulate predictions!
predictions = [seed[:, -1:]]
for i in range(PREDICT_LEN):
last_word = predictions[-1]
next_probits = prediction_model.predict(last_word)[:, 0, :]
# sample from our output distribution
next_idx = [
np.random.choice(256, p=next_probits[i])
for i in range(BATCH_SIZE)
]
predictions.append(np.asarray(next_idx, dtype=np.int32))
for i in range(BATCH_SIZE):
print('PREDICTION %d\n\n' % i)
p = [predictions[j][i] for j in range(PREDICT_LEN)]
generated = ''.join([chr(c) for c in p])
print(generated)
print()
assert len(generated) == PREDICT_LEN, 'Generated text too short'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: convert grey scale image to binary
|
<ASSISTANT_TASK:>
Python Code:
import cv2
img = cv2.imread('test.png',0)
resized_image = cv2.resize(img, (28, 28), interpolation = cv2.INTER_AREA)
# binarize the resized image: any nonzero (grey) pixel becomes 1
resized_image[resized_image > 0] = 1
# test_images / train_images are assumed to be image arrays loaded in an
# earlier (unshown) cell, e.g. an MNIST-style dataset; binarize them the same way
test_images[test_images > 0] = 1
train_images[train_images > 0] = 1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: Calculate Cartesian Product (Method 1)
Step3: Calculate Cartesian Product (Method 2)
|
<ASSISTANT_TASK:>
Python Code:
# import pandas as pd
import pandas as pd
# Create two lists
i = [1,2,3,4,5]
j = [1,2,3,4,5]
# List every single x in i with every single y (i.e. Cartesian product)
[(x, y) for x in i for y in j]
# An alternative way to do the cartesian product
# import itertools
import itertools
# for two sets, find the the cartisan product
for i in itertools.product([1,2,3,4,5], [1,2,3,4,5]):
# and print it
print(i)
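# Sanity check: both approaches yield the same 25 pairs. Note the loop above
# rebound i, so rebuild the lists first.
a, b = [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]
assert [(x, y) for x in a for y in b] == list(itertools.product(a, b))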
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can simulate the generation process of conditional probabilities by appropriately sampling from three random variables.
Step2: Notice that we have generated samples of whether the song has lyrics or not. Above I have also printed the associated genre label. In many probabilistic modeling problems some information is not available to the observer. For example we could be provided only the yes/no outcomes and the genres could be "hidden".
Step3: Now that we have selected the samples that are jazz, we can simply count the lyrics yes and lyrics no entries and divide them by the total number of jazz samples to get estimates of the conditional probabilities. Think about the relationships
Step4: We have seen in the slides that the probability of a song being jazz if we know that it is instrumental is 0.66.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import stats
import numpy as np
class Random_Variable:
def __init__(self, name, values, probability_distribution):
self.name = name
self.values = values
self.probability_distribution = probability_distribution
if all(type(item) is np.int64 for item in values):
self.type = 'numeric'
self.rv = stats.rv_discrete(name = name, values = (values, probability_distribution))
elif all(type(item) is str for item in values):
self.type = 'symbolic'
self.rv = stats.rv_discrete(name = name, values = (np.arange(len(values)), probability_distribution))
self.symbolic_values = values
else:
self.type = 'undefined'
def sample(self,size):
if (self.type =='numeric'):
return self.rv.rvs(size=size)
elif (self.type == 'symbolic'):
numeric_samples = self.rv.rvs(size=size)
mapped_samples = [self.values[x] for x in numeric_samples]
return mapped_samples
# samples to generate
num_samples = 1000
## Prior probabilities of a song being jazz or country
values = ['country', 'jazz']
probs = [0.7, 0.3]
genre = Random_Variable('genre',values, probs)
# conditional probabilities of a song having lyrics or not given the genre
values = ['no', 'yes']
probs = [0.9, 0.1]
lyrics_if_jazz = Random_Variable('lyrics_if_jazz', values, probs)
values = ['no', 'yes']
probs = [0.2, 0.8]
lyrics_if_country = Random_Variable('lyrics_if_country', values, probs)
# conditional generating proces first sample prior and then based on outcome
# choose which conditional probability distribution to use
random_lyrics_samples = []
for n in range(num_samples):
# the 1 below is to get one sample and the 0 to get the first item of the list of samples
random_genre_sample = genre.sample(1)[0]
# depending on the outcome of the genre sampling sample the appropriate
# conditional probability
if (random_genre_sample == 'jazz'):
random_lyrics_sample = (lyrics_if_jazz.sample(1)[0], 'jazz')
else:
random_lyrics_sample = (lyrics_if_country.sample(1)[0], 'country')
random_lyrics_samples.append(random_lyrics_sample)
# output 1 item per line and output the first 20 samples
for s in random_lyrics_samples[0:20]:
print(s)
# First only consider jazz samples
jazz_samples = [x for x in random_lyrics_samples if x[1] == 'jazz']
for s in jazz_samples[0:20]:
print(s)
est_no_if_jazz = len([x for x in jazz_samples if x[0] == 'no']) / len(jazz_samples)
est_yes_if_jazz = len([x for x in jazz_samples if x[0] == 'yes']) / len(jazz_samples)
print(est_no_if_jazz, est_yes_if_jazz)
no_samples = [x for x in random_lyrics_samples if x[0] == 'no']
est_jazz_if_no_lyrics = len([x for x in no_samples if x[1] == 'jazz']) / len(no_samples)
print(est_jazz_if_no_lyrics)
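# A hedged analytic check (not in the original notebook): by Bayes' rule, with the
# priors and conditionals defined above, P(jazz | no lyrics) = P(no|jazz)P(jazz)/P(no).
p_no = 0.9 * 0.3 + 0.2 * 0.7          # total probability of "no lyrics"
print(0.9 * 0.3 / p_no)               # ~0.659, which the sampled estimate should approach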
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Our dataset contains 6234 entries, each with 12 attributes, and it also has missing data.
Step3: Additions over time
Step4: Most prestigious directors
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("netflix_titles.csv")
df
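# A small sketch (assuming the standard netflix_titles.csv schema) to confirm the
# size and the missing data mentioned in the description.
print(df.shape)           # expected: (6234, 12)
print(df.isna().sum())    # missing values per column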
df.type.value_counts().plot(kind="bar")
# to display the two charts together, we first need to create a figure
fig = plt.figure(1, figsize=(20,10))
# we can also give this figure a title
fig.suptitle("Temporal information about the content", fontsize=20)
# to display several charts in the same figure, we need to create "subplots", which are
# laid out like a table.
# to create a subplot, we use the function plt.subplot(rows, columns, position)
plt.subplot(2,1,1)
# we have to sort the time series built from the years (which, in this case, are the index)
year_count = df.release_year.value_counts().sort_index()
years_before_2020 = year_count.index < 2020
ax = year_count[years_before_2020].plot()
ax.set_title("Ano em que foi produzido", fontsize=15)
plt.subplot(2,1,2)
# to parse the dates, we need the datetime library
from datetime import datetime
# first, remove every entry that has no date defined
# then parse the dates
dates = df.date_added\
.dropna()\
.apply(lambda d: datetime.strptime(d.strip(), '%B %d, %Y'))
ax = dates.value_counts().sort_index().plot(ax=plt.gca())
ax.set_title("Data em que foi adicionado", fontsize=15)
df.director.value_counts().head(10).plot(kind="bar")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write a program that reads two integers m and n (with n not equal to 0) from the keyboard and asks the user what to do: if a sum is requested, compute and print the sum from m to n; if a product, compute and print the product from m to n; if a remainder, compute and print the remainder of m divided by n; otherwise compute and print the integer quotient of m divided by n.
Step2: Write a program that gives protective advice based on Beijing's PM2.5 smog reading, e.g. when the PM2.5 value exceeds 500 it should suggest turning on an air purifier, wearing an anti-smog mask, and so on.
Step3: Convert an English word from singular to plural: given a word in singular form, output its plural form or give advice on how to pluralize it (hint: some_string.endswith(some_letter) tests how a string ends; try running 'myname'.endswith('me') and 'liupengyuan'.endswith('n')).
Step4: Write a program that prints a blank line on the screen.
Step5: Write a program that reads several integers from the user and outputs the second-largest value among them.
|
<ASSISTANT_TASK:>
Python Code:
name=input('Please enter your name')
print(name)
date=float(input('Please enter your birthday as month.day, e.g. 3.15'))
if 1.19<date<2.19:
    print('You are an Aquarius')
elif 2.18<date<3.21:
    print('You are a Pisces')
elif 3.20<date<4.20:
    print('You are an Aries')
elif 4.19<date<5.21:
    print('You are a Taurus')
elif 5.20<date<6.22:
    print('You are a Gemini')
elif 6.21<date<7.23:
    print('You are a Cancer')
elif 7.22<date<8.23:
    print('You are a Leo')
elif 8.22<date<9.23:  # fixed: the original lower bound 3.22 was a typo for 8.22 (Virgo)
    print('You are a Virgo')
elif 9.22<date<10.24:
    print('You are a Libra')
elif 10.23<date<11.23:
    print('You are a Scorpio')
elif 11.22<date<12.22:
    print('You are a Sagittarius')
elif date>12.21 or date<1.20:
    print('You are a Capricorn')
m=int(input('Please enter an integer'))
n=int(input('Please enter a non-zero integer'))
i=int(input('Enter 0 for the sum, 1 for the product, 2 for the remainder, or any other number for integer division'))
total=m
product=m
if i==0 and m>n:
    while m>n:
        total=total+n
        n=n+1
    print(total)
elif i==0 and m<=n:
    # strict comparison so the starting value m is not added twice
    while m<n:
        total=total+n
        n=n-1
    print(total)
elif i==1 and m>n:
    while m>n:
        product=product*n
        n=n+1
    print(product)
elif i==1 and m<=n:
    while m<n:
        product=product*n
        n=n-1
    print(product)
elif i==2:
    remainder=m%n  # the modulo operator gives the remainder
    print(remainder)
else:
    result=m//n  # // is integer division
    print(result)
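# Illustrative aside (not part of the original exercise): % gives the remainder
# while // gives the integer quotient, e.g. for m = 7 and n = 3:
print(7 % 3, 7 // 3)  # prints: 1 2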
n=int(input("Please enter today's PM2.5 reading"))
if n>500:
    print('Turn on the air purifier and wear an anti-smog mask')
else:
    print('The air quality is fine')
word=str(input('Please enter a word'))
if word.endswith('s') or word.endswith('es'):
    print(word, 'es', sep='')
elif word.endswith('y'):
    print('change the final y to i and add es')
else:
    print(word, 's', sep='')
print()  # print() with no arguments outputs a blank line
m = int(input('How many integers will you enter? Press Enter to finish.'))
max_number = int(input('Please enter an integer, press Enter to finish'))
# track the maximum and the runner-up directly; the original min_number bookkeeping
# mis-handled inputs such as 5, 4, 1, 2 (it reported 2 instead of 4)
second = None
i = 1
while i < m:
    i += 1
    n = int(input('Please enter an integer, press Enter to finish'))
    if n > max_number:
        second = max_number  # the old maximum becomes the runner-up
        max_number = n
    elif second is None or n > second:
        second = n
print(second)
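# A quick sanity check with a fixed list instead of keyboard input (illustrative only):
nums = [5, 4, 1, 2]
print(sorted(nums)[-2])  # 4, the second-largest value, matching the logic above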
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Form Parsing using Google Cloud Document AI
Step2: Document
Step3: Note
Step4: Enable Document AI
Step5: Create a service account authorization by visiting
Step6: Note
Step7: Option 1
Step8: We know that "Cash on Hand" is on Page 2.
Step9: Cool, we are at the right part of the document! Let's get the next block, which should be the actual amount.
Step10: Option 2
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/imported/formparsing.ipynb
from IPython.display import Markdown as md
### change to reflect your notebook
_nb_repo = 'training-data-analyst'
_nb_loc = "blogs/form_parser/formparsing.ipynb"
_nb_title = "Form Parsing Using Google Cloud Document AI"
### no need to change any of this
_nb_safeloc = _nb_loc.replace('/', '%2F')
_nb_safetitle = _nb_title.replace(' ', '+')
md("""
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name={1}&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2F{3}%2Fblob%2Fmaster%2F{2}&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2F{3}%2Fraw%2Fmaster%2F{2}">
<img src="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png"/> Run in AI Platform Notebook</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/{3}/blob/master/{0}">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/GoogleCloudPlatform/{3}/blob/master/{0}">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://raw.githubusercontent.com/GoogleCloudPlatform/{3}/master/{0}">
<img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
""".format(_nb_loc, _nb_safetitle, _nb_safeloc, _nb_repo))
%%bash
if [ ! -f scott_walker.pdf ]; then
curl -O https://storage.googleapis.com/practical-ml-vision-book/images/scott_walker.pdf
fi
!ls *.pdf
from IPython.display import IFrame
IFrame("./scott_walker.pdf", width=600, height=300)
BUCKET="ai-analytics-solutions-kfpdemo" # CHANGE to a bucket that you own
!gsutil cp scott_walker.pdf gs://{BUCKET}/formparsing/scott_walker.pdf
!gsutil ls gs://{BUCKET}/formparsing/scott_walker.pdf
!gcloud auth list
%%bash
PDF="gs://ai-analytics-solutions-kfpdemo/formparsing/scott_walker.pdf" # CHANGE to your PDF file
REGION="us" # change to EU if the bucket is in the EU
cat <<EOM > request.json
{
"inputConfig":{
"gcsSource":{
"uri":"${PDF}"
},
"mimeType":"application/pdf"
},
"documentType":"general",
"formExtractionParams":{
"enabled":true
}
}
EOM
# Send request to Document AI.
PROJECT=$(gcloud config get-value project)
echo "Sending the following request to Document AI in ${PROJECT} ($REGION region), saving to response.json"
cat request.json
curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://${REGION}-documentai.googleapis.com/v1beta2/projects/${PROJECT}/locations/us/documents:process \
> response.json
!tail response.json
import json
ifp = open('response.json')
response = json.load(ifp)
allText = response['text']
print(allText[:100])
print(allText.index("CASH ON HAND"))
response['pages'][1]['blocks'][5]
response['pages'][1]['blocks'][5]['layout']['textAnchor']['textSegments'][0]
startIndex = int(response['pages'][1]['blocks'][5]['layout']['textAnchor']['textSegments'][0]['startIndex'])
endIndex = int(response['pages'][1]['blocks'][5]['layout']['textAnchor']['textSegments'][0]['endIndex'])
allText[startIndex:endIndex]
def extractText(allText, elem):
startIndex = int(elem['textAnchor']['textSegments'][0]['startIndex'])
endIndex = int(elem['textAnchor']['textSegments'][0]['endIndex'])
return allText[startIndex:endIndex].strip()
amount = float(extractText(allText, response['pages'][1]['blocks'][6]['layout']))
print(amount)
response['pages'][1].keys()
response['pages'][1]['formFields'][2]
fieldName = extractText(allText, response['pages'][1]['formFields'][2]['fieldName'])
fieldValue = extractText(allText, response['pages'][1]['formFields'][2]['fieldValue'])
print('key={}\nvalue={}'.format(fieldName, fieldValue))
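# An illustrative extension (assuming the same response structure holds for every
# field): list all key/value pairs that Document AI detected on page 2.
for field in response['pages'][1]['formFields']:
    name = extractText(allText, field['fieldName'])
    value = extractText(allText, field['fieldValue'])
    print('{}: {}'.format(name, value))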
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interactive Matplotlib plots in Jupyter
Step2: Example
Step3: Plotly
Step4: https
Step5: Make a first widget
Step6: Link the widget to a function
Step7: Widget list
Step8: When you pass this function as the first argument to interact along with an integer keyword argument (x=5), a slider is generated and bound to the function parameter.
Step9: When you move the slider, the function is called, and its return value is printed.
Step10: Integer (IntSlider)
Step11: Example with Matplotlib
Step12: Float (FloatSlider)
Step13: Boolean (Checkbox)
Step14: List (Dropdown)
Step15: Dictionnary (Dropdown)
Step16: Example of using multiple widgets on one function
Step17: Using interact as a decorator with named parameters
Step18: Integer (IntSlider)
Step19: Float (FloatSlider)
Step20: Boolean (Checkbox)
Step21: List (Dropdown)
Step22: Dictionnary (Dropdown)
Step23: Using interact as a decorator without parameter
Step24: The ipywidgets.interactive class
Step25: Layouts
Step26: Flickering and jumping output
Step27: Dataviz
Step28: Contour line
Step29: Time series exploration
Step30: Understand algorithms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib widget
# To ignore warnings (http://stackoverflow.com/questions/9031783/hide-all-warnings-in-ipython)
import warnings
warnings.filterwarnings('ignore')
import IPython
import math
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import matplotlib.cm as cm
x = np.arange(-10, 10, 0.01)
y = np.sin(2. * 2. * np.pi * x) * 1. / np.sqrt(2. * np.pi) * np.exp(-(x**2.)/2.)
plt.plot(x, y);
xx, yy = np.meshgrid(np.arange(-5, 5, 0.25), np.arange(-5, 5, 0.25))
z = np.sin(np.sqrt(xx**2 + yy**2))
fig = plt.figure()
ax = axes3d.Axes3D(fig)
ax.plot_surface(xx, yy, z, cmap=cm.jet, rstride=1, cstride=1, color='b', shade=True)
plt.show()
import plotly.express as px
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species")
fig.show()
import plotly.express as px
df = px.data.iris()
fig = px.scatter_matrix(df, dimensions=["sepal_width", "sepal_length", "petal_width", "petal_length"], color="species")
fig.show()
import plotly.express as px
df = px.data.gapminder()
fig = px.scatter(df, x="gdpPercap", y="lifeExp", animation_frame="year", animation_group="country",
size="pop", color="continent", hover_name="country", facet_col="continent",
log_x=True, size_max=45, range_x=[100,100000], range_y=[25,90])
fig.show()
import plotly.express as px
df = px.data.carshare()
fig = px.scatter_mapbox(df, lat="centroid_lat", lon="centroid_lon", color="peak_hour", size="car_hours",
color_continuous_scale=px.colors.cyclical.IceFire, size_max=15, zoom=10,
mapbox_style="carto-positron")
fig.show()
# C.f. https://plotly.com/python/time-series/
import pandas as pd
import plotly.express as px
import plotly.io as pio
#pio.renderers.default = "browser"
df = pd.read_csv("pvgis.csv.tar.gz")
df['time'] = pd.date_range("2005-01-01", periods=len(df), freq='1h')
fig = px.line(df,
x='time',
y=[
'temperature',
],
width=800,
height=400,
title='Ratios')
fig.update_xaxes(
rangeslider_visible=True,
rangeselector=dict(
buttons=list([
dict(count=1, label="1d", step="day", stepmode="backward"),
dict(count=7, label="1w", step="day", stepmode="backward"),
dict(count=1, label="1m", step="month", stepmode="backward"),
dict(count=1, label="1y", step="year", stepmode="backward"),
dict(step="all")
])
)
)
fig.show()
import ipywidgets
from ipywidgets import interact
%matplotlib inline
from ipywidgets import IntSlider
from IPython.display import display
slider = IntSlider(min=1, max=10) # make the widget
display(slider) # display it
import ipywidgets
slider = IntSlider(min=1, max=10, description='x') # make the widget
def f(x):
print(x**2)
out = ipywidgets.interactive_output(f, {'x': slider})
ipywidgets.HBox([ipywidgets.VBox([slider]), out])
def f(x):
return x
from ipywidgets import interact
interact(f, x=5);
def f(x):
print("Hello {}".format(x))
interact(f, x="IPython Widgets");
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=5);
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=(0, 100));
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=(0, 100, 10));
def plot(t):
fig = plt.figure()
ax = plt.axes(xlim=(0, 2), ylim=(-2, 2))
x = np.linspace(0, 2, 100)
y = np.sin(2. * np.pi * (x - 0.01 * t))
ax.plot(x, y, lw=2)
interact(plot, t=(0, 100, 1));
x = np.random.normal(size=1000)
def plot(num):
plt.hist(x, bins=num)
interact(plot, num=(10, 100));
x, y = np.random.normal(size=(2, 100000))
def plot(num):
fig = plt.figure(figsize=(8.0, 8.0))
ax = fig.add_subplot(111)
im = ax.hexbin(x, y, gridsize=num)
fig.colorbar(im, ax=ax)
interact(plot, num=(10, 60));
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=5.);
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=(0., 10.));
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=(0., 10., 0.5));
def greeting(upper):
text = "hello"
if upper:
print(text.upper())
else:
print(text.lower())
interact(greeting, upper=False);
def greeting(name):
print("Hello {}".format(name))
interact(greeting, name=["John", "Bob", "Alice"]);
def translate(word):
print(word)
interact(translate, word={"One": "Un", "Two": "Deux", "Three": "Trois"});
x = np.arange(-2 * np.pi, 2 * np.pi, 0.1)
def plot(function):
y = function(x)
plt.plot(x, y)
interact(plot, function={"Sin": np.sin, "Cos": np.cos});
def greeting(upper, name):
text = "hello {}".format(name)
if upper:
print(text.upper())
else:
print(text.lower())
interact(greeting, upper=False, name=["john", "bob", "alice"]);
@interact(text="IPython Widgets")
def greeting(text):
print("Hello {}".format(text))
@interact(num=5)
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=(0, 100))
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=(0, 100, 10))
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=5.)
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=(0., 10.))
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=(0., 10., 0.5))
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(upper=False)
def greeting(upper):
text = "hello"
if upper:
print(text.upper())
else:
print(text.lower())
@interact(name=["John", "Bob", "Alice"])
def greeting(name):
print("Hello {}".format(name))
@interact(word={"One": "Un", "Two": "Deux", "Three": "Trois"})
def translate(word):
print(word)
x = np.arange(-2 * np.pi, 2 * np.pi, 0.1)
@interact(function={"Sin": np.sin, "Cos": np.cos})
def plot(function):
y = function(x)
plt.plot(x, y)
@interact
def square(num=2):
print("{} squared is {}".format(num, num*num))
@interact
def square(num=(0, 100)):
print("{} squared is {}".format(num, num*num))
@interact
def square(num=(0, 100, 10)):
print("{} squared is {}".format(num, num*num))
from ipywidgets import interactive
def f(a, b):
display(a + b)
return a+b
w = interactive(f, a=10, b=20)
display(w)
w.kwargs
w.result
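# Illustrative: the interactive object is itself a widget container; its children
# (the input controls plus the output area) can be rearranged into custom layouts.
print(w.children)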
x = np.random.normal(size=1000)
def plot(num):
x = np.arange(-5, 5, 0.25)
y = np.arange(-5, 5, 0.25)
xx,yy = np.meshgrid(x, y)
z = np.sin(np.sqrt(xx**2 + yy**2) + num)
fig = plt.figure()
ax = axes3d.Axes3D(fig)
ax.set_title("sin(sqrt(x² + y²) + {:0.2f})".format(num))
ax.plot_wireframe(xx, yy, z)
interact(plot, num=(10., 25., 0.1));
interactive_plot = interactive(plot, num=(10., 25., 0.1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
from matplotlib.ticker import FuncFormatter
X, Y = np.random.normal(size=(2, 100000))
def plot(num, name):
fig = plt.figure(figsize=(8.0, 8.0))
ax = fig.add_subplot(111)
x = np.log10(X) if name in ("xlog", "loglog") else X
y = np.log10(Y) if name in ("ylog", "loglog") else Y
im = ax.hexbin(x, y, gridsize=num)
fig.colorbar(im, ax=ax)
# Use "10^n" instead "n" as ticks label
func_formatter = lambda x, pos: r'$10^{{{}}}$'.format(int(x))
ax.xaxis.set_major_formatter(FuncFormatter(func_formatter))
ax.yaxis.set_major_formatter(FuncFormatter(func_formatter))
ax.set_title(name)
interact(plot, num=(10, 60), name=["linear", "xlog", "ylog", "loglog"]);
import scipy.optimize
xmin = [-4., -5.]
xmax = [4., 10.]
f = scipy.optimize.rosen
def plot(cl1, cl2):
# Setup
x1_space = np.linspace(xmin[0], xmax[0], 100)
x2_space = np.linspace(xmin[1], xmax[1], 100)
x1_mesh, x2_mesh = np.meshgrid(x1_space, x2_space)
z = [f([x, y]) for x, y in zip(x1_mesh.ravel(), x2_mesh.ravel())]
zz = np.array(z).reshape(x1_mesh.shape)
# Plot
fig, ax = plt.subplots(figsize=(15, 7))
im = ax.pcolormesh(x1_mesh, x2_mesh, zz,
shading='gouraud',
norm=matplotlib.colors.LogNorm(), # TODO
cmap='gnuplot2') # 'jet' # 'gnuplot2'
plt.colorbar(im, ax=ax)
levels = (cl1, cl2) # TODO
cs = plt.contour(x1_mesh, x2_mesh, zz, levels,
linewidths=(2, 3),
linestyles=('dashed', 'solid'), # 'dotted', '-.',
alpha=0.5,
colors='red')
ax.clabel(cs, inline=False, fontsize=12)
#interact(plot, cl1=(0.1, 10., 0.1), cl2=(0.1, 100., 0.1));
interactive_plot = interactive(plot, cl1=(1., 99., 0.1), cl2=(100., 10000., 0.1))
output = interactive_plot.children[-1]
output.layout.height = '500px'
interactive_plot
import statsmodels.api as sm
# https://www.statsmodels.org/devel/datasets/index.html
data = sm.datasets.elnino.load_pandas()
df = data.data
df.index = df.YEAR
df = df.drop(['YEAR'], axis=1)
def plot(year):
ax = df.loc[year,:].plot()
ax.set_ylim(15, 30)
ax.set_title("El Nino - Sea Surface Temperatures")
interact(plot, year=(1950, 2010));
# Activation functions ########################################################
def identity(x):
return x
def tanh(x):
return np.tanh(x)
def relu(x):
x_and_zeros = np.array([x, np.zeros(x.shape)])
return np.max(x_and_zeros, axis=0)
# Dense Multi-Layer Neural Network ############################################
IN_SIZE = 2
OUT_SIZE = 1
H_SIZE = 4 # H_SIZE = number of neurons on the hidden layer
# Set the neural network activation functions (one function per layer)
activation_functions = (relu, tanh)
# Make a neural network with 2 hidden layers of `H_SIZE` units
weights = (np.random.random(size=[IN_SIZE + 1, H_SIZE]),
np.random.random(size=[H_SIZE + 1, OUT_SIZE]))
def feed_forward(inputs, weights, activation_functions, verbose=False):
x = inputs.copy()
for layer_weights, layer_activation_fn in zip(weights, activation_functions):
y = np.dot(x, layer_weights[1:])
y += layer_weights[0]
layer_output = layer_activation_fn(y)
if verbose:
print("x", x)
print("bias", layer_weights[0])
print("W", layer_weights[1:])
print("y", y)
print("z", layer_output)
x = layer_output
return layer_output
xmin, xmax = -10., 10.
res = 100
def plot(w11, w12, w13, w14, w21, w22, w23, w24, w31, w32, w33, w34, wo1, wo2, wo3, wo4, wo5):
weights = (np.array([[w11, w12, w13, w14],
[w21, w22, w23, w24],
[w31, w32, w33, w34]]),
np.array([[wo1],
[wo2],
[wo3],
[wo4],
[wo5]]))
x1_space = np.linspace(xmin, xmax, res)
x2_space = np.linspace(xmin, xmax, res)
x1_mesh, x2_mesh = np.meshgrid(x1_space, x2_space)
z = [feed_forward(inputs=[x, y],
weights=weights,
activation_functions=activation_functions) for x, y in zip(x1_mesh.ravel(), x2_mesh.ravel())]
zz = np.array(z).reshape(x1_mesh.shape)
fig, ax = plt.subplots(figsize=(8, 8))
im = ax.pcolormesh(x1_mesh, x2_mesh, zz,
shading='gouraud',
#norm=matplotlib.colors.LogNorm(), # TODO
cmap='magma') # 'jet' # 'gnuplot2'
plt.colorbar(im, ax=ax)
from ipywidgets import GridspecLayout, FloatSlider, Layout
import random
grid = GridspecLayout(5, 4)
for i in range(5):
for j in range(4):
grid[i, j] = FloatSlider(value=random.random(), min=-5., max=5., step=0.1, description="w{}{}".format(i, j), layout=Layout(width='75%'))
out = ipywidgets.interactive_output(plot, {'w11': grid[0, 0], 'w12': grid[1, 0], 'w13': grid[2, 0], 'w14': grid[3, 0],
'w21': grid[0, 1], 'w22': grid[1, 1], 'w23': grid[2, 1], 'w24': grid[3, 1],
'w31': grid[0, 2], 'w32': grid[1, 2], 'w33': grid[2, 2], 'w34': grid[3, 2],
'wo1': grid[0, 3], 'wo2': grid[1, 3], 'wo3': grid[2, 3], 'wo4': grid[3, 3], 'wo5': grid[4, 3]})
#display(grid, out)
ipywidgets.VBox([grid, out])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
Step2: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
Step3: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold-out evaluation (50-50 and 75-25) and discuss the results.
|
<ASSISTANT_TASK:>
Python Code:
#First, the libraries. And, make sure matplotlib shows up in jupyter notebook! hurrah
import pandas as pd
from sklearn import datasets
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
%matplotlib inline
iris = datasets.load_iris()
x = iris.data[:,2:] # the attributes
y = iris.target # the target variable
from sklearn import tree
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
from sklearn.cross_validation import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5,random_state=42)
dt = dt.fit(x_train,y_train)
from sklearn import metrics
import numpy as np
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confusion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
    if show_confusion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75,random_state=42)
measure_performance(x_test,y_test,dt)
bc=datasets.load_breast_cancer()
print(bc.keys())
print(bc['target_names'])
bc['DESCR']
print(bc['target'])
bc['data']
bc['feature_names']
df = pd.DataFrame(bc.data, columns= bc.feature_names)
#Okay this does not work because in my data frame I only have the features, not the classes, no way to see the best predictors for the classes. :(
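# A sketch of one way around this (assumption: just append the target as a column):
df['target'] = bc.target
print(df.groupby('target').mean().iloc[:, :4])  # compare a few feature means per class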
x = bc.data[:,21:] # the attributes. I chose column 22 onward because... they have the word "worst" in them... :/
y = bc.target # the target variable. It has already been dummified. I guess. This dataset is unfriendly
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
#With a 50/50 split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5,random_state=42)
dt = dt.fit(x_train,y_train)
measure_performance(x_test,y_test,dt)
#With a 75/25 split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75,random_state=42)
dt = dt.fit(x_train,y_train)
measure_performance(x_test,y_test,dt)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
Step2: The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).
Step3: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step4: Configuration of Neural Network
Step5: Load Data
Step6: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step7: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
Step8: Data Dimensions
Step9: Helper-function for plotting images
Step10: Plot a few images to see if data is correct
Step11: TensorFlow Graph
Step12: Helper-function for creating a new Convolutional Layer
Step13: Helper-function for flattening a layer
Step14: Helper-function for creating a new Fully-Connected Layer
Step15: Placeholder variables
Step16: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step17: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step18: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step19: Convolutional Layer 1
Step20: Check the shape of the tensor that will be output by the convolutional layer. It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.
Step21: Convolutional Layer 2
Step22: Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.
Step23: Flatten Layer
Step24: Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.
Step25: Fully-Connected Layer 1
Step26: Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.
Step27: Fully-Connected Layer 2
Step28: Predicted Class
Step29: The class-number is the index of the largest element.
Step30: Cost-function to be optimized
Step31: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step32: Optimization Method
Step33: Performance Measures
Step34: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step35: TensorFlow Run
Step36: Initialize variables
Step37: Helper-function to perform optimization iterations
Step38: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
Step39: Helper-function to plot example errors
Step40: Helper-function to plot confusion matrix
Step41: Helper-function for showing the performance
Step42: Performance before any optimization
Step43: Performance after 1 optimization iteration
Step44: Performance after 100 optimization iterations
Step45: Performance after 1000 optimization iterations
Step46: Performance after 10,000 optimization iterations
Step47: Visualization of Weights and Layers
Step48: Helper-function for plotting the output of a convolutional layer
Step49: Input Images
Step50: Plot an image from the test-set which will be used as an example below.
Step51: Plot another example image from the test-set.
Step52: Convolution Layer 1
Step53: Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.
Step54: The following images are the results of applying the convolutional filters to the second image.
Step55: It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.
Step56: There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
Step57: It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.
Step58: And these are the results of applying the filter-weights to the second image.
Step59: From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
Image('images/02_convolution.png')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
tf.__version__
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 16 # There are 16 of these filters.
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 36 # There are 36 of these filters.
# Fully-connected layer.
fc_size = 128 # Number of neurons in fully-connected layer.
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
data.test.cls = np.argmax(data.test.labels, axis=1)
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True)
layer_conv1
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True)
layer_conv2
layer_flat, num_features = flatten_layer(layer_conv2)
layer_flat
num_features
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_size,
use_relu=True)
layer_fc1
layer_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=fc_size,
num_outputs=num_classes,
use_relu=False)
layer_fc2
y_pred = tf.nn.softmax(layer_fc2)
y_pred_cls = tf.argmax(y_pred, dimension=1)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.initialize_all_variables())
train_batch_size = 64
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
print_test_accuracy()
optimize(num_iterations=1)
print_test_accuracy()
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
image1 = data.test.images[0]
plot_image(image1)
image2 = data.test.images[13]
plot_image(image2)
plot_conv_weights(weights=weights_conv1)
plot_conv_layer(layer=layer_conv1, image=image1)
plot_conv_layer(layer=layer_conv1, image=image2)
plot_conv_weights(weights=weights_conv2, input_channel=0)
plot_conv_weights(weights=weights_conv2, input_channel=1)
plot_conv_layer(layer=layer_conv2, image=image1)
plot_conv_layer(layer=layer_conv2, image=image2)
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Complete graph Laplacian
Step3: The Laplacian matrix is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
    a=np.zeros((n,n), int)  # use an integer dtype so D.dtype is int, as the asserts below expect
np.fill_diagonal(a,(n-1)) # found this line on http://docs.scipy.org/doc/numpy/reference/generated/numpy.fill_diagonal.html
return a
print(complete_deg(3))
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
a=np.ones((n,n),int)
np.fill_diagonal(a,(0))
return a
print(complete_adj(4))
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
def L(n):
a=np.linalg.eigvals(complete_deg(n)-complete_adj(n))
return a
print(L(10))
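# Illustrative exploration supporting the conjecture the task asks for: the spectrum
# of K_n appears to be the eigenvalue 0 once and the eigenvalue n with multiplicity n-1.
for k in (3, 5, 8):
    print(k, np.round(np.sort(L(k).real), 6))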
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Azure Blob Storage with TensorFlow
Step2: Installing and setting up Azurite (optional)
Step3: Reading and writing files on Azure Storage with TensorFlow
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
!npm install azurite@2.7.0
# The path for npm might not be exposed in PATH env,
# you can find it out through 'npm bin' command
npm_bin_path = get_ipython().getoutput('npm bin')[0]
print('npm bin path: ', npm_bin_path)
# Run `azurite-blob -s` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw(npm_bin_path + '/' + 'azurite-blob -s &')
import os
import tensorflow as tf
import tensorflow_io as tfio
# Switch to False to use Azure Storage instead:
use_emulator = True
if use_emulator:
os.environ['TF_AZURE_USE_DEV_STORAGE'] = '1'
account_name = 'devstoreaccount1'
else:
# Replace <key> with Azure Storage Key, and <account> with Azure Storage Account
os.environ['TF_AZURE_STORAGE_KEY'] = '<key>'
account_name = '<account>'
# Alternatively, you can use a shared access signature (SAS) to authenticate with the Azure Storage Account
os.environ['TF_AZURE_STORAGE_SAS'] = '<your sas>'
account_name = '<account>'
pathname = 'az://{}/aztest'.format(account_name)
tf.io.gfile.mkdir(pathname)
filename = pathname + '/hello.txt'
with tf.io.gfile.GFile(filename, mode='w') as w:
w.write("Hello, world!")
with tf.io.gfile.GFile(filename, mode='r') as r:
print(r.read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating Evoked objects from Epochs
Step2: You may have noticed that MNE informed us that "baseline correction" has been
Step3: Basic visualization of Evoked objects
Step4: Like the plot() methods for
Step5: To select based on time in seconds, the
Step6: Similarities among the core data structures
Step7: Notice that
Step8: If you want to load only some of the conditions present in a .fif file,
Step9: Previously, when we created an
Step10: This can be remedied by either passing a baseline parameter to
Step11: Notice that
Step12: This approach will weight each epoch equally and create a single
Step13: However, this may not always be the case. If for statistical reasons it is
Step14: Note that the nave attribute of the resulting
|
<ASSISTANT_TASK:>
Python Code:
import mne
root = mne.datasets.sample.data_path() / 'MEG' / 'sample'
raw_file = root / 'sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(raw_file, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
# we'll skip the "face" and "buttonpress" conditions to save memory
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
evoked = epochs['auditory/left'].average()
del raw # reduce memory usage
print(f'Epochs baseline: {epochs.baseline}')
print(f'Evoked baseline: {evoked.baseline}')
evoked.plot()
print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints
evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)
print(evoked_eeg.ch_names)
new_order = ['EEG 002', 'MEG 2521', 'EEG 003']
evoked_subset = evoked.copy().reorder_channels(new_order)
print(evoked_subset.ch_names)
evk_file = root / 'sample_audvis-ave.fif'
evokeds_list = mne.read_evokeds(evk_file, verbose=False)
print(evokeds_list)
print(type(evokeds_list))
for evok in evokeds_list:
print(evok.comment)
right_vis = mne.read_evokeds(evk_file, condition='Right visual')
print(right_vis)
print(type(right_vis))
evokeds_list[0].plot(picks='eeg')
# Original baseline (none set)
print(f'Baseline after loading: {evokeds_list[0].baseline}')
# Apply a custom baseline correction
evokeds_list[0].apply_baseline((None, 0))
print(f'Baseline after calling apply_baseline(): {evokeds_list[0].baseline}')
# Visualize the evoked response
evokeds_list[0].plot(picks='eeg')
left_right_aud = epochs['auditory'].average()
print(left_right_aud)
left_aud = epochs['auditory/left'].average()
right_aud = epochs['auditory/right'].average()
print([evok.nave for evok in (left_aud, right_aud)])
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')
assert left_right_aud.nave == left_aud.nave + right_aud.nave
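# A hedged alternative (per the MNE API): weight the two conditions equally
# rather than by their trial counts.
left_right_equal = mne.combine_evoked([left_aud, right_aud], weights='equal')
print(left_right_equal.nave)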
for ix, trial in enumerate(epochs[:3].iter_evoked()):
channel, latency, value = trial.get_peak(ch_type='eeg',
return_amplitude=True)
latency = int(round(latency * 1e3)) # convert to milliseconds
value = int(round(value * 1e6)) # convert to µV
print('Trial {}: peak of {} µV at {} ms in channel {}'
.format(ix, value, latency, channel))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we need to define six different variables
Step2: Now we can create two SymPy expressions that represent our two equations. We can subtract the %crystallinity from the left side of the equation to set the equation to zero. The result of moving the %crystallinity term to the other side of the equation is shown below. Note how the second equation equals zero.
Step3: Next, we'll substitute the known values $\rho_1 = 0.904$ and $c_1 = 0.628$ into our first expression expr1. Note that SymPy expressions are not modified in place: you need to capture the output of the .subs method in a variable.
Step4: Now we'll substitute the second set of given values $\rho_2 = 0.895$ and $c_2 = 0.544$ into our second expression expr2.
Step5: We'll use SymPy's nonlinsolve() function to solve the two equations expr1 and expr2 for the two unknowns pa and pc. SymPy's nonlinsolve() function expects a list of expressions [expr1,expr2] followed by a list of variables [pa,pc] to solve for.
Step6: We see that the value of $\rho_a = 0.84079$ and $\rho_c = 0.94613$.
Step7: Use SymPy to calculate a numerical result
Step8: Next, we will create three SymPy symbols objects. These three symbols objects will be used to build our expression.
Step9: The expression that relates % crystallinity of a polymer sample to the density of 100% amorphous and 100% crystalline versions of the same polymer is below.
Step10: Now we can substitute our $ \rho_a $ and $ \rho_c $ from above. Note that SymPy's .subs() method does not modify an expression in place. We have to set the modified expression to a new variable before we can make another substitution. After the substitutions are complete, we can print out the numerical value of the expression. This is accomplished with SymPy's .evalf() method.
Step11: As a final step, we can print out the answer using a Python f-string.
|
<ASSISTANT_TASK:>
Python Code:
from sympy import symbols, nonlinsolve
pc, pa, p1, p2, c1, c2 = symbols('pc pa p1 p2 c1 c2')
expr1 = ( (pc*(p1-pa) ) / (p1*(pc-pa)) - c1)
expr2 = ( (pc*(p2-pa) ) / (p2*(pc-pa)) - c2)
expr1 = expr1.subs(p1, 0.904)
expr1 = expr1.subs(c1, 0.628)
print(expr1)
expr2 = expr2.subs(p2, 0.895)
expr2 = expr2.subs(c2, 0.544)
print(expr2)
sol = nonlinsolve([expr1,expr2],[pa,pc])
print(sol)
print(type(sol))
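# Sanity check (illustrative): substituting the solution back into both
# expressions should give values numerically close to zero.
pa_val, pc_val = sol.args[0]
print(expr1.subs({pa: pa_val, pc: pc_val}).evalf())
print(expr2.subs({pa: pa_val, pc: pc_val}).evalf())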
pa = sol.args[0][0]
pc = sol.args[0][1]
print(f' Density of 100% amorphous polymer, pa = {round(pa,2)} g/cm3')
print(f' Density of 100% crystaline polymer, pc = {round(pc,2)} g/cm3')
print(pa)
print(pc)
pc, pa, ps = symbols('pc pa ps')
expr = ( pc*(ps-pa) ) / (ps*(pc-pa))
expr = expr.subs(pa, 0.840789786223278)
expr = expr.subs(pc, 0.946134313397929)
expr = expr.subs(ps, 0.921)
print(expr.evalf())
print(f'The percent crystallinity of the sample is {round(expr*100,1)} percent')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-sr5', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
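# Example of a filled-in property (the value below is illustrative only,
# not an actual entry for this model):
# DOC.set_value("NPZD")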
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define Common Parameters
Step2: Question 1.
Step3: Run the simulation using the Binomial process, which is equivalent to performing a very large number (~1000s) of Bernoulli trials and grouping their results, since the order in which 1s and 0s occur in the sequence does not affect the final outcome.
Step4: Question 2.
Step5: The previous plot shows the evolution of the capital throughout the Binomial process; alongside it we show the mean and the most probable value of the possible outcomes. As the number of iterations increases, the mean surpasses the most probable value for good while maintaining a very close gap.
Step6: Question 5.
|
<ASSISTANT_TASK:>
Python Code:
# Numpy
import numpy as np
# Scipy
from scipy import stats
# Plotly
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
init_notebook_mode(connected=True) # Offline plotting
# Probabilities
P_G = 0.8
# Return on investment rates
ROI_G = 1
ROI_L = -0.2
# Principal (initial capital)
P = 1
# Takes the principal P and performs the evolution of the capital using
# the result x of the random binomial variable after n trials
def evolve_with_binomial(P, x, n):
return P * ((1 + ROI_G) ** x) * ((1 + ROI_L) ** (n - x))
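# Quick illustration of the equivalence claimed above (demo values only):
# one Binomial(n, p) draw has the same distribution as the sum of n
# Bernoulli(p) draws, so the order of wins and losses never changes the
# final capital.
n_demo = 10
bernoulli_sum = np.random.binomial(1, P_G, size=n_demo).sum()
binomial_draw = np.random.binomial(n_demo, P_G)
print(evolve_with_binomial(P, bernoulli_sum, n_demo),
      evolve_with_binomial(P, binomial_draw, n_demo))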
# Number of iterations
years = 5
iterations_per_year = 2
n = iterations_per_year * (years)
# Sorted array of unique values occurring in an instance of the Binomial process
x_binomial = np.linspace(0, n, n + 1)
# Arrays of data to plot
data_dict = { 'x': [], 'y': []}
data_dict['x'] = [evolve_with_binomial(P, x, max(x_binomial)) for x in x_binomial]
data_dict['y'] = stats.binom.pmf(x_binomial,max(x_binomial),P_G)
# Plot data variable. It contains the trace objects
fig_data = [
go.Bar(
x=data_dict['x'],
y=data_dict['y'],
name="Probabilities"
),
go.Scatter(
x=data_dict['x'],
y=data_dict['y'],
mode='lines+markers',
name="Fitting",
line=dict(
shape='spline'
)
)
]
# Set layout for figure
layout = go.Layout(
title='Binomial Distribution of Capital at N Iterations',
font=dict(
family='Arial, sans-serif;',
size=12,
color='#000'
),
xaxis = dict(title='Capital Multiplier'),
yaxis = dict(title='Event Probability'),
orientation=0,
autosize=True,
annotations=[
dict(
x=max(data_dict['x'])/2,
y=max(data_dict['y']),
text='N: {0} | P_G: {1}'.format(n, P_G),
showarrow=False
)
]
)
# Plot figure
#iplot({"data": fig_data, "layout": layout})
# Number of iterations
years = 5
iterations_per_year = 2
n = iterations_per_year * (years)
# Arrays of data to plot
data_dict = { 'values': [], 'probs': np.array([]), 'iterations': [], 'mean': [], 'most_prob': [], 'uniq_iterations': []}
# For each iteration less than the maximum number of iterations
i = 1
while i <= n:
    x_i = np.linspace(0, i, i + 1) # Possible numbers of successes in "i" trials
values = [evolve_with_binomial(P, x, max(x_i)) for x in x_i] # Capital evolution according to Binomial process
probs = stats.binom.pmf(x_i,max(x_i),P_G) # Probabilities of Binomial process
# Set values in dictionary
data_dict['values'] = data_dict['values'] + values
data_dict['mean'].append(np.mean(values))
data_dict['most_prob'].append(values[np.argmax(probs)])
data_dict['uniq_iterations'].append(i)
data_dict['probs'] = np.concatenate((data_dict['probs'], probs), axis=0)
data_dict['iterations'] = data_dict['iterations'] + [i]*len(x_i)
i += 1
# Plot data variable. It contains the trace objects
fig_data = [
go.Scatter(
x=data_dict['iterations'],
y=data_dict['values'],
mode='markers',
name="Evolution",
marker=dict(
cmin = 0,
cmax = 1,
color = data_dict['probs'],
size = 16
)
),
go.Scatter(
x=data_dict['uniq_iterations'],
y=data_dict['mean'],
mode='lines+markers',
name="Mean",
line=dict(
shape='spline'
)
),
go.Scatter(
x=data_dict['uniq_iterations'],
y=data_dict['most_prob'],
mode='lines+markers',
name="Most Probable",
line=dict(
shape='spline'
)
)
]
# Set layout for figure
layout = go.Layout(
title='Evolution of Capital Through Binomial Process',
font=dict(
family='Arial, sans-serif;',
size=12,
color='#000'
),
xaxis = dict(title='Iteration Number'),
yaxis = dict(title='Capital Multiplier'),
orientation=0,
autosize=True,
annotations=[
dict(
x=n/2,
y=max(data_dict['values']),
text='P_G: {0}'.format(P_G),
showarrow=False
)
]
)
# Plot figure
#iplot({"data": fig_data, "layout": layout})
# Calculate the possible capital declines and their respective probabilities
data_dict["decline_values"] = []
data_dict["decline_probs"] = []
data_dict["decline_iterations"] = []
for index, val in enumerate(data_dict["values"]):
if val < 1:
data_dict["decline_values"].append((1-val)*100)
data_dict["decline_probs"].append(100*data_dict["probs"][index])
data_dict["decline_iterations"].append(data_dict["iterations"][index])
# Plot data variable. It contains the trace objects
fig_data = [
go.Scatter(
x=data_dict['decline_iterations'],
y=data_dict['decline_values'],
mode='markers',
name="Evolution",
marker=dict(
cmin = 0,
cmax = 1,
color = data_dict['decline_probs']
)
)
]
fig_data[0].text = ["Probability: {0:.2f}%".format(prob) for prob in data_dict["decline_probs"]]
# Set layout for figure
layout = go.Layout(
title='Possible Capital Decline Through Binomial Process',
font=dict(
family='Arial, sans-serif;',
size=12,
color='#000'
),
xaxis = dict(title='Iteration Number'),
yaxis = dict(title='Percentage Decline [%]'),
orientation=0,
autosize=True,
annotations=[
dict(
x=max(data_dict["decline_iterations"])/2,
y=max(data_dict['decline_values']),
text='P_G: {0}'.format(P_G),
showarrow=False
)
]
)
# Plot figure
#iplot({"data": fig_data, "layout": layout})
# Capital percentage decline of bankruptcy
CP_br = 20
# Variable to store the plot data
data_dict["bankruptcy_probs"] = []
data_dict["bankruptcy_iterations"] = []
# Calculate for each iteration the probability of bankruptcy
iter_counter = 0
for i, iteration in enumerate(data_dict["decline_iterations"]):
if data_dict["decline_values"][i] >= CP_br:
if iteration > iter_counter:
data_dict["bankruptcy_probs"].append(data_dict["decline_probs"][i])
data_dict["bankruptcy_iterations"].append(iteration)
else:
data_dict["bankruptcy_probs"][-1] = data_dict["bankruptcy_probs"][-1] + data_dict["decline_probs"][i]
iter_counter = iteration
# Plot data variable. It contains the trace objects
fig_data = [
go.Scatter(
x=data_dict['bankruptcy_iterations'],
y=data_dict['bankruptcy_probs'],
mode='lines+markers',
name="Mean",
line=dict(
shape='spline'
)
)
]
# Set layout for figure
layout = go.Layout(
title='Probability of Bankruptcy Through Binomial Process',
font=dict(
family='Arial, sans-serif;',
size=12,
color='#000'
),
xaxis = dict(title='Iteration Number'),
yaxis = dict(title='Event Probability [%]'),
orientation=0,
autosize=True,
annotations=[
dict(
x=max(data_dict['bankruptcy_iterations'])/2,
y=max(data_dict['bankruptcy_probs']),
text='P_G: {0} | CP_br: {1}%'.format(P_G, CP_br),
showarrow=False
)
]
)
# Plot figure
#iplot({"data": fig_data, "layout": layout})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write a function that checks whether a number is in a given range (Inclusive of high and low)
Step2: If you only wanted to return a boolean
Step3: Write a Python function that accepts a string and calculates the number of upper case letters and lower case letters.
Step4: Write a Python function that takes a list and returns a new list with unique elements of the first list.
Step5: Write a Python function to multiply all the numbers in a list.
Step6: Write a Python function that checks whether a passed string is a palindrome or not.
Step7: Hard
|
<ASSISTANT_TASK:>
Python Code:
import math
def vol(rad):
    # use float division so the 4/3 factor is not truncated under Python 2
    return (4.0 / 3) * math.pi * rad ** 3
vol(2)
def ran_check(num,low,high):
return low <= num <= high
ran_check(3,4,5)
ran_check(3,1,100)
def ran_bool(num,low,high):
return low <= num <= high
ran_bool(3,1,10)
def up_low(s):
nUpper = 0
nLower = 0
for word in s.split():
for letter in word:
if letter.isupper():
nUpper += 1
elif letter.islower():
nLower += 1
print 'No. of Upper case character : %d' % nUpper
print 'No. of Lower case character : %d' % nLower
up_low('Hello Mr. Rogers, how are you this fine Tuesday?')
def unique_list(l):
return list(set(l))
unique_list([1,1,1,1,2,2,3,3,3,3,4,5])
def multiply(numbers):
return reduce(lambda x,y: x*y, numbers)
multiply([1,2,3,-4])
def palindrome(s):
return s[::-1] == s
palindrome('helleh')
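# Variant (illustrative): phrase palindromes usually ignore spaces and case.
def palindrome_phrase(s):
    s = s.replace(' ', '').lower()
    return s == s[::-1]
palindrome_phrase('nurses run')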
import string
def ispangram(str1, alphabet=string.ascii_lowercase):
    # A pangram contains every letter of the alphabet at least once,
    # so check that the alphabet is a subset of the string's characters.
    return set(alphabet) <= set(str1.lower())
ispangram("The quick brown fox jumps over the lazy dog")
string.ascii_lowercase
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can see the number of records in each column to ensure all of our datapoints are complete
Step2: And we can see the data type for each column like so
Step3: Visualization
Step4: Or we can use pairplot to do this for all combinations of features!
Step5: From these plots we can see that Iris setosa is linearly separable from the others in all feature pairs. This could prove useful for the design of our network classifier.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
iris = pd.read_csv('data/iris.csv')
# Display the first few rows of the dataframe
iris.head()
iris.count()
iris.dtypes
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
sns.FacetGrid(iris, hue="Species", size=6) \
.map(plt.scatter, "SepalLengthCm", "SepalWidthCm") \
.add_legend()
sns.pairplot(iris.drop("Id", axis=1), hue="Species", size=3)
%matplotlib inline
# This cell can be run independently of the ones above it.
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
# Path for saving model data
model_path = 'tmp/model.ckpt'
# Hyperparameters
learn_rate = .5
batch_size = 10
epochs = 50
# Load the data into dataframes
# There is NO OVERLAP between the training and testing data
# Take a minute to remember why this should be the case!
iris_train = pd.read_csv('data/iris_train.csv', dtype={'Species': 'category'})
iris_test = pd.read_csv('data/iris_test.csv', dtype={'Species': 'category'})
test_features = iris_test.as_matrix()[:,:4]
test_targets = pd.get_dummies(iris_test.Species).as_matrix()
# Create placeholder for the input tensor (input layer):
# Our input has four features so our shape will be (none, 4)
# A variable number of rows and four feature columns.
x = tf.placeholder(tf.float32, [None, 4])
# Outputs will have 3 columns since there are three categories
# This placeholder is for our targets (correct categories)
# It will be fed with one-hot vectors from the data
y_ = tf.placeholder(tf.float32, [None, 3])
# The baseline model will consist of a single softmax layer with
# weights W and bias b
# Because these values will be calculated and recalculated
# on the fly, we'll declare variables for them.
# We use a normal distribution to initialize our matrix with small random values
W = tf.Variable(tf.truncated_normal([4, 3], stddev=0.1))
# And an initial value of zero for the bias.
b = tf.Variable(tf.zeros([3]))
# We define our simple model here
y = tf.nn.softmax(tf.matmul(x, W) + b)
#=================================================================
# And our cost function here (make sure only one is uncommented!)|
#=================================================================
# Mean Squared Error
cost = tf.reduce_mean(tf.squared_difference(y_, y))
# Cross-Entropy (note: tf.nn.softmax_cross_entropy_with_logits applies the
# softmax internally and expects the *pre-softmax* logits, so to use it,
# pass tf.matmul(x, W) + b rather than the already-softmaxed y)
#cost = tf.reduce_mean(
#    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
#
#=================================================================
# Gradient descent step
train_step = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)
# Start a TensorFlow session
with tf.Session() as sess:
# Initialize all of the Variables
sess.run(tf.global_variables_initializer())
# Operation for saving all variables
saver = tf.train.Saver()
# Training loop
for epoch in range(epochs):
avg_cost = 0.
num_batches = int(iris_train.shape[0]/batch_size)
for _ in range(num_batches):
# Randomly select <batch_size> samples from the set (with replacement)
batch = iris_train.sample(n=batch_size)
# Capture the x and y_ data
batch_features = batch.as_matrix()[:,:4]
# get_dummies turns our categorical data into one-hot vectors
batch_targets = pd.get_dummies(batch.Species).as_matrix()
# Run the training step using batch_features and batch_targets
# as x and y_, respectively and capture the cost at each step
_, c = sess.run([train_step, cost], feed_dict={x:batch_features, y_:batch_targets})
# Calculate the average cost for the epoch
avg_cost += c/num_batches
# Print epoch results
print("Epoch %04d cost: %s" % (epoch + 1, "{:.4f}".format(avg_cost)))
# If our model's most likely classification is equal to the one-hot index
# add True to our correct_prediction tensor
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
# Cast the boolean variables as floats and take the mean.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Calculate the percentage of correct answers using the test data
score = sess.run(accuracy, feed_dict={x: test_features, y_: test_targets}) * 100
print("\nThe model correctly identified %s of the test data." % "{:.2f}%".format(score))
# Save the model data
save_path = saver.save(sess, model_path)
print("\nModel data saved to %s" % model_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The steady-states are solutions to
Step2: So we see that the qualitative behaviour of the system depends on whether the maximum slope of the Hill function is less than or greater than $k_\text{decay}$ (the slope of the straight line).
Step3: This is an example of hysteresis.
Step4: So we see that the system is stable for $I_\text{app} = 0$, but a sufficiently large perturbation produces a large (excitable) response before the system returns to its original value.
Step5: Stability of the model
Step6: Hopf bifurcation in the FitzHugh-Nagumo model
Step7: Model discussion
|
<ASSISTANT_TASK:>
Python Code:
from python.f06 import *
%matplotlib inline
# try varying p0
# then set kact_s < 0.1, and try varying p0
interact(plot_switch, k=fixed(4), n=fixed(3))
interact(plot_switch_eqns, k=fixed(4), n=fixed(3))
# try kdecay = 0.13
interact(plot_switch_ss, k=fixed(4)) # n = 2
interact(plot_hh)
interact(plot_fitzn, Iapp=fixed(0), a=fixed(0.25), b=fixed(0.002), c=fixed(0.002))
interact(plot_fitzn_pp, Iapp=fixed(0), a=fixed(0.25), b=fixed(0.002), c=fixed(0.002))
interact(plot_fitzn_pp, Iapp=fixed(0), a=(0.333,0.334), b=fixed(0.111), c=fixed(1))
interact(plot_fitzn, v0=fixed(0), a=fixed(0.25), b=fixed(0.002), c=fixed(0.002))
interact(plot_fitzn_pp, v0=fixed(0), a=fixed(0.25), b=fixed(0.002), c=fixed(0.002))
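# Stand-alone sketch (independent of python.f06; the cubic form and the
# parameter values below are assumptions for illustration): integrating
# the FitzHugh-Nagumo equations directly with scipy.
import numpy as np
from scipy.integrate import odeint

def fhn(y, t, Iapp=0.0, a=0.25, b=0.002, c=0.002):
    v, w = y
    dv = v * (a - v) * (v - 1) - w + Iapp  # fast (voltage-like) variable
    dw = b * v - c * w                     # slow recovery variable
    return [dv, dw]

t = np.linspace(0, 1000, 5000)
sol = odeint(fhn, [0.3, 0.0], t)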
# Jupyter notebook setup
from IPython.core.display import HTML
HTML(open("../styles/custom.css", "r").read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Try to draw a square with sides of 150 steps.
Step2: Why do we write turtle.Turtle() to get a new Turtle object?
Step3: Make henry draw a green triangle while julian draws a yellow square (putting all the instructions in the same cell).
|
<ASSISTANT_TASK:>
Python Code:
import turtle  # we need a module called turtle for this part
ventana = turtle.Screen()  # creates a window to draw in
henry = turtle.Turtle()  # creates a turtle (cursor) named henry
henry.forward(150)  # we tell henry to move forward 150 steps
henry.left(90)  # henry, turn 90 degrees to the left
henry.undo()  # undoes the previous instruction
henry.position()  # This is an attribute
henry.color()  # This is also an attribute, and it can be changed
henry.color("pink")
henry.forward(100)  # This is a method that changes the cursor's position
# If I want to start over, but with a different background color
ventana.clear()
ventana.bgcolor("lightgreen")
henry = turtle.Turtle()
henry.pensize(3)
henry.goto(100,100)  # What's the difference from the forward method?
henry.penup()
henry.goto(100,0)
henry.pendown()
henry.goto(0,0)
julian = turtle.Turtle()  # creates another turtle named julian
# this cell works in binder
for amiga in ['Jacinta','Nepomucena','Gertrudis','Poncha','Domitila']:
    print("My best friend today is", amiga)  # indentation marks what I want repeated in the loop
# this cell works in binder
for i in ['Jacinta','Nepomucena','Gertrudis','Poncha','Domitila']:
    print("My best friend today is", i)
# this cell works in binder
amigas = ['Jacinta','Nepomucena','Gertrudis','Poncha','Domitila']  # this is an object called a list
amigas
# this cell works in binder
for amiga in amigas:
    print("My best friend today is", amiga)
# this cell works in binder
type(amigas)
# this cell works in binder
type(amiga)
# this cell works in binder
amiga
# this cell works in binder
k = 1
for amiga in amigas:
    print("My friend number", k, "is", amiga)
    k += 1
ventana.clear()
german = turtle.Turtle()
for i in [0, 1, 2, 3]:  # repeats the instructions four times
german.forward(150)
german.left(90)
for aColor in ["yellow", "red", "purple", "blue"]:
german.forward(50)
german.left(90)
for aColor in ["yellow", "red", "purple", "blue"]:
german.color(aColor)
german.forward(50)
german.left(90)
for aColor in ["yellow", "red", "purple", "blue"]:
german.color(aColor)
print(aColor)
german.forward(50)
german.left(90)
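# One possible answer to Step3 (illustrative; assumes henry and julian
# from the earlier cells still exist): interleave their moves so both
# shapes are drawn at roughly the same time.
henry.color("green")
julian.color("yellow")
for i in range(4):
    if i < 3:  # a triangle needs only 3 sides
        henry.forward(100)
        henry.left(120)
    julian.forward(100)
    julian.left(90)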
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will be loading the corpus and dictionary from disk. Here our corpus is in the Blei corpus format, but it can be any iterable corpus.
Step2: For DTM to work it first needs the Sufficient Statistics from a trained LDA model on the same dataset.
Step3: Now that our model is trained, let's see what our results look like.
Step4: We can see 5 different fairly well defined topics
Step5: Document - Topic Proportions
Step6: It's pretty clear that it's a news article about football. What topics will it likely be comprised of?
Step7: Let's look at our topics as described by us, again
Step8: Pretty neat! Topic 1 is about the Economy, and this document also has traces of football and technology, so topics 1, 3, and 4 got correctly activated.
Step9: The topic distributions are quite closely related, matching well in both football and economy.
Step10: As expected, the value is very high, meaning the topic distributions are far apart.
Step11: Now let us do the same, but after increasing the chain_variance value.
Step12: It's noticeable that the values move more freely after increasing the chain_variance. "Film" went from the highest probability to 5th to 8th!
Step13: LDA Model and DTM
Step14: If you take some time to look at the topics, you will notice large semantic similarities with the wrapper and the python DTM.
Step15: Now let us do the same for the python DTM.
Step16: Visualising topics is a handy way to compare topic models.
|
<ASSISTANT_TASK:>
Python Code:
# setting up our imports
from gensim.models import ldaseqmodel
from gensim.corpora import Dictionary, bleicorpus
import numpy
from gensim.matutils import hellinger
# loading our corpus and dictionary
try:
dictionary = Dictionary.load('datasets/news_dictionary')
except FileNotFoundError as e:
raise ValueError("SKIP: Please download the Corpus/news_dictionary dataset.")
corpus = bleicorpus.BleiCorpus('datasets/news_corpus')
# it's very important that your corpus is saved in order of your time-slices!
time_slice = [438, 430, 456]
ldaseq = ldaseqmodel.LdaSeqModel(corpus=corpus, id2word=dictionary, time_slice=time_slice, num_topics=5)
ldaseq.print_topics(time=0)
ldaseq.print_topic_times(topic=0) # evolution of 1st topic
# to check Document - Topic proportions, use `doc-topics`
words = [dictionary[word_id] for word_id, count in corpus[558]]
print (words)
doc = ldaseq.doc_topics(558) # check the 558th document in the corpuses topic distribution
print (doc)
doc_football_1 = ['economy', 'bank', 'mobile', 'phone', 'markets', 'buy', 'football', 'united', 'giggs']
doc_football_1 = dictionary.doc2bow(doc_football_1)
doc_football_1 = ldaseq[doc_football_1]
print (doc_football_1)
doc_football_2 = ['arsenal', 'fourth', 'wenger', 'oil', 'middle', 'east', 'sanction', 'fluctuation']
doc_football_2 = dictionary.doc2bow(doc_football_2)
doc_football_2 = ldaseq[doc_football_2]
hellinger(doc_football_1, doc_football_2)
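# Sanity check (illustrative): Hellinger distance is symmetric and bounded
# in [0, 1]; comparing a distribution with itself gives 0.
print(hellinger(doc_football_1, doc_football_1))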
doc_government_1 = ['tony', 'government', 'house', 'party', 'vote', 'european', 'official', 'house']
doc_government_1 = dictionary.doc2bow(doc_government_1)
doc_government_1 = ldaseq[doc_government_1]
hellinger(doc_football_1, doc_government_1)
ldaseq.print_topic_times(1)
ldaseq_chain = ldaseqmodel.LdaSeqModel(corpus=corpus, id2word=dictionary, time_slice=time_slice, num_topics=5, chain_variance=0.05)
ldaseq_chain.print_topic_times(2)
from gensim.models.wrappers.dtmmodel import DtmModel
from gensim.corpora import Dictionary, bleicorpus
import pyLDAvis
# dtm_path = "/Users/bhargavvader/Downloads/dtm_release/dtm/main"
# dtm_model = DtmModel(dtm_path, corpus, time_slice, num_topics=5, id2word=dictionary, initialize_lda=True)
# dtm_model.save('dtm_news')
# if we've saved before simply load the model
dtm_model = DtmModel.load('dtm_news')
doc_topic, topic_term, doc_lengths, term_frequency, vocab = dtm_model.dtm_vis(time=0, corpus=corpus)
vis_wrapper = pyLDAvis.prepare(topic_term_dists=topic_term, doc_topic_dists=doc_topic, doc_lengths=doc_lengths, vocab=vocab, term_frequency=term_frequency)
pyLDAvis.display(vis_wrapper)
doc_topic, topic_term, doc_lengths, term_frequency, vocab = ldaseq.dtm_vis(time=0, corpus=corpus)
vis_dtm = pyLDAvis.prepare(topic_term_dists=topic_term, doc_topic_dists=doc_topic, doc_lengths=doc_lengths, vocab=vocab, term_frequency=term_frequency)
pyLDAvis.display(vis_dtm)
from gensim.models.coherencemodel import CoherenceModel
import pickle
# we just have to specify the time-slice we want to find coherence for.
topics_wrapper = dtm_model.dtm_coherence(time=0)
topics_dtm = ldaseq.dtm_coherence(time=2)
# running u_mass coherence on our models
cm_wrapper = CoherenceModel(topics=topics_wrapper, corpus=corpus, dictionary=dictionary, coherence='u_mass')
cm_DTM = CoherenceModel(topics=topics_dtm, corpus=corpus, dictionary=dictionary, coherence='u_mass')
print ("U_mass topic coherence")
print ("Wrapper coherence is ", cm_wrapper.get_coherence())
print ("DTM Python coherence is", cm_DTM.get_coherence())
# to use 'c_v' we need texts, which we have saved to disk.
texts = pickle.load(open('Corpus/texts', 'rb'))
cm_wrapper = CoherenceModel(topics=topics_wrapper, texts=texts, dictionary=dictionary, coherence='c_v')
cm_DTM = CoherenceModel(topics=topics_dtm, texts=texts, dictionary=dictionary, coherence='c_v')
print ("C_v topic coherence")
print ("Wrapper coherence is ", cm_wrapper.get_coherence())
print ("DTM Python coherence is", cm_DTM.get_coherence())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Diagonal entries
Step2: Determinant
Step3: Transpose and symmetric matrices
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
import scipy.linalg

A = (np.arange(9) - 4).reshape((3, 3))
A
np.linalg.norm(A)
np.trace(np.eye(3))
A = np.array([[1, 2], [3, 4]])
A
np.linalg.det(A)
A = np.array([[1.0, 3.0], [1.0, 4.0]])
A
B = sp.linalg.logm(A)
B
sp.linalg.expm(B)
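# Illustrative addition for Step3 (transpose and symmetric matrices):
# any square matrix yields a symmetric part via (A + A.T) / 2.
print(A.T)
S = (A + A.T) / 2
print(np.allclose(S, S.T))  # True: S equals its own transpose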
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can access the values from the form by treating it as an array indexed on the field names
Step2: The array works both ways, so you can set default values on the fields by writing to the array
Step3: Event Handlers for Smarter Forms
Step4: All Kinds of Fields
Step5: Dates
Step6: SetData
Step7: Default Values and placeholder
Step8: JupyterJSWidgets work with EasyForm
|
<ASSISTANT_TASK:>
Python Code:
from beakerx import *
f = EasyForm("Form and Run")
f.addTextField("first")
f['first'] = "First"
f.addTextField("last")
f['last'] = "Last"
f.addButton("Go!", tag="run")
f
"Good morning " + f["first"] + " " + f["last"]
f['last'][::-1] + '...' + f['first']
f['first'] = 'Beaker'
f['last'] = 'Berzelius'
import operator
f1 = EasyForm("OnInit and OnChange")
f1.addTextField("first", width=15)
f1.addTextField("last", width=15)\
.onInit(lambda: operator.setitem(f1, 'last', "setinit1"))\
.onChange(lambda text: operator.setitem(f1, 'first', text + ' extra'))
button = f1.addButton("action", tag="action_button")
button.actionPerformed = lambda: operator.setitem(f1, 'last', 'action done')
f1
f1['last'] + ", " + f1['first']
f1['last'] = 'new Value'
f1['first'] = 'new Value2'
g = EasyForm("Field Types")
g.addTextField("Short Text Field", width=10)
g.addTextField("Text Field")
g.addPasswordField("Password Field", width=10)
g.addTextArea("Text Area")
g.addTextArea("Tall Text Area", 10, 5)
g.addCheckBox("Check Box")
options = ["a", "b", "c", "d"]
g.addComboBox("Combo Box", options)
g.addComboBox("Combo Box editable", options, editable=True)
g.addList("List", options)
g.addList("List Single", options, multi=False)
g.addList("List Two Row", options, rows=2)
g.addCheckBoxes("Check Boxes", options)
g.addCheckBoxes("Check Boxes H", options, orientation=EasyForm.HORIZONTAL)
g.addRadioButtons("Radio Buttons", options)
g.addRadioButtons("Radio Buttons H", options, orientation=EasyForm.HORIZONTAL)
g.addDatePicker("Date")
g.addButton("Go!", tag="run2")
g
result = dict()
for child in g:
result[child] = g[child]
TableDisplay(result)
gdp = EasyForm("Field Types")
gdp.addDatePicker("Date")
gdp
gdp['Date']
easyForm = EasyForm("Field Types")
easyForm.addDatePicker("Date", value=datetime.today().strftime('%Y%m%d'))
easyForm
h = EasyForm("Default Values")
h.addTextArea("Default Value", value = "Initial value")
h.addTextArea("Place Holder", placeholder = "Put here some text")
h.addCheckBox("Default Checked", value = True)
h.addButton("Press", tag="check")
h
result = dict()
for child in h:
result[child] = h[child]
TableDisplay(result)
from ipywidgets import *
w = IntSlider()
widgetForm = EasyForm("python widgets")
widgetForm.addWidget("IntSlider", w)
widgetForm.addButton("Press", tag="widget_test")
widgetForm
widgetForm['IntSlider']
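# Illustrative (assumes live widget state): moving the slider, or setting
# its value programmatically, is reflected when the form entry is read again.
w.value = 42
widgetForm['IntSlider']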
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading a file with a very light dark frame
Step2: The DarkIntegration data shows that there are 2 observations of darks, approx 22 minutes apart
Step3: Let's look at the original darks. There are 2 available
Step4: As one can see, the two darks are markedly different. The last dark looks similar to a usual light image.
|
<ASSISTANT_TASK:>
Python Code:
from iuvs import io
%autocall 1
import os
files = !ls ~/data/iuvs/level1b/*.gz
for file in files:
print(os.path.basename(file))
l1b = io.L1BReader(files[-1])
l1b.DarkIntegration
l1b.detector_dark.shape
import matplotlib.pyplot as plt

def compare_darks(dark1, dark2):
    fig, ax = plt.subplots(nrows=2, figsize=(10, 8))
ax[0].imshow(dark1)
ax[1].imshow(dark2)
compare_darks(*l1b.detector_dark) # this trick puts the first axis of a cube into a function
l1b.DarkEngineering
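# Quick numerical comparison (illustrative): simple statistics make the
# difference between the two darks obvious without plotting.
import numpy as np
dark1, dark2 = l1b.detector_dark
print(np.nanmean(dark1), np.nanmean(dark2))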
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import packages
Step2: Palmer Penguins example pipeline
Step3: Run TFX Components
Step4: As seen above, .selected_features contains the features selected after running the component with the specified parameters.
|
<ASSISTANT_TASK:>
Python Code:
!pip install -U tfx
# getting the code directly from the repo
x = !pwd
if 'feature_selection' not in str(x):
!git clone -b main https://github.com/tensorflow/tfx-addons.git
%cd tfx-addons/tfx_addons/feature_selection
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
import importlib
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
import importlib
from tfx.components import CsvExampleGen
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
# importing the feature selection component
from component import FeatureSelection
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# getting the dataset
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
context = InteractiveContext()
#create and run exampleGen component
example_gen = CsvExampleGen(input_base=_data_root )
context.run(example_gen)
#create and run statisticsGen component
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
# using the feature selection component
#feature selection component
feature_selector = FeatureSelection(orig_examples = example_gen.outputs['examples'],
module_file='example.modules.penguins_module')
context.run(feature_selector)
# Display Selected Features
context.show(feature_selector.outputs['feature_selection']._artifacts[0])
context.show(feature_selector.outputs['updated_data']._artifacts[0])
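# Illustrative next step (sketch only; the trainer arguments are
# assumptions): the updated examples channel can feed downstream
# components just like the raw examples from ExampleGen, e.g.
# trainer = tfx.components.Trainer(
#     examples=feature_selector.outputs['updated_data'], ...)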
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's ask the following question
Step2: Let's test whether similarity is higher for faces across runs within-condition versus similarity between faces and all other categories. Note that we would generally want to compute this for each subject and do statistics on the means across subjects, rather than computing the statistics within-subject as we do below (which treats subject as a fixed effect)
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import nibabel
import os
from haxby_data import HaxbyData
from nilearn.input_data import NiftiMasker
%matplotlib inline
import matplotlib.pyplot as plt
import sklearn.manifold
import scipy.cluster.hierarchy
datadir='/home/vagrant/nilearn_data/haxby2001/subj2'
print('Using data from %s'%datadir)
haxbydata=HaxbyData(datadir)
modeldir=os.path.join(datadir,'blockmodel')
try:
os.chdir(modeldir)
except:
print('problem changing to %s'%modeldir)
print('you may need to run the Classification Analysis script first')
use_whole_brain=False
if use_whole_brain:
maskimg=haxbydata.brainmaskfile
else:
maskimg=haxbydata.vtmaskfile
nifti_masker = NiftiMasker(mask_img=maskimg, standardize=False)
fmri_masked = nifti_masker.fit_transform(os.path.join(modeldir,'zstatdata.nii.gz'))
print(ci,cj,i,j)
cc=numpy.zeros((8,8,12,12))
# loop through conditions
for ci in range(8):
for cj in range(8):
for i in range(12):
for j in range(12):
if i==6 or j==6: # problem with run 6 - skip it
continue
idx=numpy.where(numpy.logical_and(haxbydata.runs==i,haxbydata.condnums==ci+1))
if len(idx[0])>0:
idx_i=idx[0][0]
else:
print('problem',ci,cj,i,j)
idx_i=None
idx=numpy.where(numpy.logical_and(haxbydata.runs==j,haxbydata.condnums==cj+1))
if len(idx[0])>0:
idx_j=idx[0][0]
else:
print('problem',ci,cj,i,j)
idx_j=None
if not idx_i is None and not idx_j is None:
cc[ci,cj,i,j]=numpy.corrcoef(fmri_masked[idx_i,:],fmri_masked[idx_j,:])[0,1]
else:
cc[ci,cj,i,j]=numpy.nan
meansim=numpy.zeros((8,8))
for ci in range(8):
for cj in range(8):
cci=cc[ci,cj,:,:]
meansim[ci,cj]=numpy.nanmean(numpy.hstack((cci[numpy.triu_indices(12,1)],
cci[numpy.tril_indices(12,1)])))
plt.imshow(meansim,interpolation='nearest')
plt.colorbar()
l=scipy.cluster.hierarchy.ward(1.0 - meansim)
cl=scipy.cluster.hierarchy.dendrogram(l,labels=haxbydata.condlabels,orientation='right')
# within-condition
face_corr={}
corr_means=[]
corr_stderr=[]
corr_stimtype=[]
for k in haxbydata.cond_dict.keys():
face_corr[k]=[]
for i in range(12):
for j in range(12):
if i==6 or j==6:
continue
if i==j:
continue
face_corr[k].append(cc[haxbydata.cond_dict['face']-1,haxbydata.cond_dict[k]-1,i,j])
corr_means.append(numpy.mean(face_corr[k]))
corr_stderr.append(numpy.std(face_corr[k])/numpy.sqrt(len(face_corr[k])))
corr_stimtype.append(k)
idx=numpy.argsort(corr_means)[::-1]
plt.bar(numpy.arange(0.5,8.),[corr_means[i] for i in idx],yerr=[corr_stderr[i] for i in idx]) #,yerr=corr_sterr[idx])
t=plt.xticks(numpy.arange(1,9), [corr_stimtype[i] for i in idx],rotation=70)
plt.ylabel('Mean between-run correlation with faces')
import sklearn.manifold
mds=sklearn.manifold.MDS()
#mds=sklearn.manifold.TSNE(early_exaggeration=10,perplexity=70,learning_rate=100,n_iter=5000)
encoding=mds.fit_transform(fmri_masked)
plt.figure(figsize=(12,12))
ax=plt.axes() #[numpy.min(encoding[0]),numpy.max(encoding[0]),numpy.min(encoding[1]),numpy.max(encoding[1])])
ax.scatter(encoding[:,0],encoding[:,1])
offset=0.01
for i in range(encoding.shape[0]):
ax.annotate(haxbydata.conditions[i].split('-')[0],(encoding[i,0],encoding[i,1]),xytext=[encoding[i,0]+offset,encoding[i,1]+offset])
#for i in range(encoding.shape[0]):
# plt.text(encoding[i,0],encoding[i,1],'%d'%haxbydata.condnums[i])
mdsmeans=numpy.zeros((2,8))
for i in range(8):
mdsmeans[:,i]=numpy.mean(encoding[haxbydata.condnums==(i+1),:],0)
for i in range(2):
print('Dimension %d:'%int(i+1))
idx=numpy.argsort(mdsmeans[i,:])
for j in idx:
print('%s:\t%f'%(haxbydata.condlabels[j],mdsmeans[i,j]))
print('')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The real numbers can also be represented in scientific notation, for example
Step2: The operator % is used for string interpolation. The interpolation is more efficient in use of memory than the conventional concatenation.
Step3: Since version 2.6, in addition to interpolation operator %, the string method and function format() is available.
Step4: The function format() can be used only to format one piece of data each time.
Step5: Various functions for dealing with text are implemented in the module string.
Step6: The module also implements a type called Template, which is a model string that can be filled through a dictionary. Identifiers are initialized by a dollar sign ($) and may be surrounded by curly braces, to avoid confusion.
Step7: It is possible to use mutable strings in Python through the UserString module, which defines the MutableString type
Step8: Mutable Strings are less efficient than immutable strings, as they are more complex (in terms of the structure), which is reflected in increased consumption of resources (CPU and memory).
Step9: To use both methods, it is necessary to pass as an argument the compliant coding. The most used are "latin1" "utf8".
Step10: The function enumerate() returns a tuple of two elements in each iteration
Step11: The sort (sort) and reversal (reverse) operations are performed in the list and do not create new lists.
Step12: When one list is converted to a set, the repetitions are discarded.
Step13: Sparse matrix example
Step14: Generating the sparse matrix
Step15: The sparse matrix is a good solution for processing structures in which most of the items remain empty, like spreadsheets for example.
|
<ASSISTANT_TASK:>
Python Code:
# Converting real to integer
print 'int(3.14) =', int(3.14)
# Converting integer to real
print 'float(5) =', float(5)
# Calculation between integer and real results in real
print '5.0 / 2 + 3 = ', 5.0 / 2 + 3
# Integers in other base
print "int('20', 8) =", int('20', 8) # base 8
print "int('20', 16) =", int('20', 16) # base 16
# Operations with complex numbers
c = 3 + 4j
print 'c =', c
print 'Real Part:', c.real
print 'Imaginary Part:', c.imag
print 'Conjugate:', c.conjugate()
s = 'Camel'
# Concatenation
print 'The ' + s + ' ran away!'
# Interpolation
print 'Size of %s => %d' % (s, len(s))
# String processed as a sequence
for ch in s: print ch
# Strings are objects
if s.startswith('C'): print s.upper()
# what will happen?
print 3 * s
# 3 * s is consistent with s + s + s
# Zeros left
print 'Now is %02d:%02d.' % (16, 30)
# Real (The number after the decimal point specifies how many decimal digits )
print 'Percent: %.1f%%, Exponencial:%.2e' % (5.333, 0.00314)
# Octal and hexadecimal
print 'Decimal: %d, Octal: %o, Hexadecimal: %x' % (10, 10, 10)
musicians = [('Page', 'guitarist', 'Led Zeppelin'),
('Fripp', 'guitarist', 'King Crimson')]
# Parameters are identified by order
msg = '{0} is {1} of {2}'
for name, function, band in musicians:
print(msg.format(name, function, band))
# Parameters are identified by name
msg = '{greeting}, it is {hour:02d}:{minute:02d}'
print msg.format(greeting='Good Morning', hour=7, minute=30)
# Builtin function format()
print 'Pi =', format(3.14159, '.3e')
print 'Python'[::-1]
# shows: nohtyP
import string
# the alphabet
a = string.ascii_letters
# Shifting left the alphabet
b = a[1:] + a[0]
# The function maketrans() creates a translation table
# from the characters of both strings it received as parameters.
# The characters not present in the table will be
# copied to the output.
tab = string.maketrans(a, b)
# The message...
msg = '''This text will be translated..
It will become very strange.
'''
# The function translate() uses the translation table
# created by maketrans() to translate the string
print string.translate(msg, tab)
import string
# Creates a template string
st = string.Template('$warning occurred in $when')
# Fills the model with a dictionary
s = st.substitute({'warning': 'Lack of electricity',
'when': 'April 3, 2002'})
# Shows:
# Lack of electricity occurred in April 3, 2002
print s
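# Added sketch: Template also provides safe_substitute(), which leaves
# unknown placeholders intact instead of raising a KeyError
st2 = string.Template('$warning occurred in $when')
print st2.safe_substitute({'warning': 'Lack of electricity'})
# Shows: Lack of electricity occurred in $when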
import UserString
s = UserString.MutableString('Python')
s[0] = 'p'
print s # shows "python"
# Unicode String
u = u'Hüsker Dü'
# Convert to str
s = u.encode('latin1')
print s, '=>', type(s)
# String str
s = 'Hüsker Dü'
u = s.decode('latin1')
print repr(u), '=>', type(u)
# a new list: 70s Brit Progs
progs = ['Yes', 'Genesis', 'Pink Floyd', 'ELP']
# processing the entire list
for prog in progs:
print prog
# Changing the last element
progs[-1] = 'King Crimson'
# Including
progs.append('Camel')
# Removing
progs.remove('Pink Floyd')
# Ordering
progs.sort()
# Inverting
progs.reverse()
# prints with number order
for i, prog in enumerate(progs):
print i + 1, '=>', prog
# prints from de second item
print progs[1:]
my_list = ['A', 'B', 'C']
print 'list:', my_list
# The empty list is evaluated as false
while my_list:
# In queues, the first item is the first to go out
# pop(0) removes and returns the first item
print 'Left', my_list.pop(0), ', remain', len(my_list)
# More items on the list
my_list += ['D', 'E', 'F']
print 'list:', my_list
while my_list:
# On stacks, the first item is the last to go out
    # pop() removes and returns the last item
print 'Left', my_list.pop(), ', remain', len(my_list)
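# Added sketch: for real queue workloads, collections.deque (standard library)
# offers O(1) appends and pops at both ends, while list.pop(0) is O(n)
from collections import deque
queue = deque(['A', 'B', 'C'])
print queue.popleft() # shows 'A', leaving deque(['B', 'C'])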
# Data sets
s1 = set(range(3))
s2 = set(range(10, 7, -1))
s3 = set(range(2, 10, 2))
# Shows the data
print 's1:', s1, '\ns2:', s2, '\ns3:', s3
# Union
s1s2 = s1.union(s2)
print 'Union of s1 and s2:', s1s2
# Difference
print 'Difference with s3:', s1s2.difference(s3)
# Intersection
print 'Intersection with s3:', s1s2.intersection(s3)
# Tests if a set includes the other
if s1.issuperset([1, 2]):
print 's1 includes 1 and 2'
# Tests if there is no common elements
if s1.isdisjoint(s2):
print 's1 and s2 have no common elements'
# Progs and their albums
progs = {'Yes': ['Close To The Edge', 'Fragile'],
'Genesis': ['Foxtrot', 'The Nursery Crime'],
'ELP': ['Brain Salad Surgery']}
# More progs
progs['King Crimson'] = ['Red', 'Discipline']
# items() returns a list of
# tuples with key and value
for prog, albums in progs.items():
print prog, '=>', albums
# If there is 'ELP', removes
if progs.has_key('ELP'):
del progs['ELP']
# Sparse Matrix implemented
# with dictionary
# Sparse Matrix is a structure
# that only stores values that are
# present in the matrix
dim = 6, 12
mat = {}
# Tuples are immutable
# Each tuple represents
# a position in the matrix
mat[3, 7] = 3
mat[4, 6] = 5
mat[6, 3] = 7
mat[5, 4] = 6
mat[2, 9] = 4
mat[1, 0] = 9
for lin in range(dim[0]):
for col in range(dim[1]):
        # Method get(key, default)
        # returns the value for the key
        # in the dictionary, or,
        # if the key doesn't exist,
        # returns the second argument
print mat.get((lin, col), 0),
print
# Matrix in form of string
matrix = '''0 0 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 4 0 0
0 0 0 0 0 0 0 3 0 0 0 0
0 0 0 0 0 0 5 0 0 0 0 0
0 0 0 0 6 0 0 0 0 0 0 0'''
mat = {}
# split the matrix in lines
for row, line in enumerate(matrix.splitlines()):
    # Splits the line into columns
for col, column in enumerate(line.split()):
column = int(column)
# Places the column in the result,
        # if it is different from zero
if column:
mat[row, col] = column
print mat
# The counting starts with zero
print 'Complete matrix size:', (row + 1) * (col + 1)
print 'Sparse matrix size:', len(mat)
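# Added sketch: the dense matrix can be rebuilt from the sparse dictionary
# with the same get() idiom used above ('row' and 'col' still hold the last
# loop indices, i.e. the matrix dimensions minus one)
dense = [[mat.get((lin, col_), 0) for col_ in range(col + 1)]
         for lin in range(row + 1)]
print dense[1][0] # shows 9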
print 0 and 3 # Shows 0
print 2 and 3 # Shows 3
print 0 or 3 # Shows 3
print 2 or 3 # Shows 2
print not 0 # Shows True
print not 2 # Shows False
print 2 in (2, 3) # Shows True
print 2 is 3 # Shows False
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To implement our square wave in magma, we start by importing the IceStick module from loam. We instantiate the IceStick and turn on the Clock and J3[0] (configured as an output).
Step2: Compile and build the circuit.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.arange(0, 100)
def square(x):
return (x % 50) < 25
plt.plot(x, square(x))
import magma as m
m.set_mantle_target("ice40")
import mantle
from loam.boards.icestick import IceStick
icestick = IceStick()
icestick.Clock.on()
icestick.J3[0].output().on()
main = icestick.main()
counter = mantle.Counter(32)
square = counter.O[9]
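# Added note (assumption): the IceStick clock runs at 12 MHz, so bit 9 of the
# counter produces a square wave of about 12 MHz / 2**10, i.e. ~11.7 kHz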
m.wire( square, main.J3 )
m.compile('build/square', main)
%%bash
cd build
cat square.pcf
yosys -q -p 'synth_ice40 -top main -blif square.blif' square.v
arachne-pnr -q -d 1k -o square.txt -p square.pcf square.blif
icepack square.txt square.bin
iceprog square.bin
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Captioning with RNNs
Step2: Microsoft COCO
Step3: Look at the data
Step4: Recurrent Neural Networks
Step5: Vanilla RNN
Step6: Vanilla RNN
Step7: Vanilla RNN
Step8: Word embedding
Step9: Word embedding
Step10: Temporal Affine layer
Step11: Temporal Softmax loss
Step12: RNN for image captioning
Step13: Run the following cell to perform numeric gradient checking on the CaptioningRNN class; you should see errors around 1e-7 or less.
Step14: Overfit small data
Step15: Test-time sampling
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.rnn_layers import *
from cs231n.captioning_solver import CaptioningSolver
from cs231n.classifiers.rnn import CaptioningRNN
from cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions
from cs231n.image_utils import image_from_url
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load COCO data from disk; this returns a dictionary
# We'll work with dimensionality-reduced features for this notebook, but feel
# free to experiment with the original features by changing the flag below.
data = load_coco_data(pca_features=True)
# Print out all the keys and values from the data dictionary
for k, v in data.iteritems():
if type(v) == np.ndarray:
print k, type(v), v.shape, v.dtype
else:
print k, type(v), len(v)
# Sample a minibatch and show the images and captions
batch_size = 3
captions, features, urls = sample_coco_minibatch(data, batch_size=batch_size)
for i, (caption, url) in enumerate(zip(captions, urls)):
plt.imshow(image_from_url(url))
plt.axis('off')
caption_str = decode_captions(caption, data['idx_to_word'])
plt.title(caption_str)
plt.show()
N, D, H = 3, 10, 4
x = np.linspace(-0.4, 0.7, num=N*D).reshape(N, D)
prev_h = np.linspace(-0.2, 0.5, num=N*H).reshape(N, H)
Wx = np.linspace(-0.1, 0.9, num=D*H).reshape(D, H)
Wh = np.linspace(-0.3, 0.7, num=H*H).reshape(H, H)
b = np.linspace(-0.2, 0.4, num=H)
next_h, _ = rnn_step_forward(x, prev_h, Wx, Wh, b)
expected_next_h = np.asarray([
[-0.58172089, -0.50182032, -0.41232771, -0.31410098],
[ 0.66854692, 0.79562378, 0.87755553, 0.92795967],
[ 0.97934501, 0.99144213, 0.99646691, 0.99854353]])
print 'next_h error: ', rel_error(expected_next_h, next_h)
from cs231n.rnn_layers import rnn_step_forward, rnn_step_backward
N, D, H = 4, 5, 6
x = np.random.randn(N, D)
h = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)
out, cache = rnn_step_forward(x, h, Wx, Wh, b)
dnext_h = np.random.randn(*out.shape)
fx = lambda x: rnn_step_forward(x, h, Wx, Wh, b)[0]
fh = lambda prev_h: rnn_step_forward(x, h, Wx, Wh, b)[0]
fWx = lambda Wx: rnn_step_forward(x, h, Wx, Wh, b)[0]
fWh = lambda Wh: rnn_step_forward(x, h, Wx, Wh, b)[0]
fb = lambda b: rnn_step_forward(x, h, Wx, Wh, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dnext_h)
dprev_h_num = eval_numerical_gradient_array(fh, h, dnext_h)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dnext_h)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dnext_h)
db_num = eval_numerical_gradient_array(fb, b, dnext_h)
dx, dprev_h, dWx, dWh, db = rnn_step_backward(dnext_h, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dprev_h error: ', rel_error(dprev_h_num, dprev_h)
print 'dWx error: ', rel_error(dWx_num, dWx)
print 'dWh error: ', rel_error(dWh_num, dWh)
print 'db error: ', rel_error(db_num, db)
N, T, D, H = 2, 3, 4, 5
x = np.linspace(-0.1, 0.3, num=N*T*D).reshape(N, T, D)
h0 = np.linspace(-0.3, 0.1, num=N*H).reshape(N, H)
Wx = np.linspace(-0.2, 0.4, num=D*H).reshape(D, H)
Wh = np.linspace(-0.4, 0.1, num=H*H).reshape(H, H)
b = np.linspace(-0.7, 0.1, num=H)
h, _ = rnn_forward(x, h0, Wx, Wh, b)
expected_h = np.asarray([
[
[-0.42070749, -0.27279261, -0.11074945, 0.05740409, 0.22236251],
[-0.39525808, -0.22554661, -0.0409454, 0.14649412, 0.32397316],
[-0.42305111, -0.24223728, -0.04287027, 0.15997045, 0.35014525],
],
[
[-0.55857474, -0.39065825, -0.19198182, 0.02378408, 0.23735671],
[-0.27150199, -0.07088804, 0.13562939, 0.33099728, 0.50158768],
[-0.51014825, -0.30524429, -0.06755202, 0.17806392, 0.40333043]]])
print 'h error: ', rel_error(expected_h, h)
N, D, T, H = 2, 3, 10, 5
x = np.random.randn(N, T, D)
h0 = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)
out, cache = rnn_forward(x, h0, Wx, Wh, b)
dout = np.random.randn(*out.shape)
dx, dh0, dWx, dWh, db = rnn_backward(dout, cache)
fx = lambda x: rnn_forward(x, h0, Wx, Wh, b)[0]
fh0 = lambda h0: rnn_forward(x, h0, Wx, Wh, b)[0]
fWx = lambda Wx: rnn_forward(x, h0, Wx, Wh, b)[0]
fWh = lambda Wh: rnn_forward(x, h0, Wx, Wh, b)[0]
fb = lambda b: rnn_forward(x, h0, Wx, Wh, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dh0_num = eval_numerical_gradient_array(fh0, h0, dout)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dout)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)
print 'dx error: ', rel_error(dx_num, dx)
print 'dh0 error: ', rel_error(dh0_num, dh0)
print 'dWx error: ', rel_error(dWx_num, dWx)
print 'dWh error: ', rel_error(dWh_num, dWh)
print 'db error: ', rel_error(db_num, db)
N, T, V, D = 2, 4, 5, 3
x = np.asarray([[0, 3, 1, 2], [2, 1, 0, 3]])
W = np.linspace(0, 1, num=V*D).reshape(V, D)
out, _ = word_embedding_forward(x, W)
expected_out = np.asarray([
[[ 0., 0.07142857, 0.14285714],
[ 0.64285714, 0.71428571, 0.78571429],
[ 0.21428571, 0.28571429, 0.35714286],
[ 0.42857143, 0.5, 0.57142857]],
[[ 0.42857143, 0.5, 0.57142857],
[ 0.21428571, 0.28571429, 0.35714286],
[ 0., 0.07142857, 0.14285714],
[ 0.64285714, 0.71428571, 0.78571429]]])
print 'out error: ', rel_error(expected_out, out)
N, T, V, D = 50, 3, 5, 6
x = np.random.randint(V, size=(N, T))
W = np.random.randn(V, D)
out, cache = word_embedding_forward(x, W)
dout = np.random.randn(*out.shape)
dW = word_embedding_backward(dout, cache)
f = lambda W: word_embedding_forward(x, W)[0]
dW_num = eval_numerical_gradient_array(f, W, dout)
print 'dW error: ', rel_error(dW, dW_num)
# Gradient check for temporal affine layer
N, T, D, M = 2, 3, 4, 5
x = np.random.randn(N, T, D)
w = np.random.randn(D, M)
b = np.random.randn(M)
out, cache = temporal_affine_forward(x, w, b)
dout = np.random.randn(*out.shape)
fx = lambda x: temporal_affine_forward(x, w, b)[0]
fw = lambda w: temporal_affine_forward(x, w, b)[0]
fb = lambda b: temporal_affine_forward(x, w, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dw_num = eval_numerical_gradient_array(fw, w, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)
dx, dw, db = temporal_affine_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
# Sanity check for temporal softmax loss
from cs231n.rnn_layers import temporal_softmax_loss
N, T, V = 100, 1, 10
def check_loss(N, T, V, p):
x = 0.001 * np.random.randn(N, T, V)
y = np.random.randint(V, size=(N, T))
mask = np.random.rand(N, T) <= p
print temporal_softmax_loss(x, y, mask)[0]
check_loss(100, 1, 10, 1.0) # Should be about 2.3
check_loss(100, 10, 10, 1.0) # Should be about 23
check_loss(5000, 10, 10, 0.1) # Should be about 2.3
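# Note (added): with near-uniform scores over V=10 classes the per-timestep
# loss is about ln(10) ~= 2.3, which is why T=10 unmasked timesteps sum to ~23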
# Gradient check for temporal softmax loss
N, T, V = 7, 8, 9
x = np.random.randn(N, T, V)
y = np.random.randint(V, size=(N, T))
mask = (np.random.rand(N, T) > 0.5)
loss, dx = temporal_softmax_loss(x, y, mask, verbose=False)
dx_num = eval_numerical_gradient(lambda x: temporal_softmax_loss(x, y, mask)[0], x, verbose=False)
print 'dx error: ', rel_error(dx, dx_num)
N, D, W, H = 10, 20, 30, 40
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
V = len(word_to_idx)
T = 13
model = CaptioningRNN(word_to_idx,
input_dim=D,
wordvec_dim=W,
hidden_dim=H,
cell_type='rnn',
dtype=np.float64)
# Set all model parameters to fixed values
for k, v in model.params.iteritems():
model.params[k] = np.linspace(-1.4, 1.3, num=v.size).reshape(*v.shape)
features = np.linspace(-1.5, 0.3, num=(N * D)).reshape(N, D)
captions = (np.arange(N * T) % V).reshape(N, T)
loss, grads = model.loss(features, captions)
expected_loss = 9.83235591003
print 'loss: ', loss
print 'expected loss: ', expected_loss
print 'difference: ', abs(loss - expected_loss)
batch_size = 2
timesteps = 3
input_dim = 4
wordvec_dim = 5
hidden_dim = 6
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
vocab_size = len(word_to_idx)
captions = np.random.randint(vocab_size, size=(batch_size, timesteps))
features = np.random.randn(batch_size, input_dim)
model = CaptioningRNN(word_to_idx,
input_dim=input_dim,
wordvec_dim=wordvec_dim,
hidden_dim=hidden_dim,
cell_type='rnn',
dtype=np.float64,
)
loss, grads = model.loss(features, captions)
for param_name in sorted(grads):
f = lambda _: model.loss(features, captions)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print '%s relative error: %e' % (param_name, e)
small_data = load_coco_data(max_train=50)
small_rnn_model = CaptioningRNN(
cell_type='rnn',
word_to_idx=data['word_to_idx'],
input_dim=data['train_features'].shape[1],
hidden_dim=512,
wordvec_dim=256,
)
small_rnn_solver = CaptioningSolver(small_rnn_model, small_data,
update_rule='adam',
num_epochs=50,
batch_size=25,
optim_config={
'learning_rate': 5e-3,
},
lr_decay=0.95,
verbose=True, print_every=10,
)
small_rnn_solver.train()
# Plot the training losses
plt.plot(small_rnn_solver.loss_history)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Training loss history')
plt.show()
for split in ['train', 'val']:
minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)
gt_captions, features, urls = minibatch
gt_captions = decode_captions(gt_captions, data['idx_to_word'])
sample_captions = small_rnn_model.sample(features)
sample_captions = decode_captions(sample_captions, data['idx_to_word'])
for gt_caption, sample_caption, url in zip(gt_captions, sample_captions, urls):
plt.imshow(image_from_url(url))
plt.title('%s\n%s\nGT:%s' % (split, sample_caption, gt_caption))
plt.axis('off')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Equilibrium with condensed species
Step2: Smoky white space shuttle SRB exhaust
Step3: And one final condensed species case
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed for this excerpt (the original cell defining them is not
# shown); 'ppp' is the propellant-performance package used throughout.
import pprint
import numpy as np
from matplotlib import pyplot as plt
p = ppp.ShiftingPerformance()
o2 = ppp.PROPELLANTS['OXYGEN (GAS)']
ch4 = ppp.PROPELLANTS['METHANE']
p.add_propellants([(ch4, 1.0), (o2, 1.0)])
p.set_state(P=10, Pe=0.01)
print p
for k,v in p.composition.items():
print "{} : ".format(k)
pprint.pprint(v[0:8], indent=4)
OF = np.linspace(1, 5)
m_CH4 = 1.0
cstar_fr = []
cstar_sh = []
Isp_fr = []
Isp_sh = []
for i in xrange(len(OF)):
p = ppp.FrozenPerformance()
psh = ppp.ShiftingPerformance()
m_O2 = OF[i]
p.add_propellants_by_mass([(ch4, m_CH4), (o2, m_O2)])
psh.add_propellants_by_mass([(ch4, m_CH4), (o2, m_O2)])
p.set_state(P=1000./14.7, Pe=1)
psh.set_state(P=1000./14.7, Pe=1)
cstar_fr.append(p.performance.cstar)
Isp_fr.append(p.performance.Isp/9.8)
cstar_sh.append(psh.performance.cstar)
Isp_sh.append(psh.performance.Isp/9.8)
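# Note (added; assumed ppp API units): performance.Isp appears to be the
# effective exhaust velocity in m/s, so dividing by g0 = 9.8 m/s^2 converts
# it to specific impulse in seconds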
ax = plt.subplot(211)
ax.plot(OF, cstar_fr, label='Frozen')
ax.plot(OF, cstar_sh, label='Shifting')
ax.set_ylabel('C*')
ax1 = plt.subplot(212, sharex=ax)
ax1.plot(OF, Isp_fr, label='Frozen')
ax1.plot(OF, Isp_sh, label='Shifting')
ax1.set_ylabel('Isp (s)')
plt.xlabel('O/F')
plt.legend(loc='best')
kno3 = ppp.PROPELLANTS['POTASSIUM NITRATE']
sugar = ppp.PROPELLANTS['SUCROSE (TABLE SUGAR)']
p = ppp.ShiftingPerformance()
p.add_propellants_by_mass([(kno3, 0.65), (sugar, 0.35)])
p.set_state(P=30, Pe=1.)
for station in ['chamber', 'throat', 'exit']:
print "{} : ".format(station)
pprint.pprint(p.composition[station][0:8], indent=4)
print "Condensed: "
pprint.pprint(p.composition_condensed[station], indent=4)
print '\n'
ap = ppp.PROPELLANTS['AMMONIUM PERCHLORATE (AP)']
pban = ppp.PROPELLANTS['POLYBUTADIENE/ACRYLONITRILE CO POLYMER']
al = ppp.PROPELLANTS['ALUMINUM (PURE CRYSTALINE)']
p = ppp.ShiftingPerformance()
p.add_propellants_by_mass([(ap, 0.70), (pban, 0.12), (al, 0.16)])
p.set_state(P=45, Ae_At=7.7)
for station in ['chamber', 'throat', 'exit']:
print "{} : ".format(station)
pprint.pprint(p.composition[station][0:8], indent=4)
print "Condensed: "
pprint.pprint(p.composition_condensed[station], indent=4)
print '\n'
print p.performance.Ivac/9.8
p = ppp.ShiftingPerformance()
lh2 = ppp.PROPELLANTS['HYDROGEN (CRYOGENIC)']
lox = ppp.PROPELLANTS['OXYGEN (LIQUID)']
OF = 3
p.add_propellants_by_mass([(lh2, 1.0), (lox, OF)])
p.set_state(P=200, Pe=0.01)
print "Chamber Temperature: %.3f K, Exit temperature: %.3f K" % (p.properties[0].T, p.properties[2].T)
print "Gaseous exit products:"
pprint.pprint(p.composition['exit'][0:8])
print "Condensed exit products:"
pprint.pprint(p.composition_condensed['exit'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We don't want all data, so let's focus on a few variables.
Step2: Need to convert prices to floats
Step3: We might think that better apartments get rented more often, let's plot a scatter (or multiple boxes?) plot of the number of reviews vs the review score
Step4: Better reviews also are correlated with higher prices
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed for this excerpt (not shown in the original):
import pandas as pd
import matplotlib.pyplot as plt
url1 = "http://data.insideairbnb.com/united-states/"
url2 = "ny/new-york-city/2016-02-02/data/listings.csv.gz"
full_df = pd.read_csv(url1+url2, compression="gzip")
full_df.head()
# .copy() avoids pandas' SettingWithCopyWarning on the in-place edits below
df = full_df[["id", "price", "number_of_reviews", "review_scores_rating"]].copy()
df.head()
df.replace({'price': {'\$': ''}}, regex=True, inplace=True)
df.replace({'price': {'\,': ''}}, regex=True, inplace=True)
df['price'] = df['price'].astype('float64', copy=False)
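# Added sketch: the same cleaning can be done in one vectorized step,
# equivalent to the three calls above
# df['price'] = full_df['price'].replace('[\$,]', '', regex=True).astype('float64')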
df.plot.scatter(x="number_of_reviews", y="review_scores_rating", figsize=(10, 8), alpha=0.2)
bins = [0, 5, 10, 25, 50, 100, 350]
boxplot_vecs = []
fig, ax = plt.subplots(figsize=(10, 8))
for i in range(1, 7):
lb = bins[i-1]
ub = bins[i]
foo = df["review_scores_rating"][df["number_of_reviews"].apply(lambda x: lb <= x <= ub)].dropna()
boxplot_vecs.append(foo.values)
ax.boxplot(boxplot_vecs, labels=bins[:-1])
plt.show()
df.plot.scatter(x="review_scores_rating", y="price", figsize=(10, 8), alpha=0.2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Prepare emissions
Step3: Now, let's inspect our emissions to ensure they look reasonable.
Step4: Fig. 1
Step5: Summarizing results
Step6: We're calculating somewhere between 6,000 and 16,000 deaths every year caused by air pollution emissions from electricity generators.
Step7: So the health damages from power plants are equivalent to between 50 and 140 billion dollars per year. By using multiple SR matrices and multiple estimates of the relationship between concentrations and mortality rate, we're able to estimate the uncertainty in our results.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import (absolute_import, division,
print_function, unicode_literals)
from builtins import *
# Note: This step can take a while to run.
from io import BytesIO, TextIOWrapper
from zipfile import ZipFile
import urllib.request
import csv
from shapely.geometry import Point
import geopandas as gpd
# Download file from EPA website.
url = urllib.request.urlopen("ftp://newftp.epa.gov/air/emismod/2016/alpha/2016fd/emissions/2016fd_inputs_point.zip")
VOC, NOx, NH3, SOx, PM2_5 = [], [], [], [], []
height, diam, temp, velocity = [], [], [], []
coords = []
def add_record(row):
    """Process one row of the emissions file"""
pol = row[12] # The pollutant is in the 13th column of the CSV file
# (In Python, the first column is called column 0.)
emis = row[13] # We are only extracting annual total emissions here.
# If monthly emissions are reported, we'll miss them.
# Emissions are short tons/year.
if emis == '': return
if pol in ['VOC', 'VOC_INV', 'XYL', 'TOL', 'TERP', 'PAR', 'OLE', 'NVOL', 'MEOH',
'ISOP', 'IOLE', 'FORM', 'ETOH', 'ETHA', 'ETH', 'ALD2', 'ALDX', 'CB05_ALD2',
'CB05_ALDX', 'CB05_BENZENE', 'CB05_ETH', 'CB05_ETHA', 'CB05_ETOH',
'CB05_FORM', 'CB05_IOLE', 'CB05_ISOP', 'CB05_MEOH', 'CB05_OLE', 'CB05_PAR',
'CB05_TERP', 'CB05_TOL', 'CB05_XYL', 'ETHANOL', 'NHTOG', 'NMOG', 'VOC_INV']:
VOC.append(float(emis))
NOx.append(0)
NH3.append(0)
SOx.append(0)
PM2_5.append(0)
elif pol in ['PM25-PRI', 'PM2_5', 'DIESEL-PM25', 'PAL', 'PCA', 'PCL', 'PEC', 'PFE', 'PK',
'PMG', 'PMN', 'PMOTHR', 'PNH4', 'PNO3', 'POC', 'PSI', 'PSO4', 'PTI']:
VOC.append(0)
NOx.append(0)
NH3.append(0)
SOx.append(0)
PM2_5.append(float(emis))
elif pol in ['NOX', 'HONO', 'NO', 'NO2']:
VOC.append(0)
NOx.append(float(emis))
NH3.append(0)
SOx.append(0)
PM2_5.append(0)
elif pol == 'NH3':
VOC.append(0)
NOx.append(0)
NH3.append(float(emis))
SOx.append(0)
PM2_5.append(0)
elif pol == 'SO2':
VOC.append(0)
NOx.append(0)
NH3.append(0)
SOx.append(float(emis))
PM2_5.append(0)
else: return
h = row[17]
height.append(float(h) * 0.3048) if h != '' else height.append(0)
d = row[18]
diam.append(float(d) * 0.3048) if d != '' else diam.append(0)
t = row[19]
temp.append((float(t) - 32) * 5.0/9.0 + 273.15) if t != '' else temp.append(0)
v = row[21]
velocity.append(float(v) * 0.3048) if v != '' else velocity.append(0)
coords.append(Point(float(row[23]), float(row[24])))
with ZipFile(BytesIO(url.read())) as zf:
for contained_file in zf.namelist():
if "egu" in contained_file: # Only process files with electricity generating unit (EGU) emissions.
for row in csv.reader(TextIOWrapper(zf.open(contained_file, 'r'), newline='')):
if (len(row) == 0) or (len(row[0]) == 0) or (row[0][0] == '#'): continue
add_record(row)
emis = gpd.GeoDataFrame({
"VOC": VOC, "NOx": NOx, "NH3": NH3, "SOx": SOx, "PM2_5": PM2_5,
"height": height, "diam": diam, "temp": temp, "velocity": velocity,
}, geometry=coords, crs={'init': 'epsg:4269'})
# First, we print the first several rows of the dataframe:
emis.head()
# Now, let's look at the sums of emissions for all power plants (in short tons/year).
emis.sum(axis=0)[["VOC", "NOx", "NH3", "SOx", "PM2_5"]]
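# Added sketch: emissions above are in short tons/year; converting to metric
# tonnes for comparison (1 short ton = 0.907185 t)
emis.sum(axis=0)[["VOC", "NOx", "NH3", "SOx", "PM2_5"]] * 0.907185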
# Finally, lets make some maps of the emissions.
import matplotlib.pyplot as plt
%matplotlib inline
pols = ["SOx", "NOx", "PM2_5", "VOC", "NH3"]
pol_names = ["SO$_2$", "NO$_x$", "PM$_{2.5}$", "VOC", "NH$_3$"]
fig, axes = plt.subplots(figsize=(7, 3), nrows=2, ncols=3, sharex=True, sharey=True)
plt.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0.1, hspace=0.1)
i = 0
for x in axes:
for ax in x:
if i < len(pols):
emis.plot(ax=ax, markersize=emis[pols[i]]**0.5 / 5)
ax.set_title(pol_names[i])
ax.set_xticks([])
ax.set_yticks([])
ax.axis('off')
i = i+1
plt.show()
# This step might take a while.
from sr_util import run_sr # This allows us to use the 'run_sr' function
# in the 'sr_util.py' file in this same directory.
output_variables = {
'TotalPM25':'PrimaryPM25 + pNH4 + pSO4 + pNO3 + SOA',
'deathsK':'(exp(log(1.06)/10 * TotalPM25) - 1) * TotalPop * 1.0465819687408728 * MortalityRate / 100000 * 1.025229357798165',
'deathsL':'(exp(log(1.14)/10 * TotalPM25) - 1) * TotalPop * 1.0465819687408728 * MortalityRate / 100000 * 1.025229357798165',
}
resultsISRM = run_sr(emis, model="isrm", emis_units="tons/year", output_variables=output_variables)
resultsAPSCA = run_sr(emis, model="apsca_q0", emis_units="tons/year", output_variables=output_variables)
import pandas as pd
deaths = pd.DataFrame.from_dict({
"Model": ["ISRM", "APSCA"],
"Krewski Deaths": [resultsISRM.deathsK.sum(), resultsAPSCA.deathsK.sum()],
"LePeule Deaths": [resultsISRM.deathsL.sum(), resultsAPSCA.deathsL.sum()],
})
deaths
vsl = 9.0e6
pd.DataFrame.from_dict({
"Model": ["ISRM", "APSCA"],
"Krewski Damages": deaths["Krewski Deaths"] * vsl,
"LePeule Damages": deaths["LePeule Deaths"] * vsl,
})
import numpy as np
q = 0.995 # We are going to truncate our results at the 99.5th percentile
# to make the maps easier to interpret.
cut = resultsISRM.TotalPM25.append(resultsAPSCA.TotalPM25, ignore_index=True).quantile(q)
fig, axes = plt.subplots(figsize=(7, 2.5), nrows=1, ncols=2, sharex=True, sharey=True)
plt.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=0)
# Create the color bar.
im1 = axes[0].imshow(np.random.random((10,10)), vmin=0, cmap="GnBu", vmax=cut)
fig.subplots_adjust(right=0.85)
cbar_ax1 = fig.add_axes([0.86, 0.05, 0.025, 0.9])
c1 = fig.colorbar(im1, cax=cbar_ax1)
c1.ax.set_ylabel('PM$_{2.5}$ concentration (μg m$^{-3}$)')
axes[0].clear()
resultsISRM.plot(ax=axes[0], vmin=0, vmax=cut, cmap="GnBu", column="TotalPM25")
resultsAPSCA.plot(ax=axes[1], vmin=0, vmax=cut, cmap="GnBu", column="TotalPM25")
axes[0].axis('off')
axes[1].axis('off')
axes[0].set_title("ISRM")
axes[1].set_title("APSCA")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 150 observations
Step2: scikit-learn 4-step modeling pattern
Step 1
Step3: Step 2
Step4: Name of the object does not matter
Step5: Step 3
Step6: Step 4
Step7: Returns a NumPy array
Step8: Using a different value for K
Step9: Using a different classification model
Step10: Resources
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import IFrame
IFrame('http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', width=300, height=200)
# import load_iris function from datasets module
from sklearn.datasets import load_iris
# save "bunch" object containing iris dataset and its attributes
iris = load_iris()
# store feature matrix in "X"
X = iris.data
# store response vector in "y"
y = iris.target
# print the shapes of X and y
print(X.shape)
print(y.shape)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
print(knn)
knn.fit(X, y)
knn.predict([[3, 5, 4, 2]])
X_new = [[3, 5, 4, 2], [5, 4, 3, 2]]
knn.predict(X_new)
# instantiate the model (using the value K=5)
knn = KNeighborsClassifier(n_neighbors=5)
# fit the model with data
knn.fit(X, y)
# predict the response for new observations
knn.predict(X_new)
# import the class
from sklearn.linear_model import LogisticRegression
# instantiate the model (using the default parameters)
logreg = LogisticRegression()
# fit the model with data
logreg.fit(X, y)
# predict the response for new observations
logreg.predict(X_new)
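# Added sketch (not part of the original walkthrough): training accuracy as a
# quick sanity check; note it overestimates out-of-sample performance
from sklearn import metrics
print(metrics.accuracy_score(y, logreg.predict(X)))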
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we load the data.
Step2: Specify our recode dictionary.
Step3: Our goal is to predict whether the 'INFANT_ALIVE_AT_REPORT' is either 1 or 0. Thus, we will drop all of the features that relate to the infant.
Step4: Specify the recoding methods.
Step5: Correct the features related to the number of smoked cigarettes.
Step6: Figure out which features are Yes/No/Unknown.
Step7: DataFrames can transform the features in bulk while selecting features.
Step8: Transform all the YNU_cols in one go using a list of transformations.
Step9: Let's check if we got it correctly.
Step10: Get to know your data
Step11: For the categorical variables we will calculate the frequencies of their values.
Step12: Correlations
Step13: We can drop most of the highly correlated features.
Step14: Statistical testing
Step15: Create the final dataset
Step16: Split into training and testing
Step17: Predicting infant survival
Step18: Let's now use the model to predict the classes for our testing set.
Step19: Let's check how well (or how badly) our model performed.
Step20: Selecting only the most predictive features
Step21: Random Forest in Spark
Step22: Let's see how well our model did.
Step23: Let's see how the logistic regression would perform with reduced number of features.
|
<ASSISTANT_TASK:>
Python Code:
import pyspark.sql.types as typ
labels = [
('INFANT_ALIVE_AT_REPORT', typ.StringType()),
('BIRTH_YEAR', typ.IntegerType()),
('BIRTH_MONTH', typ.IntegerType()),
('BIRTH_PLACE', typ.StringType()),
('MOTHER_AGE_YEARS', typ.IntegerType()),
('MOTHER_RACE_6CODE', typ.StringType()),
('MOTHER_EDUCATION', typ.StringType()),
('FATHER_COMBINED_AGE', typ.IntegerType()),
('FATHER_EDUCATION', typ.StringType()),
('MONTH_PRECARE_RECODE', typ.StringType()),
('CIG_BEFORE', typ.IntegerType()),
('CIG_1_TRI', typ.IntegerType()),
('CIG_2_TRI', typ.IntegerType()),
('CIG_3_TRI', typ.IntegerType()),
('MOTHER_HEIGHT_IN', typ.IntegerType()),
('MOTHER_BMI_RECODE', typ.IntegerType()),
('MOTHER_PRE_WEIGHT', typ.IntegerType()),
('MOTHER_DELIVERY_WEIGHT', typ.IntegerType()),
('MOTHER_WEIGHT_GAIN', typ.IntegerType()),
('DIABETES_PRE', typ.StringType()),
('DIABETES_GEST', typ.StringType()),
('HYP_TENS_PRE', typ.StringType()),
('HYP_TENS_GEST', typ.StringType()),
('PREV_BIRTH_PRETERM', typ.StringType()),
('NO_RISK', typ.StringType()),
('NO_INFECTIONS_REPORTED', typ.StringType()),
('LABOR_IND', typ.StringType()),
('LABOR_AUGM', typ.StringType()),
('STEROIDS', typ.StringType()),
('ANTIBIOTICS', typ.StringType()),
('ANESTHESIA', typ.StringType()),
('DELIV_METHOD_RECODE_COMB', typ.StringType()),
('ATTENDANT_BIRTH', typ.StringType()),
('APGAR_5', typ.IntegerType()),
('APGAR_5_RECODE', typ.StringType()),
('APGAR_10', typ.IntegerType()),
('APGAR_10_RECODE', typ.StringType()),
('INFANT_SEX', typ.StringType()),
('OBSTETRIC_GESTATION_WEEKS', typ.IntegerType()),
('INFANT_WEIGHT_GRAMS', typ.IntegerType()),
('INFANT_ASSIST_VENTI', typ.StringType()),
('INFANT_ASSIST_VENTI_6HRS', typ.StringType()),
('INFANT_NICU_ADMISSION', typ.StringType()),
('INFANT_SURFACANT', typ.StringType()),
('INFANT_ANTIBIOTICS', typ.StringType()),
('INFANT_SEIZURES', typ.StringType()),
('INFANT_NO_ABNORMALITIES', typ.StringType()),
('INFANT_ANCEPHALY', typ.StringType()),
('INFANT_MENINGOMYELOCELE', typ.StringType()),
('INFANT_LIMB_REDUCTION', typ.StringType()),
('INFANT_DOWN_SYNDROME', typ.StringType()),
('INFANT_SUSPECTED_CHROMOSOMAL_DISORDER', typ.StringType()),
('INFANT_NO_CONGENITAL_ANOMALIES_CHECKED', typ.StringType()),
('INFANT_BREASTFED', typ.StringType())
]
schema = typ.StructType([
typ.StructField(e[0], e[1], False) for e in labels
])
births = spark.read.csv('births_train.csv.gz',
header=True,
schema=schema)
recode_dictionary = {
'YNU': {
'Y': 1,
'N': 0,
'U': 0
}
}
selected_features = [
'INFANT_ALIVE_AT_REPORT',
'BIRTH_PLACE',
'MOTHER_AGE_YEARS',
'FATHER_COMBINED_AGE',
'CIG_BEFORE',
'CIG_1_TRI',
'CIG_2_TRI',
'CIG_3_TRI',
'MOTHER_HEIGHT_IN',
'MOTHER_PRE_WEIGHT',
'MOTHER_DELIVERY_WEIGHT',
'MOTHER_WEIGHT_GAIN',
'DIABETES_PRE',
'DIABETES_GEST',
'HYP_TENS_PRE',
'HYP_TENS_GEST',
'PREV_BIRTH_PRETERM'
]
births_trimmed = births.select(selected_features)
import pyspark.sql.functions as func
def recode(col, key):
return recode_dictionary[key][col]
def correct_cig(feat):
return func \
.when(func.col(feat) != 99, func.col(feat))\
.otherwise(0)
rec_integer = func.udf(recode, typ.IntegerType())
births_transformed = births_trimmed \
.withColumn('CIG_BEFORE', correct_cig('CIG_BEFORE'))\
.withColumn('CIG_1_TRI', correct_cig('CIG_1_TRI'))\
.withColumn('CIG_2_TRI', correct_cig('CIG_2_TRI'))\
.withColumn('CIG_3_TRI', correct_cig('CIG_3_TRI'))
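# Added sanity check (sketch): confirm the 99 placeholder values were zeroed out
births_transformed.describe('CIG_BEFORE').show()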
cols = [(col.name, col.dataType) for col in births_trimmed.schema]
YNU_cols = []
for i, s in enumerate(cols):
if s[1] == typ.StringType():
dis = births.select(s[0]) \
.distinct() \
.rdd \
.map(lambda row: row[0]) \
.collect()
if 'Y' in dis:
YNU_cols.append(s[0])
births.select([
'INFANT_NICU_ADMISSION',
rec_integer(
'INFANT_NICU_ADMISSION', func.lit('YNU')
) \
.alias('INFANT_NICU_ADMISSION_RECODE')]
).take(5)
exprs_YNU = [
rec_integer(x, func.lit('YNU')).alias(x)
if x in YNU_cols
else x
for x in births_transformed.columns
]
births_transformed = births_transformed.select(exprs_YNU)
births_transformed.select(YNU_cols[-5:]).show(5)
import pyspark.mllib.stat as st
import numpy as np
numeric_cols = ['MOTHER_AGE_YEARS','FATHER_COMBINED_AGE',
'CIG_BEFORE','CIG_1_TRI','CIG_2_TRI','CIG_3_TRI',
'MOTHER_HEIGHT_IN','MOTHER_PRE_WEIGHT',
'MOTHER_DELIVERY_WEIGHT','MOTHER_WEIGHT_GAIN'
]
numeric_rdd = births_transformed\
.select(numeric_cols)\
.rdd \
.map(lambda row: [e for e in row])
mllib_stats = st.Statistics.colStats(numeric_rdd)
for col, m, v in zip(numeric_cols,
mllib_stats.mean(),
mllib_stats.variance()):
print('{0}: \t{1:.2f} \t {2:.2f}'.format(col, m, np.sqrt(v)))
categorical_cols = [e for e in births_transformed.columns
if e not in numeric_cols]
categorical_rdd = births_transformed\
.select(categorical_cols)\
.rdd \
.map(lambda row: [e for e in row])
for i, col in enumerate(categorical_cols):
agg = categorical_rdd \
.groupBy(lambda row: row[i]) \
.map(lambda row: (row[0], len(row[1])))
print(col, sorted(agg.collect(),
key=lambda el: el[1],
reverse=True))
corrs = st.Statistics.corr(numeric_rdd)
for i, el in enumerate(corrs > 0.5):
correlated = [
(numeric_cols[j], corrs[i][j])
for j, e in enumerate(el)
if e == 1.0 and j != i]
if len(correlated) > 0:
for e in correlated:
print('{0}-to-{1}: {2:.2f}' \
.format(numeric_cols[i], e[0], e[1]))
features_to_keep = [
'INFANT_ALIVE_AT_REPORT',
'BIRTH_PLACE',
'MOTHER_AGE_YEARS',
'FATHER_COMBINED_AGE',
'CIG_1_TRI',
'MOTHER_HEIGHT_IN',
'MOTHER_PRE_WEIGHT',
'DIABETES_PRE',
'DIABETES_GEST',
'HYP_TENS_PRE',
'HYP_TENS_GEST',
'PREV_BIRTH_PRETERM'
]
births_transformed = births_transformed.select([e for e in features_to_keep])
import pyspark.mllib.linalg as ln
for cat in categorical_cols[1:]:
agg = births_transformed \
.groupby('INFANT_ALIVE_AT_REPORT') \
.pivot(cat) \
.count()
agg_rdd = agg \
.rdd\
.map(lambda row: (row[1:])) \
.flatMap(lambda row:
[0 if e == None else e for e in row]) \
.collect()
row_length = len(agg.collect()[0]) - 1
agg = ln.Matrices.dense(row_length, 2, agg_rdd)
test = st.Statistics.chiSqTest(agg)
print(cat, round(test.pValue, 4))
import pyspark.mllib.feature as ft
import pyspark.mllib.regression as reg
hashing = ft.HashingTF(7)
births_hashed = births_transformed \
.rdd \
.map(lambda row: [
list(hashing.transform(row[1]).toArray())
if col == 'BIRTH_PLACE'
else row[i]
for i, col
in enumerate(features_to_keep)]) \
.map(lambda row: [[e] if type(e) == int else e
for e in row]) \
.map(lambda row: [item for sublist in row
for item in sublist]) \
.map(lambda row: reg.LabeledPoint(
row[0],
ln.Vectors.dense(row[1:]))
)
births_train, births_test = births_hashed.randomSplit([0.6, 0.4])
from pyspark.mllib.classification \
import LogisticRegressionWithLBFGS
LR_Model = LogisticRegressionWithLBFGS \
.train(births_train, iterations=10)
LR_results = (
births_test.map(lambda row: row.label) \
.zip(LR_Model \
.predict(births_test\
.map(lambda row: row.features)))
).map(lambda row: (row[0], row[1] * 1.0))
import pyspark.mllib.evaluation as ev
LR_evaluation = ev.BinaryClassificationMetrics(LR_results)
print('Area under PR: {0:.2f}' \
.format(LR_evaluation.areaUnderPR))
print('Area under ROC: {0:.2f}' \
.format(LR_evaluation.areaUnderROC))
LR_evaluation.unpersist()
selector = ft.ChiSqSelector(4).fit(births_train)
topFeatures_train = (
births_train.map(lambda row: row.label) \
.zip(selector \
.transform(births_train \
.map(lambda row: row.features)))
).map(lambda row: reg.LabeledPoint(row[0], row[1]))
topFeatures_test = (
births_test.map(lambda row: row.label) \
.zip(selector \
.transform(births_test \
.map(lambda row: row.features)))
).map(lambda row: reg.LabeledPoint(row[0], row[1]))
from pyspark.mllib.tree import RandomForest
RF_model = RandomForest \
.trainClassifier(data=topFeatures_train,
numClasses=2,
categoricalFeaturesInfo={},
numTrees=6,
featureSubsetStrategy='all',
seed=666)
RF_results = (
topFeatures_test.map(lambda row: row.label) \
.zip(RF_model \
.predict(topFeatures_test \
.map(lambda row: row.features)))
)
RF_evaluation = ev.BinaryClassificationMetrics(RF_results)
print('Area under PR: {0:.2f}' \
.format(RF_evaluation.areaUnderPR))
print('Area under ROC: {0:.2f}' \
.format(RF_evaluation.areaUnderROC))
RF_evaluation.unpersist()
LR_Model_2 = LogisticRegressionWithLBFGS \
.train(topFeatures_train, iterations=10)
LR_results_2 = (
topFeatures_test.map(lambda row: row.label) \
.zip(LR_Model_2 \
.predict(topFeatures_test \
.map(lambda row: row.features)))
).map(lambda row: (row[0], row[1] * 1.0))
LR_evaluation_2 = ev.BinaryClassificationMetrics(LR_results_2)
print('Area under PR: {0:.2f}' \
.format(LR_evaluation_2.areaUnderPR))
print('Area under ROC: {0:.2f}' \
.format(LR_evaluation_2.areaUnderROC))
LR_evaluation_2.unpersist()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, though Google also works), and the dates of interest. With these, the DataReader function from the pandas_datareader package downloads the requested prices.
Step2: Note
|
<ASSISTANT_TASK:>
Python Code:
# import the packages to be used
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import datetime
from datetime import datetime
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# some Python display options
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
# Download data from Yahoo! Finance
# Tickers
tickers = ['AA','AAPL','MSFT', '^GSPC']
# Source
data_source = 'yahoo'
# Dates: from 01/01/2014 to 12/31/2016.
start_date = '2014-01-01'
end_date = '2016-12-31'
# Use the pandas data reader. The sort_index command sorts the data by date
assets = (web.DataReader(tickers, data_source, start_date, end_date)).sort_index('major_axis')
assets
allA = assets['Adj Close']
# Simple daily returns: R_t = (P_t - P_{t-1}) / P_{t-1}
R = ((allA - allA.shift(1)) / allA.shift(1))[1:]
r = np.log(1 + R)
R.describe()
min_periods = 180
vol = R.rolling(window=min_periods).std()*np.sqrt(min_periods)
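# Note (added): this scales the rolling daily std by sqrt(window); a more
# conventional annualization would multiply by np.sqrt(252) trading days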
vol.plot(figsize=(8, 6));
rolling_corr =R['AAPL'].rolling(window=180).corr(R['MSFT']).dropna()
rolling_corr.plot(figsize=(8, 6));
f, axes = plt.subplots(2, 2, figsize=(15, 7), sharex=True)
# Plot a simple histogram with binsize determined automatically
sns.distplot(R['AA'], color="b", fit=stats.norm, norm_hist=True, ax=axes[0, 0])
sns.distplot(R['AAPL'], color="r", fit=stats.norm, norm_hist=True, ax=axes[0, 1])
sns.distplot(R['MSFT'], color="g", fit=stats.norm, norm_hist=True, ax=axes[1, 0])
sns.distplot(R['^GSPC'], color="m", fit=stats.norm, norm_hist=True, ax=axes[1, 1])
plt.tight_layout()
sns.set(style="ticks")
sns.pairplot(R);
sns.jointplot("MSFT", "MSFT",data=R, color="k").plot_joint(sns.kdeplot, zorder=0, n_levels=60);
sns.jointplot("AA", "MSFT",data=R, color="k").plot_joint(sns.kdeplot, zorder=0, n_levels=60);
sns.jointplot("AAPL", "MSFT",data=R, color="k").plot_joint(sns.kdeplot, zorder=0, n_levels=60);
sns.jointplot("^GSPC", "MSFT",data=R, color="k").plot_joint(sns.kdeplot, zorder=0, n_levels=60);
sns.lmplot(x="AA", y="MSFT", truncate=True, size=5, data=R);
sns.lmplot(x="AAPL", y="MSFT", truncate=True, size=5, data=R);
sns.lmplot(x="^GSPC", y="MSFT", truncate=True, size=5, data=R);
g = sns.PairGrid(R, y_vars=["MSFT"], x_vars=["AA", "AAPL", "^GSPC"], size=4)
g.map(sns.regplot, color=".3");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: NXP IMU
Step7: Run Raw Compass Performance
Step8: Now using this bias, we should get better performance.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
from __future__ import division
import numpy as np
from the_collector import BagReader
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot as plt
from math import sin, cos, atan2, pi, sqrt, asin
from math import radians as deg2rad
from math import degrees as rad2deg
def normalize(x, y, z):
    """Return a unit vector"""
norm = sqrt(x * x + y * y + z * z)
if norm > 0.0:
inorm = 1/norm
x *= inorm
y *= inorm
z *= inorm
else:
raise Exception('division by zero: {} {} {}'.format(x, y, z))
return (x, y, z)
def plotArray(g, dt=None, title=None):
    """
    Plots the x, y, and z components of a sensor.
    In:
        g - sensor data as [[x,y,z], [x,y,z], [x,y,z], ...]
        title - optional plot title
    Out:
        None
    """
x = []
y = []
z = []
for d in g:
x.append(d[0])
y.append(d[1])
z.append(d[2])
plt.subplot(3,1,1)
plt.plot(x)
plt.ylabel('x')
plt.grid(True)
if title:
plt.title(title)
plt.subplot(3,1,2)
plt.plot(y)
plt.ylabel('y')
plt.grid(True)
plt.subplot(3,1,3)
plt.plot(z)
plt.ylabel('z')
plt.grid(True)
def getOrientation(accel, mag, deg=True):
ax, ay, az = normalize(*accel)
mx, my, mz = normalize(*mag)
roll = atan2(ay, az)
pitch = atan2(-ax, ay*sin(roll)+az*cos(roll))
heading = atan2(
mz*sin(roll) - my*cos(roll),
mx*cos(pitch) + my*sin(pitch)*sin(roll) + mz*sin(pitch)*cos(roll)
)
if deg:
roll *= 180/pi
pitch *= 180/pi
heading *= 180/pi
heading = heading if heading >= 0.0 else 360 + heading
heading = heading if heading <= 360 else heading - 360
else:
heading = heading if heading >= 0.0 else 2*pi + heading
heading = heading if heading <= 2*pi else heading - 2*pi
return (roll, pitch, heading)
def find_calibration(mag):
    """Go through the raw data and find the max/min for x, y, z"""
max_m = [-1000]*3
min_m = [1000]*3
for m in mag:
for i in range(3):
max_m[i] = m[i] if m[i] > max_m[i] else max_m[i]
min_m[i] = m[i] if m[i] < min_m[i] else min_m[i]
bias = [0]*3
for i in range(3):
bias[i] = (max_m[i] + min_m[i])/2
return bias
def apply_calibration(data, bias):
    """Given the data and the bias, correct the data"""
c_data = []
for d in data:
t = []
for i in [0,1,2]:
t.append(d[i]-bias[i])
c_data.append(t)
return c_data
def split_xyz(data):
    """Break out the x, y, and z into their own arrays for plotting"""
xx = []
yy = []
zz = []
for v in data:
xx.append(v[0])
yy.append(v[1])
zz.append(v[2])
return xx, yy, zz
def plotMagnetometer3D(data, title=None):
x,y,z = split_xyz(data)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(x, y, z, '.b');
ax.set_xlabel('$\mu$T')
ax.set_ylabel('$\mu$T')
ax.set_zlabel('$\mu$T')
if title:
plt.title(title);
def plotMagnetometer(data, title=None):
x,y,z = split_xyz(data)
plt.plot(x,y,'.b', x,z,'.r', z,y, '.g')
plt.xlabel('$\mu$T')
plt.ylabel('$\mu$T')
plt.grid(True);
plt.legend(['x', 'y', 'z'])
if title:
plt.title(title);
bag = BagReader()
bag.use_compression = True
cal = bag.load('imu-1-2.json')
def split(data):
ret = []
rdt = []
start = data[0][1]
for d, ts in data:
ret.append(d)
rdt.append(ts - start)
return ret, rdt
accel, adt = split(cal['accel'])
mag, mdt = split(cal['mag'])
gyro, gdt = split(cal['gyro'])
# pass the labels as the 'title' keyword; the second positional argument is 'dt'
plotArray(accel, title='Accel [g]')
plotArray(mag, title='Mag [uT]')
plotArray(gyro, title='Gyros [dps]')
# now, ideally these should be an ellipsoid centered around 0.0
# but they aren't ... need to fix the bias (offset)
plotMagnetometer(mag, 'raw mag')
plotMagnetometer3D(mag, 'raw mag')
# so let's find the bias needed to correct the imu
bias = find_calibration(mag)
print('bias', bias)
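# Note (added): the min/max bias only removes the hard-iron offset; correcting
# soft-iron distortion (an ellipsoid rather than an offset sphere) would
# require fitting a full scale/rotation matrix, which is out of scope here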
# now the data should be nicely centered around (0,0,0)
cm = apply_calibration(mag, bias)
plotMagnetometer(cm, 'corrected mag')
plotMagnetometer3D(cm, 'corrected mag')
# apply correction in previous step
cm = apply_calibration(mag, bias)
plotMagnetometer(cm)
# Now let's run through the corrected data and compute roll, pitch, and heading
roll = []
pitch = []
heading = []
# 'accel' holds the raw accelerometer samples; 'cm' the calibrated magnetometer data
for acc, m in zip(accel, cm):
    r, p, h = getOrientation(acc, m)
roll.append(r)
pitch.append(p)
heading.append(h)
# split() already returned timestamps relative to the first sample
x_scale = mdt
print('timestep', x_scale[1] - x_scale[0])
plt.subplot(2,2,1)
plt.plot(x_scale, roll)
plt.grid(True)
plt.title('Roll')
plt.subplot(2,2,2)
plt.plot(x_scale, pitch)
plt.grid(True)
plt.title('Pitch')
plt.subplot(2,2,3)
plt.plot(x_scale, heading)
plt.grid(True)
plt.title('Heading');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The CourseTalk dataset
Step2: Using pd.merge we get it all into one big DataFrame.
Step3: Collaborative filtering
Step4: Now let's filter down to courses that received at least 20 ratings (a completely arbitrary number).
Step5: The index of titles receiving at least 20 ratings can then be used to select rows from mean_ratings above
Step6: By computing the mean rating for each course, we will order with the highest rating listed first.
Step7: To see the top courses among Coursera students, we can sort by the 'Coursera' column in descending order
Step8: Now, let's go further! How about ranking the courses with the highest percentage of ratings that are 4 or higher (% of ratings 4+)?
Step9: Let's extract only the ratings that are 4 or higher.
Step10: Now, taking the total number of ratings for each course and the count of 4+ ratings, we can merge them into one DataFrame.
Step11: Let's now go easy. Let's count the number of ratings for each course, and order with the most number of ratings.
Step12: Considering this information, we can sort by the most-rated courses with the highest percentage of 4+ ratings.
Step13: Finally, using the formula above, let's find out which courses most often occur with the popular MOOC An Introduction to Interactive Programming in Python, using the "x + y / x" method. For each course, calculate the percentage of Python course raters who also rated that course. Order with the highest percentage first, and voilà: we have the top 5 MOOCs.
Step14: First, let's get only the users that rated the course An Introduction to Interactive Programming in Python
Step15: Now, for all other courses, let's keep only the ratings from users who rated the Python course.
Step16: By applying the division
Step17: Ordering by the score, highest first, except the first one, which contains the course itself.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import Image
Image(filename='./imgs/recsys_arch.png')
import pandas as pd
unames = ['user_id', 'username']
users = pd.read_table('./data/users_set.dat',
sep='|', header=None, names=unames)
rnames = ['user_id', 'course_id', 'rating']
ratings = pd.read_table('./data/ratings.dat',
sep='|', header=None, names=rnames)
mnames = ['course_id', 'title', 'avg_rating', 'workload', 'university', 'difficulty', 'provider']
courses = pd.read_table('./data/cursos.dat',
sep='|', header=None, names=mnames)
# show how one of them looks
ratings.head(10)
# show how one of them looks
users[:5]
courses[:5]
coursetalk = pd.merge(pd.merge(ratings, courses), users)
coursetalk
coursetalk.ix[0]
mean_ratings = coursetalk.pivot_table('rating', rows='provider', aggfunc='mean')
mean_ratings.order(ascending=False)
ratings_by_title = coursetalk.groupby('title').size()
ratings_by_title[:10]
active_titles = ratings_by_title.index[ratings_by_title >= 20]
active_titles[:10]
mean_ratings = coursetalk.pivot_table('rating', rows='title', aggfunc='mean')
mean_ratings
mean_ratings.ix[active_titles].order(ascending=False)
mean_ratings = coursetalk.pivot_table('rating', rows='title',cols='provider', aggfunc='mean')
mean_ratings[:10]
mean_ratings['coursera'][active_titles].order(ascending=False)[:10]
# transform the ratings frame into a ratings matrix
ratings_mtx_df = coursetalk.pivot_table(values='rating',
rows='user_id',
cols='title')
ratings_mtx_df.ix[ratings_mtx_df.index[:15], ratings_mtx_df.columns[:15]]
ratings_gte_4 = ratings_mtx_df[ratings_mtx_df>=4.0]
# with an integer axis index only label-based indexing is possible
ratings_gte_4.ix[ratings_gte_4.index[:15], ratings_gte_4.columns[:15]]
ratings_gte_4_pd = pd.DataFrame({'total': ratings_mtx_df.count(), 'gte_4': ratings_gte_4.count()})
ratings_gte_4_pd.head(10)
ratings_gte_4_pd['gte_4_ratio'] = (ratings_gte_4_pd['gte_4'] * 1.0)/ ratings_gte_4_pd.total
ratings_gte_4_pd.head(10)
ranking = [(title,total,gte_4, score) for title, total, gte_4, score in ratings_gte_4_pd.itertuples()]
for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[3], x[2], x[1]) , reverse=True)[:10]:
print title, total, gte_4, score
ratings_by_title = coursetalk.groupby('title').size()
ratings_by_title.order(ascending=False)[:10]
for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[2], x[3], x[1]) , reverse=True)[:10]:
print title, total, gte_4, score
course_users = coursetalk.pivot_table('rating', rows='title', cols='user_id')
course_users.ix[course_users.index[:15], course_users.columns[:15]]
ratings_by_course = coursetalk[coursetalk.title == 'An Introduction to Interactive Programming in Python']
ratings_by_course.set_index('user_id', inplace=True)
their_ids = ratings_by_course.index
their_ratings = course_users[their_ids]
course_users[their_ids].ix[course_users[their_ids].index[:15], course_users[their_ids].columns[:15]]
course_count = their_ratings.ix['An Introduction to Interactive Programming in Python'].count()
sims = their_ratings.apply(lambda profile: profile.count() / float(course_count) , axis=1)
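# Note (added): sims[c] is the fraction of Python-course raters who also rated
# course c, i.e. an estimate of P(rated c | rated the Python course); the [1:]
# slice below drops the Python course itself, whose score is trivially 1.0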
sims.order(ascending=False)[1:][:10]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hamiltonian Time Evolution and Expectation Value Computation
Step2: Application of one- and two-body fermionic gates
Step3: Exact evolution implementation of quadratic Hamiltonians
Step4: Exact evolution of dense quadratic Hamiltonians is supported. Here is an evolution example using a spin-restricted Hamiltonian on a number- and spin-conserving wavefunction
Step5: The GSO Hamiltonian is for evolution of quadratic Hamiltonians that are spin-broken and number-conserving.
Step6: The BCS Hamiltonian evolves spin-conserving and number-broken wavefunctions.
Step7: Exact Evolution Implementation of Diagonal Coulomb terms
Step8: Exact evolution of individual n-body anti-Hermitian gnerators
Step9: Approximate evolution of sums of n-body generators
Step10: API for determining desired expectation values
Step11: 2.B.1 RDMs
Step12: 2.B.2 Hamiltonian expectations (or any expectation values)
Step13: 2.B.3 Symmetry operations
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import fqe
except ImportError:
!pip install fqe --quiet
Print = True
from openfermion import FermionOperator, MolecularData
from openfermion.utils import hermitian_conjugated
import numpy
import fqe
from fqe.unittest_data import build_lih_data
numpy.set_printoptions(floatmode='fixed', precision=6, linewidth=80, suppress=True)
numpy.random.seed(seed=409)
h1e, h2e, wfn = build_lih_data.build_lih_data('energy')
lih_hamiltonian = fqe.get_restricted_hamiltonian(([h1e, h2e]))
lihwfn = fqe.Wavefunction([[4, 0, 6]])
lihwfn.set_wfn(strategy='from_data', raw_data={(4, 0): wfn})
if Print:
lihwfn.print_wfn()
# dummy geometry
from openfermion.chem.molecular_data import spinorb_from_spatial
from openfermion import jordan_wigner, get_sparse_operator, InteractionOperator, get_fermion_operator
h1s, h2s = spinorb_from_spatial(h1e, numpy.einsum("ijlk", -2 * h2e) * 0.5)
mol = InteractionOperator(0, h1s, h2s)
ham_fop = get_fermion_operator(mol)
ham_mat = get_sparse_operator(jordan_wigner(ham_fop)).toarray()
from scipy.linalg import expm
time = 0.01
evolved1 = lihwfn.time_evolve(time, lih_hamiltonian)
if Print:
evolved1.print_wfn()
evolved2 = fqe.time_evolve(lihwfn, time, lih_hamiltonian)
if Print:
evolved2.print_wfn()
assert numpy.isclose(fqe.vdot(evolved1, evolved2), 1)
cirq_wf = fqe.to_cirq_ncr(lihwfn)
evolve_cirq = expm(-1j * time * ham_mat) @ cirq_wf
test_evolve = fqe.from_cirq(evolve_cirq, thresh=1.0E-12)
assert numpy.isclose(fqe.vdot(test_evolve, evolved1), 1)
wfn = fqe.Wavefunction([[4, 2, 4]])
wfn.set_wfn(strategy='random')
if Print:
wfn.print_wfn()
diagonal = FermionOperator('0^ 0', -2.0) + \
FermionOperator('1^ 1', -1.7) + \
FermionOperator('2^ 2', -0.7) + \
FermionOperator('3^ 3', -0.55) + \
FermionOperator('4^ 4', -0.1) + \
FermionOperator('5^ 5', -0.06) + \
FermionOperator('6^ 6', 0.5) + \
FermionOperator('7^ 7', 0.3)
if Print:
print(diagonal)
evolved = wfn.time_evolve(time, diagonal)
if Print:
evolved.print_wfn()
norb = 4
h1e = numpy.zeros((norb, norb), dtype=numpy.complex128)
for i in range(norb):
for j in range(norb):
h1e[i, j] += (i+j) * 0.02
h1e[i, i] += i * 2.0
hamil = fqe.get_restricted_hamiltonian((h1e,))
wfn = fqe.Wavefunction([[4, 0, norb]])
wfn.set_wfn(strategy='random')
initial_energy = wfn.expectationValue(hamil)
print('Initial Energy: {}'.format(initial_energy))
evolved = wfn.time_evolve(time, hamil)
final_energy = evolved.expectationValue(hamil)
print('Final Energy: {}'.format(final_energy))
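# Added check (sketch): evolution under a time-independent Hamiltonian should
# conserve its own energy expectation value
assert numpy.isclose(initial_energy, final_energy)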
norb = 4
h1e = numpy.zeros((2*norb, 2*norb), dtype=numpy.complex128)
for i in range(2*norb):
for j in range(2*norb):
h1e[i, j] += (i+j) * 0.02
h1e[i, i] += i * 2.0
hamil = fqe.get_gso_hamiltonian((h1e,))
wfn = fqe.get_number_conserving_wavefunction(4, norb)
wfn.set_wfn(strategy='random')
initial_energy = wfn.expectationValue(hamil)
print('Initial Energy: {}'.format(initial_energy))
evolved = wfn.time_evolve(time, hamil)
final_energy = evolved.expectationValue(hamil)
print('Final Energy: {}'.format(final_energy))
norb = 4
time = 0.001
wfn_spin = fqe.get_spin_conserving_wavefunction(2, norb)
hamil = FermionOperator('', 6.0)
for i in range(0, 2*norb, 2):
for j in range(0, 2*norb, 2):
opstring = str(i) + ' ' + str(j + 1)
hamil += FermionOperator(opstring, (i+1 + j*2)*0.1 - (i+1 + 2*(j + 1))*0.1j)
opstring = str(i) + '^ ' + str(j + 1) + '^ '
hamil += FermionOperator(opstring, (i+1 + j)*0.1 + (i+1 + j)*0.1j)
h_noncon = (hamil + hermitian_conjugated(hamil))/2.0
if Print:
print(h_noncon)
wfn_spin.set_wfn(strategy='random')
if Print:
wfn_spin.print_wfn()
spin_evolved = wfn_spin.time_evolve(time, h_noncon)
if Print:
spin_evolved.print_wfn()
norb = 4
wfn = fqe.Wavefunction([[5, 1, norb]])
vij = numpy.zeros((norb, norb, norb, norb), dtype=numpy.complex128)
for i in range(norb):
for j in range(norb):
vij[i, j] += 4*(i % norb + 1)*(j % norb + 1)*0.21
wfn.set_wfn(strategy='random')
if Print:
wfn.print_wfn()
hamil = fqe.get_diagonalcoulomb_hamiltonian(vij)
evolved = wfn.time_evolve(time, hamil)
if Print:
evolved.print_wfn()
norb = 3
nele = 4
ops = FermionOperator('5^ 1^ 2 0', 3.0 - 1.j)
ops += FermionOperator('0^ 2^ 1 5', 3.0 + 1.j)
wfn = fqe.get_number_conserving_wavefunction(nele, norb)
wfn.set_wfn(strategy='random')
wfn.normalize()
if Print:
wfn.print_wfn()
evolved = wfn.time_evolve(time, ops)
if Print:
evolved.print_wfn()
lih_evolved = lihwfn.apply_generated_unitary(time, 'taylor', lih_hamiltonian, accuracy=1.e-8)
if Print:
lih_evolved.print_wfn()
norb = 2
nalpha = 1
nbeta = 1
nele = nalpha + nbeta
time = 0.05
h1e = numpy.zeros((norb*2, norb*2), dtype=numpy.complex128)
for i in range(2*norb):
for j in range(2*norb):
h1e[i, j] += (i+j) * 0.02
h1e[i, i] += i * 2.0
hamil = fqe.get_general_hamiltonian((h1e,))
spec_lim = [-1.13199078e-03, 6.12720338e+00]
wfn = fqe.Wavefunction([[nele, nalpha - nbeta, norb]])
wfn.set_wfn(strategy='random')
if Print:
wfn.print_wfn()
evol_wfn = wfn.apply_generated_unitary(time, 'chebyshev', hamil, spec_lim=spec_lim)
if Print:
evol_wfn.print_wfn()
rdm1 = lihwfn.expectationValue('i^ j')
if Print:
print(rdm1)
val = lihwfn.expectationValue('5^ 3')
if Print:
print(2.*val)
trdm1 = fqe.expectationValue(lih_evolved, 'i j^', lihwfn)
if Print:
print(trdm1)
val = fqe.expectationValue(lih_evolved, '5 3^', lihwfn)
if Print:
print(2*val)
rdm2 = lihwfn.expectationValue('i^ j k l^')
if Print:
print(rdm2)
rdm2 = fqe.expectationValue(lihwfn, 'i^ j^ k l', lihwfn)
if Print:
print(rdm2)
li_h_energy = lihwfn.expectationValue(lih_hamiltonian)
if Print:
print(li_h_energy)
li_h_energy = fqe.expectationValue(lihwfn, lih_hamiltonian, lihwfn)
if Print:
print(li_h_energy)
op = fqe.get_s2_operator()
print(lihwfn.expectationValue(op))
op = fqe.get_sz_operator()
print(lihwfn.expectationValue(op))
op = fqe.get_time_reversal_operator()
print(lihwfn.expectationValue(op))
op = fqe.get_number_operator()
print(lihwfn.expectationValue(op))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Matrices
Step2: 1.2 Creating new matrices from existing ones
Step3: 2. Linear algebra
Step4: 2.2 Determinants
Step5: 2.3 Solving systems of linear equations
Step6: 2.4 Eigenvalues and eigenvectors
Step7: 2.5 Singular value decomposition
Step8: The * denotes the conjugate transpose
Step9: 2.6 Generalized inverse (pseudoinverse) matrices
Step10: The result is not strictly an identity matrix, but it is a very close approximation.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
A = np.mat('1 2 3; 4 5 6; 7 8 9')
print "Creation from string:\n", A
# 转置
print "Transpose A :\n", A.T
# 逆矩阵
print "Inverse A :\n", A.I
# 通过NumPy数组创建矩阵
print "Creation from array: \n", np.mat(np.arange(9).reshape(3,3))
A = np.eye(2)
print "A:\n", A
B = 2 * A
print "B:\n", B
# 使用字符串创建复合矩阵
print "Compound matrix:\n", np.bmat("A B")
print "Compound matrix:\n", np.bmat("A B; B A")
A = np.mat("0 1 2; 1 0 3; 4 -3 8")
print "A:\n", A
inverse = np.linalg.inv(A)
print "inverse of A:\n", inverse
print "check inverse:\n", inverse * A
A = np.mat("3 4; 5 6")
print "A:\n", A
print "Determinant:\n", np.linalg.det(A)
A = np.mat("1 -2 1; 0 2 -8; -4 5 9")
print "A:\n", A
b = np.array([0,8,-9])
print "b:\n", b
x = np.linalg.solve(A, b)
print "Solution:\n", x
# check
print "Check:\n",b == np.dot(A, x)
print np.dot(A, x)
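# A more robust check (illustration): elementwise equality is brittle with
# floating-point results, so np.allclose is preferred.
print("Check with allclose:", np.allclose(np.dot(A, x), b))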
A = np.mat("3 -2; 1 0")
print "A:\n", A
print "Eigenvalues:\n", np.linalg.eigvals(A)
eigenvalues, eigenvectors = np.linalg.eig(A)
print "Eigenvalues:\n", eigenvalues
print "Eigenvectors:\n", eigenvectors
# check
# 计算 Ax = ax的左右两部分的值
for i in range(len(eigenvalues)):
print "Left:\n", np.dot(A, eigenvectors[:,i])
print "Right:\n", np.dot(eigenvalues[i], eigenvectors[:,i])
print
from IPython.display import Latex
Latex(r"$M=U \Sigma V^*$")
A = np.mat("4 11 14;8 7 -2")
print "A:\n", A
U, Sigma, V = np.linalg.svd(A, full_matrices=False)
print "U:\n", U
print "Sigma:\n", Sigma
print "V:\n", V
# Sigma矩阵是奇异值矩阵对角线上的值
np.diag(Sigma)
# check
M = U*np.diag(Sigma)*V
print "Product:\n", M
A = np.mat("4 11 14; 8 7 -2")
print "A:\n", A
pseudoinv = np.linalg.pinv(A)
print "Pseudo inverse:\n", pseudoinv
# check
print "Check pseudo inverse:\n", A*pseudoinv
A = np.mat("0 1 2; 1 0 3; 4 -3 8")
print "A:\n", A
inverse = np.linalg.inv(A)
print "inverse of A:\n", inverse
print "check inverse:\n", inverse * A
pseudoinv = np.linalg.pinv(A)
print "Pseudo inverse:\n", pseudoinv
print "Check pseudo inverse:\n", A*pseudoinv
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let Jupyter know that you're gonna be charting inline
Step2: Read in MLB data
Step3: Prep data for charting
Step4: Make a horizonal bar chart
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# import a ticker formatting class from matplotlib
from matplotlib.ticker import FuncFormatter
%matplotlib inline
# create a data frame
df = pd.read_csv('data/mlb.csv')
# use head to check it out
df.head()
# group by team, aggregate on sum
grouped_by_team = df[['TEAM', 'SALARY']].groupby('TEAM') \
.sum() \
.reset_index() \
.set_index('TEAM') \
.sort_values('SALARY', ascending=False)
# get top 10
top_10 = grouped_by_team.head(10)
top_10
# make a horizontal bar chart
# set the figure size
bar_chart = top_10.plot.barh(figsize=(14, 6))
# sort the bars top to bottom
bar_chart.invert_yaxis()
# set the title
bar_chart.set_title('Top 10 opening day MLB payrolls, 2017')
# kill the legend
bar_chart.legend_.remove()
# kill y axis label
bar_chart.set_ylabel('')
# define a function to format x axis ticks
# otherwise they'd all run together (100000000)
# via https://stackoverflow.com/a/46454637
def millions(num, pos, m=1000000):
if num % m == 0:
num = int(num/m)
else:
num = float(num/m)
return '${}M'.format(num)
# format the x axis ticks using the function we just defined
bar_chart.xaxis.set_major_formatter(FuncFormatter(millions))
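# Quick sanity check of the formatter (illustration); the tick-position
# argument is unused by our function, so None is fine here:
print(millions(100000000, None))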
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the beginning there was a sine wave
Step2: Next, define a 1d array to pass into sin() function.
Step3: Define a trivial function to plot a sine wave depending on frequency and amplitude inputs.
Step4: Test it with arbitrary arguments
Step5: Changing arguments
Step6: Just pass the function name into interact() as a first argument. Then add its arguments and their respective range (start, stop, step)
Step8: And voila, you can change frequency and amplitude interactively using the two independent sliders.
Step9: And then make the function interactive
Step10: ipywidgets + contourf + real data
Step11: As a sample data file we will use the same data.nc file from previous examples.
Step12: Here we create a function ncfun(), whose arguments are
Step13: This function is easily wrapped by interact()
Step14: ncview clone in Jupyter
Step16: For colour schemes we will use palettable package (brewer2mpl successor). It is available on PyPi (pip install palettable).
Step18: The interesting part is below. We use another function that have only one argument - a file name. It opens the file and then allows us to choose a variable to plot (in the previous example we had to know variable names prior to executing the function).
Step19: This is by no means a finished ncview-killer app. If you played with it, you could have noticed that it's much slower than ncview, even though the NetCDF file size is a little less than 10 Mb. However, you are free to customize this function in any possible way and use the power of Python and Jupyter.
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.linspace(0,1,100)
def pltsin(freq, ampl):
y = ampl*np.sin(2*np.pi*x*freq)
plt.plot(x, y)
plt.ylim(-10,10) # fix limits of the vertical axis
pltsin(10, 3)
from ipywidgets import interact
_ = interact(pltsin, freq=(1,10,0.1), ampl=(1,10,1))
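# The same controls can be built explicitly with ipywidgets slider objects
# (a minimal sketch; the ranges mirror the tuples passed above):
from ipywidgets import FloatSlider, IntSlider
_ = interact(pltsin,
             freq=FloatSlider(min=1, max=10, step=0.1, value=5),
             ampl=IntSlider(min=1, max=10, step=1, value=5))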
def primesfrom3to(n):
    """Returns an array of primes, 3 <= p < n."""
    sieve = np.ones(n//2, dtype=bool)
for i in range(3,int(n**0.5)+1,2):
if sieve[i//2]:
sieve[i*i//2::i] = False
res = 2*np.nonzero(sieve)[0][1::]+1
seq = ''
for i in res:
seq += ' {}'.format(i)
return seq[1:]
_ = interact(primesfrom3to, n=(3,100,1)) # _ used to suppress output
import netCDF4 as nc
fpath = '../data/data.nc'
def ncfun(filename, varname='', time=0, lev=0):
with nc.Dataset(filename) as da:
arr = da.variables[varname][:]
lon = da.variables['longitude'][:]
lat = da.variables['latitude'][:]
fig = plt.figure(figsize=(8,5))
ax = fig.add_subplot(111)
c = ax.contourf(lon, lat, arr[time, lev, ...], cmap='viridis')
fig.colorbar(c, ax=ax, shrink=0.5)
_ = interact(ncfun, filename=fpath,
varname=['u','v'],
time=(0,1,1), lev=(0,3,1))
import iris
import cartopy.crs as ccrs
iris.FUTURE.netcdf_promote = True # see explanation in previous posts
import palettable
def plot_cube(cube, time=0, lev=0, cmap='viridis'):
    """Display a cross-section of iris.cube.Cube on a map."""
# Get cube data and extract a 2d lon-lat slice
arr = cube.data[time, lev, ...]
# Find longitudes and latitudes
lon = cube.coords(axis='x')[0].points
lat = cube.coords(axis='y')[0].points
# Create a figure with the size 8x5 inches
fig = plt.figure(figsize=(8,5))
# Create a geo-references Axes inside the figure
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
# Plot coastlines
ax.coastlines()
# Plot the data as filled contour map
c = ax.contourf(lon, lat, arr, cmap=cmap)
# Attach a colorbar shrinked by 50%
fig.colorbar(c, ax=ax, shrink=0.5)
def iris_view(filename):
    """Interactively display NetCDF data."""
# Load file as iris.cube.CubeList
cubelist = iris.load(filename)
# Create a dict of variable names and iris cubes
vardict = {i.name(): cubelist.extract(i.name())[0] for i in cubelist}
# Use sequential colorbrewer palettes for colormap keyword
cmaps = [i for i in palettable.colorbrewer.COLOR_MAPS['Sequential']]
interact(plot_cube,
cube=vardict,
time=(0,1,1),
lev=(0,3,1),
cmap=cmaps)
iris_view(fpath)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Experimental condition
Step2: Program
Step3: Append results to dataframe
Step4: Save and load data frame
Step6: Perform experiment over conditions and trials
Step7: Run program
Step8: Result table
Step9: Bayes factor and accuracy
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%autosave 0
import sys, os
sys.path.insert(0, os.path.expanduser('~/work/git/github/taku-y/bmlingam'))
sys.path.insert(0, os.path.expanduser('~/work/git/github/pymc-devs/pymc3'))
import theano
theano.config.floatX = 'float64'
from copy import deepcopy
import hashlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import time
from expr1 import run_trial
from bmlingam import load_pklz, save_pklz
# from bmlingam import do_mcmc_bmlingam, InferParams, MCMCParams, save_pklz, load_pklz, define_hparam_searchspace, find_best_model
# from bmlingam.utils.gendata import GenDataParams, gen_artificial_data
conds = [
{
'totalnoise': totalnoise,
'L_cov_21s': L_cov_21s,
'n_samples': n_samples,
'n_confs': n_confs,
'data_noise_type': data_noise_type,
'model_noise_type': model_noise_type
}
for totalnoise in [0.25, 0.5, 1.0, 3.0]
for L_cov_21s in [[-.9, -.7, -.5, -.3, 0, .3, .5, .7, .9]]
for n_samples in [100]
for n_confs in [1, 3, 5, 10] # [1, 3, 5, 10]
for data_noise_type in ['laplace', 'uniform']
for model_noise_type in ['gg']
]
def make_id(ix_trial, n_samples, n_confs, data_noise_type, model_noise_type, L_cov_21s, totalnoise):
L_cov_21s_ = ' '.join([str(v) for v in L_cov_21s])
return hashlib.md5(
str((L_cov_21s_, ix_trial, n_samples, n_confs, data_noise_type, model_noise_type, totalnoise)).encode('utf-8')
).hexdigest()
# Test
print(make_id(55, 100, 12, 'all', 'gg', [1, 2, 3], 0.3))
def add_result_to_df(df, result):
if df is None:
return pd.DataFrame({k: [v] for k, v in result.items()})
else:
return df.append(result, ignore_index=True)
# Test
result1 = {'col1': 10, 'col2': 20}
result2 = {'col1': 30, 'col2': -10}
df1 = add_result_to_df(None, result1)
print('--- df1 ---')
print(df1)
df2 = add_result_to_df(df1, result2)
print('--- df2 ---')
print(df2)
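# Note (illustration): DataFrame.append is deprecated in recent pandas; an
# equivalent construction would be
# pd.concat([df, pd.DataFrame([result])], ignore_index=True)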
def load_df(df_file):
if os.path.exists(df_file):
return load_pklz(df_file)
else:
return None
def save_df(df_file, df):
save_pklz(df_file, df)
def df_exist_result_id(df, result_id):
if df is not None:
return result_id in np.array(df['result_id'])
else:
        return False
def run_expr(conds, n_trials_per_cond=50):
    """Perform evaluation of BMLiNGAM given a set of experimental conditions.

    For each condition, several trials are executed. In a trial, BMLiNGAM is
    applied to causal inference for artificial data. The average accuracy is
    computed for each condition.
    """
# Filename of dataframe
data_dir = '.'
df_file = data_dir + '/20160822-eval-bml-results.pklz'
# Load results computed in previous
df = load_df(df_file)
# Loop over experimental conditions
n_skip = 0
for cond in conds:
print(cond)
# Loop over trials
for ix_trial in range(n_trials_per_cond):
# Identifier of a trial for (cond, ix_trial)
result_id = make_id(ix_trial, **cond)
# Check if the result has been already stored in the data frame
if df_exist_result_id(df, result_id):
n_skip += 1
else:
# `result` is a dict including results of trials.
# `ix_trial` is used as the random seed of the corresponding trial.
result = run_trial(ix_trial, cond)
result.update({'result_id': result_id})
df = add_result_to_df(df, result)
save_df(df_file, df)
print('Number of skipped trials = {}'.format(n_skip))
return df
df = run_expr(conds)
import pandas as pd
df_file = './20160822-eval-bml-results.pklz'
df = load_pklz(df_file)
df = pd.concat(
{
'2log(bf)': df['log_bf'],
'correct rate': df['correct_rate'],
'totalnoise': df['totalnoise'],
'data noise type': df['data_noise_type'],
'n_confs': df['n_confs']
}, axis=1
)
sg = df.groupby(['data noise type', 'n_confs', 'totalnoise'])
sg1 = sg['correct rate'].mean()
sg2 = sg['2log(bf)'].mean()
pd.concat(
{
'correct_rate': sg1,
'2log(bf)': sg2,
}, axis=1
)
import pandas as pd
def count(x): return np.sum(x.astype(int))
data_dir = '.'
df_file = data_dir + '/20160822-eval-bml-results.pklz'
df = load_pklz(df_file)
df = pd.concat(
{
'2log(bf)': df['log_bf'],
'correct rate': df['correct_rate'],
'count': df['correct_rate'],
'totalnoise': df['totalnoise'],
'data noise type': df['data_noise_type']
}, axis=1
)
df = df.pivot_table(values=['correct rate', 'count'],
index=['totalnoise', pd.cut(df['2log(bf)'], [0., 2., 6., 10., 100.])],
columns='data noise type',
aggfunc={'correct rate': np.mean, 'count': np.sum})
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$.
Step2: Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. For example, we can try to minimize the sum of absolute differences
Step3: We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a cubic model
Step4: Although polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters.
Step7: In practice, we need not fit least squares models by hand because they are implemented generally in packages such as scikit-learn and statsmodels. For example, scikit-learn package implements least squares models in its LinearRegression class
Step8: One commonly-used statistical method in scikit-learn is the Principal Components Analysis, which is implemented in the PCA class
Step9: Similarly, there is a LinearRegression class we can use for our regression model
Step10: For more general regression model building, it's helpful to use a tool for describing statistical models, called patsy. With patsy, it is easy to specify the desired combinations of variables for any particular analysis, using an "R-like" syntax. patsy parses the formula string, and uses it to construct the appropriate design matrix for the model.
Step11: The dmatrix function returns the design matrix, which can be passed directly to the LinearRegression fitting method.
Step12: Logistic Regression
Step13: I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale.
Step14: If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the "survived" (y=1) side for larger values of fare than on the "died" (y=0) side.
Step15: And here's the logit function
Step16: The inverse of the logit transformation is
Step17: So, now our model is
Step18: Remove null values from variables
Step19: ... and fit the model.
Step20: As with our least squares model, we can easily fit logistic regression models in scikit-learn, in this case using the LogisticRegression class.
Step21: The LogisticRegression model in scikit-learn employs a regularization coefficient C, which defaults to 1. The amount of regularization is lower with larger values of C.
Step22: Exercise
Step23: Estimating Uncertainty
Step24: Bootstrap Percentile Intervals
Step25: Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of T
Step26: Bootstrap error
Step27: Unsupervised Learning
Step28: Let's start with $k=3$, arbitrarily assigned
Step29: We can use the function cdist from SciPy to calculate the distances from each point to each centroid.
Step30: We can make the initial assignment to centroids by picking the minimum distance.
Step31: Now we can re-assign the centroid locations based on the means of the current members' locations.
Step32: So, we simply iterate these steps until convergence.
Step33: k-means using scikit-learn
Step34: After fitting, we can retrieve the labels and cluster centers.
Step35: The resulting plot should look very similar to the one we fit by hand.
Step36: Exercise
Step37: Supervised Learning
Step38: One approach to building a predictive model is to subdivide the variable space into regions, by sequentially subdividing each variable. For example, if we split ltg at a threshold value of -0.01, it does a reasonable job of isolating the large values in one of the resulting subspaces.
Step39: However, that region still contains a fair number of low (light) values, so we can similarly bisect the region using a bmi value of -0.03 as a threshold value
Step40: We can use this partition to create a piecewise-constant function, which returns the average value of the observations in each region defined by the threshold values. We could then use this rudimentary function as a predictive model.
Step41: The choices for splitting the variables here were relatively arbitrary. Better choices can be made using a cost function $C$, such as residual sums of squares (RSS).
Step42: The recursive partitioning demonstrated above results in a decision tree. The regions defined by the trees are called terminal nodes. Locations at which a predictor is split, such as bmi=-0.03, are called internal nodes. As with this simple example, splits are not generally symmetric, in the sense that splits do not occur similarly on all branches.
Step43: However, if the variable splits responses into equal numbers of positive and negative values, then entropy is maximized, and we wish to know about the feature
Step44: The entropy calculation tells us how much additional information we would obtain with knowledge of the variable.
Step45: ID3
Step46: Consider a few variables from the titanic database
Step47: Here, we have selected passenger class (pclass), sex, port of embarkation (embarked), and a derived variable called adult. We can calculate the information gain for each of these.
Step48: Hence, the ID3 algorithm computes the information gain for each variable, selecting the one with the highest value (in this case, adult). In this way, it searches the "tree space" according to a greedy strategy.
Step49: If you have GraphViz installed, you can draw the resulting tree
Step50: Pruning
Step51: Test error of a bagged model is measured by estimating out-of-bag error.
Step52: This approach is an ensemble learning method, because it takes a set of weak learners, and combines them to construct a strong learner that is more robust, with lower generalization error.
Step53: With random forests, it is possible to quantify the relative importance of feature inputs for classification. In scikit-learn, the Gini index (recall, a measure of error reduction) is calculated for each internal node that splits on a particular feature of a given tree, which is multiplied by the number of samples that were routed to the node (this approximates the probability of reaching that node). For each variable, this quantity is averaged over the trees in the forest to yield a measure of importance.
Step54: RandomForestClassifier uses the Gini impurity index by default; one may instead use the entropy information gain as a criterion.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from scipy.optimize import fmin
data = pd.DataFrame({'x':np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0]),
'y':np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5])})
data.plot.scatter('x', 'y', s=100)
sum_of_squares = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2)
sum_of_squares([0,1], data.x, data.y)
b0,b1 = fmin(sum_of_squares, [0,1], args=(data.x, data.y))
b0,b1
axes = data.plot.scatter('x', 'y', s=50)
axes.plot([0,10], [b0, b0+b1*10])
axes.set_xlim(2, 9)
axes.set_ylim(0, 20)
axes = data.plot.scatter('x', 'y', s=50)
axes.plot([0,10], [b0, b0+b1*10])
for i,(xi, yi) in data.iterrows():
axes.plot([xi]*2, [yi, b0+b1*xi], 'k:')
axes.set_xlim(2, 9)
axes.set_ylim(0, 20)
sum_of_absval = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x))
b0,b1 = fmin(sum_of_absval, [0,1], args=(data.x,data.y))
print('\nintercept: {0:.2}, slope: {1:.2}'.format(b0,b1))
axes = data.plot.scatter('x', 'y', s=50)
axes.plot([0,10], [b0, b0+b1*10])
sum_squares_quad = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2)
b0,b1,b2 = fmin(sum_squares_quad, [1,1,-1], args=(data.x, data.y))
print('\nintercept: {0:.2}, x: {1:.2}, x2: {2:.2}'.format(b0,b1,b2))
axes = data.plot.scatter('x', 'y', s=50)
xvals = np.linspace(0, 10, 100)
axes.plot(xvals, b0 + b1*xvals + b2*(xvals**2))
sum_squares_cubic = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)
- theta[3]*(x**3)) ** 2)
wine = pd.read_table("../data/wine.dat", sep='\s+')
attributes = ['Grape',
'Alcohol',
'Malic acid',
'Ash',
'Alcalinity of ash',
'Magnesium',
'Total phenols',
'Flavanoids',
'Nonflavanoid phenols',
'Proanthocyanins',
'Color intensity',
'Hue',
'OD280/OD315 of diluted wines',
'Proline']
wine.columns = attributes
axes = wine.plot.scatter('Total phenols', 'Flavanoids', c='red')
phenols, flavanoids = wine[['Total phenols', 'Flavanoids']].T.values
b0,b1,b2,b3 = fmin(sum_squares_cubic, [0,1,-1,0], args=(phenols, flavanoids))
xvals = np.linspace(-2, 2)
axes.plot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3))
class Estimator(object):
def fit(self, X, y=None):
        """Fit model to data X (and y)."""
self.some_attribute = self.some_fitting_method(X, y)
return self
def predict(self, X_test):
Make prediction based on passed features
pred = self.make_prediction(X_test)
return pred
from sklearn.decomposition import PCA
wine_predictors = wine[wine.columns[1:]]
pca = PCA(n_components=2, whiten=True).fit(wine_predictors)
X_pca = pd.DataFrame(pca.transform(wine_predictors), columns=['Component 1' , 'Component 2'])
axes = X_pca.plot.scatter(x='Component 1' , y='Component 2', c=wine.Grape, cmap='Accent')
var_explained = pca.explained_variance_ratio_ * 100
axes.set_xlabel('First Component: {0:.1f}%'.format(var_explained[0]))
axes.set_ylabel('Second Component: {0:.1f}%'.format(var_explained[1]))
from sklearn import linear_model
straight_line = linear_model.LinearRegression()
straight_line.fit(data.x.values.reshape(-1, 1), data.y)
straight_line.coef_
axes = data.plot.scatter('x', 'y', s=50)
axes.plot(data.x, straight_line.predict(data.x.values[:, np.newaxis]), color='red',
linewidth=3)
from patsy import dmatrix
X = dmatrix('phenols + I(phenols**2) + I(phenols**3)')
pd.DataFrame(X).head()
poly_line = linear_model.LinearRegression(fit_intercept=False)
poly_line.fit(X, flavanoids)
poly_line.coef_
axes = wine.plot.scatter('Total phenols', 'Flavanoids', c='red')
axes.plot(xvals, poly_line.predict(dmatrix('xvals + I(xvals**2) + I(xvals**3)')), color='blue',
linewidth=3)
titanic = pd.read_excel("../data/titanic.xls", "titanic")
titanic.name.head()
jitter = np.random.normal(scale=0.02, size=len(titanic))
axes = (titanic.assign(logfar=np.log(titanic.fare), surv_jit=titanic.survived + jitter)
.plot.scatter('logfar', 'surv_jit', alpha=0.3))
axes.set_yticks([0,1])
axes.set_ylabel('survived')
axes.set_xlabel('log(fare)');
x = np.log(titanic.fare[titanic.fare>0])
y = titanic.survived[titanic.fare>0]
betas_titanic = fmin(sum_of_squares, [1,1], args=(x,y))
jitter = np.random.normal(scale=0.02, size=len(titanic))
axes = (titanic.assign(logfar=np.log(titanic.fare), surv_jit=titanic.survived + jitter)
.plot.scatter('logfar', 'surv_jit', alpha=0.3))
axes.set_yticks([0,1])
axes.set_ylabel('survived')
axes.set_xlabel('log(fare)')
axes.plot([0,7], [betas_titanic[0], betas_titanic[0] + betas_titanic[1]*7.])
logit = lambda p: np.log(p/(1.-p))
unit_interval = np.linspace(0,1)
plt.plot(unit_interval/(1-unit_interval), unit_interval)
plt.plot(logit(unit_interval), unit_interval)
invlogit = lambda x: 1. / (1 + np.exp(-x))
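# Round-trip check (illustration): the two transforms should invert each other.
p_check = np.linspace(0.01, 0.99, 5)
print(np.allclose(invlogit(logit(p_check)), p_check))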
def logistic_like(theta, x, y):
p = invlogit(theta[0] + theta[1] * x)
# Return negative of log-likelihood
return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p))
x, y = titanic[titanic.fare.notnull()][['fare', 'survived']].values.T
b0, b1 = fmin(logistic_like, [0.5,0], args=(x,y))
b0, b1
jitter = np.random.normal(scale=0.02, size=len(titanic))
axes = (titanic.assign(surv_jit=titanic.survived + jitter)
.plot.scatter('fare', 'surv_jit', alpha=0.3))
axes.set_yticks([0,1])
axes.set_ylabel('survived')
axes.set_xlabel('fare')
xvals = np.linspace(0, 600)
axes.plot(xvals, invlogit(b0+b1*xvals),c='red')
axes.set_xlim(0, 600)
from sklearn.cross_validation import train_test_split
X0 = x[:, np.newaxis]
X_train, X_test, y_train, y_test = train_test_split(X0, y)
from sklearn.linear_model import LogisticRegression
lrmod = LogisticRegression(C=1000)
lrmod.fit(X_train, y_train)
pred_train = lrmod.predict(X_train)
pred_test = lrmod.predict(X_test)
pd.crosstab(y_train, pred_train,
rownames=["Actual"], colnames=["Predicted"])
pd.crosstab(y_test, pred_test,
rownames=["Actual"], colnames=["Predicted"])
lrmod.fit(x[:, np.newaxis], y)
lrmod.coef_
# Write your answer here
import numpy as np
R = 1000
boot_samples = np.empty((R, len(lrmod.coef_[0])))
for i in np.arange(R):
boot_ind = np.random.randint(0, len(X0), len(X0))
y_i, X_i = y[boot_ind], X0[boot_ind]
lrmod_i = LogisticRegression(C=1000)
lrmod_i.fit(X_i, y_i)
boot_samples[i] = lrmod_i.coef_[0]
boot_samples.sort(axis=0)
boot_samples[:10]
boot_samples[-10:]
boot_interval = boot_samples[[25, 975], :].T
boot_interval
lrmod.coef_[0]
boot_samples.mean() - lrmod.coef_[0]
boot_var = ((boot_samples - boot_samples.mean()) ** 2).sum() / (R-1)
boot_var
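# Bootstrap standard error (illustration): the square root of the bootstrap variance.
boot_se = np.sqrt(boot_var)
boot_se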
# Write your answer here
wine.plot.scatter('Flavanoids', 'Malic acid')
wine.plot.scatter('Flavanoids', 'Malic acid', c=np.array(list('rgbc'))[wine.Grape-1])
centroids = (-1, 2), (-1, -1), (1, 1)
axes = wine.plot.scatter('Flavanoids', 'Malic acid')
axes.scatter(*np.transpose(centroids), c='r', lw=3, marker='+', s=100)
from scipy.spatial.distance import cdist
distances = cdist(centroids, wine[['Flavanoids', 'Malic acid']])
distances.shape
labels = distances.argmin(axis=0)
labels
axes = wine.plot.scatter('Flavanoids', 'Malic acid', c=np.array(list('rgbc'))[labels])
axes.scatter(*np.transpose(centroids), c='r', marker='+', lw=3, s=100)
centroids
labels
new_centroids = [wine.loc[labels==i, ['Flavanoids', 'Malic acid']].values.mean(0) for i in range(len(centroids))]
axes = wine.plot.scatter('Flavanoids', 'Malic acid', c=np.array(list('rgbc'))[labels])
axes.scatter(*np.transpose(new_centroids), c='r', marker='+', s=100, lw=3)
centroids = new_centroids
iterations = 200
for _ in range(iterations):
distances = cdist(centroids, wine[['Flavanoids', 'Malic acid']])
labels = distances.argmin(axis=0)
centroids = [wine.loc[labels==i, ['Flavanoids', 'Malic acid']].values.mean(0) for i in range(len(centroids))]
axes = wine.plot.scatter('Flavanoids', 'Malic acid', c=np.array(list('rgbc'))[labels])
axes.scatter(*np.transpose(centroids), c='r', marker='+', s=100, lw=3)
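# Within-cluster sum of squares (illustration), a common convergence diagnostic:
points = wine[['Flavanoids', 'Malic acid']].values
wcss = sum(((points[labels == i] - c)**2).sum() for i, c in enumerate(centroids))
print(wcss)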
from sklearn.cluster import KMeans
from numpy.random import RandomState
rng = RandomState(1)
# Instantiate model
kmeans = KMeans(n_clusters=3, random_state=rng)
# Fit model
kmeans.fit(wine[['Flavanoids', 'Malic acid']])
kmeans.labels_
kmeans.cluster_centers_
axes = wine.plot.scatter('Flavanoids', 'Malic acid', c=np.array(list('rgbc'))[labels])
axes.scatter(*kmeans.cluster_centers_.T, c='r', marker='+', s=100, lw=3)
## Write answer here
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set()
from sklearn.datasets import load_diabetes
# Predictors: "age" "sex" "bmi" "map" "tc" "ldl" "hdl" "tch" "ltg" "glu"
diabetes = load_diabetes()
y = diabetes['target']
bmi, ltg = diabetes['data'][:,[2,8]].T
plt.scatter(ltg, bmi, c=y, cmap="Reds")
plt.colorbar()
plt.xlabel('ltg'); plt.ylabel('bmi');
ltg_split = -0.01
plt.scatter(ltg, bmi, c=y, cmap="Reds")
plt.vlines(ltg_split, *plt.gca().get_ylim(), linestyles='dashed')
plt.colorbar()
plt.xlabel('ltg'); plt.ylabel('bmi');
bmi_split = -0.03
plt.scatter(ltg, bmi, c=y, cmap="Reds")
plt.vlines(ltg_split, *plt.gca().get_ylim(), linestyles='dashed')
plt.hlines(bmi_split, ltg_split, plt.gca().get_xlim()[1], linestyles='dashed')
plt.colorbar()
plt.xlabel('ltg'); plt.ylabel('bmi');
np.mean(y[(bmi>bmi_split) & (ltg>ltg_split)])
np.mean(y[(bmi<=bmi_split) & (ltg>ltg_split)])
np.mean(y[ltg<ltg_split])
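# A minimal piecewise-constant predictor built from the three regions above
# (illustration; uses the ltg_split and bmi_split thresholds defined earlier):
def predict_region(ltg_val, bmi_val):
    if ltg_val < ltg_split:
        return np.mean(y[ltg < ltg_split])
    elif bmi_val > bmi_split:
        return np.mean(y[(bmi > bmi_split) & (ltg > ltg_split)])
    else:
        return np.mean(y[(bmi <= bmi_split) & (ltg > ltg_split)])
predict_region(0.0, 0.0)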
# Write your answer here
import numpy as np
entropy = lambda p: -np.sum(p * np.log2(p)) if 0 not in p else 0
entropy([.4,.6])
entropy([0.5, 0.5])
pvals = np.linspace(0, 1)
plt.plot(pvals, [entropy([p,1-p]) for p in pvals])
gini = lambda p: 1. - (np.array(p)**2).sum()
pvals = np.linspace(0, 1)
plt.plot(pvals, [entropy([p,1-p])/2. for p in pvals], label='Entropy')
plt.plot(pvals, [gini([p,1-p]) for p in pvals], label='Gini')
plt.legend()
import numpy as np
def info_gain(X, y, feature):
# Calculates the information gain based on entropy
gain = 0
n = len(X)
# List the values that feature can take
values = list(set(X[feature]))
feature_counts = np.zeros(len(values))
E = np.zeros(len(values))
ivalue = 0
# Find where those values appear in X[feature] and the corresponding class
for value in values:
new_y = [y[i] for i,d in enumerate(X[feature].values) if d==value]
feature_counts[ivalue] += len(new_y)
# Get the values in newClasses
class_values = list(set(new_y))
class_counts = np.zeros(len(class_values))
iclass = 0
for v in class_values:
for c in new_y:
if c == v:
class_counts[iclass] += 1
iclass += 1
nc = float(np.sum(class_counts))
new_entropy = entropy([class_counts[c] / nc for c in range(len(class_values))])
E[ivalue] += new_entropy
        # Weight each branch's entropy by the fraction of samples routed to it
gain += float(feature_counts[ivalue])/n * E[ivalue]
ivalue += 1
return gain
titanic = pd.read_excel("../data/titanic.xls", "titanic")
titanic.head(1)
y = titanic['survived']
X = titanic[['pclass','sex','embarked']]
X['adult'] = titanic.age<17
info_gain(X, y, 'pclass')
info_gain(X, y, 'sex')
info_gain(X, y, 'embarked')
info_gain(X, y, 'adult')
wine = pd.read_table("../data/wine.dat", sep='\s+')
attributes = ['Alcohol',
'Malic acid',
'Ash',
'Alcalinity of ash',
'Magnesium',
'Total phenols',
'Flavanoids',
'Nonflavanoid phenols',
'Proanthocyanins',
'Color intensity',
'Hue',
'OD280/OD315 of diluted wines',
'Proline']
grape = wine.pop('region')
y = grape
wine.columns = attributes
X = wine
from sklearn import tree
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
X, y, test_size=0.4, random_state=0)
clf = tree.DecisionTreeClassifier(criterion='entropy',
max_features="auto",
min_samples_leaf=10)
clf.fit(X_train, y_train)
with open("wine.dot", 'w') as f:
f = tree.export_graphviz(clf, out_file=f)
! dot -Tpng wine.dot -o wine.png
for i,x in enumerate(X.columns):
print(i,x)
from IPython.core.display import Image
Image("wine.png")
preds = clf.predict(X_test)
pd.crosstab(y_test, preds, rownames=['actual'],
colnames=['prediction'])
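# Overall accuracy on the held-out set (illustration):
print((preds == y_test).mean())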
from sklearn.ensemble import BaggingClassifier
bc = BaggingClassifier(n_jobs=4, oob_score=True)
bc
bc.fit(X_train, y_train)
preds = bc.predict(X_test)
pd.crosstab(y_test, preds, rownames=['actual'],
colnames=['prediction'])
bc.oob_score_
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_jobs=4)
rf.fit(X_train, y_train)
preds = rf.predict(X_test)
pd.crosstab(y_test, preds, rownames=['actual'],
colnames=['prediction'])
importances = rf.feature_importances_
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. %s (%f)" % (f + 1, X.columns[indices[f]], importances[indices[f]]))
plt.figure()
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", align="center")
plt.xticks(range(X.shape[1]), X.columns[indices], rotation=90)
plt.xlim([-1, X.shape[1]]);
rf = RandomForestClassifier(n_jobs=4, criterion='entropy')
rf.fit(X_train, y_train)
importances = rf.feature_importances_
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. %s (%f)" % (f + 1, X.columns[indices[f]], importances[indices[f]]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install pysteps
Step2: Getting the example data
Step3: Next, we need to create a default configuration file that points to the downloaded data.
Step4: Since pysteps was already initialized in this notebook, we need to load the new configuration file and update the default configuration.
Step5: Let's see what the default parameters look like (these are stored in the
Step6: This should have printed the following lines
Step7: Let's have a look at the values returned by the load_dataset() function.
Step8: Note that the shape of the precipitation is 4 times smaller than the raw MRMS data (3500 x 7000).
Step9: Time to make a nowcast
Step10: Let's see what this 'training' precipitation event looks like using the pysteps.visualization.plot_precip_field function.
Step11: Did you note the shaded grey regions? Those are the regions were no valid observations where available to estimate the precipitation (e.g., due to ground clutter, no radar coverage, or radar beam blockage).
Step12: The histogram shows that rain rate values have a non-Gaussian and asymmetric distribution that is bounded at zero. Also, the probability of occurrence decays extremely fast with increasing rain rate values (note the logarithmic y-axis).
Step13: Let's inspect the resulting transformed precipitation distribution.
Step14: That looks more like a log-normal distribution. Note the large peak at -15dB. That peak corresponds to "zero" (below threshold) precipitation. The jump with no data in between -15 and -10 dB is caused by the precision of the data, which we had set to 1 decimal. Hence, the lowest precipitation intensities (above zero) are 0.1 mm/h (= -10 dB).
Step15: Extrapolate the observations
Step16: Let's inspect the last forecast time (hence this is the forecast rainfall an hour ahead).
Step17: Evaluate the forecast quality
|
<ASSISTANT_TASK:>
Python Code:
# These libraries are needed for the pygrib library in Colab.
# Note that is needed if you install pygrib using pip.
# If you use conda, the libraries will be installed automatically.
! apt-get install libeccodes-dev libproj-dev
# Install the python packages
! pip install pyproj
! pip install pygrib
# Uninstall existing shapely
# We will re-install shapely in the next step by ignoring the binary
# wheels to make it compatible with other modules that depend on
# GEOS, such as Cartopy (used here).
!pip uninstall --yes shapely
# To install cartopy in Colab using pip, we need to install the library
# dependencies first.
!apt-get install -qq libgdal-dev libgeos-dev
!pip install shapely --no-binary shapely
!pip install cartopy
# ! pip install git+https://github.com/pySTEPS/pysteps
! pip install pysteps
# Import the helper functions
from pysteps.datasets import download_pysteps_data, create_default_pystepsrc
# Download the pysteps example data into the "pysteps_data" directory
download_pysteps_data("pysteps_data")
# If the configuration file is placed in one of the default locations
# (https://pysteps.readthedocs.io/en/latest/user_guide/set_pystepsrc.html#configuration-file-lookup)
# it will be loaded automatically when pysteps is imported.
config_file_path = create_default_pystepsrc("pysteps_data")
# Import pysteps and load the new configuration file
import pysteps
_ = pysteps.load_config_file(config_file_path, verbose=True)
# The default parameters are stored in pysteps.rcparams.
from pprint import pprint
pprint(pysteps.rcparams.data_sources['mrms'])
from pysteps.datasets import load_dataset
# We'll import the time module to measure the time the importer needed
import time
start_time = time.time()
# Import the data
precipitation, metadata, timestep = load_dataset('mrms',frames=35) # precipitation in mm/h
end_time = time.time()
print("Precipitation data imported")
print("Importing the data took ", (end_time - start_time), " seconds")
# Let's inspect the shape of the imported data array
precipitation.shape
timestep # In minutes
pprint(metadata)
# precipitation[0:5] -> Used to find motion (past data). Let's call it training precip.
train_precip = precipitation[0:5]
# precipitation[5:] -> Used to evaluate forecasts (future data, not available in "real" forecast situation)
# Let's call it observed precipitation because we will use it to compare our forecast with the actual observations.
observed_precip = precipitation[5:]
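# Quick shape check (illustration): 5 frames are used for motion estimation,
# and the remaining frames are held out for verification.
print(train_precip.shape, observed_precip.shape)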
from matplotlib import pyplot as plt
from pysteps.visualization import plot_precip_field
# Set a figure size that looks nice ;)
plt.figure(figsize=(9, 5), dpi=100)
# Plot the last rainfall field in the "training" data.
# train_precip[-1] -> Last available composite for nowcasting.
plot_precip_field(train_precip[-1], geodata=metadata, axis="off")
plt.show() # (This line is actually not needed if you are using jupyter notebooks)
import numpy as np
# Let's define some plotting default parameters for the next plots
# Note: This is not strictly needed.
plt.rc('figure', figsize=(4,4))
plt.rc('figure', dpi=100)
plt.rc('font', size=14) # controls default text sizes
plt.rc('axes', titlesize=14) # fontsize of the axes title
plt.rc('axes', labelsize=14) # fontsize of the x and y labels
plt.rc('xtick', labelsize=14) # fontsize of the tick labels
plt.rc('ytick', labelsize=14) # fontsize of the tick labels
# Let's use the last available composite for nowcasting from the "training" data (train_precip[-1])
# Also, we will discard any invalid value.
valid_precip_values = train_precip[-1][~np.isnan(train_precip[-1])]
# Plot the histogram
bins= np.concatenate( ([-0.01,0.01], np.linspace(1,40,39)))
plt.hist(valid_precip_values,bins=bins,log=True, edgecolor='black')
plt.autoscale(tight=True, axis='x')
plt.xlabel("Rainfall intensity [mm/h]")
plt.ylabel("Counts")
plt.title('Precipitation rain rate histogram in mm/h units')
plt.show()
from pysteps.utils import transformation
# Log-transform the data to dBR.
# The threshold of 0.1 mm/h sets the fill value to -15 dBR.
train_precip_dbr, metadata_dbr = transformation.dB_transform(train_precip, metadata,
threshold=0.1,
zerovalue=-15.0)
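# Sanity check (illustration): the dB transform is 10*log10(R), so 1 mm/h
# maps to 0 dBR and the 0.1 mm/h threshold to -10 dBR.
print(10 * np.log10([0.1, 1.0, 10.0]))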
# Only use the valid data!
valid_precip_dbr = train_precip_dbr[-1][~np.isnan(train_precip_dbr[-1])]
plt.figure(figsize=(4, 4), dpi=100)
# Plot the histogram
counts, bins, _ = plt.hist(valid_precip_dbr, bins=40, log=True, edgecolor="black")
plt.autoscale(tight=True, axis="x")
plt.xlabel("Rainfall intensity [dB]")
plt.ylabel("Counts")
plt.title("Precipitation rain rate histogram in dB units")
# Let's add a lognormal distribution that fits that data to the plot.
import scipy
bin_center = (bins[1:] + bins[:-1]) * 0.5
bin_width = np.diff(bins)
# We will only use one composite to fit the function to speed up things.
# First, remove the no-precipitation areas.
precip_to_fit = valid_precip_dbr[valid_precip_dbr > -15]
fit_params = scipy.stats.lognorm.fit(precip_to_fit)
fitted_pdf = scipy.stats.lognorm.pdf(bin_center, *fit_params)
# Multiply pdf by the bin width and the total number of grid points: pdf -> total counts per bin.
fitted_pdf = fitted_pdf * bin_width * precip_to_fit.size
# Plot the log-normal fit
plt.plot(bin_center, fitted_pdf, label="Fitted log-normal")
plt.legend()
plt.show()
# Estimate the motion field with Lucas-Kanade
from pysteps import motion
from pysteps.visualization import plot_precip_field, quiver
# Import the Lucas-Kanade optical flow algorithm
oflow_method = motion.get_method("LK")
# Estimate the motion field from the training data (in dBR)
motion_field = oflow_method(train_precip_dbr)
## Plot the motion field.
# Use a figure size that looks nice ;)
plt.figure(figsize=(9, 5), dpi=100)
plt.title("Estimated motion field with the Lukas-Kanade algorithm")
# Plot the last rainfall field in the "training" data.
# Remember to use the mm/h precipitation data since plot_precip_field assumes
# mm/h by default. You can change this behavior using the "units" keyword.
plot_precip_field(train_precip[-1], geodata=metadata, axis="off")
# Plot the motion field vectors
quiver(motion_field, geodata=metadata, step=40)
plt.show()
from pysteps import nowcasts
start = time.time()
# Extrapolate the last radar observation
extrapolate = nowcasts.get_method("extrapolation")
# You can use the precipitation observations directly in mm/h for this step.
last_observation = train_precip[-1]
last_observation[~np.isfinite(last_observation)] = metadata["zerovalue"]
# We set the number of leadtimes (the length of the forecast horizon) to the
# length of the observed/verification preipitation data. In this way, we'll get
# a forecast that covers these time intervals.
n_leadtimes = observed_precip.shape[0]
# Advect the most recent radar rainfall field and make the nowcast.
precip_forecast = extrapolate(train_precip[-1], motion_field, n_leadtimes)
# This shows the shape of the resulting array with [time intervals, rows, cols]
print("The shape of the resulting array is: ", precip_forecast.shape)
end = time.time()
print("Advecting the radar rainfall fields took ", (end - start), " seconds")
# Plot precipitation at the end of the forecast period.
plt.figure(figsize=(9, 5), dpi=100)
plot_precip_field(precip_forecast[-1], geodata=metadata, axis="off")
plt.show()
from pysteps import verification
fss = verification.get_method("FSS")
# Compute fractions skill score (FSS) for all lead times for different scales using a 1 mm/h detection threshold.
scales = [
2,
4,
8,
16,
32,
64,
] # In grid points.
scales_in_km = np.array(scales)*4
# Set the threshold
thr = 1.0 # in mm/h
score = []
# Calculate the FSS for every lead time and all predefined scales.
for i in range(n_leadtimes):
score_ = []
for scale in scales:
score_.append(
fss(precip_forecast[i, :, :], observed_precip[i, :, :], thr, scale)
)
score.append(score_)
# Now plot it
plt.figure()
x = np.arange(1, n_leadtimes+1) * timestep
plt.plot(x, score, lw=2.0)
plt.xlabel("Lead time [min]")
plt.ylabel("FSS ( > 1.0 mm/h ) ")
plt.title("Fractions Skill Score")
plt.legend(
scales_in_km,
title="Scale [km]",
loc="center left",
bbox_to_anchor=(1.01, 0.5),
bbox_transform=plt.gca().transAxes,
)
plt.autoscale(axis="x", tight=True)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Create new features
Step3: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
Step4: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
Step5: Find what features had non-zero weight.
Step6: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
Step7: Next, we write a loop that does the following
Step8: QUIZ QUESTIONS
Step9: QUIZ QUESTION
Step10: Limit the number of nonzero weights
Step11: Exploring the larger range of values to find a narrow range with the desired sparsity
Step12: Now, implement a loop that search through this space of possible l1_penalty values
Step13: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
Step14: QUIZ QUESTIONS
Step15: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20)
Step16: QUIZ QUESTIONS
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
import numpy as np
sales = graphlab.SFrame('kc_house_data.gl/')
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to float, before creating a new feature.
sales['floors'] = sales['floors'].astype(float)
sales['floors_square'] = sales['floors']*sales['floors']
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
model_all.get('coefficients').print_rows(20)
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
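# Split sizes (illustration): roughly 45%/45%/10% of the full dataset.
print len(training), len(validation), len(testing)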
l1_all = np.logspace(1,7,num=13)
def find_best_lasso_0(l1_penalties, training, validation, features, target):
the_model = None
the_RSS = float('inf')
for l1 in l1_penalties:
model = graphlab.linear_regression.create(training, l1_penalty=l1, l2_penalty=0.,
verbose=False, validation_set=None,
target=target,
features=features)
predictions = model.predict(validation)
errors = predictions - validation[target]
RSS = np.dot(errors, errors)
if RSS < the_RSS:
the_RSS = RSS
the_model = model
return the_model
best_model = find_best_lasso_0(l1_all, training, validation, all_features, 'price')
predictions = best_model.predict(testing)
errors = predictions - testing['price']
RSS = np.dot(errors, errors)
print RSS
best_model.get('coefficients')
best_model['coefficients']['value'].nnz()
max_nonzeros = 7
l1_penalty_values = np.logspace(8, 10, num=20)
nnz = []
for l1 in l1_penalty_values:
model = graphlab.linear_regression.create(training, l1_penalty=l1, l2_penalty=0.,
verbose=False,
validation_set=None,
target='price',
features=all_features)
nnz.append(model['coefficients']['value'].nnz())
nnz
nnz[-6]
l1_penalty_min = l1_penalty_values[-6]
l1_penalty_max = l1_penalty_values[-5]
print l1_penalty_min
print l1_penalty_max
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
def find_best_lasso(l1_penalties, nonzero, training, validation, features, target):
the_model = None
the_RSS = float('inf')
for l1 in l1_penalties:
model = graphlab.linear_regression.create(training, l1_penalty=l1, l2_penalty=0.,
verbose=False, validation_set=None,
target=target,
features=features)
predictions = model.predict(validation)
errors = predictions - validation[target]
RSS = np.dot(errors, errors)
print RSS, the_RSS
print model['coefficients']['value'].nnz(), nonzero
if (RSS < the_RSS) and (model['coefficients']['value'].nnz() == nonzero):
the_RSS = RSS
the_model = model
return the_model
the_model = find_best_lasso(l1_penalty_values, max_nonzeros, training, validation, all_features, 'price')
the_model['coefficients'].print_rows(18)
the_model
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This little example shows a lot about the Python typing system. The variable a is not statically declared, after all it can contain only one type of data
Step2: The function works as expected, just echoes the given parameter
Step3: Pretty straightforward, isn't it? Well, if you come from a statically compiled language such as C or C++ you should be at least puzzled. What is a? I mean
Step4: there is no need to specify the type of the two input variables. The object a (the object contained in the variable a) shall be able to sum with the object b. This is a very beautiful and simple implementation of the polymorphism concept. Python functions are polymorphic simply because they accept everything and trust the input data to be able to perform some actions.
Step5: As you can see it is perfectly polymorphic
Step6: Ouch! Seems that the len() function is smart enough to deal with dictionaries, but not with integers. Well, after all, the length of an integer is not defined.
Step7: Very straightforward
Step8: A very simple class, as you can see, just enough to exemplify polymorphism. The Room class accepts a door variable, and the type of this variable is not specified. Duck typing in action
Step9: Both represent a door that can be open or closed, and they implement the concept in two different ways
|
<ASSISTANT_TASK:>
Python Code:
a = 5
print(a)
print(type(a))
print(hex(id(a)))
a = 'five'
print(a)
print(type(a))
print(hex(id(a)))
def echo(a):
return a
print(echo(5))
print(echo('five'))
def sum(a, b):
return a + b
l = [1, 2, 3]
print(len(l))
s = "Just a sentence"
print(len(s))
d = {'a': 1, 'b': 2}
print(len(d))
i = 5
try:
print(len(i))
except TypeError as e:
print(e)
print(l.__len__())
print(s.__len__())
print(d.__len__())
try:
print(i.__len__())
except AttributeError as e:
print(e)
class Room:
def __init__(self, door):
self.door = door
def open(self):
self.door.open()
def close(self):
self.door.close()
def is_open(self):
return self.door.is_open()
class Door:
def __init__(self):
self.status = "closed"
def open(self):
self.status = "open"
def close(self):
self.status = "closed"
def is_open(self):
return self.status == "open"
class BooleanDoor:
def __init__(self):
self.status = True
def open(self):
self.status = True
def close(self):
self.status = False
def is_open(self):
return self.status
door = Door()
bool_door = BooleanDoor()
room = Room(door)
bool_room = Room(bool_door)
room.open()
print(room.is_open())
room.close()
print(room.is_open())
bool_room.open()
print(bool_room.is_open())
bool_room.close()
print(bool_room.is_open())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download Data and Merge into DataFrame
Step2: Construct Figure
Step3: Create Animation and Save
Step4: Print Time to Run
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib
matplotlib.use("Agg")
import fredpy as fp
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('classic')
import matplotlib.animation as animation
import os
import time
# Approximately when the program started
start_time = time.time()
# start and end dates
start_date = '1965-01-01'
end_date = '2100-01-01'
file_name = '../video/US_Treasury_Yield_Curve_Animation'
# Download data into Fred objects
y1m= fp.series('DTB4WK')
y3m= fp.series('DTB3')
y6m= fp.series('DTB6')
y1 = fp.series('DGS1')
y5 = fp.series('DGS5')
y10= fp.series('DGS10')
y20= fp.series('DGS20')
y30= fp.series('DGS30')
# Give the series names
y1m.data.name = '1 mo'
y3m.data.name = '3 mo'
y6m.data.name = '6 mo'
y1.data.name = '1 yr'
y5.data.name = '5 yr'
y10.data.name = '10 yr'
y20.data.name = '20 yr'
y30.data.name = '30 yr'
yields = pd.concat([y1m.data,y3m.data,y6m.data,y1.data,y5.data,y10.data,y20.data,y30.data],axis=1)
yields = yields.loc[start_date:end_date]
yields = yields.dropna(thresh=1)
N = len(yields.index)
print('Date range: '+yields.index[0].strftime('%b %d, %Y')+' to '+yields.index[-1].strftime('%b %d, %Y'))
# Initialize figure
fig = plt.figure(figsize=(16,9))
ax = fig.add_subplot(1, 1, 1)
line, = ax.plot([], [], lw=8)
ax.grid()
ax.set_xlim(0,7)
ax.set_ylim(0,18)
ax.set_xticks(range(8))
ax.set_yticks([2,4,6,8,10,12,14,16,18])
xlabels = ['1m','3m','6m','1y','5y','10y','20y','30y']
ylabels = [2,4,6,8,10,12,14,16,18]
ax.set_xticklabels(xlabels,fontsize=20)
ax.set_yticklabels(ylabels,fontsize=20)
figure_title = 'U.S. Treasury Bond Yield Curve'
figure_xlabel = 'Time to maturity'
figure_ylabel = 'Percent'
plt.text(0.5, 1.03, figure_title,horizontalalignment='center',fontsize=30,transform = ax.transAxes)
plt.text(0.5, -.1, figure_xlabel,horizontalalignment='center',fontsize=25,transform = ax.transAxes)
plt.text(-0.05, .5, figure_ylabel,horizontalalignment='center',fontsize=25,rotation='vertical',transform = ax.transAxes)
ax.text(5.75,.25, 'Created by Brian C Jenkins',fontsize=11, color='black',alpha=0.5)
dateText = ax.text(0.975, 16.625, '',fontsize=18,horizontalalignment='right')
# Initialization function
def init_func():
line.set_data([], [])
return line,
# The animation function
def animate(i):
global yields
x = [0,1,2,3,4,5,6,7]
y = yields.iloc[i]
line.set_data(x, y)
dateText.set_text(yields.index[i].strftime('%b %d, %Y'))
return line ,dateText
# Set up the writer
Writer = animation.writers['ffmpeg']
writer = Writer(fps=25, metadata=dict(artist='Brian C Jenkins'), bitrate=3000)
# Make the animation
anim = animation.FuncAnimation(fig, animate, init_func=init_func,frames=N, interval=20, blit=True)
# Create a directory called 'Video' in the parent directory if it doesn't exist
try:
os.mkdir('../Video')
except:
pass
# Save the animation as .mp4
anim.save(file_name+'.mp4', writer = writer)
# Convert the .mp4 to .ogv
# os.system('ffmpeg -i '+file_name+'.mp4 -acodec libvorbis -ac 2 -ab 128k -ar 44100 -b:v 1800k '+file_name+'.ogv')
# Print runtime
seconds = time.time() - start_time
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
print("%dh %02dm %02ds"% (h, m, s))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is the text of the first review
Step2: CountVectorizer converts a collection of text documents to a matrix of token counts (part of sklearn.feature_extraction.text).
Step3: fit_transform(trn) learns the vocabulary from the training set and also transforms the training set into a term-document matrix. Since we have to apply the same transformation to the validation set, the second line uses just the method transform(val). trn_term_doc and val_term_doc are sparse matrices. trn_term_doc[i] represents training document i and contains a count for each word in the vocabulary.
Step4: Naive Bayes
Step5: Here is the formula for Naive Bayes.
Step6: ...and binarized Naive Bayes.
Step7: Logistic regression
Step8: ...and the regularized version
Step9: Trigram with NB features
Step10: Here we fit regularized logistic regression where the features are the trigrams.
Step11: Here is the $\text{log-count ratio}$ r.
Step12: Here we fit regularized logistic regression where the features are the trigrams' log-count ratios.
Step13: fastai NBSVM++
|
<ASSISTANT_TASK:>
Python Code:
# Assumed imports from the original fastai (v0.7) course notebook:
from fastai.nlp import *
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

PATH='data/aclImdb/'
names = ['neg','pos']
%ls {PATH}
%ls {PATH}train
%ls {PATH}train/pos | head
trn,trn_y = texts_from_folders(f'{PATH}train',names)
val,val_y = texts_from_folders(f'{PATH}test',names)
trn[0]
trn_y[0]
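# tokenize is assumed to be the fastai tokenizer (imported above); unlike a
# plain whitespace split, it separates punctuation into its own tokens.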
veczr = CountVectorizer(tokenizer=tokenize)
trn_term_doc = veczr.fit_transform(trn)
val_term_doc = veczr.transform(val)
trn_term_doc
trn_term_doc[0]
vocab = veczr.get_feature_names(); vocab[5000:5005]
w0 = set([o.lower() for o in trn[0].split(' ')]); w0
len(w0)
veczr.vocabulary_['absurd']
trn_term_doc[0,1297]
trn_term_doc[0,5000]
def pr(y_i):
    p = x[y==y_i].sum(0)
    return (p+1) / ((y==y_i).sum()+1)
x=trn_term_doc
y=trn_y
r = np.log(pr(1)/pr(0))
b = np.log((y==1).mean() / (y==0).mean())
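# Naive Bayes decision rule: predict positive when x·r + b > 0, i.e. when the
# log-likelihood ratio plus the log prior-odds favours the positive class.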
pre_preds = val_term_doc @ r.T + b
preds = pre_preds.T>0
(preds==val_y).mean()
x=trn_term_doc.sign()
r = np.log(pr(1)/pr(0))
pre_preds = val_term_doc.sign() @ r.T + b
preds = pre_preds.T>0
(preds==val_y).mean()
m = LogisticRegression(C=1e8, dual=True)
m.fit(x, y)
preds = m.predict(val_term_doc)
(preds==val_y).mean()
m = LogisticRegression(C=1e8, dual=True)
m.fit(trn_term_doc.sign(), y)
preds = m.predict(val_term_doc.sign())
(preds==val_y).mean()
m = LogisticRegression(C=0.1, dual=True)
m.fit(x, y)
preds = m.predict(val_term_doc)
(preds==val_y).mean()
m = LogisticRegression(C=0.1, dual=True)
m.fit(trn_term_doc.sign(), y)
preds = m.predict(val_term_doc.sign())
(preds==val_y).mean()
veczr = CountVectorizer(ngram_range=(1,3), tokenizer=tokenize, max_features=800000)
trn_term_doc = veczr.fit_transform(trn)
val_term_doc = veczr.transform(val)
trn_term_doc.shape
vocab = veczr.get_feature_names()
vocab[200000:200005]
y=trn_y
x=trn_term_doc.sign()
val_x = val_term_doc.sign()
r = np.log(pr(1) / pr(0))
b = np.log((y==1).mean() / (y==0).mean())
m = LogisticRegression(C=0.1, dual=True)
m.fit(x, y);
preds = m.predict(val_x)
(preds.T==val_y).mean()
r.shape, r
np.exp(r)
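# NBSVM trick: elementwise-scale the binarized features by the log-count
# ratios r before fitting the regularized linear model.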
x_nb = x.multiply(r)
m = LogisticRegression(dual=True, C=0.1)
m.fit(x_nb, y);
val_x_nb = val_x.multiply(r)
preds = m.predict(val_x_nb)
(preds.T==val_y).mean()
sl=2000
# Here is how we get a model from a bag of words
md = TextClassifierData.from_bow(trn_term_doc, trn_y, val_term_doc, val_y, sl)
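# dotprod_nb_learner builds fastai's NBSVM++ variant: a regularized linear
# model over NB-scaled bag-of-words features (with a learned interpolation).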
learner = md.dotprod_nb_learner()
learner.fit(0.02, 1, wds=1e-6, cycle_len=1)
learner.fit(0.02, 2, wds=1e-6, cycle_len=1)
learner.fit(0.02, 2, wds=1e-6, cycle_len=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: APW RV's
Step2: RAVE
Step3: Get only ones where both stars have RV measurements
|
<ASSISTANT_TASK:>
Python Code:
from os import path
# Third-party
from astropy.io import ascii
from astropy.table import Table
import astropy.coordinates as coord
import astropy.units as u
from astropy.constants import G, c
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize
import numpy as np
plt.style.use('apw-notebook')
%matplotlib inline
import sqlalchemy
from gwb.data import TGASData
from comoving_rv.log import logger
from comoving_rv.db import Session, Base, db_connect
from comoving_rv.db.model import (Run, Observation, TGASSource, SimbadInfo, PriorRV,
SpectralLineInfo, SpectralLineMeasurement, RVMeasurement,
GroupToObservations)
# base_path = '/Volumes/ProjectData/gaia-comoving-followup/'
base_path = '../../data/'
db_path = path.join(base_path, 'db.sqlite')
engine = db_connect(db_path)
session = Session()
base_q = session.query(Observation).join(RVMeasurement).filter(RVMeasurement.rv != None)
group_ids = np.array([x[0]
for x in session.query(Observation.group_id).distinct().all()
if x[0] is not None and x[0] > 0 and x[0] != 10])
len(group_ids)
star1_dicts = []
star2_dicts = []
for gid in np.unique(group_ids):
    try:
        gto = session.query(GroupToObservations).filter(GroupToObservations.group_id == gid).one()
        obs1 = base_q.filter(Observation.id == gto.observation1_id).one()
        obs2 = base_q.filter(Observation.id == gto.observation2_id).one()
    except sqlalchemy.orm.exc.NoResultFound:
        print('Skipping group {0}'.format(gid))
        continue
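    # 6563 Å is the rest wavelength of Hα; dividing the line-centroid offset
    # by it and multiplying by c converts the wavelength shift to a velocity.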
    raw_rv_diff = (obs1.measurements[0].x0 - obs2.measurements[0].x0) / 6563. * c.to(u.km/u.s)
    mean_rv = np.mean([obs1.rv_measurement.rv.value,
                       obs2.rv_measurement.rv.value]) * obs2.rv_measurement.rv.unit
    rv1 = mean_rv + raw_rv_diff/2.
    rv_err1 = obs1.measurements[0].x0_error / 6563. * c.to(u.km/u.s)
    rv2 = mean_rv - raw_rv_diff/2.
    rv_err2 = obs2.measurements[0].x0_error / 6563. * c.to(u.km/u.s)
    # -------
    # Star 1:
    row_dict = dict()
    star1 = obs1.tgas_star()
    for k in star1._data.dtype.names:
        if k in ['J', 'J_err', 'H', 'H_err', 'Ks', 'Ks_err']: continue
        row_dict[k] = star1._data[k]
    row_dict['RV'] = rv1.to(u.km/u.s).value
    row_dict['RV_err'] = rv_err1.to(u.km/u.s).value
    row_dict['group_id'] = gid
    star1_dicts.append(row_dict)
    # -------
    # Star 2:
    row_dict = dict()
    star2 = obs2.tgas_star()
    for k in star2._data.dtype.names:
        if k in ['J', 'J_err', 'H', 'H_err', 'Ks', 'Ks_err']: continue
        row_dict[k] = star2._data[k]
    row_dict['RV'] = rv2.to(u.km/u.s).value
    row_dict['RV_err'] = rv_err2.to(u.km/u.s).value
    row_dict['group_id'] = gid
    star2_dicts.append(row_dict)
tbl1 = Table(star1_dicts)
tbl2 = Table(star2_dicts)
tbl1.write('../../data/tgas_apw1.fits', overwrite=True)
tbl2.write('../../data/tgas_apw2.fits', overwrite=True)
tgas = TGASData('../../../gaia-comoving-stars/data/stacked_tgas.fits')
star = ascii.read('../../../gaia-comoving-stars/paper/t1-1-star.txt')
rave_stars = star[(star['group_size'] == 2) & (~star['rv'].mask)]
rave_stars = rave_stars.group_by('group_id')
group_idx = np.array([i for i,g in enumerate(rave_stars.groups) if len(g) > 1])
rave_stars = rave_stars.groups[group_idx]
star1_dicts = []
star2_dicts = []
for gid in np.unique(rave_stars['group_id']):
    rows = rave_stars[rave_stars['group_id'] == gid]
    if len(rows) != 2:
        print("skipping group {0} ({1})".format(gid, len(rows)))
        continue
    i1 = np.where(tgas._data['source_id'] == rows[0]['tgas_source_id'])[0][0]
    i2 = np.where(tgas._data['source_id'] == rows[1]['tgas_source_id'])[0][0]
    star1 = tgas[i1]
    star2 = tgas[i2]
    # -------
    # Star 1:
    row_dict = dict()
    for k in star1._data.dtype.names:
        if k in ['J', 'J_err', 'H', 'H_err', 'Ks', 'Ks_err']: continue
        row_dict[k] = star1._data[k]
    row_dict['RV'] = rows[0]['rv']
    row_dict['RV_err'] = rows[0]['erv']
    row_dict['group_id'] = gid
    star1_dicts.append(row_dict)
    # -------
    # Star 2:
    row_dict = dict()
    for k in star2._data.dtype.names:
        if k in ['J', 'J_err', 'H', 'H_err', 'Ks', 'Ks_err']: continue
        row_dict[k] = star2._data[k]
    row_dict['RV'] = rows[1]['rv']
    row_dict['RV_err'] = rows[1]['erv']
    row_dict['group_id'] = gid
    star2_dicts.append(row_dict)
tbl1 = Table(star1_dicts)
tbl2 = Table(star2_dicts)
print(len(tbl1))
tbl1.write('../../data/tgas_rave1.fits', overwrite=True)
tbl2.write('../../data/tgas_rave2.fits', overwrite=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Provinces
Step2: Departments
Step3: Municipalities
Step4: The polygons of some municipalities are stored as separate rows. They are merged to resolve the duplicated municipality ids.
Step5: Example
Step6: The "dissolve" method is applied to merge the municipality polygons, producing a single row per id without altering the shape of the polygons.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from simpledbf import Dbf5
import geopandas as gpd
import requests
import zipfile
import io
import os
%matplotlib inline
PROVINCIAS_URL = "http://www.ign.gob.ar/descargas/geodatos/SHAPES/ign_provincia.zip"
DEPARTAMENTOS_URL = "http://www.ign.gob.ar/descargas/geodatos/SHAPES/ign_departamento.zip"
MUNICIPIOS_URL = "http://www.ign.gob.ar/descargas/geodatos/SHAPES/ign_municipio.zip"
PROVINCIAS_OUTPUT = "provincias"
DEPARTAMENTOS_OUTPUT = "departamentos"
MUNICIPIOS_OUTPUT = "municipios"
def download_and_unzip(url):
    print('Downloading shapefile...')
    r = requests.get(url)
    z = zipfile.ZipFile(io.BytesIO(r.content))
    print("Done")
    z.extractall(path=".")  # extract to folder
    filenames = [y for y in sorted(z.namelist()) for ending in ['dbf', 'prj', 'shp', 'shx'] if y.endswith(ending)]
    print(filenames)
download_and_unzip(PROVINCIAS_URL)
provincias = gpd.read_file("Provincia")
provincias.head()
os.makedirs(PROVINCIAS_OUTPUT, exist_ok=True)  # to_file needs the target directory to exist
provincias.to_file(os.path.join(PROVINCIAS_OUTPUT, PROVINCIAS_OUTPUT) + ".shp")
download_and_unzip(DEPARTAMENTOS_URL)
departamentos = gpd.read_file("Departamento")
departamentos.head()
# fix the department id by dropping the leading digit
departamentos["IN1"] = departamentos["IN1"].str[1:]
os.makedirs(DEPARTAMENTOS_OUTPUT, exist_ok=True)  # to_file needs the target directory to exist
departamentos.to_file(os.path.join(DEPARTAMENTOS_OUTPUT, DEPARTAMENTOS_OUTPUT) + ".shp")
download_and_unzip(MUNICIPIOS_URL)
municipios = gpd.read_file("Municipio")
municipios.head()
print("Existen {} municipios con id duplicado, tienen geometrías POLYGON separadas".format(
len(municipios) - len(municipios.drop_duplicates("IN1"))
))
len(municipios), len(municipios.drop_duplicates("IN1"))
municipios[municipios["IN1"] == "380224"]
municipios[municipios["IN1"] == "380224"].plot()
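# dissolve groups rows by IN1 and unions their geometries, leaving a single
# (MULTI)POLYGON row per municipality id.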
municipios_dissolved = municipios.dissolve(by='IN1').reset_index()
municipios_dissolved[municipios_dissolved["IN1"] == "380224"]
municipios_dissolved[municipios_dissolved["IN1"] == "380224"].plot()
os.makedirs(MUNICIPIOS_OUTPUT, exist_ok=True)  # to_file needs the target directory to exist
municipios_dissolved.to_file(os.path.join(MUNICIPIOS_OUTPUT, MUNICIPIOS_OUTPUT) + ".shp")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: CKY parser
Step10: Agenda
Step11: Let's see how it works
Step12: we can push items into the agenda
Step13: and the agenda will make sure there are no duplicates
Step15: Axioms (CKY)
Step17: Optimised version
Step18: Scan
Step19: Complete
Step20: Forest from complete items
Step21: Complete program
Step22: Goal
|
<ASSISTANT_TASK:>
Python Code:
from cfg import read_grammar_rules
from cfg import WCFG
from rule import Rule
from symbol import is_terminal, is_nonterminal, make_symbol
# for convenience we will use `defaultdict` from the package collections
# which allows us to define a default constructor for values
from collections import defaultdict
G1 = WCFG(read_grammar_rules(open('examples/arithmetic', 'r')))
print G1
class Item(object):  # this class is also available in `item.py`
    """A dotted rule used in CKY/Earley."""

    def __init__(self, rule, dots):
        assert len(dots) > 0, 'I do not accept an empty list of dots'
        self.rule_ = rule
        self.dots_ = tuple(dots)

    def __eq__(self, other):
        return self.rule_ == other.rule_ and self.dots_ == other.dots_

    def __ne__(self, other):
        return not (self == other)

    def __hash__(self):
        return hash((self.rule_, self.dots_))

    def __repr__(self):
        return '{0} ||| {1}'.format(self.rule_, self.dots_)

    def __str__(self):
        return '{0} ||| {1}'.format(self.rule_, self.dots_)

    @property
    def lhs(self):
        return self.rule_.lhs

    @property
    def rule(self):
        return self.rule_

    @property
    def dot(self):
        return self.dots_[-1]

    @property
    def start(self):
        return self.dots_[0]

    @property
    def next(self):
        """Return the symbol to the right of the dot (or None, if the item is complete)."""
        if self.is_complete():
            return None
        return self.rule_.rhs[len(self.dots_) - 1]

    def state(self, i):
        return self.dots_[i]

    def advance(self, dot):
        """Return a new item with an extended sequence of dots."""
        return Item(self.rule_, self.dots_ + (dot,))

    def is_complete(self):
        """Complete items are those whose dot reached the end of the RHS sequence."""
        return len(self.rule_.rhs) + 1 == len(self.dots_)
r = Rule('[S]', ['[X]'], 0.0)
i1 = Item(r, [0])
i2 = i1.advance(1)
print i1
print i2
i1 != i2
i1.is_complete()
i2.is_complete()
i1.next
i2.next
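# i1.next is '[X]' (the symbol to the right of the dot); i2.next is None
# because i2 is complete.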
class Agenda(object):  # this class is also available in `agenda.py`

    def __init__(self):
        # we are organising active items in a stack (last in first out)
        self._active = []
        # an item should never queue twice, thus we will manage a set of items which we have already seen
        self._seen = set()
        # we organise incomplete items by the symbols they wait for at a certain position
        # that is, if the key is a pair (Y, i)
        # the value is a set of items of the form
        #   [X -> alpha * Y beta, [... i]]
        self._incomplete = defaultdict(set)
        # we organise complete items by their LHS symbol spanning from a certain position
        # if the key is a pair (X, i)
        # then the value is a set of items of the form
        #   [X -> gamma *, [i ... j]]
        self._complete = defaultdict(set)

    def __len__(self):
        """Return the number of active items."""
        return len(self._active)

    def push(self, item):
        """Push an item into the queue of active items."""
        if item not in self._seen:  # if an item has been seen before, we simply ignore it
            self._active.append(item)
            self._seen.add(item)
            return True
        return False

    def pop(self):
        """Pop an active item."""
        assert len(self._active) > 0, 'I have no items left.'
        return self._active.pop()

    def make_passive(self, item):
        if item.is_complete():  # complete items offer a way to rewrite a certain LHS from a certain position
            self._complete[(item.lhs, item.start)].add(item)
        else:  # incomplete items are waiting for the completion of the symbol to the right of the dot
            self._incomplete[(item.next, item.dot)].add(item)

    def waiting(self, symbol, dot):
        return self._incomplete.get((symbol, dot), set())

    def complete(self, lhs, start):
        return self._complete.get((lhs, start), set())

    def itercomplete(self):
        """An iterator over complete items in arbitrary order."""
        for items in self._complete.itervalues():
            for item in items:
                yield item
A = Agenda()
r1 = Rule('[S]', ['[S]', '[X]'], 1.0)
r1
A.push(Item(r1, [0])) # S -> S X, [0] (earley axiom)
A.push(Item(r1, [0]))
len(A)
i1 = Item(r1, [0])
i1
A.make_passive(i1)
A._incomplete
A.push(Item(Rule('[S]', ['[X]'], 1.0), [0]))
A.make_passive(Item(Rule('[S]', ['[X]'], 1.0), [0]))
A._incomplete
A.push(Item(Rule('[S]', ['[X]'], 1.0), [0, 1]))
A.make_passive(Item(Rule('[S]', ['[X]'], 1.0), [0, 1]))
A._complete
def axioms(cfg, sentence):
    """
    :params cfg: a context-free grammar (an instance of WCFG)
    :params sentence: the input sentence (as a list or tuple)
    :returns: a list of items
    """
    items = []
    for rule in cfg:
        for i in range(len(sentence)):
            items.append(Item(rule, [i]))
    return items
sentence = 'a * a'.split()
axioms(G1, sentence)
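# The naive axioms() instantiates every rule at every start position; the
# optimised version below only instantiates rules whose first RHS symbol
# matches the input word at that position (enough for the scan step here).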
def axioms2(cfg, sentence):
    """
    :params cfg: a context-free grammar (an instance of WCFG)
    :params sentence: the input sentence (as a list or tuple)
    :returns: a list of items
    """
    by_rhs0 = defaultdict(list)
    for rule in cfg:
        by_rhs0[rule.rhs[0]].append(rule)
    items = []
    for i, word in enumerate(sentence):
        for rule in by_rhs0.get(word, []):
            items.append(Item(rule, [i]))
    return items
axioms2(G1, sentence)
def scan(item, sentence):
    assert is_terminal(item.next), 'Only terminal symbols can be scanned, got %s' % item.next
    if item.dot < len(sentence) and sentence[item.dot] == item.next:
        new = item.advance(item.dot + 1)
        return new
    else:
        return None
S = []
for item in axioms2(G1, sentence):
    new = scan(item, sentence)
    if new is not None:
        S.append(new)
S
def complete(item, agenda):
    items = []
    if item.is_complete():
        # advance the dot for incomplete items waiting for item.lhs spanning from item.start
        for incomplete in agenda.waiting(item.lhs, item.start):
            items.append(incomplete.advance(item.dot))
    else:
        # look for completions of item.next spanning from item.dot
        ends = set()
        for complete in agenda.complete(item.next, item.dot):
            ends.add(complete.dot)
        # advance the dot of the input item for each position that completes a span
        for end in ends:
            items.append(item.advance(end))
    return items
return items
def make_forest(complete_items):
    forest = WCFG()
    for item in complete_items:
        lhs = make_symbol(item.lhs, item.start, item.dot)
        rhs = []
        for i, sym in enumerate(item.rule.rhs):
            rhs.append(make_symbol(sym, item.state(i), item.state(i + 1)))
        forest.add(Rule(lhs, rhs, item.rule.prob))
    return forest
def cky(cfg, sentence):
    A = Agenda()
    for item in axioms(cfg, sentence):
        A.push(item)
    while A:
        item = A.pop()
        if item.is_complete() or is_nonterminal(item.next):
            for new in complete(item, A):
                A.push(new)
        else:
            new = scan(item, sentence)
            if new is not None:
                A.push(new)
        A.make_passive(item)
    return make_forest(A.itercomplete())
forest = cky(G1, sentence)
print forest
goal = make_symbol('[E]', 0, len(sentence))
goal
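# A quick sanity check (sketch): the sentence is parseable iff the forest
# contains at least one rule whose LHS is the goal symbol.
print any(rule.lhs == goal for rule in forest)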
<END_TASK>
|