<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SASE1
Step2: We are interested in fast, i.e. pulse-resolved, data from the instrument source SA1_XTD2_XGM/XGM/DOOCS
Step3: We are particularly interested in data for the quantity "intensityTD". The xarray DataArray class is suited to working with axis-labeled data, and the karabo_data method get_array() serves to shape a 2D array of that type from pulse-resolved data (which is originally stored "flat" in terms of pulses).
Step4: Next, we plot a portion of the data in two dimensions, taking the first 1500 trains for the x-axis and the first 30 pulses per train for the y-axis (1500, 30). Because the Matplotlib convention takes the slow axis to be y, we have to transpose to (30, 1500).
Step5: The pattern tells us what was done in this experiment
Step6: Accordingly for the odd "off" pulses
Step7: Now we can calculate the ratio of averages for every train and plot it. Data types like numpy ndarray or xarray DataArray can be divided directly - a shortcut notation for dividing every pair of corresponding elements.
Step8: Moreover, the relative error of this ratio can be calculated by multiplicative error propagation as the square root of the sum of the squared relative errors (numerator and denominator), and from it the absolute error. The NumPy functions "sqrt" and "square" applied to array-like structures operate element-wise, so the entire calculation can be done conveniently using the arrays as arguments, and we obtain an individual error for every train.
Step9: We can as well plot the suppression ratio values with individual error bars according to the respective absolute error. Here, we restrict ourselves to the first 50 trains for clarity
Step10: Finally, we draw a histogram of suppression ratio values
Step11: We see that there is a suppression of signal from odd pulses to approximately 4% of the intensity of even pulses.
Step12: The difference here is that the selection scheme (indexing and slicing) shifts by one with respect to SASE1 data
Step13: The suppression ratio calculation and its plot
Step14: The error calculation with (selective) plot
Step15: The histogram
Step16: Here, the signal of even "off" pulses is suppressed to approximately 0.5% of the intensity of odd "on" pulses. The "suppression factor" is almost 10 times that of SASE1. However, the relative error of these values is larger as well, as can be seen in the error-bar plot. For the smallest quantities it is ~100% (!).
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from karabo_data import RunDirectory
sa1_data = RunDirectory('/gpfs/exfel/exp/XMPL/201750/p700000/raw/r0008')
sa1_data.info()
sa1_data.keys_for_source('SA1_XTD2_XGM/XGM/DOOCS:output')
sa1_flux = sa1_data.get_array('SA1_XTD2_XGM/XGM/DOOCS:output', 'data.intensityTD')
print(sa1_flux)
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
image = ax.imshow(sa1_flux[:1500, :30].transpose(), origin='lower', cmap='inferno')
ax.set_title('SASE1 XTD2 XGM intensity (fast)')
fig.colorbar(image, orientation='horizontal')
ax.set_xlabel('train index')
ax.set_ylabel('pulseIndex')
ax.set_aspect(15)
sa1_mean_on = np.mean(sa1_flux[:, :20:2], axis=1)
sa1_stddev_on = np.std(sa1_flux[:, :20:2], axis=1)
print(sa1_mean_on)
sa1_mean_off = np.mean(sa1_flux[:, 1:21:2], axis=1)
sa1_stddev_off = np.std(sa1_flux[:, 1:21:2], axis=1)
print(sa1_mean_off)
sa1_suppression = sa1_mean_off / sa1_mean_on
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.plot(sa1_suppression.coords['trainId'].values, sa1_suppression)
ax.set_xlabel('train identifier')
ax.ticklabel_format(style='plain', useOffset=False)
plt.xticks(rotation=60)
ax.set_ylabel('suppression')
sa1_rel_error = np.sqrt(np.square(sa1_stddev_off / sa1_mean_off) + np.square(sa1_stddev_on / sa1_mean_on))
sa1_abs_error = sa1_rel_error * sa1_suppression
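The element-wise error propagation above can be sanity-checked on made-up numbers with plain NumPy (toy values, not real XGM data):

```python
import numpy as np

# Hypothetical per-train means and standard deviations (toy values only).
mean_on = np.array([200.0, 210.0, 190.0])
std_on = np.array([10.0, 12.0, 9.0])
mean_off = np.array([8.0, 9.0, 7.0])
std_off = np.array([2.0, 2.5, 1.5])

suppression = mean_off / mean_on
# Multiplicative error propagation: relative errors add in quadrature.
rel_error = np.sqrt(np.square(std_off / mean_off) + np.square(std_on / mean_on))
abs_error = rel_error * suppression
```

For the first toy train this gives 8/200 = 0.04 with a relative error of sqrt(0.25^2 + 0.05^2), all computed element-wise over the whole array at once.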
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.errorbar(sa1_suppression.coords['trainId'].values[:50], sa1_suppression[:50], yerr=sa1_abs_error[:50], fmt='ro')
ax.set_xlabel('train identifier')
ax.ticklabel_format(style='plain', useOffset=False)
plt.xticks(rotation=60)
ax.set_ylabel('suppression')
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
_ = ax.hist(sa1_suppression, bins=50)
ax.set_xlabel('suppression')
ax.set_ylabel('frequency')
sa3_data = RunDirectory('/gpfs/exfel/exp/XMPL/201750/p700000/raw/r0009')
sa3_data.info()
sa3_flux = sa3_data.get_array('SA3_XTD10_XGM/XGM/DOOCS:output', 'data.intensityTD')
print(sa3_flux.shape)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
image = ax.imshow(sa3_flux[:1500, :30].transpose(), origin='lower', cmap='inferno')
ax.set_title('SASE3 XTD10 XGM intensity (fast)')
fig.colorbar(image, orientation='horizontal')
ax.set_xlabel('train index')
ax.set_ylabel('pulseIndex')
ax.set_aspect(15)
sa3_mean_on = np.mean(sa3_flux[:, 1:21:2], axis=1)
sa3_stddev_on = np.std(sa3_flux[:, 1:21:2], axis=1)
print(sa3_mean_on)
sa3_mean_off = np.mean(sa3_flux[:, :20:2], axis=1)
sa3_stddev_off = np.std(sa3_flux[:, :20:2], axis=1)
print(sa3_mean_off)
sa3_suppression = sa3_mean_off / sa3_mean_on
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.plot(sa3_suppression.coords['trainId'].values, sa3_suppression)
ax.set_xlabel('train identifier')
ax.ticklabel_format(style='plain', useOffset=False)
plt.xticks(rotation=60)
ax.set_ylabel('suppression')
sa3_rel_error = np.sqrt(np.square(sa3_stddev_off / sa3_mean_off) + np.square(sa3_stddev_on / sa3_mean_on))
sa3_abs_error = sa3_rel_error * sa3_suppression
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.errorbar(sa3_suppression.coords['trainId'].values[:50], sa3_suppression[:50], yerr=sa3_abs_error[:50], fmt='ro')
ax.set_xlabel('train identifier')
ax.ticklabel_format(style='plain', useOffset=False)
plt.xticks(rotation=60)
ax.set_ylabel('suppression')
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
_ = ax.hist(sa3_suppression, bins=50)
ax.set_xlabel('suppression')
ax.set_ylabel('frequency')
sa1_on_all = np.array(sa1_flux[:, :20:2]).flatten()
sa1_on_all.shape
sa1_mean_on_overall = np.mean(sa1_on_all)
sa1_stddev_on_overall = np.std(sa1_on_all)
sa1_off_all = np.array(sa1_flux[:, 1:21:2]).flatten()
sa1_off_all.shape
sa1_mean_off_overall = np.mean(sa1_off_all)
sa1_stddev_off_overall = np.std(sa1_off_all)
sa1_suppression_overall = sa1_mean_off_overall / sa1_mean_on_overall
sa1_rel_error_overall = np.sqrt(np.square(sa1_stddev_off_overall / sa1_mean_off_overall) + \
np.square(sa1_stddev_on_overall / sa1_mean_on_overall))
sa1_abs_error_overall = sa1_rel_error_overall * sa1_suppression_overall
print('SA1 suppression ratio =', sa1_suppression_overall, '\u00b1', sa1_abs_error_overall)
sa3_on_all = np.array(sa3_flux[:, 1:21:2]).flatten()
sa3_on_all.shape
sa3_mean_on_overall = np.mean(sa3_on_all)
sa3_stddev_on_overall = np.std(sa3_on_all)
sa3_off_all = np.array(sa3_flux[:, :20:2]).flatten()
sa3_off_all.shape
sa3_mean_off_overall = np.mean(sa3_off_all)
sa3_stddev_off_overall = np.std(sa3_off_all)
sa3_suppression_overall = sa3_mean_off_overall / sa3_mean_on_overall
sa3_rel_error_overall = np.sqrt(np.square(sa3_stddev_off_overall / sa3_mean_off_overall) + \
np.square(sa3_stddev_on_overall / sa3_mean_on_overall))
sa3_abs_error_overall = sa3_rel_error_overall * sa3_suppression_overall
print('SA3 suppression ratio =', sa3_suppression_overall, '\u00b1', sa3_abs_error_overall)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Challenges
<ASSISTANT_TASK:>
Python Code:
from mcpi import minecraft
mc = minecraft.Minecraft.create(ip, port, my_name)
# send a chat message
mc.....
# teleport
mc.player.getPos() # returns a Vec3 instance; could also get pitch/orientation of player
mc.player.setPos(pos_vector)
# place blocks
from mcpi import block
mc.setBlock(x, y, z, block.STONE)
mc.setBlocks(x0, y0, z0, x1, y1, z1, block.STONE)
# detect blocks
mc.getBlock(x, y, z)
mc.getBlocks(x0, y0, z0, x1, y1, z1)
# clear all the things
mc.setBlocks(0, 0, 4, 100, 100, 100, block.AIR)
# introduce writing and using functions
wall()
tower(height)
sphere(radius)
pyramid(width, height)
house()
lava(x, y, z)
water(x, y, z)
# create your own teleporter by watching for the player stepping on a certain block, then changing their position
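One possible shape for the teleporter challenge. Here a minimal stand-in object replaces the live Minecraft connection (the real client would come from minecraft.Minecraft.create()), so only the polling logic is shown; FakeMC and its scripted positions are invented for illustration:

```python
class FakeMC:
    """Stand-in for an mcpi Minecraft connection (illustration only)."""
    def __init__(self, positions):
        self._positions = list(positions)  # scripted (x, y, z) player positions
        self.teleported_to = None
    def get_player_pos(self):
        return self._positions.pop(0)
    def set_player_pos(self, pos):
        self.teleported_to = pos

def run_teleporter(mc, pad, destination, steps):
    # Poll the player's position; when they stand on the pad block, move them.
    for _ in range(steps):
        x, y, z = mc.get_player_pos()
        if (int(x), int(z)) == pad:
            mc.set_player_pos(destination)
            return True
    return False

mc = FakeMC([(0, 4, 0), (2, 4, 3), (5, 4, 5)])
stepped = run_teleporter(mc, pad=(5, 5), destination=(100, 64, 100), steps=3)
```

With the real mcpi client the same loop would call mc.player.getPos() and mc.player.setPos() inside a while loop with a short sleep.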
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
a = tf.constant(
[[0.3232, -0.2321, 0.2332, -0.1231, 0.2435, 0.6728],
[0.2323, -0.1231, -0.5321, -0.1452, 0.5435, 0.1722],
[0.9823, -0.1321, -0.6433, 0.1231, 0.023, 0.0711]]
)
def g(a):
return tf.argmin(a, axis=0)
result = g(a.__copy__())
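The same column-wise argmin can be sketched with NumPy alone (used here as a stand-in for TensorFlow, since tf.argmin(a, axis=0) behaves like np.argmin along axis 0):

```python
import numpy as np

a = np.array(
    [[0.3232, -0.2321, 0.2332, -0.1231, 0.2435, 0.6728],
     [0.2323, -0.1231, -0.5321, -0.1452, 0.5435, 0.1722],
     [0.9823, -0.1321, -0.6433, 0.1231, 0.023, 0.0711]]
)

# Row index of the smallest value in each column.
result = np.argmin(a, axis=0)
```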
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame([[1,1,0,1],[0,0,0,0],[1,0,0,1],[0,1,0,0],[1,1,0,1]],columns=['A','B','C','D'])
def g(df):
return df.loc[(df.sum(axis=1) != 0), (df.sum(axis=0) != 0)]
result = g(df.copy())
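Running the selection on the example frame shows what the two boolean masks do: row 1 (all zeros) and column C (all zeros) disappear, everything else survives:

```python
import pandas as pd

df = pd.DataFrame([[1, 1, 0, 1], [0, 0, 0, 0], [1, 0, 0, 1],
                   [0, 1, 0, 0], [1, 1, 0, 1]], columns=['A', 'B', 'C', 'D'])

# Keep rows whose sum over columns is non-zero, and
# columns whose sum over rows is non-zero.
result = df.loc[(df.sum(axis=1) != 0), (df.sum(axis=0) != 0)]
```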
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a new Google Cloud Storage Bucket for ML model exports
Step2: Setup parameters for notebook scheduling
Step3: Locating the CSV files
Step4: Use tf.data to read the CSV files
Step5: Add data transformations and build the new DNN
Step6: Visualize the DNN model layers
Step7: Train the model
Step8: Visualize the Model Loss Curves
Step9: Evaluate over the full validation dataset
Step10: Although we get RMSE of around 10 (your answer will be different due to random seeds), remember that we trained on a really small subset of the data. We need a larger training dataset before making decisions about this model.
Step11: However, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
Step12: Serve the model on Cloud AI Platform
Step13: In this notebook, we have looked at how to implement a custom Keras model using feature columns.
<ASSISTANT_TASK:>
Python Code:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os, json, math, shutil
import datetime
import numpy as np
import logging
# SET TF ERROR LOG VERBOSITY
logging.getLogger("tensorflow").setLevel(logging.ERROR)
import tensorflow as tf
print(tf.version.VERSION)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
%%bash
## Create new ML GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}-ml/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}-ml
echo -e "\nHere are your current buckets:"
gsutil ls
fi
# Note that this cell is special. It's got a tag (you can view tags by clicking on the wrench icon on the left menu in Jupyter)
# These are parameters that we will configure so that we can schedule this notebook
DATADIR = '../../data'
OUTDIR = './trained_model'
NBUCKETS = 10 # for feature crossing
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # remember the training dataset repeats, so this will wrap around
NUM_EVALS = 5 # evaluate this many times
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
if DATADIR[:5] == 'gs://':
!gsutil ls $DATADIR/*.csv
else:
!ls -l $DATADIR/*.csv
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
def features_and_labels(row_data):
for unwanted_col in ['key']: # keep the pickup_datetime!
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
pattern = '{}/{}'.format(DATADIR, pattern)
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
print("Repeating training dataset indefinitely")
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # overlap the input pipeline with training; tf.data.experimental.AUTOTUNE picks the buffer size automatically
return dataset
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8') # if it is a Tensor
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
for s in ['2012-07-05 14:18:00 UTC']:
print(s)
for ts in [parse_datetime(s), parse_datetime(tf.constant(s))]: # as string, as tensor
print(ts.weekday())
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
print(DAYS[ts.weekday()])
## Add transformations
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
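The planar-distance helper can be sanity-checked without TensorFlow by swapping tf.sqrt for np.sqrt; a 3-4-5 triangle should give 5 (euclidean_np is a NumPy stand-in, not part of the original notebook):

```python
import numpy as np

def euclidean_np(params):
    # Same arithmetic as the Keras Lambda above, with NumPy in place of tf.
    lon1, lat1, lon2, lat2 = params
    londiff = lon2 - lon1
    latdiff = lat2 - lat1
    return np.sqrt(londiff * londiff + latdiff * latdiff)

dist = euclidean_np((0.0, 0.0, 3.0, 4.0))
```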
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
@tf.function
def fare_thresh(x):
return 60 * tf.keras.activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS):
print("BEFORE TRANSFORMATION")
print("INPUTS:", inputs.keys())
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in NUMERIC_COLS
}
# scale the lat, lon values to be in 0, 1
if True:
for lon_col in ['pickup_longitude', 'dropoff_longitude']: # in range -70 to -78
transformed[lon_col] = tf.keras.layers.Lambda(
lambda x: (x+78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
for lat_col in ['pickup_latitude', 'dropoff_latitude']: # in range 37 to 45
transformed[lat_col] = tf.keras.layers.Lambda(
lambda x: (x-37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# add Euclidean distance. Doesn't have to be accurate calculation because NN will calibrate it
if True:
transformed['euclidean'] = tf.keras.layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = tf.feature_column.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
if True:
transformed['hourofday'] = tf.keras.layers.Lambda(
lambda x: tf.strings.to_number(tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity('hourofday', num_buckets=24))
if False:
# https://buganizer.corp.google.com/issues/137795281
# day of week is hard because there is no TensorFlow function for date handling
transformed['dayofweek'] = tf.keras.layers.Lambda(
lambda x: dayofweek(x),
name='dayofweek_pyfun'
)(inputs['pickup_datetime'])
transformed['dayofweek'] = tf.keras.layers.Reshape((), name='dayofweek')(transformed['dayofweek'])
feature_columns['dayofweek'] = tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(
'dayofweek', vocabulary_list = DAYS))
if True:
# https://buganizer.corp.google.com/issues/135479527
# featurecross lat, lon into nxn buckets, then embed
nbuckets = NBUCKETS
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = tf.feature_column.bucketized_column(feature_columns['pickup_latitude'], latbuckets)
b_dlat = tf.feature_column.bucketized_column(feature_columns['dropoff_latitude'], latbuckets)
b_plon = tf.feature_column.bucketized_column(feature_columns['pickup_longitude'], lonbuckets)
b_dlon = tf.feature_column.bucketized_column(feature_columns['dropoff_longitude'], lonbuckets)
ploc = tf.feature_column.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = tf.feature_column.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = tf.feature_column.crossed_column([ploc, dloc], nbuckets ** 4 )
feature_columns['pickup_and_dropoff'] = tf.feature_column.embedding_column(pd_pair, 100)
print("AFTER TRANSFORMATION")
print("TRANSFORMED:", transformed.keys())
print("FEATURES", feature_columns.keys())
return transformed, feature_columns
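The hour-of-day extraction inside transform() relies on the fixed-width timestamp format; the slicing logic can be checked in plain Python, since tf.strings.substr(x, 11, 2) corresponds to s[11:13] on an ordinary string:

```python
ts = '2010-02-08 09:17:00 UTC'

# Characters 11-12 (0-indexed) hold the zero-padded hour.
hour = int(ts[11:13])
```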
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
print(STRING_COLS)
print(NUMERIC_COLS)
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(inputs, NUMERIC_COLS, STRING_COLS)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(transformed)
# two hidden layers of [32, 8], just like the BQML DNN
h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)
if False:
# https://buganizer.corp.google.com/issues/136476088
# final output would normally have a linear activation because this is regression
# However, we know something about the taxi problem (fares are +ve and tend to be below $60).
# Use that here. (You can verify by running this query):
# SELECT APPROX_QUANTILES(fare_amount, 100) FROM serverlessml.cleaned_training_data
output = tf.keras.layers.Dense(1, activation=fare_thresh, name='fare')(h2)
else:
output = tf.keras.layers.Dense(1, name='fare')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
print(model.summary())
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
trainds = load_dataset('taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//10000) # evaluate on 1/10 final evaluation set
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
shutil.rmtree('{}/checkpoints/'.format(OUTDIR), ignore_errors=True)
checkpoint_path = '{}/checkpoints/taxi'.format(OUTDIR)
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[cp_callback])
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
evalds = load_dataset('taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
model.evaluate(evalds)
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
import shutil, os, datetime
OUTPUT_DIR = os.path.join(OUTDIR, 'export/savedmodel')
if OUTPUT_DIR[:5] != 'gs://':
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {EXPORT_PATH}
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
%%bash
PROJECT=${PROJECT}
BUCKET=${PROJECT}-ml
REGION=us-east1
MODEL_NAME=taxifare
VERSION_NAME=v2
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "$MODEL_NAME already exists"
else
# create model
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 10
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.7 --runtime-version=1.15 \
--origin=$EXPORT_PATH --staging-bucket=gs://$BUCKET
%%writefile repro.json
{"pickup_longitude": -73.982683, "pickup_latitude": 40.742104, "dropoff_longitude": -73.983766, "dropoff_latitude": 40.755174, "passenger_count": 3.0, "pickup_datetime": "2010-02-08 09:17:00 UTC"}
!gcloud ai-platform predict --model taxifare --json-instances repro.json --version v2
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples using the print statement
Step2: Using indexing as shown below
Step3: Working with strings
Step4: The range function
Step5: Python 2.x also has xrange, which plays almost the same role as range but does not show the whole list.
Step6: Note
Step7: Arguments of the range function
Step8: The range function is useful in for loops.
Step9: The range function can also be used simply as a counter.
Step10: Unlike in C or Java, in Python the variable used in a for loop is not a local variable - keep this in mind.
Step11: Using the range function
Step12: Exercise
Step13: Exercise
Step14: Exercise
Step15: The while loop
Step16: Example
Step17: Exercises
Step18: We expected [0.0, 0.1, 0.2, 0.3, ..., 0.9, 1.0] but got a different result. Let's code the n_divide function as below.
<ASSISTANT_TASK:>
Python Code:
animals = ['cat', 'dog', 'mouse']
for x in animals:
print("This is the {}.".format(x))
for x in animals:
print("{}!, this is the {}.".format("Hi", x))
for x in animals:
print("{1}!, you are {0}.".format("animals", x))
for letter in "Hello World":
print(letter)
a = range(10)
a
b = xrange(10)
b
a[5]
b[5]
a[2:6]
c0 = range(4)
c0
c1 = range(1, 4)
c1
c2 = range(1, 10, 2)
c2
for i in range(6):
print("the square of {} is {}".format(i, i ** 2))
for i in range(5):
print("printing five times")
i
def range_double(x):
z = []
for y in range(x):
z.append(y*2)
return z
range_double(4)
def skip13(a, b):
result = []
for k in range (a,b):
if k == 13:
pass # do nothing
else: result.append(k)
return result
skip13(1, 20)
q = []
def add(name):
q.append(name)
def next():
return q.pop(0)
def show():
for i in q:
print(i)
def length():
return len(q)
add("SeungMin")
q
add("Jisung")
q.pop(1)
q
next()
add("Park")
add("Kim")
q
show()
length()
next()
add("Hwang")
show()
next()
show()
def iterate(f, x, n):
a = x
for i in range(n):
a = f(a)
return a
# To repeat the halving as many times as desired, pass the divide_2 function below as the f argument.
def divide_2(x):
return x/2.0
iterate(divide_2, 64, 3)
x = 64
while x > 1:
x = x/2
print(x)
eps = 1.0
while eps + 1 > 1:
eps = eps / 2.0
eps + 1 > 1
# print("A very small epsilon is {}.".format(eps))
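The loop above halves eps until 1.0 + eps is no longer distinguishable from 1.0 in floating point. For IEEE-754 doubles with the default round-to-nearest-even, the loop should stop exactly one halving below the machine epsilon reported by the standard library (a self-contained repeat of the loop, so it can be compared directly):

```python
import sys

eps = 1.0
while eps + 1 > 1:
    eps = eps / 2.0

# 1 + 2**-53 rounds back to 1.0, so the loop exits at epsilon / 2.
print(eps, sys.float_info.epsilon)
```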
def n_divide(n):
a = []
for i in range(n+1):
d = 1-(float(n-i)/n)
a.append(d)
return a
n_divide(10)
def n_divide1(n):
a = []
for i in range(n+1):
d = (float(i)/n)
a.append(d)
return a
n_divide1(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's see which free resources are readily available. And then let's have a closer look at Shakespeare's Hamlet (to pretend we are literature freaks).
Step2: Regular Expressions
Step3: And which of them actually start with "wre"?
Step4: Find all words that start with "T" or "t", end with "r" and have exactly 5 other characters in the middle. To implement the "T" or "t" we use a character class specified by the brackets []. [Tt] matches either "T" or "t".
Step5: Did Shakespeare use any numbers (written as digits)? For matching all the digits, we could similarly use [0123456789] or [0-9], but using the abbreviation \d is much more convenient.
Step6: And is there something that starts with z and ends with g?
Step7: In the last example we can not be sure whether there is definitely nothing or whether we got the regular expression wrong. To find out which one is the case, create a string you know should match and test your expression there.
|
<ASSISTANT_TASK:>
Python Code:
import nltk, re
from nltk import word_tokenize
# NOTE if the data (corpora, example files) is not yet downloaded, this needs to be done first
# nltk.download()
print(nltk.corpus.gutenberg.fileids())
hamlet = nltk.corpus.gutenberg.words('shakespeare-hamlet.txt')
print(len(hamlet))
[w for w in hamlet if re.search('wre', w)]
[w for w in hamlet if re.search('^wre', w)]
[w for w in hamlet if re.search('^[Tt]\w{5,5}r$', w)]
[w for w in hamlet if re.search('\d', w)]
[w for w in hamlet if re.search('^z.*g$', w)]
[w for w in ["zarhhg","zhang","zg","42"] if re.search('^z.*g$', w)]
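The sanity-check idea generalizes: before trusting a "no matches" result, confirm the pattern on strings with known outcomes. A small self-contained example with the [Tt]\w{5}r pattern from above (made-up words, not Hamlet tokens):

```python
import re

# Seven-letter words starting with T/t and ending with r should match.
words = ['Thunder', 'tempter', 'Tiger', 'brother', 'Traitor']
matches = [w for w in words if re.search(r'^[Tt]\w{5}r$', w)]
```

'Tiger' is too short and 'brother' starts with the wrong letter, so only the other three survive.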
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mh', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Find the image URL
Step2: Q
Step3: How do we do it?
Step4: Let's get started
Step5: We can already download pages, so what problems remain?
Step6: Now build an interface
Step7: Block more unwanted requests (ads)
Step8: Use threads
|
<ASSISTANT_TASK:>
Python Code:
import spynner
import os, sys
from IPython.display import display, Image
# Open a browser with spynner
browser = spynner.Browser(debug_level=spynner.ERROR, debug_stream=sys.stderr)
browser.show() # tell the browser to stay visible from now on
# Fill in the appropriate site name at ??????
base_url = 'http://v.??????.com/online/comic-7340.html?ch='
book_no = 54
browser.load( base_url+str(book_no))
browser.load_jquery(True) # spynner bundles jQuery; loading it with this method is convenient
img_url = browser.runjs('$("#TheImg").attr("src")')
print(img_url)
Image(img_url, width=200)
# Of course this also works without jQuery
browser.runjs('document.getElementById("TheImg").getAttribute("src")')
img_data = browser.download(img_url) # raw contents of the JPEG file
with open("cover.jpg", 'wb') as jpg_file:
jpg_file.write(img_data)
Image(data=img_data, width=200)
# total number of pages
total_pages = int(browser.runjs('ps'))
total_pages
for page in range(1, 1+total_pages):
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
print(page, img_url)
with open("{}-{}.jpg".format(book_no, page), "wb") as f:
browser.download(img_url, outfd=f)
print("File saved in", os.getcwd())
from PySide.QtWebKit import QWebSettings # used to configure QtWebKit
# use ipywidgets to build a simple interface
import ipywidgets as W
# turn off automatic image loading
settings = browser.webview.settings()
settings.setAttribute(QWebSettings.AutoLoadImages, False)
# the browser window no longer loads the page images
total_pages = int(browser.runjs('ps'))
for page in range(1, 1+10):
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
print(page, img_url)
# create an Image widget to preview the downloaded pages
img = W.Image()
img.width = 300
# progress bar showing download progress
progress = W.IntProgress(min=1, value=1, max=total_pages)
display(img)
display(progress)
dir_name = os.path.join("download", "{:02d}".format(book_no))
if not os.path.exists(dir_name):
os.makedirs(dir_name)
print("Download to {}".format(os.path.join(os.getcwd(), dir_name)))
sys.stdout.flush()
# start downloading
for page in range(1, total_pages+1):
# get the image URL
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
# download the image
fn = os.path.join(dir_name, "{:03d}.jpg".format(page))
with open(fn, "wb") as f:
browser.download(img_url, outfd=f)
# update the widget state
progress.description = "%d/%d"%(page, total_pages)
progress.value = page
img.value = Image(filename=fn).data
from PySide.QtNetwork import QNetworkAccessManager, QNetworkRequest # control the browser's network connections
from PySide.QtCore import QUrl # Qt's URL class
# close the previous browser
browser.close()
browser = spynner.Browser(debug_level=spynner.ERROR, debug_stream=sys.stderr)
# create a webview
# this time we don't set AutoLoadImages=False, but add some other settings
# not the main point here, but well suited to our application
browser.create_webview()
settings = browser.webview.settings()
# settings.setAttribute(QWebSettings.AutoLoadImages, False)
settings.setAttribute(QWebSettings.JavaEnabled, False) # Java not needed
settings.setAttribute(QWebSettings.DnsPrefetchEnabled, True) # try to save time spent on DNS lookups
settings.setAttribute(QWebSettings.PrivateBrowsingEnabled, True) # no browsing history needed
# create an empty URL request
BLANK_REQUEST = QNetworkRequest(QUrl())
# create a dummy (1x1 GIF) image URL request
DUMMY_IMG_REQUEST = QNetworkRequest(QUrl("data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs="))
# since this class is used only once, we can afford a long descriptive name
class EightComicNetworkAccessManager(QNetworkAccessManager):
# overriding the createRequest method is all that's needed
def createRequest(self, op, request, device=None):
        url = str(request.url().toString()) # many parameters, but the URL alone is enough
if 'comic' not in url[:20]:
            # crude way to detect URLs that are not from the comic site
            # replace the original URL with the empty request
return QNetworkAccessManager.createRequest(self, self.GetOperation, BLANK_REQUEST)
elif not url.endswith('js') and not url.endswith('css') and '.html' not in url:
            # anything other than .js, .css, and .html is replaced by the dummy image URL
return QNetworkAccessManager.createRequest(self, self.GetOperation, DUMMY_IMG_REQUEST)
else:
            # pass the original URL through unchanged
return QNetworkAccessManager.createRequest(self, op, request, device)
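The URL-filtering rule inside `createRequest` can be isolated as a plain function, which makes the three branches easier to reason about and test. This is a sketch; the example URL is hypothetical:

```python
def classify(url):
    # mirrors the branching in EightComicNetworkAccessManager.createRequest
    if 'comic' not in url[:20]:
        return 'blank'          # foreign site: replace with an empty request
    elif not url.endswith('js') and not url.endswith('css') and '.html' not in url:
        return 'dummy_image'    # non-essential asset: serve a 1x1 GIF instead
    else:
        return 'original'       # .js / .css / .html pass through unchanged

print(classify('http://v.comicsite.example/online/page-1.html'))  # original
```

Keeping the predicate separate from the Qt plumbing means the filtering policy can be adjusted without touching the network code.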
# set the browser's NetworkAccessManager
browser.webpage.setNetworkAccessManager(EightComicNetworkAccessManager())
browser.show()
browser.load(base_url+str(book_no))
total_pages = int(browser.runjs('ps'))
%%timeit -n 1 -r 1
# create an Image widget to preview the downloaded pages
img = W.Image()
img.width = 300
# progress bar showing download progress
progress = W.IntProgress(min=1, value=1, max=total_pages)
display(img)
display(progress)
dir_name = os.path.join("download", "{:02d}".format(book_no))
if not os.path.exists(dir_name):
os.makedirs(dir_name)
print("Download to {}".format(os.path.join(os.getcwd(), dir_name)))
sys.stdout.flush()
# start downloading
for page in range(1, total_pages+1):
    # get the image URL
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
    # download the image
fn = os.path.join(dir_name, "{:03d}.jpg".format(page))
with open(fn, "wb") as f:
browser.download(img_url, outfd=f)
    # update the widget state
progress.description = "%d/%d"%(page, total_pages)
progress.value = page
img.value = Image(filename=fn).data
from urllib.request import urlopen
from multiprocessing.pool import ThreadPool
%%timeit -n 1 -r 1
book_no = 63
browser.load(base_url+str(book_no))
total_pages = int(browser.runjs('ps'))
def save_img(img_url, page):
fn = os.path.join(dir_name, "{:03d}.jpg".format(page))
with urlopen(img_url) as img_src:
with open(fn, "wb") as f:
f.write(img_src.read())
    # update the widget state
progress.value += 1
progress.description = "img: %d/%d"%(progress.value, total_pages)
img.value = "<img src='{}' height=300 />".format(fn)
# create an HTML widget to preview the downloaded pages
img = W.HTML()
# progress bar showing download progress
progress = W.IntProgress(min=1, value=1, max=total_pages)
display(progress)
display(img)
dir_name = os.path.join("download", "{:02d}".format(book_no))
if not os.path.exists(dir_name):
os.makedirs(dir_name)
print("Download to {}".format(os.path.join(os.getcwd(), dir_name)))
sys.stdout.flush()
pool = ThreadPool(5)
for page in range(1, total_pages+1):
    # get the image URL
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
pool.apply_async(save_img, (img_url, page))
pool.close()
pool.join()
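The download loop above overlaps downloads with `ThreadPool.apply_async`. A minimal self-contained sketch of the same pattern, with a trivial worker in place of the image download:

```python
from multiprocessing.pool import ThreadPool

def work(n):
    # stand-in for a download: just square the input
    return n * n

pool = ThreadPool(5)
async_results = [pool.apply_async(work, (n,)) for n in range(10)]
pool.close()
pool.join()

results = [r.get() for r in async_results]
print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

`apply_async` returns an `AsyncResult`; calling `get()` after `join()` collects the return values. The scraper above instead writes files as a side effect, so it never needs the results.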
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logistic regression analysis
Step2: Now we will load our sample data and create a 'Alcabuse' variable from 'Dalc' and 'Walc' (weekday and weekend alcohol consumption level, respectively) and make it a binary variable by calling it alcohol abuse if a student scores more than 8 on drinking alcohol in a week. The dataset was retrieved from here but slightly modified to better illustrate our purpose.
Step3: The hypothesis we want to test is
Step4: Interpreting logistic model coefficients for prediction
Step5: Plotting the predictions of the model
Step6: Interpreting coefficients as odds
Step7: Cross-validation
Step8: 2. Fit the model on the training dataset
Step9: 3. Calculate predictions on the test dataset
Step10: 4. Calculate accuracy of the predictions
Step11: Question
Step12: Accuracy, sensitivity, and specificity are all affected if we change the cutoff value to .7
Step13: Regularization
Step14: How do we determine the penalty value?
Step15: Now let's create a plot showing how well each alpha value performs across cross-validation folds
Step16: Ridge regression
Step17: The points on the left vertical axis (the left ends of the lines) are the ordinary least squares regression values.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import seaborn as sns
from IPython.display import display, HTML
# Plot a sigmoid function
plt.figure(figsize=(5,5)) # open a figure base and determine the size (width, height)
def sigmoid(t): # Define the sigmoid function
return (1/(1 + np.e**(-t)))
plot_range = np.arange(-6, 6, 0.1) # range of x values to plot
y_values = sigmoid(plot_range)
# Plot curve
plt.plot(plot_range, # X-axis range
y_values, # Predicted values
color="red",linewidth=3)
plt.title("Example of a sigmoid function",fontsize=18)
plt.ylabel("Probability",fontsize=16)
# Load the data
df = pd.read_csv('../Data/student-mat.csv')
df['Alcabuse']=0
df.loc[df['Dalc']+df['Walc'] >=8,'Alcabuse']=1 # Let's call it abuse if more than 8 alcohol consumption per week
logm = smf.logit('Alcabuse ~ freetime',data=df).fit()
print(logm.summary())
x = 5
print('x = %i' %x) # x
print('b0 = %f' %logm.params[0]) # b0
print('b1 = %f' %logm.params[1]) # b1
palcabuse = sigmoid(logm.params[0] + logm.params[1] * x)
print('palcabuse = %f' %palcabuse)
print('If a student has %i hours of free time per week, then he has %.2f %% probability of alcohol abuse' % (x,palcabuse*100))
# Note that we had to use double percent sign %% to print %
# because a single % is reserved for printing inline numbers as in %i or %.2f
# Plot model fit
xs = np.arange(min(df.freetime)-1,max(df.freetime)+1,0.1)
ys = np.array([sigmoid(logm.params[0] + logm.params[1] * x) for x in xs])
plt.figure(figsize=(8,4))
plt.plot(xs,ys,linewidth=3)
jitter = np.random.random(len(df.Alcabuse))/5
ys_outcomes = np.abs((df.Alcabuse==1)-0.01-jitter)
alpha = 0.7
plt.plot(df.freetime[df.Alcabuse == 1], ys_outcomes[df.Alcabuse ==1], 'b.', alpha=alpha)
plt.plot(df.freetime[df.Alcabuse == 0], ys_outcomes[df.Alcabuse ==0], 'r.', alpha=alpha)
plt.xlabel("Freetime",fontsize=16)
plt.ylabel("Probability of being an alcohol abuser",fontsize=16)
plt.title("Predicting alcohol abuse by free time",fontsize=18)
odd1 = sigmoid(logm.params[0] + logm.params[1] * 5) / (1-sigmoid(logm.params[0] + logm.params[1] * 5))
odd2 = sigmoid(logm.params[0] + logm.params[1] * 4) / (1-sigmoid(logm.params[0] + logm.params[1] * 4))
dodds = odd1/odd2
print('A student with 5hrs of free time compared to a student with 4 is %.2f times more likely to be an alcohol abuser' %dodds)
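The ratio above can also be obtained analytically: for a logistic model the odds are exp(b0 + b1*x), so a one-unit increase in x multiplies the odds by exactly exp(b1). A self-contained sketch with made-up coefficients (b0 and b1 here are illustrative values, not the fitted ones):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.e ** (-t))

b0, b1 = -6.0, 1.2   # hypothetical logistic coefficients

def odds(x):
    p = sigmoid(b0 + b1 * x)
    return p / (1 - p)

# the odds ratio for a one-unit increase equals exp(b1)
ratio = odds(5) / odds(4)
print(ratio, np.exp(b1))  # the two values agree
```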
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was renamed to model_selection in newer scikit-learn
np.random.seed(12345678) # set the random number generator seed to get consistent results
trainratio = .6
train, test = train_test_split(df, train_size=trainratio, random_state=1)
print('train-test ratio %.2f' %(float(len(train))/(len(train)+len(test))))
# We run the model on the train dataset
logmt = smf.logit('Alcabuse ~ freetime + goout',data=train).fit()
print(logmt.summary())
ytestpred = logmt.predict(test)
np.round(ytestpred,2)
cutoff = .5
# Calculate accuracy for train
ytrainpred = logmt.predict(train)
ytrainpred[ytrainpred>=cutoff] =1
ytrainpred[ytrainpred<cutoff] =0
accs_train = pd.crosstab(ytrainpred,train["Alcabuse"])
# Calculate accuracy for test
ytestpred = logmt.predict(test)
ytestpred[ytestpred>=cutoff] =1
ytestpred[ytestpred<cutoff] =0
accs_test = pd.crosstab(ytestpred,test["Alcabuse"])
# Overall accuracy
print('Overall Accuracy in Training set : %.2f ' %((accs_train[0][0]+accs_train[1][1])/float(train.shape[0])))
print('overall Accuracy in Testing set : %.2f ' %((accs_test[0][0]+accs_test[1][1])/float(test.shape[0])))
# Sensitivity and false negatives
print('Training')
display(accs_train)
print('')
print('sensitivity : %.2f ' %(float(accs_train[1][1])/(accs_train[1][1]+accs_train[1][0])))
print('false negative : %.2f ' %(float(accs_train[1][0])/(accs_train[1][1]+accs_train[1][0])))
print('specificity : %.2f ' %(float(accs_train[0][0])/(accs_train[0][0]+accs_train[0][1])))
print('false positives : %.2f ' %(float(accs_train[0][1])/(accs_train[0][0]+accs_train[0][1])))
# Specificity and false positives
print('\n')
print('Test')
display(accs_test)
print('\n')
print('sensitivity : %.2f ' %(float(accs_test[1][1])/(accs_test[1][1]+accs_test[1][0])))
print('false negative : %.2f ' %(float(accs_test[1][0])/(accs_test[1][1]+accs_test[1][0])))
print('specificity : %.2f ' %(float(accs_test[0][0])/(accs_test[0][0]+accs_test[0][1])))
print('false positives : %.2f ' %(float(accs_test[0][1])/(accs_test[0][0]+accs_test[0][1])))
ytestpred = logmt.predict(test)
cutoff = .7
ytestpred[ytestpred>=cutoff] =1
ytestpred[ytestpred<cutoff] =0
accs = pd.crosstab(ytestpred,test["Alcabuse"])
# Overall accuracy
print ('overall accuracy : %.2f ' %((accs[0][0]+accs[1][1])/float(test.shape[0])))
# Sensitivity and false negatives
print('')
print('sensitivity : %.2f ' %(float(accs[1][1])/(accs[1][1]+accs[1][0])))
print('false negative : %.2f ' %(float(accs[1][0])/(accs[1][1]+accs[1][0])))
# Specificity and false positives
print('')
print('specificity : %.2f ' %(float(accs[0][0])/(accs[0][0]+accs[0][1])))
print('false positives : %.2f ' %(float(accs[0][1])/(accs[0][0]+accs[0][1])))
display(accs)
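The cell above reads the metrics off a crosstab by hand; the same quantities can be computed from any 2x2 table of counts without pandas. A self-contained sketch with hypothetical counts (not taken from this dataset):

```python
# hypothetical counts from a 2x2 confusion table
tp, fn = 20, 10    # actual positives: correctly / incorrectly classified
tn, fp = 120, 8    # actual negatives: correctly / incorrectly classified

accuracy    = (tp + tn) / float(tp + tn + fp + fn)
sensitivity = tp / float(tp + fn)   # true positive rate
specificity = tn / float(tn + fp)   # true negative rate

print(accuracy, sensitivity, specificity)
```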
df['AlcSum']=df['Dalc']+df['Walc']
# We have a lot of features to choose from
df.keys()
# Make a model with all the feature variables
formula = 'AlcSum ~ school+sex+age+address+famsize+Pstatus+Medu+\
Fedu+Mjob+Fjob+reason+guardian+traveltime+\
studytime+failures+schoolsup+famsup+paid+activities+nursery+higher+\
internet+romantic+famrel+freetime+goout+health+absences+G1+G2+G3'
# Generate a design matrix with the formula
# This automatically generates separate regressors for categorical variables.
from patsy import dmatrices
y, X = dmatrices(formula_like=formula, data=df, return_type='dataframe')
# Ridge if L1_wt = 0 , Lasso if L1_wt = 1
penalty = .036
regm = smf.OLS(y,X).fit_regularized(method='coord_descent', maxiter=1000, alpha=penalty, L1_wt=1.0)
print(regm.summary())
from sklearn.linear_model import LassoCV, LassoLarsCV, LassoLarsIC
import time
# LassoLarsIC: least angle regression with BIC/AIC criterion
model_bic = LassoLarsIC(criterion='bic')
model_bic.fit(X, y)
alpha_bic_ = model_bic.alpha_
model_aic = LassoLarsIC(criterion='aic')
model_aic.fit(X, y)
alpha_aic_ = model_aic.alpha_
def plot_ic_criterion(model, name, value, color):
alpha_ = model.alpha_
alphas_ = model.alphas_
criterion_ = model.criterion_
plt.plot(-np.log10(alphas_), criterion_, '--', color=color,
linewidth=3, label='%s criterion' % name)
plt.axvline(-np.log10(alpha_), color=color, linewidth=3,
label='%s estimate alpha: %.3f' %(name,value) )
plt.xlabel('-log(alpha)',fontsize=16)
plt.ylabel('Model Fit Criterion',fontsize=16)
plt.figure(figsize=(15,5))
plot_ic_criterion(model_aic, 'AIC', alpha_aic_ , 'b')
plot_ic_criterion(model_bic, 'BIC', alpha_bic_ , 'r')
plt.legend()
plt.title('Information-criterion for model selection',fontsize=18)
# LassoCV: coordinate descent
y, X = dmatrices(formula_like=formula, data=df, return_type='dataframe')
# Compute paths
print("Computing regularization path using the coordinate descent lasso...")
t1 = time.time()
model = LassoCV(cv=5).fit(X, y)
t_lasso_cv = time.time() - t1
# Display results
m_log_alphas = -np.log10(model.alphas_)
# Display results
plt.figure(figsize=(15,5))
plt.plot(m_log_alphas, model.mse_path_, ':')
plt.plot(m_log_alphas, model.mse_path_.mean(axis=-1), 'k',
label='Average across the folds', linewidth=2)
plt.axvline(-np.log10(model.alpha_), linestyle='--', color='k',
label='CV estimate alpha: %.3f' %model.alpha_ )
plt.legend()
plt.xlabel('-log(alpha)',fontsize=16)
plt.ylabel('Mean square error',fontsize=16)
plt.title('Mean square error on each fold: coordinate descent',fontsize=18)
plt.axis('tight')
from sklearn import linear_model
y, X = dmatrices(formula_like=formula, data=df, return_type='dataframe')
n_alphas = 200
alphas = np.logspace(-2, 6, n_alphas)
# Get optimal lambda
clf = linear_model.RidgeCV(alphas=alphas)
clf.fit(X,y)
aestim = clf.alpha_ # Estimated regularization param
# Get paths
clf = linear_model.Ridge(fit_intercept=False)
coefs = []
for a in alphas:
clf.set_params(alpha=a)
clf.fit(X, y)
coefs.append(clf.coef_)
###############################################################################
# Display results
plt.figure(figsize=(15,8))
ax = plt.gca()
ax.plot(alphas, np.squeeze(coefs))
ax.set_xscale('log')
plt.axvline(aestim, linestyle='--', color='k', label='alpha: CV estimate')
plt.legend()
plt.xlabel('Lambda',fontsize=16)
plt.ylabel('Weights',fontsize=16)
plt.title('Ridge coefficients as a function of the regularization',fontsize=18)
plt.axis('tight')
clf = linear_model.Ridge(fit_intercept=False)
a = aestim
clf.set_params(alpha=a)
clf.fit(X, y)
np.round(clf.coef_,3)
# Ridge if L1_wt = 0 , Lasso if L1_wt = 1
penalty = aestim
regm = smf.OLS(y,X).fit_regularized(method='coord_descent', maxiter=1000, alpha=penalty, L1_wt=0)
print(regm.summary())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Put NFQ on a map
Step2: Read an Excel file
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn import linear_model
from matplotlib import pylab as plt
plt.style.use('bmh')
%matplotlib notebook
wine = pd.read_csv('data/winequality-white.csv',delimiter=';')
wine.describe()
fig = plt.figure(2)
ax = [fig.add_subplot(3,4,i) for i in range(1,12)]
models = [linear_model.LinearRegression() for i in range(11)]
for column, model in zip(wine.columns, models):
    model.fit(wine['quality'].values.reshape(-1,1),
              wine[column].values.reshape(-1,1))
for qual, group in wine.groupby('quality'):
for column, axis in zip(group.columns, ax):
axis.plot(qual, group[column].mean(), 'ob')
axis.set_title(column + ' (avg)', fontsize=10)
qual = np.arange(3,10)
for model, axi in zip(models, ax):
axi.plot(qual, model.coef_[0][0]*qual + model.intercept_,
'r--', linewidth=4, label='Regression')
axi.legend(fontsize=6)
fig.tight_layout()
import folium
madrid = folium.Map(location=[40.429857, -3.685812], tiles="Stamen toner",
zoom_start=15)
nfqsolutions = folium.Marker([40.429857, -3.685812], popup='NFQ Solutions')
madrid.add_children(nfqsolutions)
madrid.save('madrid.html')
madrid
with pd.ExcelFile('./data/winequality-white.xls') as xls:
wines = pd.read_excel(xls, 'Sheet1')
wines.describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Wiki Talk Comments Toxicity Prediction
Step2: Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.
Step3: Though TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default. To ensure that nothing breaks, eager execution will be enabled in the cell below.
Step4: Hyper-parameters
Step5: Load and pre-process dataset
Step6: The comment column contains the discussion comments and is_toxic column indicates whether or not a comment is annotated as toxic.
Step7: Next, we tokenize the textual comments using the Tokenizer provided by Keras. We use the training set comments alone to build a vocabulary of tokens, and use them to convert all the comments into a (padded) sequence of tokens of the same length.
Step8: Finally, we identify comments related to certain sensitive topic groups. We consider a subset of the <a href="https
Step9: We then create separate group membership matrices for the train, test and validation sets, where the rows correspond to comments, the columns correspond to the four sensitive groups, and each entry is a boolean indicating whether the comment contains a term from the topic group.
Step10: As shown below, all four topic groups constitute only a small fraction of the overall dataset, and have varying proportions of toxic comments.
Step11: We see that only 1.3% of the dataset contains comments related to sexuality. Among them, 37% of the comments have been annotated as being toxic. Note that this is significantly larger than the overall proportion of comments annotated as toxic. This could be because the few comments that used those identity terms did so in pejorative contexts. As mentioned above, this could cause our model to disporportionately misclassify comments as toxic when they include those terms. Since this is the concern, we'll make sure to look at the False Positive Rate when we evaluate the model's performance.
Step12: We use the downloaded GloVe embeddings to create an embedding matrix, where the rows contain the word embeddings for the tokens in the Tokenizer's vocabulary.
Step13: We are now ready to specify the Keras layers. We write a function to create a new model, which we will invoke whenever we wish to train a new model.
Step14: We also define a method to set random seeds. This is done to ensure reproducible results.
Step15: Fairness indicators
Step16: Train unconstrained model
Step17: Note
Step18: Having trained the unconstrained model, we plot various evaluation metrics for the model on the test set.
Step19: As explained above, we are concentrating on the false positive rate. In their current version (0.1.2), Fairness Indicators select false negative rate by default. After running the line below, go ahead and deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.
Step20: While the overall false positive rate is less than 2%, the false positive rate on the sexuality-related comments is significantly higher. This is because the sexuality group is very small in size, and has a disproportionately higher fraction of comments annotated as toxic. Hence, training a model without constraints results in the model believing that sexuality-related terms are a strong indicator of toxicity.
Step21: We instantiate a new model, and compute predictions for minibatches from the two streams.
Step22: We then set up a constrained optimization problem with the error rate as the objective and with constraints on the per-group false positive rate.
Step23: We are ready to train the model. We maintain a separate counter for the two minibatch streams. Every time we perform a gradient update, we will have to copy the minibatch contents from the first stream to the tensors features_tensor and labels_tensor, and the minibatch contents from the second stream to the tensors features_tensor_sen, labels_tensor_sen and groups_tensor_sen.
Step24: Having trained the constrained model, we plot various evaluation metrics for the model on the test set.
Step25: As with last time, remember to select false_positive_rate.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title pip installs
!pip install git+https://github.com/google-research/tensorflow_constrained_optimization
!pip install git+https://github.com/tensorflow/fairness-indicators
#@title Import Modules
import io
import os
import shutil
import sys
import tempfile
import time
import urllib
import zipfile
import apache_beam as beam
from IPython.display import display
from IPython.display import HTML
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.preprocessing import text
import tensorflow_constrained_optimization as tfco
import tensorflow_model_analysis as tfma
import fairness_indicators as fi
from tensorflow_model_analysis.addons.fairness.view import widget_view
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict
#@title Enable Eager Execution and Print Versions
if tf.__version__ < "2.0.0":
tf.enable_eager_execution()
print("Eager execution enabled.")
else:
print("Eager execution enabled by default.")
print("TensorFlow " + tf.__version__)
print("TFMA " + tfma.__version__)
print("FI " + fi.version.__version__)
hparams = {
"batch_size": 128,
"cnn_filter_sizes": [128, 128, 128],
"cnn_kernel_sizes": [5, 5, 5],
"cnn_pooling_sizes": [5, 5, 40],
"constraint_learning_rate": 0.01,
"embedding_dim": 100,
"embedding_trainable": False,
"learning_rate": 0.005,
"max_num_words": 10000,
"max_sequence_length": 250
}
toxicity_data_url = ("https://github.com/conversationai/unintended-ml-bias-analysis/"
"raw/e02b9f12b63a39235e57ba6d3d62d8139ca5572c/data/")
data_train = pd.read_csv(toxicity_data_url + "wiki_train.csv")
data_test = pd.read_csv(toxicity_data_url + "wiki_test.csv")
data_vali = pd.read_csv(toxicity_data_url + "wiki_dev.csv")
data_train.head()
labels_train = data_train["is_toxic"].values.reshape(-1, 1) * 1.0
labels_test = data_test["is_toxic"].values.reshape(-1, 1) * 1.0
labels_vali = data_vali["is_toxic"].values.reshape(-1, 1) * 1.0
tokenizer = text.Tokenizer(num_words=hparams["max_num_words"])
tokenizer.fit_on_texts(data_train["comment"])
def prep_text(texts, tokenizer, max_sequence_length):
# Turns text into into padded sequences.
text_sequences = tokenizer.texts_to_sequences(texts)
return sequence.pad_sequences(text_sequences, maxlen=max_sequence_length)
text_train = prep_text(data_train["comment"], tokenizer, hparams["max_sequence_length"])
text_test = prep_text(data_test["comment"], tokenizer, hparams["max_sequence_length"])
text_vali = prep_text(data_vali["comment"], tokenizer, hparams["max_sequence_length"])
terms = {
'sexuality': ['gay', 'lesbian', 'bisexual', 'homosexual', 'straight', 'heterosexual'],
'gender identity': ['trans', 'transgender', 'cis', 'nonbinary'],
'religion': ['christian', 'muslim', 'jewish', 'buddhist', 'catholic', 'protestant', 'sikh', 'taoist'],
'race': ['african', 'african american', 'black', 'white', 'european', 'hispanic', 'latino', 'latina',
'latinx', 'mexican', 'canadian', 'american', 'asian', 'indian', 'middle eastern', 'chinese',
'japanese']}
group_names = list(terms.keys())
num_groups = len(group_names)
def get_groups(text):
# Returns a boolean NumPy array of shape (n, k), where n is the number of comments,
# and k is the number of groups. Each entry (i, j) indicates if the i-th comment
# contains a term from the j-th group.
groups = np.zeros((text.shape[0], num_groups))
for ii in range(num_groups):
groups[:, ii] = text.str.contains('|'.join(terms[group_names[ii]]), case=False)
return groups
groups_train = get_groups(data_train["comment"])
groups_test = get_groups(data_test["comment"])
groups_vali = get_groups(data_vali["comment"])
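A toy, self-contained rehearsal of the term matching inside `get_groups` above (the `toy_terms` dictionary and comments are invented for illustration): an OR-joined pattern per group, matched case-insensitively with pandas `str.contains`.

```python
import numpy as np
import pandas as pd

toy_terms = {'fruit': ['apple', 'banana'], 'animal': ['cat', 'dog']}
comments = pd.Series(['I like Apple pie', 'my dog barks', 'hello world'])

# One boolean column per topic group, as in get_groups.
groups = np.zeros((len(comments), len(toy_terms)))
for j, name in enumerate(toy_terms):
    groups[:, j] = comments.str.contains('|'.join(toy_terms[name]), case=False)
```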
print("Overall label proportion = %.1f%%" % (labels_train.mean() * 100))
group_stats = []
for ii in range(num_groups):
group_proportion = groups_train[:, ii].mean()
group_pos_proportion = labels_train[groups_train[:, ii] == 1].mean()
group_stats.append([group_names[ii],
"%.2f%%" % (group_proportion * 100),
"%.1f%%" % (group_pos_proportion * 100)])
group_stats = pd.DataFrame(group_stats,
columns=["Topic group", "Group proportion", "Label proportion"])
group_stats
zip_file_url = "http://nlp.stanford.edu/data/glove.6B.zip"
zip_file = urllib.request.urlopen(zip_file_url)
archive = zipfile.ZipFile(io.BytesIO(zip_file.read()))
embeddings_index = {}
glove_file = "glove.6B.100d.txt"
with archive.open(glove_file) as f:
for line in f:
values = line.split()
word = values[0].decode("utf-8")
coefs = np.asarray(values[1:], dtype="float32")
embeddings_index[word] = coefs
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1, hparams["embedding_dim"]))
num_words_in_embedding = 0
for word, i in tokenizer.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
num_words_in_embedding += 1
embedding_matrix[i] = embedding_vector
def create_model():
model = keras.Sequential()
# Embedding layer.
embedding_layer = layers.Embedding(
embedding_matrix.shape[0],
embedding_matrix.shape[1],
weights=[embedding_matrix],
input_length=hparams["max_sequence_length"],
trainable=hparams['embedding_trainable'])
model.add(embedding_layer)
# Convolution layers.
for filter_size, kernel_size, pool_size in zip(
hparams['cnn_filter_sizes'], hparams['cnn_kernel_sizes'],
hparams['cnn_pooling_sizes']):
conv_layer = layers.Conv1D(
filter_size, kernel_size, activation='relu', padding='same')
model.add(conv_layer)
pooled_layer = layers.MaxPooling1D(pool_size, padding='same')
model.add(pooled_layer)
# Add a flatten layer, a fully-connected layer and an output layer.
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(1))
return model
def set_seeds():
np.random.seed(121212)
tf.compat.v1.set_random_seed(212121)
def create_examples(labels, predictions, groups, group_names):
# Returns tf.examples with given labels, predictions, and group information.
examples = []
sigmoid = lambda x: 1/(1 + np.exp(-x))
for ii in range(labels.shape[0]):
example = tf.train.Example()
example.features.feature['toxicity'].float_list.value.append(
labels[ii][0])
example.features.feature['prediction'].float_list.value.append(
sigmoid(predictions[ii][0])) # predictions need to be in [0, 1].
for jj in range(groups.shape[1]):
example.features.feature[group_names[jj]].bytes_list.value.append(
b'Yes' if groups[ii, jj] else b'No')
examples.append(example)
return examples
def evaluate_results(labels, predictions, groups, group_names):
# Evaluates fairness indicators for given labels, predictions and group
# membership info.
examples = create_examples(labels, predictions, groups, group_names)
# Create feature map for labels, predictions and each group.
feature_map = {
'prediction': tf.io.FixedLenFeature([], tf.float32),
'toxicity': tf.io.FixedLenFeature([], tf.float32),
}
for group in group_names:
feature_map[group] = tf.io.FixedLenFeature([], tf.string)
# Serialize the examples.
serialized_examples = [e.SerializeToString() for e in examples]
BASE_DIR = tempfile.gettempdir()
OUTPUT_DIR = os.path.join(BASE_DIR, 'output')
with beam.Pipeline() as pipeline:
model_agnostic_config = agnostic_predict.ModelAgnosticConfig(
label_keys=['toxicity'],
prediction_keys=['prediction'],
feature_spec=feature_map)
slices = [tfma.slicer.SingleSliceSpec()]
for group in group_names:
slices.append(
tfma.slicer.SingleSliceSpec(columns=[group]))
extractors = [
model_agnostic_extractor.ModelAgnosticExtractor(
model_agnostic_config=model_agnostic_config),
tfma.extractors.slice_key_extractor.SliceKeyExtractor(slices)
]
metrics_callbacks = [
tfma.post_export_metrics.fairness_indicators(
thresholds=[0.5],
target_prediction_keys=['prediction'],
labels_key='toxicity'),
tfma.post_export_metrics.example_count()]
# Create a model agnostic aggregator.
eval_shared_model = tfma.types.EvalSharedModel(
add_metrics_callbacks=metrics_callbacks,
construct_fn=model_agnostic_evaluate_graph.make_construct_fn(
add_metrics_callbacks=metrics_callbacks,
config=model_agnostic_config))
# Run Model Agnostic Eval.
_ = (
pipeline
| beam.Create(serialized_examples)
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model,
output_path=OUTPUT_DIR,
extractors=extractors,
compute_confidence_intervals=True
)
)
fairness_ind_result = tfma.load_eval_result(output_path=OUTPUT_DIR)
# Also evaluate accuracy of the model.
accuracy = np.mean(labels == (predictions > 0.0))
return fairness_ind_result, accuracy
def plot_fairness_indicators(eval_result, title):
fairness_ind_result, accuracy = eval_result
display(HTML("<center><h2>" + title +
" (Accuracy = %.2f%%)" % (accuracy * 100) + "</h2></center>"))
widget_view.render_fairness_indicator(fairness_ind_result)
def plot_multi_fairness_indicators(multi_eval_results):
multi_results = {}
multi_accuracy = {}
for title, (fairness_ind_result, accuracy) in multi_eval_results.items():
multi_results[title] = fairness_ind_result
multi_accuracy[title] = accuracy
title_str = "<center><h2>"
for title in multi_eval_results.keys():
title_str+=title + " (Accuracy = %.2f%%)" % (multi_accuracy[title] * 100) + "; "
title_str=title_str[:-2]
title_str+="</h2></center>"
display(HTML(title_str))
widget_view.render_fairness_indicator(multi_eval_results=multi_results)
# Set random seed for reproducible results.
set_seeds()
# Optimizer and loss.
optimizer = tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"])
loss = lambda y_true, y_pred: tf.keras.losses.binary_crossentropy(
y_true, y_pred, from_logits=True)
# Create, compile and fit model.
model_unconstrained = create_model()
model_unconstrained.compile(optimizer=optimizer, loss=loss)
model_unconstrained.fit(
x=text_train, y=labels_train, batch_size=hparams["batch_size"], epochs=2)
scores_unconstrained_test = model_unconstrained.predict(text_test)
eval_result_unconstrained = evaluate_results(
labels_test, scores_unconstrained_test, groups_test, group_names)
plot_fairness_indicators(eval_result_unconstrained, "Unconstrained")
# Set random seed.
set_seeds()
# Features tensors.
batch_shape = (hparams["batch_size"], hparams['max_sequence_length'])
features_tensor = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x')
features_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x_sen')
# Labels tensors.
batch_shape = (hparams["batch_size"], 1)
labels_tensor = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels')
labels_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels_sen')
# Groups tensors.
batch_shape = (hparams["batch_size"], num_groups)
groups_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='groups_sen')
# Create model, and separate prediction functions for the two streams.
# For the predictions, we use a nullary function returning a Tensor to support eager mode.
model_constrained = create_model()
def predictions():
return model_constrained(features_tensor)
def predictions_sen():
return model_constrained(features_tensor_sen)
epsilon = 0.02 # Desired false-positive rate threshold.
# Set up separate contexts for the two minibatch streams.
context = tfco.rate_context(predictions, lambda:labels_tensor)
context_sen = tfco.rate_context(predictions_sen, lambda:labels_tensor_sen)
# Compute the objective using the first stream.
objective = tfco.error_rate(context)
# Compute the constraint using the second stream.
# Subset the examples belonging to the "sexuality" group from the second stream
# and add a constraint on the group's false positive rate.
context_sen_subset = context_sen.subset(lambda: groups_tensor_sen[:, 0] > 0)
constraint = [tfco.false_positive_rate(context_sen_subset) <= epsilon]
# Create a rate minimization problem.
problem = tfco.RateMinimizationProblem(objective, constraint)
# Set up a constrained optimizer.
optimizer = tfco.ProxyLagrangianOptimizerV2(
optimizer=tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"]),
num_constraints=problem.num_constraints)
# List of variables to optimize include the model weights,
# and the trainable variables from the rate minimization problem and
# the constrained optimizer.
var_list = (model_constrained.trainable_weights + list(problem.trainable_variables) +
optimizer.trainable_variables())
# Indices of sensitive group members.
protected_group_indices = np.nonzero(groups_train.sum(axis=1))[0]
num_examples = text_train.shape[0]
num_examples_sen = protected_group_indices.shape[0]
batch_size = hparams["batch_size"]
# Number of steps needed for one epoch over the training sample.
num_steps = int(num_examples / batch_size)
start_time = time.time()
# Loop over minibatches.
for batch_index in range(num_steps):
# Indices for current minibatch in the first stream.
batch_indices = np.arange(
batch_index * batch_size, (batch_index + 1) * batch_size)
batch_indices = [ind % num_examples for ind in batch_indices]
# Indices for current minibatch in the second stream.
batch_indices_sen = np.arange(
batch_index * batch_size, (batch_index + 1) * batch_size)
batch_indices_sen = [protected_group_indices[ind % num_examples_sen]
for ind in batch_indices_sen]
# Assign features, labels, groups from the minibatches to the respective tensors.
features_tensor.assign(text_train[batch_indices, :])
labels_tensor.assign(labels_train[batch_indices])
features_tensor_sen.assign(text_train[batch_indices_sen, :])
labels_tensor_sen.assign(labels_train[batch_indices_sen])
groups_tensor_sen.assign(groups_train[batch_indices_sen, :])
# Gradient update.
optimizer.minimize(problem, var_list=var_list)
# Record and print batch training stats every 10 steps.
if (batch_index + 1) % 10 == 0 or batch_index in (0, num_steps - 1):
hinge_loss = problem.objective()
max_violation = max(problem.constraints())
elapsed_time = time.time() - start_time
sys.stdout.write(
"\rStep %d / %d: Elapsed time = %ds, Loss = %.3f, Violation = %.3f" %
(batch_index + 1, num_steps, elapsed_time, hinge_loss, max_violation))
scores_constrained_test = model_constrained.predict(text_test)
eval_result_constrained = evaluate_results(
labels_test, scores_constrained_test, groups_test, group_names)
plot_fairness_indicators(eval_result_constrained, "Constrained")
multi_results = {
'constrained':eval_result_constrained,
'unconstrained':eval_result_unconstrained,
}
plot_multi_fairness_indicators(multi_eval_results=multi_results)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transformation classes
Step2: PSDEstimator
Step3: Source power comodulation (SPoC)
Step4: Decoding over time
Step5: You can retrieve the spatial filters and spatial patterns if you explicitly
Step6: Temporal generalization
Step7: Plot the full (generalization) matrix
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler,
cross_val_multiscore, LinearModel, get_coef,
Vectorizer, CSP)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax = -0.200, 0.500
event_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# The subsequent decoding analyses only capture evoked responses, so we can
# low-pass the MEG data. Usually a value more like 40 Hz would be used,
# but here low-pass at 20 so we can more heavily decimate, and allow
# the examlpe to run faster. The 2 Hz high-pass helps improve CSP.
raw.filter(2, 20)
events = mne.find_events(raw, 'STI 014')
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0.), preload=True,
reject=dict(grad=4000e-13, eog=150e-6), decim=10)
epochs.pick_types(meg=True, exclude='bads') # remove stim and EOG
X = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times
y = epochs.events[:, 2] # target: Audio left or right
# Uses all MEG sensors and time points as separate classification
# features, so the resulting filters used are spatio-temporal
clf = make_pipeline(Scaler(epochs.info),
Vectorizer(),
LogisticRegression(solver='lbfgs'))
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
score = np.mean(scores, axis=0)
print('Spatio-temporal: %0.1f%%' % (100 * score,))
csp = CSP(n_components=3, norm_trace=False)
clf = make_pipeline(csp, LogisticRegression(solver='lbfgs'))
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)
print('CSP: %0.1f%%' % (100 * scores.mean(),))
# Fit CSP on full data and plot
csp.fit(X, y)
csp.plot_patterns(epochs.info)
csp.plot_filters(epochs.info, scalings=1e-9)
# We will train the classifier on all left visual vs auditory trials on MEG
clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC') # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
clf = make_pipeline(StandardScaler(),
LinearModel(LogisticRegression(solver='lbfgs')))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
time_decod.fit(X, y)
coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
evoked.plot_joint(times=np.arange(0., .500, .100), title='patterns',
**joint_kwargs)
# define the Temporal generalization object
time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc',
verbose=True)
scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot the diagonal (it's exactly the same as the time-by-time decoding above)
fig, ax = plt.subplots()
ax.plot(epochs.times, np.diag(scores), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Decoding MEG sensors over time')
fig, ax = plt.subplots(1, 1)
im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Temporal generalization')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
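Conceptually, the generalization matrix produced by `GeneralizingEstimator` can be built by hand: fit one classifier per training time point and score it at every testing time point. A minimal sketch on synthetic data (the shapes, seed, and injected class signal are invented for illustration; for brevity it scores on the same trials, without cross-validation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 40, 5, 6
X = rng.normal(size=(n_epochs, n_channels, n_times))
y = rng.integers(0, 2, size=n_epochs)
X[y == 1, :, 3:] += 1.0   # class signal appears from time index 3 onward

scores = np.empty((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression().fit(X[:, :, t_train], y)
    for t_test in range(n_times):
        # One row of the matrix: the t_train classifier tested at all times.
        scores[t_train, t_test] = clf.score(X[:, :, t_test], y)
```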
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PCAs
Step2: Standardizing
Step3: 1 - Eigendecomposition - Computing Eigenvectors and Eigenvalues
Step4: eigendecomposition on the covariance matrix
Step5: Correlation Matrix
Step6: Eigendecomposition of the raw data based on the correlation matrix
Step7: 2 - Selecting Principal Components
Step8: Projection Matrix
Step9: 3 - Projection Onto the New Feature Space
Step10: import mca
Step11: TODO
|
<ASSISTANT_TASK:>
Python Code:
%run "../Functions/1. Google form analysis.ipynb"
binarized = getAllBinarized()
score = np.dot(binarized,np.ones(len(binarized.columns)))
dimensions = binarized.shape[1]
dimensions
binarized['class'] = 'default'
# split data table into data X and class labels y
X = binarized.iloc[:,0:dimensions].values
y = binarized.iloc[:,dimensions].values
from sklearn.preprocessing import StandardScaler
X_std = StandardScaler().fit_transform(X)
mean_vec = np.mean(X_std, axis=0)
cov_mat = (X_std - mean_vec).T.dot((X_std - mean_vec)) / (X_std.shape[0]-1)
print('Covariance matrix \n%s' %cov_mat)
print('NumPy covariance matrix: \n%s' %np.cov(X_std.T))
cov_mat = np.cov(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
#print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
cor_mat1 = np.corrcoef(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cor_mat1)
#print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
u,s,v = np.linalg.svd(X_std.T)
s
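As a sanity check on synthetic data (independent of the survey data above), the squared singular values from the SVD route coincide with the eigenvalues from the eigendecomposition route — which is why both give the same principal components:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize like X_std

_, s, _ = np.linalg.svd(X.T)
eig_vals = np.linalg.eigvalsh(X.T @ X)        # symmetric, so real eigenvalues

match = np.allclose(np.sort(s**2), np.sort(eig_vals))
```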
for ev in eig_vecs:
np.testing.assert_array_almost_equal(1.0, np.linalg.norm(ev))
print('Everything ok!')
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort()
eig_pairs.reverse()
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:')
for i in eig_pairs:
print(i[0])
tot = sum(eig_vals)
var_exp = [(i / tot)*100 for i in sorted(eig_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
plt.bar(range(dimensions), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(dimensions), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
var_exp[:5]
cum_var_exp[:5]
matrix_w = np.hstack((eig_pairs[0][1].reshape(dimensions,1),
eig_pairs[1][1].reshape(dimensions,1)))
print('Matrix W:\n', matrix_w)
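A compact, self-contained rehearsal of the whole projection step on synthetic data (the seed, sizes, and injected correlation are invented): standardize, eigendecompose the correlation matrix, keep the top two eigenvectors, and confirm the projected components are uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
X[:, 1] += 0.8 * X[:, 0]                       # inject correlation
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

eig_vals, eig_vecs = np.linalg.eigh(np.corrcoef(X_std.T))
order = np.argsort(eig_vals)[::-1]             # sort eigenvalues descending
W = eig_vecs[:, order[:2]]                     # (5, 2) projection matrix
Y = X_std @ W

cross_corr = np.corrcoef(Y.T)[0, 1]
```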
gform.columns
colors = ('blue','red','green','magenta','cyan','purple','yellow','black','white')
len(colors)
Y = X_std.dot(matrix_w)
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
ax = plt.subplot(111)
plt.scatter(Y[:, 0], Y[:, 1])
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title("base PCA")
plt.show()
# classNames is a list of class names; a default class name may be appended
def classifyAndPlot(classNames, classes, title = '', rainbow = False):
defaultClassName = ''
sampleSize = 0
for classIndex in range(0, len(classes)):
sampleSize += len(classes[classIndex])
if(sampleSize < gform.shape[0]):
if(len(classNames) == len(classes) + 1):
defaultClassName = classNames[-1]
else:
defaultClassName = 'other'
classNames.append(defaultClassName)
for labelIndex in binarized.index:
i = int(labelIndex[len('corrections'):])
isUserSet = False
for classIndex in range(0, len(classes)):
if(gform.iloc[i][localplayerguidkey] in classes[classIndex].values):
binarized.loc[labelIndex,'class'] = classNames[classIndex]
isUserSet = True
if not isUserSet:
if not (defaultClassName in classNames):
print("unexpected error: check the exhaustiveness of the provided classes")
binarized.loc[labelIndex,'class'] = defaultClassName
y = binarized.iloc[:,dimensions].values
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
ax = plt.subplot(111)
colors = ('blue','red','green','magenta','cyan','purple','yellow','black','white')
if (rainbow or len(classNames) > len(colors)):
colors = plt.cm.rainbow(np.linspace(1, 0, len(classNames)))
colors = colors[:len(classNames)]
for lab, col in zip(classNames,colors):
plt.scatter(Y[y==lab, 0],
Y[y==lab, 1],
label=lab,
c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
# source https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
if(len(title) > 0):
plt.title(title)
plt.show()
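The core of `classifyAndPlot` is the labeling pass: each respondent gets the name of the first class containing their ID, and everyone else gets the default label. That logic in isolation, with toy IDs (hypothetical, not real survey GUIDs):

```python
class_names = ['guessed', 'did not']
classes = [{'id1', 'id3'}]        # one membership set per named class
default_class = class_names[-1]   # plays the role of defaultClassName

def label(user_id):
    # zip stops at the shorter sequence, so the trailing default name is skipped
    for name, members in zip(class_names, classes):
        if user_id in members:
            return name
    return default_class

print([label(u) for u in ['id1', 'id2', 'id3']])  # ['guessed', 'did not', 'guessed']
```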
answered = binarized[binarized['Guess: the bacterium would glow yellow...'] == 1]
indices = answered.index.map(lambda label: int(label[len('corrections'):]))
surveys = gform.iloc[indices][localplayerguidkey]
#classifyAndPlot(['guessed', 'did not'], [surveys])
title = 'test title'
rainbow = True
alreadyDefaultClassName = True
classNames = ['guessed', 'did not']
classes = [surveys]
# classNames is a tuple
#def classifyAndPlot(classNames, classes, title = '', rainbow = False):
defaultClassName = ''
sampleSize = 0
for classIndex in range(0, len(classes)):
sampleSize += len(classes[classIndex])
if(sampleSize < gform.shape[0]):
if(len(classNames) == len(classes) + 1):
defaultClassName = classNames[-1]
else:
defaultClassName = 'other'
classNames.append(defaultClassName)
for labelIndex in binarized.index:
i = int(labelIndex[len('corrections'):])
isUserSet = False
for classIndex in range(0, len(classes)):
if(gform.iloc[i][localplayerguidkey] in classes[classIndex].values):
binarized.loc[labelIndex,'class'] = classNames[classIndex]
isUserSet = True
if not isUserSet:
if not (defaultClassName in classNames):
print("unexpected error: check the exhaustiveness of the provided classes")
binarized.loc[labelIndex,'class'] = defaultClassName
y = binarized.iloc[:,dimensions].values
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
ax = plt.subplot(111)
colors = ('blue','red','green','magenta','cyan','purple','yellow','black','white')
if (rainbow or len(classNames) > len(colors)):
colors = plt.cm.rainbow(np.linspace(1, 0, len(classNames)))
colors = colors[:len(classNames)]
for lab, col in zip(classNames,colors):
plt.scatter(Y[y==lab, 0],
Y[y==lab, 1],
label=lab,
c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
# source https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
if(len(title) > 0):
plt.title(title)
plt.show()
answered = binarized[binarized['Guess: the bacterium would glow yellow...'] == 1]
indices = answered.index.map(lambda label: int(label[len('corrections'):]))
surveys = gform.iloc[indices][localplayerguidkey]
classifyAndPlot(['guessed', 'did not'], [surveys])
classifyAndPlot(['biologist', 'other'], [getSurveysOfBiologists(gform, True)[localplayerguidkey]], title = 'biologists and non-biologists')
classifyAndPlot(['gamer', 'other'], [getSurveysOfGamers(gform, True)[localplayerguidkey]], title = 'gamers and non-gamers')
classNames = []
classes = []
for answer in gform['Are you interested in biology?'].value_counts().index:
classNames.append(answer)
classes.append(gform[gform['Are you interested in biology?'] == answer][localplayerguidkey])
classNames.append('other')
classifyAndPlot(classNames, classes, rainbow = True, title = 'interest in biology')
#np.plot(score)
classNames = []
classes = []
for thisScore in np.unique(score):
classNames.append(thisScore)
index = np.where(score == thisScore)[0]
classes.append( gform.loc[index][localplayerguidkey])
classifyAndPlot(classNames, classes, rainbow = True, title = 'score')
classNames = []
classes = []
question = 'How old are you?'
for answer in np.sort(gform[question].unique()):
classNames.append(answer)
classes.append(gform[gform[question] == answer][localplayerguidkey])
classifyAndPlot(classNames, classes, rainbow = True, title = 'age')
gform.columns[:5]
# questions to avoid:
#0 Timestamp
#3 Age
#40 Remarks
#41 ID
from itertools import chain
questionRange = chain(range(1,3), range(4,40), range(42,44))
for questionIndex in questionRange:
question = gform.columns[questionIndex]
classNames = []
classes = []
for answer in gform[question].value_counts().index:
classNames.append(answer)
classes.append(gform[gform[question] == answer][localplayerguidkey])
classifyAndPlot(classNames, classes, title = question, rainbow = False)
eig_vals
eig_pairs[0][1]  # first principal component (eigenvectors are the *columns* of eig_vecs)
maxComponentIndex = np.argmax(abs(eig_pairs[0][1]))
binarized.columns[maxComponentIndex]
sum(eig_pairs[0][1]*eig_pairs[0][1])
eig_pairs[0][1]
sortedIndices = []
descendingWeights = np.sort(abs(eig_pairs[0][1]))[::-1]
for sortedComponent in descendingWeights:
    sortedIndices.append(np.where(abs(eig_pairs[0][1]) == sortedComponent)[0][0])
sortedQuestions0 = pd.DataFrame(index = descendingWeights, data = binarized.columns[sortedIndices])
sortedQuestions0
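The descending sort of absolute weights, together with the indices that go with them, can be expressed without the explicit search loop; sorting indices directly also avoids the ambiguity of `np.where` when two weights happen to be equal. A pure-Python sketch with hypothetical component weights:

```python
weights = [0.1, -0.7, 0.3, 0.5]  # hypothetical component weights

# pair each index with |weight| and sort by weight, descending
order = sorted(range(len(weights)), key=lambda i: abs(weights[i]), reverse=True)
descending = [abs(weights[i]) for i in order]
print(order, descending)  # [1, 3, 2, 0] [0.7, 0.5, 0.3, 0.1]
```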
def accessFirst(a):
return a[0]
from matplotlib import cm  # needed for the colormap calls just below
sortedQuestionsLastIndex = 10
array1 = np.arange(sortedQuestionsLastIndex+1.)/(sortedQuestionsLastIndex + 1.)
sortedQuestionsLastIndex+1,\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Accent(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Dark2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Paired(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Pastel1(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Pastel2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set1(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set3(array1))))
from matplotlib import cm
def displayQuestionsContributions(\
sortedQuestions,\
title = "Contributions of questions to component",\
sortedQuestionsLastIndex = 10\
):
colors=cm.Set3(np.arange(sortedQuestionsLastIndex+1.)/(sortedQuestionsLastIndex + 1.))
sortedQuestionsLabelsArray = np.append(sortedQuestions.values.flatten()[:sortedQuestionsLastIndex], 'others')
sortedQuestionsValuesArray = np.append(sortedQuestions.index[:sortedQuestionsLastIndex], sum(sortedQuestions.index[sortedQuestionsLastIndex:]))
fig1, ax1 = plt.subplots()
ax1.pie(sortedQuestionsValuesArray, labels=sortedQuestionsLabelsArray, autopct='%1.1f%%', startangle=100, colors = colors)
ax1.axis('equal')
# cf https://matplotlib.org/users/customizing.html
plt.rcParams['patch.linewidth'] = 0
plt.rcParams['text.color'] = '#2b2b2b'
plt.title(title)
plt.tight_layout()
plt.show()
displayQuestionsContributions(sortedQuestions0, sortedQuestionsLastIndex = 10, title = 'Contributions of questions to component 1')
sum(sortedQuestions0.index**2)
sortedIndices = []
descendingWeights = np.sort(abs(eig_pairs[1][1]))[::-1]  # second principal component (column of eig_vecs)
for sortedComponent in descendingWeights:
    sortedIndices.append(np.where(abs(eig_pairs[1][1]) == sortedComponent)[0][0])
sortedQuestions1 = pd.DataFrame(index = descendingWeights, data = binarized.columns[sortedIndices])
sortedQuestions1
displayQuestionsContributions(sortedQuestions1, sortedQuestionsLastIndex = 10, title = 'Contributions of questions to component 2')
sum(sortedQuestions1.index**2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Better ML Engineering with ML Metadata
Step2: Install and import TFX
Step3: Import packages
Step4: Check the TFX, and MLMD versions.
Step5: Download the dataset
Step6: Create an InteractiveContext
Step7: Construct the TFX Pipeline
Step8: Instantiate and run the StatisticsGen Component
Step9: Instantiate and run the SchemaGen Component
Step10: Instantiate and run the Trainer Component
Step11: Run the Trainer component.
Step12: Evaluate and push the model
Step13: Running the TFX pipeline populates the MLMD Database. In the next section, you use the MLMD API to query this database for metadata information.
Step14: Create some helper functions to view the data from the MD store.
Step15: First, query the MD store for a list of all its stored ArtifactTypes.
Step16: Next, query all PushedModel artifacts.
Step17: Query the MD store for the latest pushed model. This tutorial has only one pushed model.
Step18: One of the first steps in debugging a pushed model is to look at which trained model is pushed and to see which training data is used to train that model.
Step19: Query the parent artifacts for the pushed model.
Step20: Query the properties for the model.
Step21: Query the upstream artifacts for the model.
Step22: Get the training data the model trained with.
Step23: Now that you have the training data that the model trained with, query the database again to find the training step (execution). Query the MD store for a list of the registered execution types.
Step24: The training step is the ExecutionType named tfx.components.trainer.component.Trainer. Traverse the MD store to get the trainer run that corresponds to the pushed model.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import colab
!pip install --upgrade pip
except:
pass
!pip install -q -U tfx
import os
import tempfile
import urllib
import pandas as pd
import tensorflow_model_analysis as tfma
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import ml_metadata as mlmd
print('MLMD version: {}'.format(mlmd.__version__))
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "penguins_processed.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
interactive_context = InteractiveContext()
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
interactive_context.run(example_gen)
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
interactive_context.run(statistics_gen)
infer_schema = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True)
interactive_context.run(infer_schema)
# Define the module file for the Trainer component
trainer_module_file = 'penguin_trainer.py'
%%writefile {trainer_module_file}
# Define the training algorithm for the Trainer module file
import os
from typing import List, Text
import tensorflow as tf
from tensorflow import keras
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# Features used for classification - culmen length and depth, flipper length,
# body mass, and species.
_LABEL_KEY = 'species'
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema, batch_size: int) -> tf.data.Dataset:
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY), schema).repeat()
def _build_keras_model():
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
d = keras.layers.Dense(8, activation='relu')(d)
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
return model
def run_fn(fn_args: tfx.components.FnArgs):
schema = schema_pb2.Schema()
tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema)
train_dataset = _input_fn(
fn_args.train_files, fn_args.data_accessor, schema, batch_size=10)
eval_dataset = _input_fn(
fn_args.eval_files, fn_args.data_accessor, schema, batch_size=10)
model = _build_keras_model()
model.fit(
train_dataset,
epochs=int(fn_args.train_steps / 20),
steps_per_epoch=20,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
model.save(fn_args.serving_model_dir, save_format='tf')
trainer = tfx.components.Trainer(
module_file=os.path.abspath(trainer_module_file),
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=50))
interactive_context.run(trainer)
_serving_model_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/penguins_classification')
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(label_key='species', signature_name='serving_default')
],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(
class_name='SparseCategoricalAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.6})))
])
],
slicing_specs=[tfma.SlicingSpec()])
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
schema=infer_schema.outputs['schema'],
eval_config=eval_config)
interactive_context.run(evaluator)
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
interactive_context.run(pusher)
connection_config = interactive_context.metadata_connection_config
store = mlmd.MetadataStore(connection_config)
# All TFX artifacts are stored in the base directory
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]
def display_types(types):
# Helper function to render dataframes for the artifact and execution types
table = {'id': [], 'name': []}
for a_type in types:
table['id'].append(a_type.id)
table['name'].append(a_type.name)
return pd.DataFrame(data=table)
def display_artifacts(store, artifacts):
# Helper function to render dataframes for the input artifacts
table = {'artifact id': [], 'type': [], 'uri': []}
for a in artifacts:
table['artifact id'].append(a.id)
artifact_type = store.get_artifact_types_by_id([a.type_id])[0]
table['type'].append(artifact_type.name)
table['uri'].append(a.uri.replace(base_dir, './'))
return pd.DataFrame(data=table)
def display_properties(store, node):
# Helper function to render dataframes for artifact and execution properties
table = {'property': [], 'value': []}
for k, v in node.properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
for k, v in node.custom_properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
return pd.DataFrame(data=table)
display_types(store.get_artifact_types())
pushed_models = store.get_artifacts_by_type("PushedModel")
display_artifacts(store, pushed_models)
pushed_model = pushed_models[-1]
display_properties(store, pushed_model)
def get_one_hop_parent_artifacts(store, artifacts):
# Get a list of artifacts within a 1-hop of the artifacts of interest
artifact_ids = [artifact.id for artifact in artifacts]
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids(artifact_ids)
if event.type == mlmd.proto.Event.OUTPUT)
artifacts_ids = set(
event.artifact_id
for event in store.get_events_by_execution_ids(executions_ids)
if event.type == mlmd.proto.Event.INPUT)
return [artifact for artifact in store.get_artifacts_by_id(artifacts_ids)]
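The one-hop traversal walks artifact → producing execution → that execution's inputs. The same idea can be illustrated with a toy event table standing in for the MLMD store (hypothetical IDs, not the real `MetadataStore` API):

```python
# each tuple is (artifact_id, execution_id, kind)
events = [
    (1, 10, 'INPUT'),   # artifact 1 was read by execution 10
    (2, 10, 'OUTPUT'),  # execution 10 produced artifact 2
    (2, 11, 'INPUT'),
    (3, 11, 'OUTPUT'),  # execution 11 produced artifact 3
]

def one_hop_parents(artifact_id):
    # executions that produced this artifact...
    producers = {e for a, e, k in events if a == artifact_id and k == 'OUTPUT'}
    # ...and everything those executions consumed
    return sorted({a for a, e, k in events if e in producers and k == 'INPUT'})

print(one_hop_parents(3), one_hop_parents(2))  # [2] [1]
```

Chaining two hops (3 → 2 → 1) mirrors the PushedModel → Model → Examples lineage walk performed in the cells below.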
parent_artifacts = get_one_hop_parent_artifacts(store, [pushed_model])
display_artifacts(store, parent_artifacts)
exported_model = parent_artifacts[0]
display_properties(store, exported_model)
model_parents = get_one_hop_parent_artifacts(store, [exported_model])
display_artifacts(store, model_parents)
used_data = model_parents[0]
display_properties(store, used_data)
display_types(store.get_execution_types())
def find_producer_execution(store, artifact):
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids([artifact.id])
if event.type == mlmd.proto.Event.OUTPUT)
return store.get_executions_by_id(executions_ids)[0]
trainer = find_producer_execution(store, exported_model)
display_properties(store, trainer)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So be careful with the names given to formal parameters; it can also be useful to guard against mistakes by using the try statement, which provides exception handling
Step2: Strings and Lists
Step3: Looking for a more thorough solution, we can include a conditional test
Step4: We can also use a statement such as try; this is what is called a catch
Step5: Strings are useful, first of all, for reworking the presentation of a table
Step6: in this example we will make the call
Step7: Classic mistakes with lists
Step8: Visualizing a physical phenomenon
|
<ASSISTANT_TASK:>
Python Code:
def tronquer_1( l ):
return l[1:]
l=[1,2,3]
tronquer_1(l)
def tronquer_liste( ma_liste ):
try:
return ma_liste[1:]
except TypeError:
print("Cette fonction n'accepte que des listes ou des chaรฎnes de caractรจres")
l=[1,2,3]
tronquer_liste(l)
def extraire_avant_premier(s):
return s[ : str.find(s,",") ]
print( extraire_avant_premier("chat") ) ## cha
def extraire_avant_premier(s):
if str.find(s,",") == -1:
return s
return s[ : str.find(s,",") ]
print( extraire_avant_premier("chat") ) ## Affiche chat
def extraire_avant_premier(s):
    try:
        # str.find returns -1 when there is no comma, so 1/(1 + find)
        # raises ZeroDivisionError exactly in the "no comma" case
        1/(1+str.find(s,","))
        return s[ : str.find(s,",") ]
    except ZeroDivisionError:
        return s
print( extraire_avant_premier("chat, chien") ) ## Prints chat
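All three variants implement "text up to the first comma, or the whole string when there is none". For comparison (not part of the original exercise), Python's `str.partition` gives this behaviour directly:

```python
def before_first_comma(s):
    head, sep, _rest = s.partition(',')  # sep == '' when no comma is present
    return head

print(before_first_comma('chat, chien'), before_first_comma('chat'))  # chat chat
```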
def completer_2(s,n):
return(s.ljust(n,' '))
Mon_Nom1 = 'DE SAINT-EXUPERY'
Mon_Nom2 = 'CAMUS'
#? Mon_Nom.ljust
print(Mon_Nom1.ljust(25,' '),'present !')
print(Mon_Nom2.ljust(25,' '),"c'est moi")
#Les Delistes
def tronquer_2(n):
if n== []:
return []
else:
del n[0]
return n
liste_1 = [1,2,3]
liste_2 = tronquer_2( liste_1 )
print( liste_1 ) ## affiche [2,3]
print( liste_2 ) ## affiche [2,3]
#Les Popistes
def tronquer_2(a):
if a == [] :
return []
a.pop(0)
return a
liste_1 = [1,2,3]
liste_2 = tronquer_2( liste_1 )
print( liste_1 ) ## affiche [2,3]
print( liste_2 ) ## affiche [2,3]
# -*- coding: utf-8 -*-
import numpy as np
# Constantes
my0=4*np.pi*1e-7; # permeabilite du vide
I0=-1; # intensitรฉ du courant
# le courant circule de gauche a droite
# Dimensions
d=25*1e-3 # Diametre de la spire (m)
r=d/2 # rayon de la spire (m)
segments=100 # discretisation de la spire
alpha = 2*np.pi/(segments-1) # discretisation de l'angle
# initialisation de la spire
x=[i*0 for i in range(segments)]
y=[r*np.sin(i*alpha) for i in range(segments)]
z=[-r*np.cos(i*alpha) for i in range(segments)]
# Definition du sens du positif du courant : gauche -> droite
# pour le calcul les longeurs sont exprimees en m
x_spire=np.array([x]);
y_spire=np.array([y]);
z_spire=np.array([z]);
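Before running the discretized Biot–Savart integration, it is worth knowing the analytic value at the loop's center, B = μ0·I/(2R), which the numerical result should approach on the axis. A quick check with the same constants (a standalone sketch; the current magnitude is taken as 1 A, the sign in I0 above only sets orientation):

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
I0 = 1.0                   # current magnitude (A)
r = 25e-3 / 2              # loop radius (m), same as d/2 above

# on-axis field at the centre of a circular loop: B = mu0 * I / (2 r)
B_center = mu0 * I0 / (2 * r)
print(B_center)  # ~5.03e-05 T
```

This is also the x = 0 value of the general on-axis formula μ0·I·r²/(2·(r² + x²)^{3/2}) used later for `bx_analytique`.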
%matplotlib inline
#%%%%%%%%%%%%%%% Affichage de la spire en 3D %%%%%%%%%%%%%%%%#
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import matplotlib.patches as patches
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6
#titre du graphe
plt.title ('Simulation numerique d une spire', color='b')
# spire
plt.plot(y_spire [0], z_spire[0], 'g.', linewidth=1, label='spire')
plt.plot(y_spire, z_spire, 'g.', linewidth=1)
# vecteur champ magnetique Bx
plt.plot(([0.003,0.003]),([0,0]),([-.05,0.05]), 'b-',label='axe de la spire',linewidth=1)
plt.plot(([0.003,0.003]),([0,0]),([-.01,0.02]), 'm-',label='champ magnetique',linewidth=1)
plt.plot([0.003],[0],[0.025], 'm', marker='^', linewidth=2)
ax.text(0.004,0,0,'Bx', color='m')
#courant
plt.plot([0],[0.0125],[0], 'r', marker='<')
ax.text(0,0.0125,0.01,'I', color='r')
#legende du graphe
plt.legend (loc='lower left', prop={'size':8})
plt.show()
#%%%%%%%%%% Calcul du champ magnetique d apres Biot et Savart
ndp=50 # Nombre de points
# limites x
xmin,xmax=-0.05,0.05
# limites y
ymin, ymax=-0.05, 0.05
# limites z
zmin, zmax=-0.05,0.05
dx=(xmax-xmin)/(ndp-1) #increment x
dy=(ymax-ymin)/(ndp-1) # increment y
dz=(zmax-zmin)/(ndp-1) # increment z
#%%%%%%%%%%%%%%% Calcul magnetostatique %%%%%%%%%%%%%%%%%%%%%%%%%%%#
bxf=np.zeros(ndp) # initialisation de la composante Bx du champ
byf=np.zeros(ndp) # initialisation de la composante By du champ
bzf=np.zeros(ndp) # initialisation de la composante Bz du champ
I0f1=my0*I0/(4*np.pi) # Magnetostatique (on multiplie le courant
#I0 par mu0/(4.pi))
# Integration du champ induit en un point de l'axe bleu
# par le courant circulant sur chaque segment de la spire verte
bfx,bfy,bfz=0,0,0
nseg=np.size(z_spire)-1
for i in range(ndp):
#Initialisation des positions
xM=(xmin+i*dx)
yM,zM=0,0
#Initialisation des champs locaux
bfx,bfy,bfz=0,0,0
R=np.array([xM,yM,zM])
# vecteur position sur
# le point qui doit etre calcule
# en integrant la contribution
# de tous les courants le long
# de la boucle verte
for wseg in range(nseg):
xs=x_spire[0][wseg]
ys=y_spire[0][wseg]
zs=z_spire[0][wseg]
Rs=np.array([xs, ys, zs])
drsx=(x_spire[0][wseg+1]-x_spire[0][wseg])
drsy=(y_spire[0][wseg+1]-y_spire[0][wseg])
drsz=(z_spire[0][wseg+1]-z_spire[0][wseg])
drs=np.array([drsx, drsy, drsz])
#direction du courant
Delta_R= Rs - R
#vecteur entre l'รฉlement de spire et
#le point oรน est calcul le champ
Delta_Rdist=np.sqrt(Delta_R[0]**2+Delta_R[1]**2+Delta_R[2]**2)
#Delta_Rdis2=Delta_Rdist**2
Delta_Rdis3=Delta_Rdist**3
b2=1.0/Delta_Rdis3
b12=I0f1*b2*(-1)
# Produit vectoriel
Delta_Rxdrs_x=Delta_R[1]*drsz-Delta_R[2]*drsy
Delta_Rxdrs_y=Delta_R[2]*drsx-Delta_R[0]*drsz
Delta_Rxdrs_z=Delta_R[0]*drsy-Delta_R[1]*drsx
#Intรฉgration
bfx=bfx+b12*Delta_Rxdrs_x
bfy=bfy+b12*Delta_Rxdrs_y
bfz=bfz+b12*Delta_Rxdrs_z
# Il faut utiliser un champ defini comme 3 listes :
# une liste pour chaque abscisse
bxf[i]+=bfx
byf[i]+=bfy
bzf[i]+=bfz
#%%%%%%%%%%% Modele Theorique %%%%%%%%%%%%%%%%%%#
#r=d/2; # rayon de la spire en mm
#r=r*1e-3; # rayon de la spire en m
#valeur absolue de la composante Bx du champ magnetique en fonction de la position sur l axe de la spire
bx_analytique=[abs(my0*I0)*(r)**2/(2*((r)**2+(x)**2)**(3/2)) for x in np.linspace(xmin, xmax, ndp, endpoint = True)]
#%%%%%%%%%%% Visualisation %%%%%%%%%%%%%%%%%%%%%%%#
#trace de la valeur absolue de la composante Bx du champ magnetique en fonction de x
plt.plot(np.linspace(xmin, xmax, ndp, endpoint = True) , bxf,'bo')
plt.plot(np.linspace(xmin, xmax, ndp, endpoint = True) , bx_analytique,'r-')
plt.title ('Champ magnetique Bx le long de l axe Ox de la spire', color='b')
plt.xlabel('position sur l axe Ox', color='b')
plt.ylabel ('Bx', color='b')
#etude de la composante bx1 du champ magnetique en differents points du plan xOy
#initialisation du champ magnรฉtique bx1
bx1=0
by1=0
#Initialisation de l energie
Em=0
Emf=0
#Indice pour les listes
l=0
#ndp-1 intervalles
ndp1=41
#### pour verification de Bx sur l'axe Ox avec yM,zM=0,0
##BD seul cas ou la comparaison analytique est possible
####
#initialisation des listes des coordonnees des points
s = (ndp1,ndp1)
xM1f=np.zeros(s)
yM1f=np.zeros(s)
#zM1f=np.zeros(ndp1*ndp1)
u = np.zeros(s)
X = np.zeros(s)
Y = np.zeros(s)
#initialisation de la liste des valeurs de la composante Bx1 en chaque point
bx1f=np.zeros(s)
#initialisation de la liste des valeurs de la composante By1 en chaque point
by1f=np.zeros(s)
#initialisation de la liste des valeurs de la composante By1 en chaque point
bz1f=np.zeros(s)
#calcul exact du coefficient b0 de l'integrale servant a calculer Bx
#selon la formule: b0=mu0*I0*r/(2*pi)
b0=2*10**(-7)*I0*r
#coordonnees des points
xM1f = np.linspace(-.03,.03,ndp1)
yM1f = np.linspace(-.03,.03,ndp1)
(X,Y) = np.meshgrid(xM1f,yM1f)
#calcul de Bx1 en des points regulierement espaces du plan xoy autour de la spire
##BD Physiquement ce calcul doit s'appuyer sur un modele numerique car l'equation analytique est injustifiable des que l'on est excentre
#Initialisation des positions
xM, yM,zM=0,0,0
for i in range(ndp1):
#Initialisation des champs locaux
bfx,bfy,bfz=0,0,0
for j in range(ndp1):
xM=X[i][j]
yM=Y[i][j]
#Initialisation des champs locaux
bfx,bfy,bfz=0,0,0
R=np.array([xM,yM,zM])
# vecteur position sur
# le point qui doit etre calcule
# en integrant la contribution
# de tous les courants le long
# de la boucle verte
for wseg in range(nseg):
xs=x_spire[0][wseg]
ys=y_spire[0][wseg]
zs=z_spire[0][wseg]
Rs=np.array([xs, ys, zs])
drsx=(x_spire[0][wseg+1]-x_spire[0][wseg])
drsy=(y_spire[0][wseg+1]-y_spire[0][wseg])
drsz=(z_spire[0][wseg+1]-z_spire[0][wseg])
drs=np.array([drsx, drsy, drsz])
#direction du courant
Delta_R= Rs - R
#vecteur entre l'รฉlement de spire et
#le point oรน est calcul le champ
Delta_Rdist=np.sqrt(Delta_R[0]**2+Delta_R[1]**2+Delta_R[2]**2)
#Delta_Rdis2=Delta_Rdist**2
Delta_Rdis3=Delta_Rdist**3
b2=1.0/Delta_Rdis3
b12=I0f1*b2*(-1)
# Produit vectoriel
Delta_Rxdrs_x=Delta_R[1]*drsz-Delta_R[2]*drsy
Delta_Rxdrs_y=Delta_R[2]*drsx-Delta_R[0]*drsz
Delta_Rxdrs_z=Delta_R[0]*drsy-Delta_R[1]*drsx
#Intรฉgration
bfx=bfx+b12*Delta_Rxdrs_x
bfy=bfy+b12*Delta_Rxdrs_y
bfz=bfz+b12*Delta_Rxdrs_z
# Il faut utiliser un champ defini comme 3 listes :
# une liste pour chaque abscisse
bx1f[i][j]+=bfx
by1f[i][j]+=bfy
bz1f[i][j]+=bfz
#tracรฉ des vecteurs Bx en chaque point
#tracรฉ des vecteurs B dans le plan xOy pour vรฉrifier les lignes de champ magnรฉtique
q = plt.quiver(X,Y,bx1f,by1f,color='r')
plt.title ('Cartographie des composantes Bx,By dans le plan xOy',color='b')
plt.xlabel ('axe y',color='b')
plt.ylabel ('axe x',color='b')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and Augment the Data
Step2: Visualize a Batch of Training Data
Step3: View an Image in More Detail
Step4: Define the Network Architecture
Step5: Specify Loss Function and Optimizer
Step6: Train the Network
Step7: Load the Model with the Lowest Validation Loss
Step8: Test the Trained Network
Step9: Visualize Sample Test Results
|
<ASSISTANT_TASK:>
Python Code:
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
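The validation split above is pure index bookkeeping: shuffle all indices, then cut at floor(valid_size * N). In isolation, with a toy N and the standard library instead of NumPy (a sketch, not the loader code):

```python
import math
import random

num_train = 100            # toy size; the real CIFAR-10 training set has 50000
valid_size = 0.2

indices = list(range(num_train))
random.Random(0).shuffle(indices)                 # seeded stand-in for np.random.shuffle
split = int(math.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]

print(len(train_idx), len(valid_idx))             # 80 20
```

The two index lists are disjoint and together cover every sample, which is exactly what `SubsetRandomSampler` relies on.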
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # use next(); the .next() method is not available in Python 3
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # subplot grid arguments must be integers
imshow(images[idx])
ax.set_title(classes[labels[idx]])
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
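The `64 * 4 * 4` feeding the first linear layer comes from tracking the spatial size through the three conv+pool stages: every 3×3 convolution with padding=1 preserves H×W, and every 2×2 max-pool halves it. The arithmetic on its own (no PyTorch needed):

```python
def conv_out(n, kernel=3, padding=1, stride=1):
    # standard convolution output-size formula
    return (n + 2 * padding - kernel) // stride + 1

size = 32                      # CIFAR-10 images are 32x32
for _ in range(3):             # three conv + max-pool stages
    size = conv_out(size)      # 3x3 conv with padding 1 keeps the size
    size = size // 2           # 2x2 max-pool halves it
print(size, 64 * size * size)  # 4 1024
```

So the flattened tensor has 64 channels × 4 × 4 spatial positions = 1024 features, matching `nn.Linear(64 * 4 * 4, 500)`.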
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
model.load_state_dict(torch.load('model_augmented.pt'))
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(len(target)):  # use len(target): the last batch can be smaller than batch_size
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)  # the .next() method was removed from DataLoader iterators in newer PyTorch
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # integer division: add_subplot requires ints
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the image above is captured and displayed by Jekyll.
Step2: You can also remove only the code so that images and other output still show up.
Step3: And here we'll only display a Pandas DataFrame. Again, this was generated with Python code
Step4: You can configure the text that Textbooks with Jupyter uses for this by modifying your site's _config.yml file.
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import rcParams, cycler
import matplotlib.pyplot as plt
import numpy as np
plt.ion()
# Fixing random state for reproducibility
np.random.seed(19680801)
N = 10
data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)]
data = np.array(data).T
cmap = plt.cm.coolwarm
rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N)))
from matplotlib.lines import Line2D
custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4),
Line2D([0], [0], color=cmap(.5), lw=4),
Line2D([0], [0], color=cmap(1.), lw=4)]
fig, ax = plt.subplots(figsize=(10, 5))
lines = ax.plot(data)
ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']);
thisvariable = "none of this should show up in the textbook"
fig, ax = plt.subplots()
x = np.random.randn(100)
y = np.random.randn(100)
ax.scatter(x, y, s=np.abs(x*100), c=x, cmap=plt.cm.coolwarm)
ax.text(0, .5, thisvariable, fontsize=20, transform=ax.transAxes)
ax.set_axis_off()
# NO CODE
thisvariable = "this plot *will* show up in the textbook."
fig, ax = plt.subplots()
x = np.random.randn(100)
y = np.random.randn(100)
ax.scatter(x, y, s=np.abs(x*100), c=x, cmap=plt.cm.coolwarm)
ax.text(0, .5, thisvariable, fontsize=20, transform=ax.transAxes)
ax.set_axis_off()
# NO CODE
import pandas as pd
pd.DataFrame([['hi', 'there'], ['this', 'is'], ['a', 'DataFrame']], columns=['Word A', 'Word B'])
import folium
m = folium.Map(
location=[45.372, -121.6972],
zoom_start=12,
tiles='Stamen Terrain'
)
folium.Marker(
location=[45.3288, -121.6625],
popup='Mt. Hood Meadows',
icon=folium.Icon(icon='cloud')
).add_to(m)
folium.Marker(
location=[45.3311, -121.7113],
popup='Timberline Lodge',
icon=folium.Icon(color='green')
).add_to(m)
folium.Marker(
location=[45.3300, -121.6823],
popup='Some Other Location',
icon=folium.Icon(color='red', icon='info-sign')
).add_to(m)
m
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we have a map of the world, with bright blue state borders. This isn't exactly what we want.
Step2: Next, we'll plot some points on the map. Los Angeles and New York.
Step3: Now we've plotted two points on the map. The 'c*' stands for cyan star.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
m = Basemap(projection='mill',
llcrnrlat=-90,
llcrnrlon=-180,
urcrnrlat=90,
urcrnrlon=180,
resolution='l')
m.drawcoastlines()
m.drawcountries(linewidth=2)
m.drawstates(color='b')
m.bluemarble()
m = Basemap(projection='mill',
llcrnrlat=24.9493,
llcrnrlon=-125.0011,
urcrnrlat=49.5904,
urcrnrlon=-66.9326,
resolution='l')
m.drawcoastlines()
m.drawcountries(linewidth=1.5)
m.drawstates(color='black')
water_color = '#C1D2D2'
continent_color = '#EBECEC'
m = Basemap(projection='mill',
llcrnrlat=24.5,
llcrnrlon=-125.0011,
urcrnrlat=49.5904,
urcrnrlon=-66.9326,
resolution='l')
m.drawcoastlines()
m.drawcountries(linewidth=1.5)
m.drawstates(color='black')
m.fillcontinents(color=continent_color, lake_color=water_color)
m.drawmapboundary(fill_color=water_color)
NYClat, NYClon = 40.7127, -74.0059
xpt, ypt = m(NYClon, NYClat)
m.plot(xpt, ypt, 'c*', markersize=15, alpha=0.75)
LAlat, LAlon = 34.05, -118.25
xpt, ypt = m(LAlon, LAlat)
m.plot(xpt, ypt, 'g^', markersize=15, alpha=0.75)
water_color = '#C1D2D2'
continent_color = '#EBECEC'
plt.figure(figsize=(12,6))
m = Basemap(projection='mill',
llcrnrlat=24.5,
llcrnrlon=-125.0011,
urcrnrlat=49.5904,
urcrnrlon=-66.9326,
resolution='l')
m.drawcoastlines()
m.drawcountries(linewidth=1)
m.drawstates(color='black')
m.fillcontinents(color=continent_color, lake_color=water_color)
m.drawmapboundary(fill_color=water_color)
NYClat, NYClon = 40.7127, -74.0059
# xpt, ypt = m(NYClon, NYClat)
# m.plot(xpt, ypt, 'c*', markersize=15, alpha=0.75)
LAlat, LAlon = 34.05, -118.25
# xpt, ypt = m(LAlon, LAlat)
# m.plot(xpt, ypt, 'g^', markersize=15, alpha=0.75)
xpt, ypt = m([NYClon, LAlon], [NYClat, LAlat])
m.plot(xpt, ypt, 'g^', markersize=15, alpha=0.75)
plt.savefig('us_test.png', bbox_inches='tight', pad_inches=0.0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
Step7: Problem
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
biases1 = tf.Variable(tf.zeros([num_hidden_nodes]))
weights2 = tf.Variable(
tf.truncated_normal([num_hidden_nodes, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
lay1_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
logits = tf.matmul(lay1_train, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
lay1_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1)
valid_prediction = tf.nn.softmax(tf.matmul(lay1_valid, weights2) + biases2)
lay1_test = tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1)
test_prediction = tf.nn.softmax(tf.matmul(lay1_test, weights2) + biases2)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make The Datasets
Step3: Because ScScore is trained on relative complexities, we want the X tensor in our dataset to have 3 dimensions (sample_id, molecule_id, features). The molecule_id dimension has size 2 because a sample is a pair of molecules. The label is 1 if the first molecule is more complex than the second molecule. The function create_dataset we introduce below pulls random pairs of SMILES strings out of a given list and ranks them according to this complexity measure.
Step4: With our complexity ranker in place we can now construct our dataset. Let's start by randomly splitting the list of molecules into training and test sets.
Step5: We'll featurize all our molecules with the ECFP fingerprint with chirality (matching the source paper), and will then construct our pairwise dataset using the function defined above.
Step6: Now that we have our dataset created, let's train a ScScoreModel on this dataset.
Step7: Model Performance
Step8: Let's now plot the length of the SMILES string of the molecule against the ScScore using matplotlib.
|
<ASSISTANT_TASK:>
Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='Raw', splitter=None)
molecules = datasets[0].X
from rdkit import Chem
import random
from deepchem.feat import CircularFingerprint
import numpy as np
def create_dataset(fingerprints, smiles_lens, ds_size=100000):
"""
fingerprints: list of np.array
fingerprints for molecules
smiles_lens: list of int
length of a molecule's SMILES string
returns:
dc.data.Dataset for input into ScScore Model
Dataset.X
shape is (sample_id, molecule_id, features)
Dataset.y
shape is (sample_id,)
value is 1 if the 0th index molecule is more complex
0 if the 1st index molecule is more complex
"""
X, y = [], []
all_data = list(zip(fingerprints, smiles_lens))
while len(y) < ds_size:
i1 = random.randrange(0, len(smiles_lens))
i2 = random.randrange(0, len(smiles_lens))
m1 = all_data[i1]
m2 = all_data[i2]
if m1[1] == m2[1]:
continue
if m1[1] > m2[1]:
y.append(1.0)
else:
y.append(0.0)
X.append([m1[0], m2[0]])
return dc.data.NumpyDataset(np.array(X), np.expand_dims(np.array(y), axis=1))
molecule_ds = dc.data.NumpyDataset(np.array(molecules))
splitter = dc.splits.RandomSplitter()
train_mols, test_mols = splitter.train_test_split(molecule_ds)
n_features = 1024
featurizer = dc.feat.CircularFingerprint(size=n_features, radius=2, chiral=True)
train_features = featurizer.featurize(train_mols.X)
train_smiles_len = [len(Chem.MolToSmiles(x)) for x in train_mols.X]
train_dataset = create_dataset(train_features, train_smiles_len)
model = dc.models.ScScoreModel(n_features=n_features)
model.fit(train_dataset, nb_epoch=20)
import matplotlib.pyplot as plt
%matplotlib inline
mol_scores = model.predict_mols(test_mols.X)
smiles_lengths = [len(Chem.MolToSmiles(x)) for x in test_mols.X]
plt.figure(figsize=(20,16))
plt.scatter(smiles_lengths, mol_scores)
plt.xlim(0,80)
plt.xlabel("SMILES length")
plt.ylabel("ScScore")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How to Get The Dataset
Step2: Get rid of any rows that have nans in them
Step3: Display the dtypes of your dset
Step4: Coerce the Date feature (which is currently a string object) into real date, and confirm by displaying the dtypes again. This might be a slow executing process...
Step5: Filter out the data so that it only contains samples that have a Date > '2011-01-01', using indexing. Then, in a new figure, plot the crime incidents, as well as a new K-Means run's centroids.
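A minimal pandas sketch of the date coercion and filtering described in Steps 4 and 5 (a hypothetical illustration: the column names follow the lab description, and the sample rows are made up):

```python
import pandas as pd

# Hypothetical toy frame standing in for the crime dataset
df = pd.DataFrame({
    'Date': ['2010-06-01', '2011-03-15', '2012-07-04'],
    'Longitude': [-87.6, -87.7, -87.8],
    'Latitude': [41.8, 41.9, 42.0],
})

# Coerce the string column into real datetimes (this is the slow step)
df.Date = pd.to_datetime(df.Date, errors='coerce')

# Keep only samples with Date > '2011-01-01' via boolean indexing
recent = df[df.Date > '2011-01-01']
```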
|
<ASSISTANT_TASK:>
Python Code:
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
# .. your code here ..
def doKMeans(df):
# Let's plot your data with a '.' marker, a 0.3 alpha at the Longitude,
# and Latitude locations in your dataset. Longitude = x, Latitude = y
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(df.Longitude, df.Latitude, marker='.', alpha=0.3)
# TODO: Filter `df` using indexing so it only contains Longitude and Latitude,
# since the remaining columns aren't really applicable for this lab:
#
# .. your code here ..
# TODO: Use K-Means to try and find seven cluster centers in this df.
# Be sure to name your kmeans model `model` so that the printing works.
#
# .. your code here ..
# Now we can print and plot the centroids:
centroids = model.cluster_centers_
print(centroids)
ax.scatter(centroids[:,0], centroids[:,1], marker='x', c='red', alpha=0.5, linewidths=3, s=169)
plt.show()
# Print & Plot your data
doKMeans(df)
# .. your code here ..
# Print & Plot your data
doKMeans(df)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Verify tables exist
Step2: Create the baseline model
Step3: REMINDER
Step4: Lab Task #2
Step5: Resource for an explanation of the Regression Metrics.
|
<ASSISTANT_TASK:>
Python Code:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
%%bigquery
CREATE OR REPLACE MODEL
babyweight.baseline_model
OPTIONS (
MODEL_TYPE=# TODO: Add model type,
INPUT_LABEL_COLS=[# TODO: label column name],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
# TODO: Add features and label
FROM
# TODO: Add train table
%%bigquery
-- Information from model training
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.baseline_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL # TODO: Add model name,
(
SELECT
# TODO: Add features and label
FROM
# TODO: Add eval table
))
%%bigquery
SELECT
# TODO: Select just the calculated RMSE
FROM
ML.EVALUATE(MODEL # TODO: Add model name,
(
SELECT
# TODO: Add features and label
FROM
# TODO: Add eval table
))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, I should be able to import $\delta$a$\delta$i
Step2: I have turned the 1D folded SFS's from realSFS into $\delta$d$\delta$i format by hand according to the description in section 3.1 of the manual. I have left out the masking line from the input file.
Step3: $\delta$a$\delta$i is detecting that the spectrum is folded (as given in the input file), but it is also automatically masking the 0th and 18th count category. This is a not a good behaviour.
Step4: Single population statistics
Step5: I have next added a masking line to the input file, setting it to '1' for the first position, i. e. the 0-count category.
Step6: $\delta$a$\delta$i is issuing the following message when executing the above command
Step7: I do not understand this warning from $\delta$a$\delta$i. The 18-count category is sensical for a folded spectrum with even sample size, so should not be masked. Anyway, I do not understand why $\delta$a$\delta$i is so reluctant to keep all positions, including the non-variable one.
Step8: The function that returns $\pi$ produces the same output with or without the last count category masked ?! I think that is because even if the last count class (966.62...) is masked, it is still included in the calculation of $\pi$. However, there is no obvious unmasking in the pi function. Strange!
Step9: This is the value of $\pi_{site}$ that I calculated previously and included in the first draft of the thesis.
Step10: This is the same as the output of dadi's pi function on the same SFS.
Step11: So, 1.6 million sites went into the ery spectrum.
Step12: Apart from the incorrect small sample size correction by $\delta$a$\delta$i in case of folded spectra ($n$ refers to sampled sequences, not individuals), Gillespie's formula leads to a much higher estimate of $\pi_{site}$ than Wakeley's. Why is that?
Step13: As can be seen from the insignificant difference (must be due to numerical inaccuracies) between the $\pi_{Wakeley}$ and the $\pi_{Gillespie}$ estimates, they are equivalent with the calculation for folded spectra given above as well as the correct small sample size correction. Beware
Step14: I think for now it would be best to import unfolded spectra from realSFS and fold them if necessary in dadi.
Step18: ML estimate of $\theta$ from 1D folded spectrum
Step19: The counts in each frequency class should be Poisson distributed with rate equal to $E[\eta_i]$ as given above. The lowest frequency class has the highest rate and therefore also the highest variance
Step20: The following function will take the file name of a file containing the flat 1D folded frequency spectrum of one population and plots it together with the best fitting neutral expectation.
Step22: Univariate function minimizers or 1D scalar minimisation
Step23: It would be interesting to know whether the cost function is convex or not.
Step24: Within the specified bounds (the observed $\theta$, i. e. derived from the data, cannot lie outside these bounds), the cost function is convex. This is therefore an easy optimisation problem. See here for more details.
Step25: The fitted values of $\theta$ are similar to the ones obtained above with the least_squares function. The estimates for ery deviate more than for par.
Step26: Wow! Sympy is a replacement for Mathematica. There is also Sage, which may include even more functionality.
Step28: Curve_fit is another function that can be used for optimization.
Step29: I am not sure whether these standard errors (perr) are correct. It may be that it is assumed that errors are normally distributed, which they are not exactly in this case. They should be close to Poisson distributed (see Fu1995), which should be fairly similar to normal with such high expected values as here.
Step30: There must be a way to get more than one bootstrap sample per call.
Step31: It would be good to get the 5% and 95% quantiles from the bootstrap samples of each frequency class and add those intervals to the plot of the observed frequency spectrum and the fitted neutral spectrum. This would require to find a quantile function and to find out how to add lines to a plot with matplotlib.
Step32: Marginalizing
Step33: Note, that these marginalised 1D SFS's are not identical to the 1D SFS estimated directly with realSFS. This is because, for the estimation of the 2D SFS, realSFS has only taken sites that had data from at least 9 individuals in each population (see assembly.sh, lines 1423 onwards).
Step34: These marginal unfolded spectra look similar in shape to the 1D folded spectra of each subspecies (see above).
Step35: $\delta$a$\delta$i's pi function seems to calculate the correct value of $\pi$ for this unfolded spectrum. However, it is worrying that $\pi$ from this marginal spectrum is about 20 times larger than the one calculated from the directly estimated 1D folded spectrum (see above the $\pi$ calculated from the folded 1D spectrum).
Step36: <font color="red">The sum over the marginalised 1D spectra should be the same as the sum over the 2D spectrum !</font>
Step37: So, during the marginalisation the masking of data in the fixed categories (0, 36) is the problem, producing incorrectly marginalised counts in those masked categories. This is shown in the following
Step38: This is very close to the estimate of $\pi$ derived from the folded 1D spectrum of ery! (see above)
Step39: This is also nicely close to the estimate of $\pi_{site}$ of par from its folded 1D spectrum.
Step40: Now, I am calculating Tajima's D from the ery marginal spectrum by hand in order to check whether $\delta$a$\delta$i is doing the right thing.
Step41: $\delta$a$\delta$i seems to do the right thing. Note, that the estimate of Tajima's D from this marginal spectrum of ery is slightly different from the estimate derived from the folded 1D spectrum of ery (see /data3/claudius/Big_Data/ANGSD/SFS/SFS.Rmd). The folded 1D spectrum resulted in a Tajima's D estimate of $\sim$0.05, i. e. a difference of almost 0.1. Again, the 2D spectrum is based on only those sites for which there were at least 9 individiuals with data in both populations, whereas the 1D folded spectrum of ery included all sites for which there were 9 ery individuals with data (see line 1571 onwards in assembly.sh).
Step42: My estimate from the folded 1D spectrum of par was -0.6142268 (see /data3/claudius/Big_Data/ANGSD/SFS/SFS.Rmd).
Step43: The 2D spectrum contains counts from 60k sites that are variable in par or ery or both.
Step44: This estimate of $F_{ST}$ according to Weir and Cockerham (1984) is well below the estimate of $\sim$0.3 from ANGSD according to Bhatia/Hudson (2013). Note, however, that this estimate showed a positive bias of around 0.025 in 100 permutations of population labels of individuals. Taking the positive bias into account, both estimates of $F_{ST}$ are quite similar.
Step45: So, this is how the 2D SFS would look like if ery and par were not genetically differentiated.
Step46: The $F_{ST}$ from the scrambled SFS is much lower than the $F_{ST}$ of the observed SFS. That should mean that there is significant population structure. However, the $F_{ST}$ from the scrambled SFS is not 0. I don't know why that is.
Step47: Plotting
Step48: The folded 2D spectrum is not a minor allele frequency spectrum as are the 1D folded spectra of ery and par. This is because an allele that is minor in one population can be the major allele in the other. What is not counted are the alleles that are major in both populations, i. e. the upper right corner.
Step49: Let's use the formula (1.2) from Wakeley2009 to fold the 1D spectrum manually
Step50: Here is how to flatten an array of arrays with a list comprehension
Step51: Set new mask for the folded spectrum
Step52: The fold() function works correctly for 1D spectra, at least. How about 2D spectra?
Step53: I am going to go through every step in the fold function of dadi
Step54: With the variable-length list of slice objects, one can generalise the reversal of arrays with any number of dimensions.
Step55: Here, folding doesn't mask new cells.
Step56: The transformation we have done with the upper-right diagonal 2D array above should be identical to projecting it across a vertical center line (creating an upper left triangular matrix) and then projecting it across a horizontal center line (creating the final lower left triangular matrix). Note, that this is not like mirroring the upper-right triangular 2D array across the 36-36 diagonal!
Step57: SNP's with joint frequencies in the True cells are counted twice at the moment due to the folding and the fact that the sample sizes are even.
Step58: These are the values in the diagonal before folding.
Step59: These are the values that got added to the diagonal during folding. Comparing with the previous plot, one can see for instance that the value in the (0, 36) class got added to the value in the (36, 0) class and vice versa. The two frequency classes are equivalent, since it is arbitrary which allele we call minor in the total sample (of 72 gene copies). These SNP's are therefore counted twice.
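The 2D folding procedure walked through in the steps above can be illustrated with plain NumPy. This is a standalone toy sketch with made-up values, not dadi's actual implementation: entries whose total derived-allele count exceeds half the total sample size are added onto their mirrored entry, and the ambiguous "half" diagonal is averaged with its mirror so that those SNPs are not counted twice.

```python
import numpy as np

def fold_2d(sfs):
    """Fold an unfolded 2D SFS onto minor-allele frequency classes.

    Entries whose total derived count exceeds half the total sample size
    are folded onto their reversed (mirrored) entry and then zeroed
    (dadi masks them instead); entries exactly at half the sample size
    are averaged with their mirror to avoid double counting.
    """
    n, m = sfs.shape[0] - 1, sfs.shape[1] - 1
    # total derived-allele count of each entry (row index + column index)
    total = np.add.outer(np.arange(n + 1), np.arange(m + 1))
    reversed_sfs = sfs[::-1, ::-1]  # reverse the array along both axes
    folded = np.where(total <= (n + m) / 2.0, sfs + reversed_sfs, 0.0)
    # the ambiguous diagonal was added to itself above, so halve it
    folded[total == (n + m) / 2.0] /= 2.0
    return folded

sfs = np.arange(9, dtype=float).reshape(3, 3)  # toy spectrum for 2+2 gene copies
folded = fold_2d(sfs)
print(folded)
print(sfs.sum(), folded.sum())  # the total count of SNPs is preserved by folding
```

Note that the zeroed upper-right triangle corresponds to the cells that dadi masks in the folded spectrum, and the pairwise averaging of the anti-diagonal is exactly the fix for the double counting discussed in the last steps.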
|
<ASSISTANT_TASK:>
Python Code:
sys.path.insert(0, '/home/claudius/Downloads/dadi')
sys.path
import dadi
dir(dadi)
import pylab
%matplotlib inline
x = pylab.linspace(0, 4*pylab.pi, 1000)
pylab.plot(x, pylab.sin(x), '-r')
%%sh
# this allows me to execute a shell command
ls
fs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format')
fs_ery
# number of segregating sites
fs_ery.data[1:].sum()
fs_ery.pi()
fs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format', mask_corners=False)
fs_ery
fs_ery.pi()
# Calcualting pi with the formula from Wakeley2009
n = 36 # 36 sequences sampled from 18 diploid individuals
pi_Wakeley = (sum( [i*(n-i)*fs_ery[i] for i in range(1, n/2+1)] ) * 2.0 / (n*(n-1)))/pylab.sum(fs_ery.data)
# note fs_ery.data gets the whole fs_ery list, including masked entries
pi_Wakeley
fs_ery.mask
fs_ery.data # gets all data, including the masked one
# Calculating pi with the formula from Gillespie:
n = 18
p = pylab.arange(0, n+1)/float(n)
p
# Calculating pi with the formula from Gillespie:
n / (n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p))
# the sample size (n) that dadi stores in this spectrum object and uses as n in the pi function
fs_ery.sample_sizes[0]
# what is the total number of sites in the spectrum
pylab.sum(fs_ery.data)
# pi per site
n / (n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data)
# with correct small sample size correction
2 * n / (2* n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data)
# Calculating pi with the formula from Gillespie:
n = 18
p = pylab.arange(0, n+1)/float(n)
p = p/2 # with a folded spectrum, we are summing over minor allele freqs only
pi_Gillespie = 2*n / (2*n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data)
pi_Gillespie
pi_Wakeley - pi_Gillespie
fs_ery.folded
fs_par = dadi.Spectrum.from_file('PAR.FOLDED.sfs.dadi_format')
pylab.plot(fs_ery, 'r', label='ery')
pylab.plot(fs_par, 'g', label='par')
pylab.legend()
from scipy.optimize import least_squares
def model(theta, eta, n):
    """
    theta: scaled population mutation rate parameter [scalar]
    eta: the folded 1D spectrum, including 0-count cat. [list]
    n: number of sampled gene copies, i. e. 2*num_ind [scalar]
    returns a numpy array
    """
    i = pylab.arange(1, eta.size, dtype=float) # float avoids integer division
    delta = pylab.where(i == n-i, 1, 0)
    return theta * (1/i + 1/(n-i)) / (1 + delta)
?pylab.where
# test
i = pylab.arange(1, 19)
n = 36
print i == n-i
#
print pylab.where(i == n-i, 1, 0)
# get a theta estimate from pi:
theta = pi_Wakeley * fs_ery.data.sum()
print theta
#
print len(fs_ery)
#
model(theta, fs_ery, 36)
def fun(theta, eta, n):
    """return residuals between model and data"""
    return model(theta, eta, n) - eta[1:]
def jac(theta, eta, n, test=False):
    """creates a Jacobian matrix"""
    J = pylab.empty((eta.size-1, theta.size))
    i = pylab.arange(1, eta.size, dtype=float)
    delta = pylab.where(i == n-i, 1, 0)
    num = 1/i + 1/(n-i)
    den = 1 + delta
    if test:
        print i
        print num
        print den
    J[:,0] = num / den
    return J
# test
jac(theta, fs_ery, 36, test=True)
# starting value
theta0 = theta # pi_Wakeley from above
# sum over unmasked entries, i. e. without 0-count category, i. e. returns number of variable sites
fs_ery.sum()
# optimize
res = least_squares(fun, x0=theta0, jac=jac, bounds=(0,fs_ery.sum()),
kwargs={'eta': fs_ery, 'n': 36}, verbose=1)
res.success
?least_squares
print res.x
print theta
pylab.rcParams['figure.figsize'] = [12.0, 8.0]
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 14.0
i = range(1, len(fs_ery))
eta_model = model(res.x, eta=fs_ery, n=36) # get predicted values with optimal theta
plt.plot(i, fs_ery[1:], "bo", label="data from ery") # plot observed spectrum
ymax = max( fs_ery[1:].max(), eta_model.max() )
plt.axis([0, 19, 0, ymax*1.1]) # set axis range
plt.xlabel("minor allele frequency (i)")
plt.ylabel(r'$\eta_i$', fontsize='large', rotation='horizontal')
plt.title("folded SFS of ery")
plt.plot(i, eta_model, "go-",
label="\nneutral model"
+ "\n"
+ r'$\theta_{opt} = $' + str(round(res.x, 1))
) # plot model prediction with optimal theta
plt.legend()
#?plt.ylabel
#print plt.rcParams
fs_ery[1:].max()
#?pylab
os.getcwd()
%%sh
ls
def plot_folded_sfs(filename, n, pop = ''):
    # read in spectrum from file
    data = open(filename, 'r')
    sfs = pylab.array( data.readline().split(), dtype=float )
    data.close() # should close connection to file
    #return sfs
    # get starting value for theta from Watterson's theta
    S = sfs[1:].sum()
    T_total = sum([1.0/i for i in range(1, n)]) # one half of the expected total length of the genealogy
    theta0 = S / T_total # see eq. 4.7 in Wakeley2009
    # optimize
    res = least_squares(fun, x0=theta0, jac=jac, bounds=(0, sfs.sum()),
                        kwargs={'eta': sfs, 'n': 36}, verbose=1)
    #print "Optimal theta per site is {0:.4f}".format(res.x[0]/sfs.sum())
    #print res.x[0]/sfs.sum()
    #return theta0, res
    # plot
    plt.rcParams['font.size'] = 14.0
    i = range(1, len(sfs))
    eta_model = model(res.x, eta=sfs, n=36) # get predicted values with optimal theta
    plt.plot(i, sfs[1:], "rs", label="data of " + pop) # plot observed spectrum
    ymax = max( sfs[1:].max(), eta_model.max() )
    plt.axis([0, 19, 0, ymax*1.1]) # set axis range
    plt.xlabel("minor allele frequency (i)")
    plt.ylabel(r'$\eta_i$', fontsize='large', rotation='horizontal')
    plt.title("folded SFS")
    plt.text(5, 10000,
             r"Optimal neutral $\theta$ per site is {0:.4f}".format(res.x[0]/sfs.sum()))
    plt.plot(i, eta_model, "go-",
             label="\nneutral model"
                   + "\n"
                   + r'$\theta_{opt} = $' + str(round(res.x, 1))
             ) # plot model prediction with optimal theta
    plt.legend()
plot_folded_sfs('PAR.FOLDED.sfs', n=36, pop='par')
plot_folded_sfs('ERY.FOLDED.sfs', n=36, pop='ery')
from scipy.optimize import minimize_scalar
?minimize_scalar
# define cost function
def f(theta, eta, n):
    """return sum of squared deviations between model and data"""
    return sum( (model(theta, eta, n) - eta[1:])**2 ) # see above for definition of the 'model' function
theta = pylab.arange(0, fs_ery.data[1:].sum()) # specify range of theta
cost = [f(t, fs_ery.data, 36) for t in theta]
plt.plot(theta, cost, 'b-', label='ery')
plt.xlabel(r'$\theta$')
plt.ylabel('cost')
plt.title("cost function for ery")
plt.legend(loc='best')
?plt.legend
res = minimize_scalar(f, bounds = (0, fs_ery.data[1:].sum()), method = 'bounded', args = (fs_ery.data, 36))
res
# number of segregating sites
fs_par.data[1:].sum()
res = minimize_scalar(f, bounds = (0, fs_par.data[1:].sum()), method = 'bounded', args = (fs_par.data, 36))
res
from sympy import *
x0 , x1 = symbols('x0 x1')
init_printing(use_unicode=True)
diff(0.5*(1-x0)**2 + (x1-x0**2)**2, x0)
diff(0.5*(1-x0)**2 + (x1-x0**2)**2, x1)
from scipy.optimize import curve_fit
?curve_fit
def model(i, theta):
    """
    i: independent variable, here minor SNP frequency classes
    theta: scaled population mutation rate parameter [scalar]
    returns a numpy array
    """
    i = pylab.asarray(i, dtype=float) # float avoids integer division
    n = 2 * len(i) # i runs over the minor allele counts 1..n/2 of a folded spectrum
    delta = pylab.where(i == n-i, 1, 0)
    return theta * (1/i + 1/(n-i)) / (1 + delta)
i = pylab.arange(1, fs_ery.size)
popt, pcov = curve_fit(model, i, fs_ery.data[1:])
# optimal theta
print popt
perr = pylab.sqrt(pcov)
perr
print str(int(popt[0] - 1.96*perr[0])) + ' < ' + str(int(popt[0])) + ' < ' + str(int(popt[0] + 1.96*perr[0]))
popt, pcov = curve_fit(model, i, fs_par.data[1:])
perr = pylab.sqrt(pcov)
print str(int(popt[0] - 1.96*perr[0])) + ' < ' + str(int(popt[0])) + ' < ' + str(int(popt[0] + 1.96*perr[0]))
%pwd
% ll
! cat ERY.FOLDED.sfs.dadi_format
fs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format', mask_corners=False)
fs_ery
fs_ery.pop_ids = ['ery']
# get a Poisson sample from the observed spectrum
fs_ery_param_boot = fs_ery.sample()
fs_ery_param_boot
fs_ery_param_boot.data
%psource fs_ery.sample
fs_ery_param_boot = pylab.array([fs_ery.sample() for i in range(100)])
# get the first 3 boostrap samples from the doubleton class
fs_ery_param_boot[:3, 2]
# read in the flattened 2D SFS
EryPar_unfolded_2dsfs = dadi.Spectrum.from_file('EryPar.unfolded.2dsfs.dadi_format', mask_corners=True)
# check dimension
len(EryPar_unfolded_2dsfs[0,])
EryPar_unfolded_2dsfs.sample_sizes
# add population labels
EryPar_unfolded_2dsfs.pop_ids = ["ery", "par"]
EryPar_unfolded_2dsfs.pop_ids
# marginalise over par to get 1D SFS for ery
fs_ery = EryPar_unfolded_2dsfs.marginalize([1])
# note the argument is an array with dimensions, one can marginalise over more than one dimension at the same time,
# but that is only interesting for 3-dimensional spectra, which I don't have here
fs_ery
# marginalise over ery to get 1D SFS for par
fs_par = EryPar_unfolded_2dsfs.marginalize([0])
fs_par
# plot 1D spectra for each population
pylab.plot(fs_par, 'g', label="par")
pylab.plot(fs_ery, 'r', label="ery")
pylab.legend()
fs_ery.pi() / pylab.sum(fs_ery.data)
fs_ery.data
n = 36 # 36 sequences sampled from 18 diploid individuals
pi_Wakeley = (sum( [i*(n-i)*fs_ery[i] for i in range(1, n)] ) * 2.0 / (n*(n-1)))
pi_Wakeley = pi_Wakeley / pylab.sum(fs_ery.data)
pi_Wakeley
fs_par.pi() / pylab.sum(fs_par.data)
pylab.sum(fs_par.data)
pylab.sum(EryPar_unfolded_2dsfs.data)
# from dadi's marginalise function:
fs_ery.data
sfs2d = EryPar_unfolded_2dsfs.copy()
# this should get the marginal spectrum for ery
ery_mar = [pylab.sum(sfs2d.data[i]) for i in range(0, len(sfs2d))]
ery_mar
# this should get the marginal spectrum for ery and then take the sum over it
sum([pylab.sum(sfs2d.data[i]) for i in range(0, len(sfs2d))])
# look what happens if I include masking
sum([pylab.sum(sfs2d[i]) for i in range(0, len(sfs2d))])
fs_ery.data - ery_mar
sfs2d[0]
pylab.sum(sfs2d[0])
# from dadi's marginalise function:
fs_ery.data
# dividing by the correct number of sites to get pi per site:
fs_ery.pi() / pylab.sum(sfs2d.data)
fs_par.pi() / pylab.sum(sfs2d.data)
fs_ery.Watterson_theta() / pylab.sum(sfs2d.data)
fs_ery.Tajima_D()
fs_par.Tajima_D()
n = 36
pi_Wakeley = (sum( [i*(n-i)*fs_ery.data[i] for i in range(1, n+1)] )
* 2.0 / (n*(n-1)))
#/ pylab.sum(sfs2d.data)
pi_Wakeley
# number of segregating sites
# this sums over all unmasked positions in the array
pylab.sum(fs_ery)
fs_ery.S()
S = pylab.sum(fs_ery)
theta_Watterson = S / pylab.sum(1.0 / (pylab.arange(1, n)))
theta_Watterson
# normalizing constant, see page 45 in Gillespie
a1 = pylab.sum(1.0 / pylab.arange(1, n))
#print a1
a2 = pylab.sum(1.0 / pylab.arange(1, n)**2.0)
#print a2
b1 = (n+1.0)/(3.0*(n-1))
#print b1
b2 = 2.0*(n**2 + n + 3)/(9.0*n*(n-1))
#print b2
c1 = b1 - (1.0/a1)
#print c1
c2 = b2 - (n+2.0)/(a1*n) + a2/a1**2
#print c2
C = ((c1/a1)*S + (c2/(a1**2.0 + a2))*S*(S-1))
C = C**(1/2.0)
ery_Tajimas_D = (pi_Wakeley - theta_Watterson) / C
print '{0:.6f}'.format(ery_Tajimas_D)
ery_Tajimas_D - fs_ery.Tajima_D()
fs_par.Tajima_D()
EryPar_unfolded_2dsfs.S()
EryPar_unfolded_2dsfs.Fst()
%psource EryPar_unfolded_2dsfs.scramble_pop_ids
# plot the scrambled 2D SFS
dadi.Plotting.plot_single_2d_sfs(EryPar_unfolded_2dsfs.scramble_pop_ids(), vmin=1)
# get Fst for scrambled SFS
EryPar_unfolded_2dsfs.scramble_pop_ids().Fst()
# folding
EryPar_folded_2dsfs = EryPar_unfolded_2dsfs.fold()
EryPar_folded_2dsfs
EryPar_folded_2dsfs.mask
dadi.Plotting.plot_single_2d_sfs(EryPar_unfolded_2dsfs, vmin=1)
dadi.Plotting.plot_single_2d_sfs(EryPar_folded_2dsfs, vmin=1)
# unfolded spectrum from marginalisation of 2D unfolded spectrum
fs_ery
len(fs_ery)
fs_ery.fold()
fs_ery_folded = fs_ery.copy() # make a copy of the UNfolded spectrum
n = len(fs_ery)-1
for i in range(len(fs_ery)):
    fs_ery_folded[i] += fs_ery[n-i]
    if i == n/2.0:
        fs_ery_folded[i] /= 2
fs_ery_folded[0:19]
isinstance(fs_ery_folded, pylab.ndarray)
mask = [True]
mask.extend([False] * 18)
mask.extend([True] * 18)
print mask
print sum(mask)
mask = [True] * 37
for i in range(len(mask)):
    if i > 0 and i < 19:
        mask[i] = False
print mask
print sum(mask)
mask = [[True], [False] * 18, [True] * 18]
print mask
mask = [elem for a in mask for elem in a] # flatten the nested list before using it as a mask
print mask
fs_ery_folded.mask = mask
fs_ery_folded.folded = True
fs_ery_folded - fs_ery.fold()
EryPar_unfolded_2dsfs.sample_sizes
EryPar_unfolded_2dsfs._total_per_entry()
# copy the unfolded 2D spectrum for folding
import copy
sfs2d_folded = copy.deepcopy(EryPar_unfolded_2dsfs)
n = len(sfs2d_folded)-1
m = len(sfs2d_folded[0])-1
for i in range(n+1):
    for j in range(m+1):
        sfs2d_folded[i,j] += sfs2d_folded[n-i, m-j]
        if i == n/2.0 and j == m/2.0:
            sfs2d_folded[i,j] /= 2
mask = sfs2d_folded._total_per_entry() > (n+m)/2
mask
sfs2d_folded.mask = mask
sfs2d_folded.folded = True
dadi.Plotting.plot_single_2d_sfs(sfs2d_folded, vmin=1)
# copy the unfolded 2D spectrum for folding
import copy
sfs2d_unfolded = copy.deepcopy(EryPar_unfolded_2dsfs)
total_samples = pylab.sum(sfs2d_unfolded.sample_sizes)
total_samples
total_per_entry = dadi.Spectrum(sfs2d_unfolded._total_per_entry(), pop_ids=['ery', 'par'])
#total_per_entry.pop_ids = ['ery', 'par']
dadi.Plotting.plot_single_2d_sfs(total_per_entry, vmin=1)
total_per_entry = sfs2d_unfolded._total_per_entry()
total_per_entry
where_folded_out = total_per_entry > total_samples/2
where_folded_out
original_mask = sfs2d_unfolded.mask
original_mask
pylab.logical_or([True, False, True], [False, False, True])
# get the number of elements along each axis
sfs2d_unfolded.shape
[slice(None, None, -1) for i in sfs2d_unfolded.shape]
matrix = pylab.array([
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]
])
reverse_slice = [slice(None, None, -1) for i in matrix.shape]
reverse_slice
matrix[reverse_slice]
matrix[::-1,::-1]
final_mask = pylab.logical_or(original_mask, dadi.Numerics.reverse_array(original_mask))
final_mask
?pylab.where
pylab.where(matrix < 6, matrix, 0)
# this takes the part of the spectrum that is non-sensical if the derived allele is not known
# and sets the rest to 0
print pylab.where(where_folded_out, sfs2d_unfolded, 0)
# let's plot the bit of the spectrum that we are going to fold onto the rest:
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(pylab.where(where_folded_out, sfs2d_unfolded, 0)), vmin=1)
# now let's reverse this 2D array, i. e. last row first and last element of each row first:
_reversed = dadi.Numerics.reverse_array(pylab.where(where_folded_out, sfs2d_unfolded, 0))
_reversed
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(_reversed), vmin=1)
# This shall now be added to the original unfolded 2D spectrum.
sfs2d_folded = pylab.ma.masked_array(sfs2d_unfolded.data + _reversed)
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(sfs2d_folded), vmin=1)
sfs2d_folded.data
sfs2d_folded.data[where_folded_out] = 0
sfs2d_folded.data
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(sfs2d_folded), vmin=1)
sfs2d_folded.shape
where_ambiguous = (total_per_entry == total_samples/2.0)
where_ambiguous
# this extracts the diagonal values from the UNfolded spectrum and sets the rest to 0
ambiguous = pylab.where(where_ambiguous, sfs2d_unfolded, 0)
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(ambiguous), vmin=1)
reversed_ambiguous = dadi.Numerics.reverse_array(ambiguous)
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(reversed_ambiguous), vmin=1)
a = -1.0*ambiguous + 0.5*ambiguous + 0.5*reversed_ambiguous
b = -0.5*ambiguous + 0.5*reversed_ambiguous
a == b
sfs2d_folded += -0.5*ambiguous + 0.5*reversed_ambiguous
final_mask = pylab.logical_or(final_mask, where_folded_out)
final_mask
sfs2d_folded = dadi.Spectrum(sfs2d_folded, mask=final_mask, data_folded=True, pop_ids=['ery', 'par'])
pylab.rcParams['figure.figsize'] = [12.0, 8.0]
dadi.Plotting.plot_single_2d_sfs(sfs2d_folded, vmin=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The number of unique sites in Location file is around 2100. However, as you can see below,
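The `isin`-based subsetting used in the code below can be sketched on a toy frame first. This is a hedged, self-contained illustration — the column names and site ids here are invented and do not come from the actual ozone or Location files:

```python
import pandas as pd

# toy stand-in for the daily ozone observations table
obs = pd.DataFrame({
    'site': [2778, 2783, 3001, 2778, 4050],
    'ozone': [0.03, 0.05, 0.02, 0.04, 0.06],
})
county_sites = [2778, 2783]  # hypothetical sites belonging to one county

# keep only rows whose site id appears in the county's site list
county_obs = obs[obs['site'].isin(county_sites)].reset_index(drop=True)
print(county_obs)
print(obs['site'].nunique())  # number of unique sites in the whole table
```

`reset_index(drop=True)` avoids carrying the old row labels along as a column, which is why the real code below drops the leftover 'index' column after `reset_index()`.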
|
<ASSISTANT_TASK:>
Python Code:
ozone_daily['site'][ozone_daily['site'].isin([2778, 2783])]
len(ozone_daily['site'].unique())
#this was done once in the beginning to
#save data set as ".csv" file.
#ozone_daily.to_csv('daily_ozone_obs_1980_2014.csv', sep = ',')
locations = pd.read_excel('Location.xlsx')
def get_county_site(locations, county = 'Colusa'):
    county_of_interest = (locations.set_index(['County Name', 'Site']).loc[county])
    county_of_interest = county_of_interest.reset_index()
    county_sites = county_of_interest['Site']
    return county_sites
colusa_sites = get_county_site(locations).dropna()
colusa_daily_ozone = ozone_daily[ozone_daily['site'].isin(colusa_sites)]
colusa_daily_ozone = (colusa_daily_ozone.reset_index().
drop('index', axis = 1))
#this also was done only once to save the output as csv file.
colusa_daily_ozone.to_csv('colusa_daily_ozone_1980_2014.csv', sep = ',')
colusa_daily_ozone.head()
colusa_daily_ozone['site'].unique()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get information about code base
Step2: The cost is an arbitrary attribute to indicate if the information is retrieved from a wfs query (cost = 1),
Step3: Try-out of use cases
Step4: Select interpretations in a bbox with selected properties
Step5: The property feature methods listed above are available from the owslib module. These were not adapted for use in pydov.
Step6: Select interpretations in a municipality
Step7: Get interpretations using info from wfs fields, not available in the standard output dataframe
Step8: Visualize results
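The visualisation step below unpacks a per-row coordinate transform into two new columns with the `zip(*map(...))` idiom. That idiom can be illustrated without pyproj; the `convert` function here is a dummy placeholder for a real transform such as pyproj's `Transformer.transform` (EPSG:31370 to EPSG:4326), and the coordinate values are made up:

```python
# dummy stand-in for a coordinate transform (NOT a real projection)
def convert(x, y):
    return x / 1000.0, y / 1000.0

xs = [153145, 153150, 153160]
ys = [206930, 206935, 206940]

# map() yields one (lon, lat) tuple per row; zip(*...) transposes the
# sequence of pairs into two parallel sequences
lons, lats = zip(*map(convert, xs, ys))

# folium expects [lat, lon] pairs, so pair the columns back up
loclist = [[lat, lon] for lat, lon in zip(lats, lons)]
print(lons)
print(loclist[0])
```

The same pattern applied to DataFrame columns gives the `df['lon'], df['lat'] = zip(*map(...))` line used in the code below.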
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os, sys
import inspect
import pydov
from pydov.search.interpretaties import HydrogeologischeStratigrafieSearch
ip_hydrogeo = HydrogeologischeStratigrafieSearch()
# information about the HydrogeologischeStratigrafie type (In Dutch):
print(ip_hydrogeo.get_description())
# information about the available fields for a HydrogeologischeStratigrafie object
fields = ip_hydrogeo.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
# print information for a certain field
fields['aquifer']
# if an attribute can have several values, these are listed under 'values', e.g. for 'Type_proef':
fields['Type_proef']
from pydov.util.location import Within, Box
# Get all borehole data in a bounding box (llx, lly, ulx, uly)
# the pkey_boring link is not available below, but is in the df
df = ip_hydrogeo.search(location=Within(Box(153145, 206930, 153150, 206935)))
print(df.head())
# list available query methods
methods = [i for i,j in inspect.getmembers(sys.modules['owslib.fes'],
inspect.isclass)
if 'Property' in i]
print(methods)
from owslib.fes import PropertyIsGreaterThanOrEqualTo
# Get interpretations with a good ('goed') reliability in a bounding box
from owslib.fes import PropertyIsEqualTo
# the propertyname can be any of the fields of the hydrogeological interpretations object that belong to the wfs source
# the literal is always a string, no matter what its definition is in the boring object (string, float...)
query = PropertyIsGreaterThanOrEqualTo(
propertyname='betrouwbaarheid_interpretatie', literal='goed')
df = ip_hydrogeo.search(location=Within(Box(153145, 206930, 153150, 206935)),
query=query
)
print(df.head())
query = PropertyIsEqualTo(propertyname='gemeente',
literal='Aartselaar')
df = ip_hydrogeo.search(query=query)
print(df.head())
from owslib.fes import And
from owslib.fes import PropertyIsEqualTo
query = And([PropertyIsEqualTo(propertyname='gemeente',
literal='Antwerpen'),
PropertyIsEqualTo(propertyname='Type_proef',
literal='Boring')]
)
df = ip_hydrogeo.search(query=query, )
print(df.head())
# import the necessary modules (not included in the requirements of pydov!)
import folium
from folium.plugins import MarkerCluster
from pyproj import Transformer
# convert the coordinates to lat/lon for folium
def convert_latlon(x1, y1):
    transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True)
    x2, y2 = transformer.transform(x1, y1)
    return x2, y2
df['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y']))
# convert to list
loclist = df[['lat', 'lon']].values.tolist()
# initialize the Folium map on the centre of the selected locations, play with the zoom until ok# initia
fmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=12)
marker_cluster = MarkerCluster().add_to(fmap)
for loc in range(0, len(loclist)):
    folium.Marker(loclist[loc], popup=df['aquifer'][loc]).add_to(marker_cluster)
fmap
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.
|
<ASSISTANT_TASK:>
Python Code:
# Note: Do not change this code!
import numpy as np
import pandas
import sys
import modin
pandas.__version__
modin.__version__
# Implement your answer here. You are also free to play with the size
# and shape of the DataFrame, but beware of exceeding your memory!
import modin.pandas as pd
frame_data = np.random.randint(0, 100, size=(2**10, 2**5))
df = pd.DataFrame(frame_data)
# ***** Do not change the code below! It verifies that
# ***** the exercise has been done correctly. *****
try:
assert df is not None
assert frame_data is not None
assert isinstance(frame_data, np.ndarray)
except:
raise AssertionError("Don't change too much of the original code!")
assert "modin.pandas" in sys.modules, "Not quite correct. Remember the single line of code change (See above)"
import modin.pandas
assert pd == modin.pandas, "Remember the single line of code change (See above)"
assert hasattr(df, "_query_compiler"), "Make sure that `df` is a modin.pandas DataFrame."
print("Success! You only need to change one line of code!")
# Print the first 10 lines.
df.head(10)
# Print the DataFrame.
df
# Free cell for custom interaction (Play around here!)
df.add_prefix("col")
df.count()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
Step2: The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).
Step3: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step4: Configuration of Neural Network
Step5: Load Data
Step6: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step7: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
Step8: Data Dimensions
Step9: Helper-function for plotting images
Step10: Plot a few images to see if data is correct
Step11: TensorFlow Graph
Step12: Helper-function for creating a new Convolutional Layer
Step13: Helper-function for flattening a layer
Step14: Helper-function for creating a new Fully-Connected Layer
Step15: Placeholder variables
Step16: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step17: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step18: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step19: Create placeholder var for dropout
Step20: Convolutional Layer 1
Step21: Check the shape of the tensor that will be output by the convolutional layer. It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.
Step22: Convolutional Layer 2
Step23: Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.
Step24: Flatten Layer
Step25: Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.
Step26: Fully-Connected Layer 1
Step27: Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.
Step28: Fully-Connected Layer 2
Step29: Predicted Class
Step30: The class-number is the index of the largest element.
Step31: Cost-function to be optimized
Step32: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step33: Optimization Method
Step34: Performance Measures
Step35: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step36: TensorFlow Run
Step37: Initialize variables
Step38: Helper-function to perform optimization iterations
Step39: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
Step40: Helper-function to plot example errors
Step41: Helper-function to plot confusion matrix
Step42: Helper-function for showing the performance
Step43: Performance before any optimization
Step44: Performance after 1 optimization iteration
Step45: Performance after 100 optimization iterations
Step46: Performance after 1000 optimization iterations
Step47: Performance after 10,000 optimization iterations
Step48: Visualization of Weights and Layers
Step49: Helper-function for plotting the output of a convolutional layer
Step50: Input Images
Step51: Plot an image from the test-set which will be used as an example below.
Step52: Plot another example image from the test-set.
Step53: Convolution Layer 1
Step54: Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.
Step55: The following images are the results of applying the convolutional filters to the second image.
Step56: It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.
Step57: There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
Step58: It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.
Step59: And these are the results of applying the filter-weights to the second image.
Step60: From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.
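The down-sampling arithmetic described in the steps above (28x28 input, 14x14 after the first layer, 7x7 after the second) can be checked with a tiny helper. This is a standalone sketch, independent of TensorFlow, assuming the tutorial's setup of a 'SAME'-padded convolution with stride 1 followed by 2x2 max-pooling with stride 2:

```python
def conv_pool_output_size(in_size, conv_stride=1, pool_stride=2):
    """Output side length of one 'SAME'-padded conv + max-pool layer.

    With 'SAME' padding a convolution keeps ceil(in / stride) pixels per
    side; the pooling then divides that by the pool stride (rounding up).
    """
    after_conv = -(-in_size // conv_stride)    # ceil division
    after_pool = -(-after_conv // pool_stride) # ceil division again
    return after_pool

size = 28
for layer in (1, 2):
    size = conv_pool_output_size(size)
    print('after conv layer', layer, ':', size)  # 14, then 7
```

This reproduces the 28 -> 14 -> 7 progression that determines the input size of the flatten layer (7 x 7 x num_filters2).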
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
Image('images/02_convolution.png')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
tf.__version__
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 9 # There are 9 of these filters.
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 9 # There are 9 of these filters.
# Convolutional Layer 3.
#filter_size3 = 5 # Convolution filters are 5 x 5 pixels.
#num_filters3 = 25 # There are 25 of these filters.
# Fully-connected layer.
fc_size = 32 # Number of neurons in fully-connected layer.
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
data.test.cls = np.argmax(data.test.labels, axis=1)
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
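The comment above claims that relu(max_pool(x)) == max_pool(relu(x)). This holds because max-pooling and any non-decreasing function commute; here is a small NumPy check (independent of TensorFlow), using a non-overlapping 2x2 block max as the pooling step:

```python
import numpy as np

def relu(a):
    # Element-wise max(a, 0).
    return np.maximum(a, 0.0)

def max_pool_2x2(a):
    # Non-overlapping 2x2 max-pooling on a square 2D array.
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.RandomState(0)
x = rng.randn(8, 8)

# max() and an increasing function like ReLU commute, so both orders agree.
assert np.allclose(relu(max_pool_2x2(x)), max_pool_2x2(relu(x)))
```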
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
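The reshape that flatten_layer performs can be sketched with NumPy alone: a 4-dim batch [num_images, img_height, img_width, num_channels] collapses to [num_images, num_features], and -1 lets the batch dimension be inferred:

```python
import numpy as np

# Shape matching the output of the second pooling layer in this notebook:
# a 28x28 input halved twice by 2x2 pooling -> 7x7, with num_filters2 = 9 channels.
batch = np.zeros((64, 7, 7, 9))
num_features = int(np.prod(batch.shape[1:]))  # 7 * 7 * 9 = 441
flat = batch.reshape(-1, num_features)        # -1 infers the batch dimension

assert flat.shape == (64, 441)
```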
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
keep_prob = tf.placeholder(tf.float32)
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True)
layer_conv1
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True)
layer_conv2
#layer_conv3, weights_conv3 = \
# new_conv_layer(input=layer_conv2,
# num_input_channels=num_filters2,
# filter_size=filter_size3,
# num_filters=num_filters3,
# use_pooling=True)
#layer_conv3
layer_flat, num_features = flatten_layer(layer_conv2)
layer_flat
num_features
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_size,
use_relu=True)
layer_fc1
layer_dropout1 = tf.nn.dropout(layer_fc1, keep_prob)
#layer_fc2 = new_fc_layer(input=layer_dropout1,
# num_inputs=fc_size,
# num_outputs=fc_size,
# use_relu=True)
#layer_dropout2 = tf.nn.dropout(layer_fc2, keep_prob)
layer_fc3 = new_fc_layer(input=layer_dropout1,
num_inputs=fc_size,
num_outputs=num_classes,
use_relu=False)
layer_fc3
y_pred = tf.nn.softmax(layer_fc3)
y_pred_cls = tf.argmax(y_pred, dimension=1)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc3,
labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.initialize_all_variables())
train_batch_size = 64
print(data.train)
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch,
keep_prob: 0.5}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
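The structure of optimize() above (draw a random batch, compute an update, repeat) is the standard mini-batch stochastic gradient descent loop. As a NumPy stand-in for the session.run(optimizer, feed_dict=...) call, here is the same loop shape on a toy 1-D linear model (the model, data, and learning rate are made up for illustration):

```python
import numpy as np

# Toy data: y = 3*x plus a little noise.
rng = np.random.RandomState(0)
x_all = rng.rand(1000)
y_all = 3.0 * x_all + 0.01 * rng.randn(1000)

w, lr, batch = 0.0, 0.5, 64
for step in range(200):
    idx = rng.randint(0, 1000, size=batch)   # draw a random mini-batch
    xb, yb = x_all[idx], y_all[idx]
    grad = 2 * np.mean((w * xb - yb) * xb)   # d/dw of the mean squared error
    w -= lr * grad                           # gradient-descent update

assert abs(w - 3.0) < 0.1
```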
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
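The row/column convention used by sklearn's confusion_matrix (rows index the true class, columns the predicted class) can be illustrated with a hand-rolled NumPy equivalent:

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    # cm[i, j] counts samples whose true class is i and predicted class is j.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion([0, 0, 1, 1, 2], [0, 1, 1, 1, 2], 3)
assert cm[0, 1] == 1         # one true-0 sample misclassified as 1
assert np.trace(cm) == 4     # correct predictions live on the diagonal
```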
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels,
keep_prob:1}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
print_test_accuracy()
optimize(num_iterations=1)
print_test_accuracy()
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image],
keep_prob: 1}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
image1 = data.test.images[0]
plot_image(image1)
image2 = data.test.images[13]
plot_image(image2)
plot_conv_weights(weights=weights_conv1)
plot_conv_layer(layer=layer_conv1, image=image1)
plot_conv_layer(layer=layer_conv1, image=image2)
plot_conv_weights(weights=weights_conv2, input_channel=0)
plot_conv_weights(weights=weights_conv2, input_channel=1)
plot_conv_layer(layer=layer_conv2, image=image1)
plot_conv_layer(layer=layer_conv2, image=image2)
#plot_conv_layer(layer=layer_conv3, image=image1)
#plot_conv_layer(layer=layer_conv3, image=image2)
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's also load the plotting and numerical libraries so we have them ready for visualization later on.
Step2: Now, we load the MPI libraries into the engine namespaces, and do a simple printing of their MPI rank information to verify that all nodes are operational and they match our cluster's real capacity.
Step4: We write a utility that reorders a list according to the mpi ranks of the engines, since all gather operations will return data in engine id order, not in MPI rank order. We'll need this later on when we want to reassemble in IPython data structures coming from all the engines
Step5: We now define a local (to this notebook) plotting function that fetches data from the engines' global namespace. Once it has retrieved the current state of the relevant variables, it produces and returns a figure
|
<ASSISTANT_TASK:>
Python Code:
from ipyparallel import Client, error
cluster = Client()
view = cluster[:]
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
%%px
# MPI initialization, library imports and sanity checks on all engines
from mpi4py import MPI
# Load data publication API so engines can send data to notebook client
from ipykernel.datapub import publish_data
import numpy as np
import time
mpi = MPI.COMM_WORLD
bcast = mpi.bcast
barrier = mpi.barrier
rank = mpi.rank
size = mpi.size
print("MPI rank: %i/%i" % (mpi.rank,mpi.size))
ranks = view['rank']
engine_mpi = np.argsort(ranks)
def mpi_order(seq):
    """Return elements of a sequence ordered by MPI rank.
    The input sequence is assumed to be ordered by engine ID."""
return [seq[x] for x in engine_mpi]
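The reordering logic can be exercised without a running cluster; here is a stand-alone check of the argsort-based mapping, with made-up ranks standing in for view['rank']:

```python
import numpy as np

# Suppose engines 0..3 report these MPI ranks (hypothetical values).
ranks = [2, 0, 3, 1]
order = np.argsort(ranks)

def reorder(seq):
    # Return elements of seq (engine-ID order) sorted into MPI-rank order.
    return [seq[i] for i in order]

# An engine-ID-ordered payload comes back in MPI-rank order.
assert reorder(['e0', 'e1', 'e2', 'e3']) == ['e1', 'e3', 'e0', 'e2']
```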
mpi_order(view.apply_sync(lambda : mpi.rank))
%%px --noblock
import numpy as np
from ipykernel.datapub import publish_data
N = 1000
stride = N // size
x = np.linspace(0, 10, N)[stride * rank:stride*(rank+1)]
publish_data({'x': x})
for i in range(10):
y = np.sin(x * i)
publish_data({'y': y, 'i': i})
time.sleep(1)
ar = _
def stitch_data(ar, key):
data = mpi_order(ar.data)
parts = [ d[key] for d in data ]
return np.concatenate(parts)
from IPython.display import clear_output
ar.wait(1)
while not ar.ready():
clear_output(wait=True)
x = stitch_data(ar, 'x')
y = stitch_data(ar, 'y')
plt.plot(x, y)
plt.show()
ar.wait(1)
ar.get()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that source is by default a Ricker wavelet with a central frequency of seis.csts['f0'], so 20 Hz here. We then generate a synthetic shot, which will serve as the observed data $\pmb{d}^0$.
Step2: We can then change the frequency of the source to 15 Hz and recompute the data with the bad source signature.
Step3: If we look at the observed data and the modeled data, they indeed have a different source signature.
Step4: We can then call the source inversion routine of SeisCL, which needs the source to be updated src2, the observed data data and the modelled data data2. If we provide a srcid, it will overwrite the source function in seis.src_all for this srcid with the new wavelet.
Step5: The source function contained in seis.src_all is now the corrected wavelet, which is the same as the initial, that is a Ricker wavelet of 20 Hz.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from SeisCL import SeisCL
import matplotlib.pyplot as plt
import numpy as np
seis = SeisCL()
# Constants for the modeling
N = 200
seis.N = np.array([N, N])
seis.ND = 2
seis.dt = 0.25e-03
seis.NT = 1000
seis.dh = dh = 2
seis.f0 = 20
seis.freesurf = 0
# Source and receiver positions
sx = N//2 * dh
sy = 0
sz = N//2 * dh
gx = np.arange(N//4 * dh, (N - N//4)*dh, dh)
gy = gx * 0
gz = gx * 0 + N//4*dh
gsid = gz * 0
gid = np.arange(0, len(gz))
seis.src_pos_all = np.stack([[sx], [sy], [sz], [0], [0]], axis=0)
seis.rec_pos_all = np.stack([gx, gy, gz, gsid, gid, gx * 0, gx * 0, gx * 0], axis=0)
# We start with a simple model
vp_a = np.zeros(seis.N) + 3500
vs_a = np.zeros(seis.N) + 2000
rho_a = np.zeros(seis.N) + 2000
seis.set_forward(seis.src_pos_all[3, :], {"vp": vp_a, "rho": rho_a, "vs": vs_a}, withgrad=False)
seis.execute()
data = seis.read_data()
src1 = seis.src_all * 1.0
seis.f0 = 15
seis.src_all = None
seis.set_forward(seis.src_pos_all[3, :], {"vp": vp_a, "rho": rho_a, "vs": vs_a}, withgrad=False)
seis.execute()
data2 = seis.read_data()
src2 = seis.src_all * 1.0
fig, ax = plt.subplots(1, 2)
ax[0].imshow(data2[0], aspect='auto', cmap=plt.get_cmap('Greys'))
ax[0].set_title("Observed data")
ax[1].imshow(data[0], aspect='auto', cmap=plt.get_cmap('Greys'))
ax[1].set_title("Data with wrong wavelet")
plt.show()
src3 = seis.invert_source(src2, data[0], data2[0], srcid=0)
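seis.invert_source above is SeisCL's built-in routine. As a conceptual sketch only (this describes a common family of source-estimation methods, not necessarily SeisCL's exact algorithm, and the function name here is hypothetical), source signatures are often corrected by damped least-squares spectral division, applying the observed/modelled transfer function to the old wavelet:

```python
import numpy as np

def correct_wavelet(src, d_mod, d_obs, eps=1e-10):
    # Hypothetical sketch: estimate an updated source wavelet by damped
    # spectral division. The transfer function F(d_obs)/F(d_mod) is applied
    # to the old wavelet; eps stabilizes notches in the modelled spectrum.
    S, Dm, Do = (np.fft.rfft(a) for a in (src, d_mod, d_obs))
    ratio = (Do * np.conj(Dm)) / (Dm * np.conj(Dm) + eps)
    return np.fft.irfft(S * ratio, n=len(src))

# Toy check: if observed = 2 x modelled, division recovers the factor of 2.
n = 256
t = np.arange(n)
src = np.exp(-0.01 * (t - 64) ** 2)
d_mod = np.convolve(src, [0.5, 0.25], mode='same')
d_obs = 2.0 * d_mod
src_new = correct_wavelet(src, d_mod, d_obs)
assert np.allclose(src_new, 2.0 * src, atol=1e-5)
```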
plt.plot(src1, label="true wavelet")
plt.plot(src2, label="initial wavelet")
plt.plot(seis.src_all[:,0], label="inverted wavelet")
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1b
Step2: Problem 1c
Step3: Problem 1d
Step4: An important, but necessary, aside --
Step5: The common Fourier pairs are especially useful in light of the convolution theorem. Fourier transforms convert convolutions into point-wise products. We define a convolution as
Step6: Sampling a signal directly at the Nyquist frequency results in a lack of any variability. But does this just mean that $f_\mathrm{Ny}$ is special? What happens at $f > f_\mathrm{Ny}$?
Step7: From the plot the signal is clearly variable (unlike when $f = f_\mathrm{Ny}$). However, there are fewer than 2 observations per cycle.
Step8: The observations are identical! Here is what you need to remember about the Nyquist frequency
Step9: Problem 3b
Step10: Problem 3c
Step11: Problem 3d
Step12: Problem 3e
Step13: Problem 3f
Step14: Congratulations
Step15: Problem 4) Other Considerations and Faster Implementations
Step16: Unlike ls_periodogram, LombScargle effectively takes no time to run on the simulated data.
Step17: There are many choices regarding the calculation of the periodogram, so read the docs.
Step18: We can see that the best fit model doesn't match the signal in the case where we do not allow a floating mean.
Step19: Window Functions
Step20: Problem 4e
Step21: Interestingly, there are very strong peaks in the data at $P \approx 3\,\mathrm{d} \;\&\; 365\,\mathrm{d}$.
Step22: Uncertainty on the best-fit period
Step23: Problem 5b
Step24: Problem 5c
Step25: Herein lies a fundamental issue regarding the LS periodogram
Step26: One way to combat this very specific issue is to include more Fourier terms at the harmonic of the best fit period. This is easy to implement in LombScargle with the nterms keyword. [Though always be weary of adding degrees of freedom to a model, especially at the large pipeline level of analysis.]
Step27: Interestingly, for the $n=2, 3, 4$ harmonics, it appears as though we get the period that we have visually confirmed. However, by $n=5$ harmonics we no longer get a reasonable answer. Again - be very careful about adding harmonics, especially in large analysis pipelines.
Step28: This example also shows why it is somewhat strange to provide an uncertainty with a LS best-fit period. Errors tend to be catastrophic, and not some small fractional percentage, with the LS periodogram.
|
<ASSISTANT_TASK:>
Python Code:
def gen_periodic_data(x, period=1, amplitude=1, phase=0, noise=0):
'''Generate periodic data given the function inputs
y = A*cos(x/p - phase) + noise
Parameters
----------
x : array-like
input values to evaluate the array
period : float (default=1)
period of the periodic signal
amplitude : float (default=1)
amplitude of the periodic signal
phase : float (default=0)
phase offset of the periodic signal
noise : float (default=0)
variance of the noise term added to the periodic signal
Returns
-------
y : array-like
Periodic signal evaluated at all points x
'''
y = amplitude*np.sin(2*np.pi*x/(period) - phase) + np.random.normal(0, np.sqrt(noise), size=len(x))
return y
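The aliasing behaviour described in the Nyquist discussion above can be confirmed numerically: on a grid with Nyquist frequency f_Ny, a cosine at frequency f and one at 2*f_Ny - f produce sample-for-sample identical observations (a cosine is used here so the phases line up exactly), while a sine sampled exactly at f_Ny shows no variability at all:

```python
import numpy as np

delta_t = 1.0                        # sampling interval
f_ny = 1.0 / (2.0 * delta_t)         # Nyquist frequency = 0.5
t = np.arange(0, 40) * delta_t

f = 0.37                             # a frequency below f_Ny
y_low = np.cos(2 * np.pi * f * t)
y_alias = np.cos(2 * np.pi * (2 * f_ny - f) * t)   # its alias above f_Ny

assert np.allclose(y_low, y_alias)                 # indistinguishable samples
assert np.allclose(np.sin(2 * np.pi * f_ny * t), 0)  # sampling at f_Ny: flat
```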
x = # complete
y = gen_periodic_data( # complete
fig, ax = plt.subplots()
ax.scatter( # complete
# complete
# complete
fig.tight_layout()
def phase_plot(x, y, period, y_unc = 0.0):
'''Create phase-folded plot of input data x, y
Parameters
----------
x : array-like
data values along abscissa
y : array-like
data values along ordinate
period : float
period to fold the data
y_unc : array-like
uncertainty of the
'''
phases = (x/period) % 1
if type(y_unc) == float:
y_unc = np.zeros_like(x)
plot_order = np.argsort(phases)
fig, ax = plt.subplots()
ax.errorbar(phases[plot_order], y[plot_order], y_unc[plot_order],
fmt='o', mec="0.2", mew=0.1)
ax.set_xlabel("phase")
ax.set_ylabel("signal")
fig.tight_layout()
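The folding arithmetic inside phase_plot is a single modulo operation; times separated by an integer number of periods map to the same phase:

```python
import numpy as np

t = np.array([0.0, 0.25, 1.25, 3.75])
period = 0.5
phases = (t / period) % 1

# 0.25, 1.25, and 3.75 differ by whole periods, so they fold together.
assert np.allclose(phases, [0.0, 0.5, 0.5, 0.5])
```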
phase_plot( # complete
fourier_pairs_plot()
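The convolution theorem quoted alongside these Fourier pairs (convolution in one domain becomes point-wise multiplication in the other) is easy to verify numerically for circular convolution:

```python
import numpy as np

rng = np.random.RandomState(42)
N = 64
f = rng.randn(N)
g = rng.randn(N)

# Circular convolution via the FFT: ifft(fft(f) * fft(g))...
conv = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
# ...matches the brute-force sum (f*g)[n] = sum_m f[m] g[(n-m) mod N].
brute = np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                  for n in range(N)])

assert np.allclose(conv, brute)
```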
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
fig, ax = plt.subplots()
ax.scatter( # complete
ax.plot( # complete
fig.tight_layout()
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
fig, ax = plt.subplots()
ax.scatter( # complete
ax.plot( # complete
fig.tight_layout()
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
# complete
# complete
# complete
# complete
def chi2( # complete
# complete
# complete
# complete
from scipy.optimize import minimize
def min_chi2( # complete
# complete
# complete
def ls_periodogram( # complete
psd = np.empty_like(f_grid)
chi2_0 = # complete
for f_num, f in enumerate(f_grid):
psd[f_num] = # complete
return psd
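For reference, and as one possible way to realize the scaffold above (not the only one), the normalized chi-squared periodogram P(f) = (chi2_0 - chi2(f)) / chi2_0 can be computed by fitting y = a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c at each trial frequency with weighted linear least squares:

```python
import numpy as np

def chi2_periodogram(t, y, y_unc, f_grid):
    # P(f) = (chi2_0 - chi2(f)) / chi2_0, with chi2_0 from a constant model.
    w = 1.0 / y_unc
    chi2_0 = np.sum(((y - np.average(y, weights=w**2)) * w) ** 2)
    psd = np.empty_like(f_grid)
    for i, f in enumerate(f_grid):
        # Weighted least-squares fit of a sinusoid plus constant offset.
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        psd[i] = chi2_0 - np.sum(((y - A @ coef) * w) ** 2)
    return psd / chi2_0

# Toy data: a clean sinusoid should produce a peak at its true frequency.
rng = np.random.RandomState(23)
t = 10 * rng.rand(120)
y = np.sin(2 * np.pi * t / 0.4) + 0.05 * rng.randn(120)
unc = np.full_like(t, 0.05)
f_grid = np.linspace(0.1, 5.0, 2000)
psd = chi2_periodogram(t, y, unc, f_grid)
best_f = f_grid[np.argmax(psd)]
assert abs(best_f - 2.5) < 0.05   # true frequency is 1/0.4 = 2.5
```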
np.random.seed(23)
# calculate the periodogram
x = # complete
y = # complete
y_unc = # complete
f_grid = # complete
psd_ls = # complete
# plot the periodogram
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
fig.tight_layout()
# calculate the periodogram
f_grid = # complete
psd_ls = # complete
# plot the periodogram
fig,ax = plt.subplots()
ax.plot(# complete
# complete
# complete
fig.tight_layout()
print("The best fit period is: {:.4f}".format( # complete
phase_plot( # complete
f_min = # complete
f_max = # complete
delta_f = # complete
f_grid = np.arange( # complete
print("{:d} grid points are needed to sample the periodogram".format( # complete
from astropy.stats import LombScargle
frequency, power = LombScargle(x, y, y_unc).autopower()
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
fig.tight_layout()
# complete
freq_no_mean, power_no_mean = LombScargle( # complete
freq_fit_mean, power_fit_mean = LombScargle( # complete
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot( # complete
ax2.plot( # complete
ax1.set_xlim(0,15)
fig.tight_layout()
fit_mean_model = LombScargle(x[bright], y[bright], y_unc[bright],
fit_mean=True).model(np.linspace(0,10,1000),
freq_fit_mean[np.argmax(power_fit_mean)])
no_mean_model = LombScargle(x[bright], y[bright], y_unc[bright],
fit_mean=False).model(np.linspace(0,10,1000),
freq_no_mean[np.argmax(power_no_mean)])
fig, ax = plt.subplots()
ax.errorbar(x[bright], y[bright], y_unc[bright], fmt='o', label='data')
ax.plot(np.linspace(0,10,1000), fit_mean_model, label='fit mean')
ax.plot(np.linspace(0,10,1000), no_mean_model, label='no mean')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend()
fig.tight_layout()
# set up simulated observations
t_obs = np.arange(0, 10*365, 3) # 3d cadence
# complete
# complete
# complete
y = gen_periodic_data( # complete
# complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel("Time (d)")
ax.set_ylabel("Flux (arbitrary units)")
ls = LombScargle( # complete
freq_window, power_window = # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_ylabel("Power")
ax.set_xlabel("Period (d)")
ax.set_xlim(0,500)
axins = plt.axes([.2, .65, .5, .2])
axins.plot( # complete
axins.set_xlim(0,5)
ls = LombScargle( # complete
frequency, power = # complete
fig, (ax,ax2) = plt.subplots(2,1, sharex=True)
ax.plot( # complete
ax.set_ylabel("Power")
ax.set_ylim(0,1)
ax2.plot( # complete
ax2.set_ylabel("Power")
ax2.set_xlabel("Period (d)")
ax2.set_xlim(0,10)
fig.tight_layout()
data = # complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel('HJD (d)')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
frequency, power = # complete
# complete
fig,ax = plt.subplots()
ax.plot(# complete
ax.set_ylabel("Power")
ax.set_xlabel("Period (d)")
ax.set_xlim(0, 800)
axins = plt.axes([.25, .55, .6, .3])
axins.plot( # complete
axins.set_xlim(0,5)
fig.tight_layout()
# plot the phase folded light curve
phase_plot( # complete
phase_plot( # complete
fig, ax = plt.subplots()
ax.errorbar(data['hjd']/ls_period % 1, data['mag'], data['mag_unc'], fmt='o', zorder=-1)
ax.plot(np.linspace(0,1,1000),
LombScargle(data['hjd'],data['mag'], data['mag_unc']).model(np.linspace(0,1,1000)*ls_period, 1/ls_period) )
ax.set_xlabel('Phase')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
for i in np.arange(1,6):
frequency, power = # complete
# complete
print('For {:d} harmonics, P_LS = {:.8f}'.format( # complete
best_period = 0.73508568
fig, ax = plt.subplots()
ax.errorbar((data['hjd'])/best_period % 1, data['mag'], data['mag_unc'], fmt='o',zorder=-1)
ax.plot(np.linspace(0,1,1000),
LombScargle(data['hjd'],data['mag'], data['mag_unc'],
nterms=4).model(np.linspace(0,1,1000)*best_period, 1/best_period) )
ax.set_xlabel('Phase')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
def fourier_pairs_plot():
fig, ax = plt.subplots(4, 2, figsize=(10, 6))
fig.subplots_adjust(left=0.04, right=0.98, bottom=0.02, top=0.95,
hspace=0.3, wspace=0.2)
x = np.linspace(-5, 5, 1000)
for axi in ax.flat:
axi.xaxis.set_major_formatter(plt.NullFormatter())
axi.yaxis.set_major_formatter(plt.NullFormatter())
# draw center line
axi.axvline(0, linestyle='dotted', color='gray')
axi.axhline(0, linestyle='dotted', color='gray')
style_re = dict(linestyle='solid', color='k', linewidth=2)
style_im = dict(linestyle='solid', color='gray', linewidth=2)
text_style = dict(size=14, color='gray')
# sine -> delta
ax[0, 0].plot(x, np.cos(x),**style_re)
ax[0, 0].set(xlim=(-5, 5), ylim=(-1.2, 1.2))
ax[0, 0].annotate('', (-np.pi, 0), (np.pi, 0),
arrowprops=dict(arrowstyle='|-|', color='gray'))
ax[0, 0].text(0, 0, '$1/f_0$', ha='center', va='bottom', **text_style)
ax[0, 0].set_title('Sinusoid')
ax[0, 1].plot([-5, 2, 2, 2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].plot([-5, -2, -2, -2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[0, 1].annotate('', (0, 0.4), (2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].annotate('', (0, 0.4), (-2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].text(1, 0.45, '$+f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].text(-1, 0.45, '$-f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].set_title('Delta Functions')
# gaussian -> gaussian
ax[1, 0].plot(x, np.exp(-(2 * x) ** 2), **style_re)
ax[1, 0].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 0].annotate('', (0, 0.35), (0.6, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 0].text(0, 0.4, '$\sigma$', ha='center', va='bottom', **text_style)
ax[1, 0].set_title('Gaussian')
ax[1, 1].plot(x, np.exp(-(x / 2) ** 2), **style_re)
ax[1, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 1].annotate('', (0, 0.35), (2, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 1].text(0, 0.4, '$(2\pi\sigma)^{-1}$', ha='center', va='bottom', **text_style)
ax[1, 1].set_title('Gaussian')
# top hat -> sinc
ax[2, 0].plot([-2, -1, -1, 1, 1, 2], [0, 0, 1, 1, 0, 0], **style_re)
ax[2, 0].set(xlim=(-2, 2), ylim=(-0.3, 1.2))
ax[2, 0].annotate('', (-1, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 0].text(0.0, 0.5, '$T$', ha='center', va='bottom', **text_style)
ax[2, 0].set_title('Top Hat')
ax[2, 1].plot(x, np.sinc(x), **style_re)
ax[2, 1].set(xlim=(-5, 5), ylim=(-0.3, 1.2))
ax[2, 1].annotate('', (-1, 0), (1, 0), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 1].text(0.0, 0.0, '$2/T$', ha='center', va='bottom', **text_style)
ax[2, 1].set_title('Sinc')
# comb -> comb
ax[3, 0].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 0].set(xlim=(-5.5, 5.5), ylim=(-0.2, 1.2))
ax[3, 0].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 0].text(0.5, 0.6, '$T$', ha='center', va='bottom', **text_style)
ax[3, 0].set_title('Dirac Comb')
ax[3, 1].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 1].set(xlim=(-2.5, 2.5), ylim=(-0.2, 1.2));
ax[3, 1].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 1].text(0.5, 0.6, '$1/T$', ha='center', va='bottom', **text_style)
ax[3, 1].set_title('Dirac Comb')
for i, letter in enumerate('abcd'):
ax[i, 0].set_ylabel('({0})'.format(letter), rotation=0)
# Draw arrows between pairs of axes
for i in range(4):
        # Bbox.inverse_transformed was removed in newer matplotlib; transformed(inverted()) is equivalent
        left = ax[i, 0].bbox.transformed(fig.transFigure.inverted()).bounds
        right = ax[i, 1].bbox.transformed(fig.transFigure.inverted()).bounds
x = 0.5 * (left[0] + left[2] + right[0])
y = left[1] + 0.5 * left[3]
fig.text(x, y, r'$\Longleftrightarrow$',
ha='center', va='center', size=30)
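The top-hat <-> sinc pair drawn above can also be checked numerically. This is an aside, not part of the original figure code: a scaled FFT approximates the continuous transform of a width-T top hat, which should match T * sinc(T f); the tolerance absorbs the edge-sampling error of the discrete grid.

```python
import numpy as np

# sample a width-T top hat on a grid centred at t = 0
n, T = 1024, 2.0
t = np.linspace(-8, 8, n, endpoint=False)
dt = t[1] - t[0]
top_hat = (np.abs(t) <= T / 2).astype(float)

# approximate the continuous Fourier transform with a scaled FFT
F = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(top_hat))) * dt
f = np.fft.fftshift(np.fft.fftfreq(n, d=dt))

# continuous-transform prediction: T * sinc(T f)  (np.sinc is the normalised sinc)
expected = T * np.sinc(T * f)
print(np.max(np.abs(F.real - expected)) < 0.05)  # -> True
```

The `ifftshift` places t = 0 at index 0, so the transform of the even, real top hat comes out (numerically) real.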
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PageCount API
Step2: Summing up across mobile-app and mobile-site for PageViews into a single access type called 'mobile'. Also, grouping by year, month and access type to get the results
Step3: Generating the final CSV Data frame used for visualization!!
Step4: Plotting the time series Visualization
|
<ASSISTANT_TASK:>
Python Code:
import requests
import pandas
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/r1rajiv92', 'From' : 'rajiv92@uw.edu'}
yearMonthCombinations = { '2015' : [ 7, 8, 9, 10, 11, 12],
'2016' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2017' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9] }
for accessType in [ 'desktop', 'mobile-web', 'mobile-app' ]:
for year in range(2015, 2018):
for month in yearMonthCombinations[str(year)]:
if int(month / 10) == 0:
startMonth = ''.join( [ '0', str(month) ] )
startParam = ''.join([ str(year), startMonth, '0100' ])
if int((month+1) / 10) == 0:
endParam = ''.join([ str(year), '0', str(month+1), '0100' ])
else:
endParam = ''.join([ str(year), str(month+1), '0100' ])
else:
startMonth = str(month)
startParam = ''.join([ str(year), startMonth, '0100' ])
endParam = ''.join([ str(year), str(month+1), '0100' ])
if month + 1 == 13:
endParam = ''.join( [str(year+1), '01', '0100'])
params = {'project' : 'en.wikipedia.org',
'access' : accessType,
'agent' : 'user',
'granularity' : 'monthly',
'start' : startParam,
'end' : endParam
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
for result in response['items']:
result['year'] = str(year)
result['month'] = startMonth
if 'PageViewDataFrame' in locals():
PageViewDataFrame = pandas.concat([ PageViewDataFrame, pandas.DataFrame.from_dict(response['items']) ])
else:
PageViewDataFrame = pandas.DataFrame.from_dict(response['items'])
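The start/end parameter bookkeeping above (zero-padding single-digit months and rolling over at December) can be collapsed into one helper. This is a sketch, not part of the original script; `month_bounds` is a name introduced here:

```python
def month_bounds(year, month):
    # build the API's 'YYYYMMDDHH'-style stamps for one month and the next
    start = "{}{:02d}0100".format(year, month)
    next_year, next_month = (year + 1, 1) if month == 12 else (year, month + 1)
    end = "{}{:02d}0100".format(next_year, next_month)
    return start, end

print(month_bounds(2015, 7))   # -> ('2015070100', '2015080100')
print(month_bounds(2016, 12))  # -> ('2016120100', '2017010100')
```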
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/r1rajiv92', 'From' : 'rajiv92@uw.edu'}
yearMonthCombinations = { '2008' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2009' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2010' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2011' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2012' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2013' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2014' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2015' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2016' : [ 1, 2, 3, 4, 5, 6, 7]
}
for accessType in [ 'desktop-site', 'mobile-site' ]:
for year in range(2008, 2017):
for month in yearMonthCombinations[str(year)]:
if int(month / 10) == 0:
startMonth = ''.join( [ '0', str(month) ] )
startParam = ''.join([ str(year), startMonth, '0100' ])
if int((month+1) / 10) == 0:
endParam = ''.join([ str(year), '0', str(month+1), '0100' ])
else:
endParam = ''.join([ str(year), str(month+1), '0100' ])
else:
startMonth = str(month)
startParam = ''.join([ str(year), startMonth, '0100' ])
endParam = ''.join([ str(year), str(month+1), '0100' ])
if month + 1 == 13:
endParam = ''.join( [str(year+1), '01', '0100'])
params = {'project' : 'en.wikipedia.org',
'access-site' : accessType,
'granularity' : 'monthly',
'start' : startParam,
'end' : endParam
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
if 'items' in response.keys():
for result in response['items']:
result['year'] = str(year)
result['month'] = startMonth
else:
print('Page Count Data Missing for', accessType, 'on', month, year )
continue
if 'PageCountDataFrame' in locals():
PageCountDataFrame = pandas.concat([ PageCountDataFrame, pandas.DataFrame.from_dict(response['items']) ])
else:
PageCountDataFrame = pandas.DataFrame.from_dict(response['items'])
PageViewDataFrame.loc[PageViewDataFrame.access != 'desktop', 'access'] = 'mobile'
PageCountDataFrame['access'] = "desktop"
PageCountDataFrame.loc[PageCountDataFrame['access-site'] != 'desktop-site', 'access'] = 'mobile'
pageCounts = pandas.DataFrame( PageCountDataFrame.groupby(['year', 'month', 'access'])['count'].sum().reset_index() )
pageViews = pandas.DataFrame( PageViewDataFrame.groupby(['year', 'month', 'access'])['views'].sum().reset_index() )
finalCSVDataFrame = pandas.DataFrame( columns = ['year', 'month', 'pagecount_all_views', 'pagecount_desktop_views', 'pagecount_mobile_views',
'pageview_all_views', 'pageview_desktop_views', 'pageview_mobile_views' ] )
for year in range( 2008, 2018):
for month in range(1,13):
if year == 2017 and month > 9:
continue
else:
                # zero-pad single-digit months ('9' -> '09'); the original
                # `if month < 9` branch left monthString stale for months 9-12
                monthString = str(month).zfill(2)
pagecount_desktop_views = pageCounts[(pageCounts['year'] == str(year)) & (pageCounts['month'] == monthString)
& (pageCounts['access'] == 'desktop' )]
if len(pagecount_desktop_views) != 0:
pagecount_desktop_views = int(pagecount_desktop_views['count'])
else:
pagecount_desktop_views = 0
pagecount_mobile_views = pageCounts[(pageCounts['year'] == str(year)) & (pageCounts['month'] == monthString)
& (pageCounts['access'] == 'mobile' )]
if len(pagecount_mobile_views) != 0:
pagecount_mobile_views = int(pagecount_mobile_views['count'])
else:
pagecount_mobile_views = 0
pagecount_all_views = pagecount_desktop_views + pagecount_mobile_views
pageview_desktop_views = pageViews[(pageViews['year'] == str(year)) & (pageViews['month'] == monthString)
& (pageViews['access'] == 'desktop' )]
if len(pageview_desktop_views) != 0:
pageview_desktop_views = int(pageview_desktop_views['views'])
else:
pageview_desktop_views = 0
pageview_mobile_views = pageViews[(pageViews['year'] == str(year)) & (pageViews['month'] == monthString)
& (pageViews['access'] == 'mobile' )]
if len(pageview_mobile_views) != 0:
pageview_mobile_views = int(pageview_mobile_views['views'])
else:
pageview_mobile_views = 0
pageview_all_views = pageview_desktop_views + pageview_mobile_views
finalCSVDataFrame = finalCSVDataFrame.append( {'year': int(year),
'month': int(month),
'pagecount_all_views': int(pagecount_all_views),
'pagecount_desktop_views': int(pagecount_desktop_views),
'pagecount_mobile_views': int(pagecount_mobile_views),
'pageview_all_views': int(pageview_all_views),
'pageview_desktop_views': int(pageview_desktop_views),
'pageview_mobile_views': int(pageview_mobile_views)
}, ignore_index=True )
finalCSVDataFrame
finalCSVDataFrame.to_csv('finalCSV.csv')
import matplotlib.pyplot as plt
dates = pandas.date_range('2008-01', '2017-10',freq='M')
plt.plot(dates, finalCSVDataFrame["pagecount_all_views"]/1000000, label = " All PageCounts",
color = "black", linewidth = 0.5)
plt.plot(dates, finalCSVDataFrame["pagecount_desktop_views"]/1000000, label = "Desktop PageCounts",
color = "blue", linewidth = 0.5)
plt.plot(dates, finalCSVDataFrame["pagecount_mobile_views"]/1000000, label = "Mobile PageCounts",
color = "brown", linewidth = 0.5)
plt.plot(dates, finalCSVDataFrame["pageview_all_views"]/1000000, label = " All PageViews",
color = "black", linewidth = 2)
plt.plot(dates, finalCSVDataFrame["pageview_desktop_views"]/1000000, label = "Desktop PageViews",
color = "blue", linewidth = 2)
plt.plot(dates, finalCSVDataFrame["pageview_mobile_views"]/1000000, label = "Mobile PageViews",
color = "brown", linewidth = 2)
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 15
fig_size[1] = 10
plt.xlabel("Dates")
plt.ylabel("PageViews - millions")
plt.title("Pageviews on English Wikipedia from July 2015 to Sept 2017")
plt.legend(loc=2)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <span style="text-align
Step5: <p style="text-align
Step6: <p style="text-align
Step7: <div style="text-align
|
<ASSISTANT_TASK:>
Python Code:
shoes_in_my_drawer = int(input("How many shoes do you have in your drawer? "))
if shoes_in_my_drawer % 2 == 1:
print("You have an odd number of shoes. Something is wrong!")
shoes_in_my_drawer = int(input("How many shoes do you have in your drawer? "))
if shoes_in_my_drawer % 2 == 1:
print("You have an odd number of shoes. Something is wrong!")
if shoes_in_my_drawer % 2 != 1:
print("You have an even number of shoes. Congratulations!")
shoes_in_my_drawer = int(input("How many shoes do you have in your drawer? "))
if shoes_in_my_drawer % 2 == 1:
print("You have an odd number of shoes. Something is wrong!")
else:
print("You have an even number of shoes. Congratulations!")
is_boiler_on = input("Is your boiler turned on? [yes/no] ")
hour = int(input("Enter the hour (00-23): "))
minute = int(input("Enter the minute (00-59): "))
if is_boiler_on == 'yes':
is_boiler_on = True
else:
is_boiler_on = False
if not is_boiler_on and hour == 7 and minute > 0:
is_boiler_on = True
else:
if is_boiler_on:
is_boiler_on = False
if is_boiler_on:
boiler_status = "on"
else:
boiler_status = "off"
print("Boiler is " + boiler_status + " right now.")
copies_sold = int(input("How many copies were sold? "))
# Note: here we use all-capital names for these variables.
# This is a convention among programmers that marks these values as constants that should not change while the program runs.
SILVER_ALBUM = 100000
GOLD_ALBUM = 500000
PLATINUM_ALBUM = 1000000
DIAMOND_ALBUM = 10000000
if copies_sold >= DIAMOND_ALBUM:
print("Diamond album")
else:
if copies_sold >= PLATINUM_ALBUM:
print("Platinum album")
else:
if copies_sold >= GOLD_ALBUM:
print("Gold album")
else:
if copies_sold >= SILVER_ALBUM:
print("Silver album")
else:
print("Your album is not a best-seller")
copies_sold = int(input("How many copies were sold? "))
SILVER_ALBUM = 100000
GOLD_ALBUM = 500000
PLATINUM_ALBUM = 1000000
DIAMOND_ALBUM = 10000000
if copies_sold >= DIAMOND_ALBUM:
print("Diamond album")
if copies_sold >= PLATINUM_ALBUM and copies_sold < DIAMOND_ALBUM:
print("Platinum album")
if copies_sold >= GOLD_ALBUM and copies_sold < PLATINUM_ALBUM:
print("Gold album")
if copies_sold >= SILVER_ALBUM and copies_sold < GOLD_ALBUM:
print("Silver album")
if copies_sold < SILVER_ALBUM:
print("Your album is not a best-seller")
copies_sold = int(input("How many copies were sold? "))
SILVER_ALBUM = 100000
GOLD_ALBUM = 500000
PLATINUM_ALBUM = 1000000
DIAMOND_ALBUM = 10000000
if copies_sold >= DIAMOND_ALBUM:
print("Diamond album")
elif copies_sold >= PLATINUM_ALBUM:
print("Platinum album")
elif copies_sold >= GOLD_ALBUM:
print("Gold album")
elif copies_sold >= SILVER_ALBUM:
print("Silver album")
else:
print("Your album is not a best-seller")
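For completeness — and to make the thresholds easy to test — the same `if/elif` chain can be wrapped in a function. This is a restatement introduced here, not part of the lesson:

```python
SILVER_ALBUM = 100000
GOLD_ALBUM = 500000
PLATINUM_ALBUM = 1000000
DIAMOND_ALBUM = 10000000

def album_status(copies_sold):
    # same thresholds and ordering as the if/elif chain above
    if copies_sold >= DIAMOND_ALBUM:
        return "Diamond album"
    elif copies_sold >= PLATINUM_ALBUM:
        return "Platinum album"
    elif copies_sold >= GOLD_ALBUM:
        return "Gold album"
    elif copies_sold >= SILVER_ALBUM:
        return "Silver album"
    return "Your album is not a best-seller"

print(album_status(500000))  # -> Gold album
```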
age = int(input("Please enter your age: "))
if age < 0:
    print("Your age is invalid.")
if age < 18:
    print("Younger than 18.")
else:
    print("You are so old!")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the above example we can see that we can replace the letter "c" with a "b" using the replace 'method'. What happens if I have the number 711 and want to change the 7 to a 9 to make 911?
Step6: When we try to use replace with integers we get an error; "'int' object has no attribute 'replace'". Basically this is Python's way of telling us that 'replace' is not a method belonging to the integer class.
Step12: The above code doesn't work, but it should give you a feel for what a "ball UI" could look like in Python. Now let's do something similar for a football player.
Step13: Let's suppose you are a developer on this project and a colleague of yours has implemented these methods already. The really cool thing is this UI adds a layer of abstraction that allows you to meaningfully interact with the ball/player without needing to actually understand exactly how this stuff works.
Step16: So what does the above code do? Well, even though we haven't covered indentation and if/else statements (yet) my well-chosen variables names and comments were probably sufficient for you to figure out what is going on.
Step17: Okay, so what does this code do? Well, it creates an object which I call "Time". A Time object takes two integers as input (hours and minutes). Let's create a Time object and see what happens when I print it.
Step18: When we print time objects they are represented as a string that looks just like an alarm clock; Time(12, 30) returns "12:30".
Step19: As a minor implementation detail, my code adds Time to other Time objects; if we try to add an integer (e.g. 4) to the time 10
|
<ASSISTANT_TASK:>
Python Code:
x = "I love cats." # <= x is a string...
print(x.upper()) # converts string to upper case
print(x.replace("c", "b")) # cats? I'm a bat kinda guy myself!
print(x.__add__(x)) # x.__add__(x) is EXACTLY the same as x + x.
print(x.__mul__(3)) # Equivalent to x * 3
z = 711
print(z.replace(7, 1))
# With this said, integers do have some of the "same" methods as well.
# Please note however these methods are the "same" in name only, behind the scenes they work very differently from each other.
i = 5
print(i.__add__(i)) # i + i => 5 + 5
print(i.__mul__(i)) # i * i => 5 * 5
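If we really did want to turn 711 into 911, one option (an aside, not in the original lesson) is to hop through the string type, use its `replace` method, and convert back:

```python
z = 711
# convert to str, use the string's replace method, convert back to int
fixed = int(str(z).replace("7", "9"))
print(fixed)  # -> 911
```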
# Note the following code doesn't work, it is for demonstration purposes only!
class Ball():
def get_speed():
        """The get_speed method returns the current speed of the ball."""
# magic goes here...
return speed
def get_direction():
        """The get_direction method returns the direction the ball is currently traveling in (in 3 dimensions)."""
# magic goes here...
return direction
def get_position():
        """The get_position method returns the current position of the ball."""
# magic goes here...
return position
def bounce_off_of_object(other_object):
        """This method calculates a new speed and position of the ball after colliding with another object."""
# magic goes here...
return # something
wilson = Ball() # creates a ball called Wilson.
# Note the following code doesn't work, it is for demonstration purposes only!
class Football_Player():
def name(name):
        """This method gives the footballer a name."""
# magic goes here...
return name
def get_position():
        """The get_position method returns the current position of the player."""
# magic goes here...
return position
def get_speed():
        """The get_speed method returns the current speed of the player."""
# magic goes here...
return speed
def move(direction, speed):
The move method makes the player run in X direction at Y speed.
# magic goes here...
return # new_speed of self
def kick_ball(ball, power, direction):
This method kicks ball Z with X power in Y direction.
# magic goes here...
return # new_speed of ball, direction of ball.
# Note the following code doesn't work, it is for demonstration purposes only!
Messi = Football_Player().name("Messi") # This line creates a footballer called 'Messi'
football = Ball() # create a ball object.
if football.get_position == Messi.get_position: # This line asks if Messi and the football are at the same position in space.
    Messi.kick_ball(football, "100mph", direction="NorthWest") # Messi kicks the ball 100mph to the northwest.
else: # If Messi is NOT near the ball then...
target_position = football.get_position() # Messi wants to know where the football is.
Messi.move(target_position, "12mph") # Messi is now running at 12mph towards the ball.
class Time(object):
# Do you remember in the variable names lecture I said that
# you shouldn't name a variable '__something' unless you know what you are doing?
# Well, thats because double underscore is usually reserved for "hidden" class properties/functions.
# In this particular case, __init__, __repr__, __add__, all have very special meanings in Python.
def __init__(self, hours, mins=0):
assert isinstance(hours, int) and 0<= hours <=24 # Hours should be between 0 and 24 for a 24-hour clock.
assert isinstance(mins, int) and 0<= mins <=60
self.hours = hours
self.mins = mins
def __repr__(self):
return format_time(self.hours, self.mins)
def __add__(self, other):
        """This function takes itself and another Time object and adds them together."""
a = self.mins
b = other.mins
x = self.hours
y = other.hours
c = str( (a + b) % 60 ).zfill(2) # Integer division and Modulo Arithmetic. You have seen this before!
z = str((((a+b) // 60) + x + y) % 25).zfill(2)
return format_time(z, c)
def format_time(h, m):
    """This function takes hours (int) and mins(int) and returns them in a pretty format. 2, 59 => 02:59"""
return "{}:{}".format(str(h).zfill(2), str(m).zfill(2))
a = Time(4,50)
b = Time(0, 1)
c = Time(24, 59)
print(a, b, c, sep="\n")
print( Time(10,40) + Time(0,20) ) # 10:40 + 00:20 => 11:00
print( Time(0,0) + Time(7,30) ) # 00:00 + 07:30 => 7:30
print( Time(20,20) + 10 ) # Error, can't add integer to a Time object
# Notice that, just like my "Time" object, integers, strings, floats, etc are all implemented in Python as classes.
print(type(Time(10,0)),type("hello"),type(10), type(5.67), sep="\n")
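If we wanted `Time + int` to work, `__add__` could check the type of `other` and treat a bare integer as a number of minutes. This is a sketch of one possible extension — `TimeWithInts` is a name introduced here, not part of the original lesson — using a plain 24-hour wrap-around:

```python
class TimeWithInts:
    def __init__(self, hours, mins=0):
        self.hours = hours % 24
        self.mins = mins % 60

    def __add__(self, other):
        # treat a bare integer as a number of minutes
        if isinstance(other, int):
            other = TimeWithInts(0, other)
        total = (self.hours + other.hours) * 60 + self.mins + other.mins
        return TimeWithInts(total // 60, total % 60)

    def __repr__(self):
        return "{:02d}:{:02d}".format(self.hours, self.mins)

print(TimeWithInts(20, 20) + 10)                    # -> 20:30
print(TimeWithInts(10, 40) + TimeWithInts(0, 20))   # -> 11:00
```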
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Energy Surfaces of Al-Ni (N. Dupin et al., 2001)
Step2: Al-Zn (S. Mey et al., 1993)
Step4: Prototyping New Models (Advanced)
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from pycalphad import Database, binplot
db_alfe = Database('alfe_sei.TDB')
my_phases_alfe = ['LIQUID', 'B2_BCC', 'FCC_A1', 'HCP_A3', 'AL5FE2', 'AL2FE', 'AL13FE4', 'AL5FE4']
fig = plt.figure(figsize=(9,6))
pdens = [{'B2_BCC': 20000}, 2000]
%time binplot(db_alfe, ['AL', 'FE', 'VA'] , my_phases_alfe, 'X(AL)', 300, 2000, steps=100, \
pdens=pdens, ax=fig.gca(), mode='numexpr')
plt.show()
%matplotlib inline
from pycalphad import energy_surf, Database
from pycalphad.plot.utils import phase_legend
import numpy as np
import matplotlib.pyplot as plt
db_alni = Database('NI_AL_DUPIN_2001.TDB')
my_phases_alni = ['LIQUID', 'FCC_L12', 'BCC_B2', 'AL3NI5', 'AL3NI2', 'AL3NI1']
fig = plt.figure(figsize=(9,6))
ax = fig.gca()
%time nrg_df = energy_surf(db_alni, ['AL', 'NI', 'VA'], my_phases_alni, T=1000.0)
legend_handles, colorlist = phase_legend(my_phases_alni)
group_df = nrg_df.groupby('Phase')
for phase, phase_df in group_df:
ax = phase_df.plot(kind='scatter', x='X(AL)', y='GM', ax=ax, color=colorlist[phase.upper()], s=1, xlim=(0,1), ylim=(-11e4, -3e4))
ax = ax.legend(handles=legend_handles, loc='center left', bbox_to_anchor=(1, 0.6))
plt.title('Al-Ni Gibbs Energy at 1000 K', fontsize=20)
plt.show()
nrg_df
nrg_df.to_excel('alni_energies.xlsx')
from pycalphad import Model, Database, binplot
db_alzn = Database('alzn_noparams.tdb')
assessed_parameters = {
'LIQALZN0H': 10465.3,
'LIQALZN0S': 3.39259,
'FCCALZN0H': 7297.5,
'FCCALZN0S': -0.47512,
'FCCALZN1H': 6612.9,
'FCCALZN1S': 4.5911,
'FCCALZN2H': -3097.2,
'FCCALZN2S': -3.306635,
'HCPALZN0H': 18821.0,
'HCPALZN0S': 8.95255,
'HCPALZN3H': -702.8,
'HCPALZN3S': 0.0
}
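The `*H`/`*S` pairs above are the enthalpy and entropy parts of Redlich–Kister interaction parameters (each order k presumably entering as L_k = H − T·S). As a hedged aside — plain Python, not pycalphad code — the binary excess term they feed into has this shape:

```python
def redlich_kister_excess(x_b, L):
    # binary Redlich-Kister excess: x_a * x_b * sum_k L[k] * (x_a - x_b)**k
    x_a = 1.0 - x_b
    return x_a * x_b * sum(Lk * (x_a - x_b) ** k for k, Lk in enumerate(L))

# with a single 0th-order parameter the term is symmetric about x = 0.5
print(redlich_kister_excess(0.5, [10465.3]))  # -> 2616.325
```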
# Initialize phase models
assessed_models = {
'LIQUID': Model(db_alzn, ['AL', 'ZN'], 'LIQUID', parameters=assessed_parameters),
'FCC_A1': Model(db_alzn, ['AL', 'ZN', 'VA'], 'FCC_A1', parameters=assessed_parameters),
'HCP_A3': Model(db_alzn, ['AL', 'ZN', 'VA'], 'HCP_A3', parameters=assessed_parameters)
}
fig = plt.figure(figsize=(9,6))
ax = binplot(db_alzn, ['AL', 'ZN', 'VA'] , ['LIQUID', 'FCC_A1', 'HCP_A3'], \
'X(ZN)', 300, 1000, ax=plt.gca(), model=assessed_models)
ax = ax.set_title('Al-Zn', fontsize=20)
blank_parameters = {
'LIQALZN0H': 0.0,
'LIQALZN0S': 0.0,
'FCCALZN0H': 0.0,
'FCCALZN0S': 0.0,
'FCCALZN1H': 0.0,
'FCCALZN1S': 0.0,
'FCCALZN2H': 0.0,
'FCCALZN2S': 0.0,
'HCPALZN0H': 0.0,
'HCPALZN0S': 0.0,
'HCPALZN3H': 0.0,
'HCPALZN3S': 0.0
}
models = {
'LIQUID': Model(db_alzn, ['AL', 'ZN'], 'LIQUID', parameters=blank_parameters),
'FCC_A1': Model(db_alzn, ['AL', 'ZN', 'VA'], 'FCC_A1', parameters=blank_parameters),
'HCP_A3': Model(db_alzn, ['AL', 'ZN', 'VA'], 'HCP_A3', parameters=blank_parameters)
}
fig = plt.figure(figsize=(9,6))
ax = binplot(db_alzn, ['AL', 'ZN', 'VA'] , ['LIQUID', 'FCC_A1', 'HCP_A3'], \
'X(ZN)', 300, 1000, ax=plt.gca(), model=models)
ax = ax.set_title('Al-Zn with endmembers only', fontsize=20)
from sympy import Piecewise, log
from tinydb import where
import pycalphad.variables as v
class MyMagneticModel(Model):
def __init__(self, db, comps, phase, **kwargs):
super().__init__(db, comps, phase, **kwargs)
self.models['mag'] = self.my_magnetic_energy(db.phases[phase], db.search)
def my_magnetic_energy(self, phase, param_search):
        """Return the energy from magnetic ordering in symbolic form.

        The implemented model is the Inden-Hillert-Jarl formulation.
        The approach follows from the background section of W. Xiong, 2011.
        Note that this is a demonstration model; use pycalphad's builtin
        implementation as a reference.
        """
site_ratio_normalization = self._site_ratio_normalization(phase)
# define basic variables
afm_factor = phase.model_hints['ihj_magnetic_afm_factor']
bm_param_query = (
(where('phase_name') == phase.name) & \
(where('parameter_type') == 'BMAGN') & \
(where('constituent_array').test(self._array_validity))
)
tc_param_query = (
(where('phase_name') == phase.name) & \
(where('parameter_type') == 'TC') & \
(where('constituent_array').test(self._array_validity))
)
mean_magnetic_moment = \
self.redlich_kister_sum(phase, param_search, bm_param_query)
beta = Piecewise(
(mean_magnetic_moment, mean_magnetic_moment > 0),
(mean_magnetic_moment/afm_factor, mean_magnetic_moment <= 0)
)
curie_temp = \
self.redlich_kister_sum(phase, param_search, tc_param_query)
tc = Piecewise(
(curie_temp, curie_temp > 0),
(curie_temp/afm_factor, curie_temp <= 0)
)
tau = Piecewise(
(v.T / tc, tc > 0),
(10000., tc == 0) # tau is 'infinity'
)
# define model parameters
p = phase.model_hints['ihj_magnetic_structure_factor']
A = 518/1125 + (11692/15975)*(1/p - 1)
# factor when tau < 1
sub_tau = 1 - (1/A) * ((79/(140*p))*(tau**(-1)) + (474/497)*(1/p - 1) \
* ((tau**3)/6 + (tau**9)/135 + (tau**15)/600)
)
# factor when tau >= 1
super_tau = -(1/A) * ((tau**-5)/10 + (tau**-15)/315 + (tau**-25)/1500)
expr_cond_pairs = [(sub_tau, tau < 1),
(super_tau, tau >= 1)
]
g_term = Piecewise(*expr_cond_pairs)
return 1e6 * v.R * v.T * log(beta+1) * \
g_term / site_ratio_normalization
%matplotlib inline
import matplotlib.pyplot as plt
from pycalphad import Database, binplot
db_alfe = Database('alfe_sei.TDB')
my_phases_alfe = ['LIQUID', 'B2_BCC', 'FCC_A1', 'HCP_A3', 'AL5FE2', 'AL2FE', 'AL13FE4', 'AL5FE4']
models = [{'FCC_A1': MyMagneticModel}, Model]
fig = plt.figure(figsize=(9,6))
pdens = [{'B2_BCC': 20000}, 2000]
%time binplot(db_alfe, ['AL', 'FE', 'VA'] , my_phases_alfe, 'X(AL)', 300, 2000, steps=100, \
pdens=pdens, ax=fig.gca(), model=models, mode='numexpr')
plt.title('Al-Fe with toy magnetic model', fontsize=20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: But, why?
Step2: Our fledgeling Python script kiddie then enters the following code
Step3: resulting in this workflow
Step4: How does it work?
Step5: Mocking a 'real' Python object
Step6: Merging workflows into a function call
Step7: eeeehm, What can we do (sort of)?
Step8:
Step9: embedded workflows
Step10:
Step11: Using objects
Step12:
Step13:
|
<ASSISTANT_TASK:>
Python Code:
from noodles import schedule, run, run_parallel, gather
@schedule
def add(a, b):
return a+b
@schedule
def sub(a, b):
return a-b
@schedule
def mul(a, b):
return a*b
u = add(5, 4)
v = sub(u, 3)
w = sub(u, 2)
x = mul(v, w)
# draw_workflow is a small plotting helper (not shown in this excerpt) that renders the workflow graph to a PNG
draw_workflow('callgraph1.png', x._workflow)
run_parallel(x, n_threads = 2)
def schedule(f):
@wraps(f)
def wrapped(*args, **kwargs):
bound_args = signature(f).bind(*args, **kwargs)
bound_args.apply_defaults()
return PromisedObject(merge_workflow(f, bound_args))
return wrapped
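Stripped of the workflow machinery, the core trick — a decorator that records the call instead of performing it — looks like this. It is a toy illustration introduced here, not actual Noodles code, and `capture`/`add_pair` are names invented for the example:

```python
from functools import wraps

def capture(f):
    # instead of running f, return a 'node' describing the deferred call
    @wraps(f)
    def wrapped(*args, **kwargs):
        return ("node", f, args, kwargs)
    return wrapped

@capture
def add_pair(a, b):
    return a + b

node = add_pair(1, 2)
print(node[0])                       # -> node
print(node[1](*node[2], **node[3]))  # -> 3  (evaluating the deferred call)
```

Noodles does the same in spirit, except the node is merged into a full dependency graph and wrapped in a `PromisedObject`.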
class PromisedObject:
def __init__(self, workflow):
self._workflow = workflow
def __call__(self, *args, **kwargs):
return _do_call(self._workflow, *args, **kwargs)
def __getattr__(self, attr):
if attr[0] == '_':
return self.__dict__[attr]
return _getattr(self._workflow, attr)
def __setattr__(self, attr, value):
if attr[0] == '_':
self.__dict__[attr] = value
return
self._workflow = get_workflow(_setattr(self._workflow, attr, value))
def merge_workflow(f, bound_args):
variadic = next((x.name for x in bound_args.signature.parameters.values()
if x.kind == Parameter.VAR_POSITIONAL), None)
if variadic:
bound_args.arguments[variadic] = list(bound_args.arguments[variadic])
node = FunctionNode(f, bound_args)
idx = id(node)
nodes = {idx: node}
links = {idx: set()}
for address in serialize_arguments(bound_args):
workflow = get_workflow(
ref_argument(bound_args, address))
if not workflow:
continue
set_argument(bound_args, address, Parameter.empty)
for n in workflow.nodes:
if n not in nodes:
nodes[n] = workflow.nodes[n]
links[n] = set()
links[n].update(workflow.links[n])
links[workflow.top].add((idx, address))
return Workflow(id(node), nodes, links)
from noodles import schedule, run, run_parallel, gather
@schedule
def sum(a, buildin_sum = sum):
return buildin_sum(a)
r1 = add(1, 1)
r2 = sub(3, r1)
def foo(a, b, c):
return mul(add(a, b), c)
multiples = [foo(i, r2, r1) for i in range(6)]
r5 = sum(gather(*multiples))
draw_workflow('callgraph2.png', r5._workflow)
run_parallel(r5, n_threads = 4)
@schedule
def sqr(a):
return a*a
@schedule
def map(f, lst):
return gather(*[f(x) for x in lst])
@schedule
def num_range(a, b):
return range(a, b)
wf = sum(map(sqr, num_range(0, 1000)))
draw_workflow('callgraph3.png', wf._workflow)
run_parallel(wf, n_threads=4)
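The workflow above should evaluate to the same number as the direct expression below — a quick cross-check, not part of the original (note that `sum` and `map` are shadowed by the scheduled versions in this cell, hence `builtins`):

```python
import builtins

# direct evaluation of the quantity the workflow computes
direct = builtins.sum(x * x for x in range(1000))
print(direct)  # -> 332833500
```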
@schedule
class A:
def __init__(self, value):
self.value = value
def multiply(self, factor):
self.value *= factor
a = A(5)
a.multiply(10)
a.second = 7
draw_workflow("callgraph4.png", a._workflow)
@schedule
class A:
def __init__(self, value):
self.value = value
def multiply(self, factor):
self.value *= factor
return self
a = A(5)
a = a.multiply(10)
a.second = 7
draw_workflow("callgraph5.png", a._workflow)
result = run_parallel(a, n_threads=4)
print(result.value, result.second)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Source
Step2: Kharrufa method. Kharrufa (1985) derived an equation through correlation of ET/p and T in the form of ET = 0.34 * p * Ta**1.3 (matching the implementation below).
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('../data/hbv_s_data.csv', index_col=0, parse_dates=True)
evap_true = np.array([0.6,1.9,2.4,1.8,1.4,1.3,1.0,0.8,0.6,0.4,0.2,0.3])*1.2 #evapo for jan-dec
def et(T):
    # saturation vapour pressure approximation (hPa); moved to module level so the
    # exploratory cell below can reuse it (it was originally nested inside romanenko)
    return 33.8679*( ((0.00738*T + 0.8072)**8) - 0.000019*(np.abs(1.8*T + 48)) + 0.001316)

def romanenko(data):
    Ta = np.array([data.loc[data.index.month == x, 'Temp'].mean() for x in range(1, 13)])
    Td = 2
    Rh = et(Td)*100/et(Ta)
    # best results with manual evap setup
    # Rh = np.array([60, 0, 0, 45, 70, 80, 85, 90, 90, 90, 90, 76])
    return (0.0018*((25+Ta)**2))*(100-Rh)/30

Ta = np.array([data.loc[data.index.month == x, 'Temp'].mean() for x in range(1, 13)])
Td = np.array([0, 0, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
et(Td)*100/et(Ta)
Ta
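The polynomial inside `romanenko` approximates saturation vapour pressure in hPa. As a sanity check — restated standalone here, since the original nests it inside the function, with `sat_vapour_pressure` a name introduced for the example — at 20 °C it should give roughly the textbook 23.4 hPa:

```python
import numpy as np

def sat_vapour_pressure(T):
    # same polynomial as et() above, restated so it can be called directly
    return 33.8679 * (((0.00738 * T + 0.8072) ** 8)
                      - 0.000019 * np.abs(1.8 * T + 48) + 0.001316)

print(round(float(sat_vapour_pressure(20.0)), 1))  # -> 23.4
```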
plt.plot(range(1,13), romanenko(data), 'b',
range(1,13), evap_true, 'r')
def kharufa(data):
    Ta = np.array([data.loc[data.index.month == x, 'Temp'].mean() for x in range(1, 13)])
p = 0.85
return 0.34*p*(Ta**1.3)
plt.plot(range(1,13), kharufa(data), 'b',
range(1,13), romanenko(data), 'g',
range(1,13), evap_true, 'r')
print(kharufa(data))
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import sys
sys.path.append('../models/')
import hbv_s
swe_riv = pd.read_csv('../data/hbv_s_data.csv', index_col=0, parse_dates=True)
Qobs = swe_riv.Qobs
Qsim = hbv_s.simulation(swe_riv)
pd.DataFrame({'obs':Qobs, 'sim':Qsim}, index=swe_riv.index).plot()
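To go beyond eyeballing the obs/sim plot, a common score for the fit is the Nash–Sutcliffe efficiency (1 is a perfect fit, values at or below 0 mean the model is no better than the observed mean). This is an aside, not part of the original notebook, and `nse` is a name introduced here:

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nse([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]))  # -> 1.0
```

For the series above it would be called as `nse(Qobs, Qsim)`.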
swe_riv2 = swe_riv.drop('Evap', axis=1)  # positional axis argument is deprecated in newer pandas
monthly_evap = romanenko(swe_riv2)  # compute the 12 monthly values once instead of once per row
swe_riv2['Evap'] = swe_riv2.index.map(lambda x: monthly_evap[x.month-1])
Qsim = hbv_s.simulation(swe_riv2)
pd.DataFrame({'obs':Qobs, 'sim':Qsim}, index=swe_riv2.index).plot()
swe_riv2['Evap'].plot();
swe_riv['Evap'].plot()
evap_true
a = np.array([0.6,1.9,2.4,1.8,1.4,1.3,1.0,0.8,0.6,0.4,0.2,0.3])
np.savetxt('../data/Evap_monthly_constants.txt', a)
np.loadtxt('../data/Evap_monthly_constants.txt')
evap_constants = np.loadtxt('../data/Evap_monthly_constants.txt')  # read the file once instead of once per row
swe_riv2['Evap2'] = swe_riv2.index.map(lambda x: evap_constants[x.month-1])
swe_riv2[['Evap', 'Evap2']].plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is an example of applying learning rate decay in TensorFlow, using Gradient Descent to solve a simple Linear Regression.
Step2: Now we're ready to set up the TensorFlow solution
Step3: First we define placeholders and a weights vector that we'll learn from the data.
Step4: Then we use calculate the MSE between the prediction $w^TX$ and the actual y and use a GradientDescentOptimizer to minimize the loss using a staircased learning rate.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import numpy as np
import pandas as pd
X_df = pd.DataFrame(np.random.rand(100, 3))
w_actual = pd.Series(data=[2, 3, 5])
y_true = X_df.dot(w_actual)
# add a small noise to y
y_true += np.random.normal(scale=1E-6, size=100)
import tensorflow as tf
tf.reset_default_graph()
X = tf.placeholder(shape=X_df.shape, dtype=tf.float32, name='X')
y = tf.placeholder(shape=[len(y_true), 1], dtype=tf.float32, name='y')
w = tf.get_variable(
'coeff',
initializer=tf.truncated_normal_initializer(dtype=tf.float32), shape=[3, 1]
)
y_pred = tf.matmul(X, w)
loss = tf.losses.mean_squared_error(y_pred, y)
starter_learning_rate = 0.08
learning_rate = tf.train.exponential_decay(
learning_rate=starter_learning_rate,
global_step=tf.train.get_or_create_global_step(),
decay_steps=10,
decay_rate=0.8,
staircase=True,
name='learning_rate')
train_op = tf.train.GradientDescentOptimizer(
learning_rate=learning_rate).minimize(loss, global_step=tf.train.get_global_step())
losses = []
learning_rates = []
input_data = {
X: X_df.values,
y: y_true.values.reshape([100, 1])
}
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(1, 500):
_, loss_val, lr = sess.run([train_op, loss, learning_rate], feed_dict=input_data)
losses.append(loss_val)
learning_rates.append(lr)
w_estimated = sess.run(w).squeeze()
_, ax = plt.subplots(1, 3, figsize=(12, 3))
losses = pd.Series(losses)
_ = losses.rolling(10).mean().plot(title='Loss over Training Steps', ax=ax[0])
w_df = pd.DataFrame.from_records(
[w_actual.values, w_estimated],
index=['actual', 'estimated']).T
_ = w_df.plot(kind='bar', rot=0, ax=ax[1], title='Actual vs. Estimated Weights')
learning_rates = pd.Series(learning_rates)
_ = learning_rates.plot(ax=ax[2], title='Learning Rate over Training Steps')
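The staircase schedule plotted above can be reproduced in plain Python; this is a hedged sketch of what `tf.train.exponential_decay` with `staircase=True` computes, namely `lr = starter * decay_rate ** floor(step / decay_steps)`.

```python
def staircase_lr(step, starter=0.08, decay_rate=0.8, decay_steps=10):
    # Mirrors tf.train.exponential_decay(..., staircase=True):
    # lr = starter * decay_rate ** floor(step / decay_steps)
    return starter * decay_rate ** (step // decay_steps)

print(staircase_lr(0), staircase_lr(10), staircase_lr(25))
```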
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we open the file for reading. We use the with statement that takes care of closing the file
Step2: In the code above, we have imported the module csv, which allows us to parse character-delimited files. In this case, we do not need to specify any special option, as we're reading a plain-vanilla csv file, delimited by commas.
Step3: Now all the cities are stored in the set cities, with all the duplicates automatically removed (as we're using a set)
Step4: Write a program that creates a dictionary where the keys are the cities, and the values are the number of records (rows) for that city in the data.
Step5: The code above shows that when the key is not already present, it will be added with an initial value of 1. If, on the other hand, the key is present, we simply increment its associated value
Step6: With this at hand, we can write our program
Step7: That's it. Let's print the counts for a few cities
Step8: Write a program that calculates the mean population for each city, obtained by averaging the value of pop.
Step9: Excellent. Now each key in the dictionary indexes a list whose first element is the sum of all the population values, and the second element is the number of records that contributed to the sum. To obtain the average population, we divide the first by the second
Step10: Let's see some of the averages to make sure they make sense
Step11: If we want to print only a few decimals, we can use round
Step12: Write a program that calculates the mean population for each city and year.
Step13: Let's look at the results for Chicago
|
<ASSISTANT_TASK:>
Python Code:
cities = set([]) # initialize an empty set
import csv # we this module to handle csv files
with open('../data/Dalziel2016_data.csv', 'r') as f: # 'r' stands for reading
my_csv = csv.DictReader(f) # set up the csv reader
for line in my_csv: # loop over all lines
print(line)
break # break the loop after printing the first line to inspect results
import csv # we use the csv module, as we want to read a csv file
with open('../data/Dalziel2016_data.csv', 'r') as f: # 'r' stands for reading
my_csv = csv.DictReader(f)
for line in my_csv:
cities.add(line['loc'])
cities
a = {} # an empty dictionary
a['my_new_key'] = a.get('my_new_key', 0) + 1
a
a['my_new_key'] = a.get('my_new_key', 0) + 1
a
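For reference (not part of the original exercise), the standard library offers `collections.Counter`, which does the same get-or-default bookkeeping automatically:

```python
from collections import Counter

c = Counter()
c['my_new_key'] += 1  # missing keys default to 0
c['my_new_key'] += 1
print(c['my_new_key'])  # 2
```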
citycount = {} # initiate an empty dictionary
import csv # we use the csv module, as we want to read a csv file
with open('../data/Dalziel2016_data.csv', 'r') as f: # 'r' stands for reading
my_csv = csv.DictReader(f)
for line in my_csv:
# this is the city to update
mycity = line['loc']
# if it's present, increment the value
# if it's not present, initialize to 1
citycount[mycity] = citycount.get(mycity, 0)
citycount[mycity] = citycount[mycity] + 1
for city in ['CHICAGO', 'LOS ANGELES', 'NEW YORK']:
print(city, citycount[city])
citypop = {}
import csv # we use the csv module, as we want to read a csv file
with open('../data/Dalziel2016_data.csv', 'r') as f: # 'r' stands for reading
my_csv = csv.DictReader(f)
for line in my_csv:
# this is the city to update
mycity = line['loc']
# current pop
pop = float(line['pop']) # transform to float
# if it's present, increment the value
# if it's not present, initialize a list with both population and count as zero
citypop[mycity] = citypop.get(mycity, [0,0])
# update population (stored as first value of list)
citypop[mycity][0] = citypop[mycity][0] + pop
# update number of records (stored as second value of list)
citypop[mycity][1] = citypop[mycity][1] + 1
citypop
for city in citypop.keys():
citypop[city][0] = citypop[city][0] / citypop[city][1]
for city in ['CHICAGO', 'LOS ANGELES', 'NEW YORK']:
print(city, citypop[city][0])
for city in ['CHICAGO', 'LOS ANGELES', 'NEW YORK']:
print(city, round(citypop[city][0],1))
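The same `[sum, count]` bookkeeping can be written with `collections.defaultdict`, which removes the explicit `get` step. This is an illustrative sketch on made-up toy data, not the census file:

```python
from collections import defaultdict

totals = defaultdict(lambda: [0.0, 0])  # city -> [population sum, record count]
for city, pop in [('A', 10.0), ('A', 30.0), ('B', 5.0)]:
    totals[city][0] += pop
    totals[city][1] += 1
print(totals['A'][0] / totals['A'][1])  # 20.0
```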
cityyear = {}
cityyear = {}
import csv # we use the csv module, as we want to read a csv file
with open('../data/Dalziel2016_data.csv', 'r') as f: # 'r' stands for reading
my_csv = csv.DictReader(f)
for line in my_csv:
# this is the city to update
mycity = line['loc']
# this is the year to update
year = line['year']
# current pop
pop = float(line['pop']) # transform to float
# make sure the city is in the dictionary, or initialize
cityyear[mycity] = cityyear.get(mycity, {})
# make sure the year is in the sub-dictionary, or initialize
cityyear[mycity][year] = cityyear[mycity].get(year, [0,0])
# now proceed as for exercise 3 but access the inner dictionary
# update population
cityyear[mycity][year][0] = cityyear[mycity][year][0] + pop
# update number of records
cityyear[mycity][year][1] = cityyear[mycity][year][1] + 1
# now compute averages
for city in cityyear.keys():
for year in cityyear[city].keys():
cityyear[city][year][0] = cityyear[city][year][0] / cityyear[city][year][1]
# a dictionary has no natural order but here we want to order by year
# store the years in a list
years = list(cityyear['CHICAGO'].keys())
# sort the years
years.sort() # this is done in place!
# now print population for each year
for year in years:
print(year, round(cityyear['CHICAGO'][year][0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: try now for bouillon
|
<ASSISTANT_TASK:>
Python Code:
act = Database("ecoinvent 3.2 cutoff").search("pineapple")
act
act = Database("ecoinvent 3.2 cutoff").search("pineapple")[1]
act
lca = LCA(
{act.key: 1},
method=('IPCC 2013', 'climate change', 'GWP 100a'),
)
lca.lci()
lca.lcia()
lca.score
bouillon = Database("bouillon")
bou = bouillon.search("Paste")
bou
lca = LCA(
demand={bouillon.random(): 1},
method=('IPCC 2013', 'climate change', 'GWP 100a'),
)
lca.lci()
lca.lcia()
lca.score
lca = LCA(
demand={bouillon.random(): 1},
method=('IPCC 2013', 'climate change', 'GWP 100a'),
)
lca.lci()
lca.lcia()
lca.score
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A little story
Step2: Six months later, you create a function f5 that calls a function f4 and the function f2.
Step3: Oh, by the way, in doing so you modify the function f2, and you have somewhat forgotten what the function f3 did... In short, you do not know whether the function f3 will be affected by the change introduced in f2. This is the kind of problem encountered every day when several people write software together over a long period. This notebook presents the classic building blocks used to ensure the robustness of a piece of software.
Step4: Writing a unit test
Step5: There are badges for just about everything.
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.graphhelper import draw_diagram
draw_diagram("blockdiag { f0 -> f1 -> f3; f2 -> f3;}")
draw_diagram('blockdiag { f0 -> f1 -> f3; f2 -> f3; f2 -> f5 [color="red"]; f4 -> f5 [color="red"]; }')
def solve_polynom(a, b, c):
    # real roots of a*x**2 + b*x + c = 0, or None if the discriminant is negative
    delta = b**2 - 4*a*c
    if delta < 0:
        return None
    return (-b - delta**0.5) / (2*a), (-b + delta**0.5) / (2*a)
from IPython.display import SVG
SVG("https://travis-ci.com/sdpython/ensae_teaching_cs.svg?branch=master")
SVG("https://codecov.io/github/sdpython/ensae_teaching_cs/coverage.svg?branch=master")
SVG("https://badge.fury.io/py/ensae_teaching_cs.svg")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Structure of this module
Step2: Convenient functions
Step3: textwrap.fill(text, width=70, **kwargs)
Step4: textwrap.dedent(text)
Step5: You probably notice that the result of textwrap.fill has some unwanted spaces; in this case we can work around it by dedenting the text first and then filling it.
Step6: textwrap.indent(text, prefix, predicate=None)
Step7: To control which lines receive the new prefix, pass a callable as the predicate argument to indent(). The callable will be invoked for each line of text in turn and the prefix will be added for lines where the return value is true.
Step8: This example adds the prefix EVEN to lines that contain an even number of characters.
Step9: textwrap.TextWrapper(**kwargs)
|
<ASSISTANT_TASK:>
Python Code:
import textwrap
sample_text = '''
The textwrap module can be used to format text for output in
situations where pretty-printing is desired. It offers
programmatic functionality similar to the paragraph wrapping
or filling features found in many text editors.
'''
wrap_result = textwrap.wrap(sample_text,width=30)
wrap_result
fill_result = textwrap.fill(sample_text,width=30)
fill_result
# before dedent
sample_text
dedent_result = textwrap.dedent(sample_text)
# after dedent
dedent_result
dedent_result_wrap = textwrap.fill(dedent_result,width=30)
dedent_result_wrap
indent_result = textwrap.indent(sample_text,prefix="=A=")
indent_result
print(indent_result)
def should_indent(line):
print('Indent {!r}?'.format(line))
return len(line.strip()) % 2 == 0
dedented_text = textwrap.dedent(sample_text)
wrapped = textwrap.fill(dedented_text, width=50)
final = textwrap.indent(wrapped, 'EVEN ',
predicate=should_indent)
print('\nQuoted block:\n')
print(final)
shorten_result = textwrap.shorten(sample_text,width=90)
shorten_result
# use different placeholder
shorten_result_1 = textwrap.shorten(sample_text,width=90,placeholder='....')
shorten_result_1
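Two properties of `shorten` worth noting: it first collapses whitespace, then truncates on word boundaries, and the result never exceeds the requested width. A quick standalone check (with its own sample string):

```python
import textwrap

s = 'word ' * 100
short = textwrap.shorten(s, width=90)
# The result fits within width and, since truncation occurred, ends with the
# default placeholder ' [...]'.
assert len(short) <= 90
assert short.endswith('[...]')
```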
dedented_text = textwrap.dedent(sample_text).strip()
print(textwrap.fill(dedented_text,
initial_indent='',
subsequent_indent=' ' * 4,
width=50,
))
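When the same settings are used repeatedly, they can be stored once on a `TextWrapper` instance; `textwrap.fill` is just a convenience wrapper around one. A small self-contained illustration (with its own sample text):

```python
import textwrap

text = ("The textwrap module can be used to format text for output "
        "in situations where pretty-printing is desired.")
wrapper = textwrap.TextWrapper(width=50, subsequent_indent=' ' * 4)
# fill() on the instance matches the module-level call with the same settings:
assert wrapper.fill(text) == textwrap.fill(text, width=50,
                                           subsequent_indent=' ' * 4)
print(wrapper.fill(text))
```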
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normalization
Step2: Preparing the dataset
Step3: Group normalization tutorial
Step4: Instance normalization tutorial
Step5: Layer normalization tutorial
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
# Reshape into "channels last" setup.
tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),
tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"),
# Groupnorm Layer
tfa.layers.GroupNormalization(groups=5, axis=3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_test, y_test)
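A hedged note on how these layers relate: group normalization splits the channel axis into `groups` groups, so the channel count must be divisible by `groups` (10 channels with `groups=5` above); `groups == channels` behaves like instance norm and `groups == 1` like layer norm.

```python
channels, groups = 10, 5       # values used in the model above
assert channels % groups == 0  # each group normalizes channels // groups = 2 channels
# Edge cases: groups == channels (instance-norm-like), groups == 1 (layer-norm-like)
assert channels % channels == 0 and channels % 1 == 0
```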
model = tf.keras.models.Sequential([
# Reshape into "channels last" setup.
tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),
tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"),
# InstanceNorm Layer
tfa.layers.InstanceNormalization(axis=3,
center=True,
scale=True,
beta_initializer="random_uniform",
gamma_initializer="random_uniform"),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_test, y_test)
model = tf.keras.models.Sequential([
# Reshape into "channels last" setup.
tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),
tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"),
# LayerNorm Layer
tf.keras.layers.LayerNormalization(axis=3 , center=True , scale=True),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_test, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up ...
Step2: Heat equation
Step3: Wave equation
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib import rcParams
%matplotlib notebook
rcParams['mathtext.fontset'] = 'cm'
rcParams['font.size'] = 14
red = "#e41a1c"
blue = "#377eb8"
gray = "#eeeeee"
def heat_update(n):
ax0.cla()
for cont in range(ntime):
u_heat[1:N-1] = dt/(factor*dx**2)*(u_heat[2:N] + u_heat[0:N-2] -
2*u_heat[1:N-1]) + u_heat[1:N-1]
ax0.plot(x, u_heat)
ax0.set_ylim(-1.2, 1.2)
ax0.set_title("$t = {:.2f}$".format(n*ntime*dt))
N = 2001
x = np.linspace(-1, 1, N)
dx = x[1] - x[0]
diff = 1
factor = 10
dt = dx**2/(factor*diff)
ntime = 10
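A hedged stability note on the explicit (FTCS) update used in heat_update: it is stable when the mesh ratio $r = \mathrm{diff}\,\Delta t/\Delta x^2 \le 1/2$, and choosing `dt = dx**2/(factor*diff)` makes $r = 1/\mathrm{factor}$.

```python
def ftcs_mesh_ratio(dx, diff, factor):
    # r = diff*dt/dx**2 with dt = dx**2/(factor*diff); this simplifies to 1/factor
    dt = dx**2 / (factor * diff)
    return diff * dt / dx**2

# With diff = 1, factor = 10 and dx = 2/2000 as above, r = 0.1 <= 0.5.
assert ftcs_mesh_ratio(2.0 / 2000, 1.0, 10) <= 0.5
```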
u_heat = 1 - abs(4*x)
u_heat[x < -0.25] = 0
u_heat[x > 0.25] = 0
fig0 = plt.figure(figsize=(5, 5))
ax0 = fig0.add_subplot(111)
ani0 = animation.FuncAnimation(fig0, heat_update, range(100), blit=False, repeat=False)
plt.show()
def wave_update(n):
ax1.cla()
for cont in range(ntime):
        u_aux = u_w.copy()
        u_aux2 = u_old.copy()
        u_old[:] = u_w[:]
u_w[1:N-1] = alpha**2*(u_aux[2:N] + u_aux[0:N-2] - 2*u_aux[1:N-1]) \
+ 2*u_aux[1:N-1] - u_aux2[1:N-1]
ax1.plot(x, u_w)
ax1.set_ylim(-1.2, 1.2)
ax1.set_title("$t = {:.2f}$".format(n*ntime*dt))
N = 2001
x = np.linspace(-1, 1, N)
dx = x[1] - x[0]
vel = 1
factor = 10
dt = dx/(factor*vel)
alpha = vel*dt/dx
ntime = 1000
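The analogous condition for the explicit wave scheme is the CFL condition $\alpha = v\,\Delta t/\Delta x \le 1$; the choice `dt = dx/(factor*vel)` gives $\alpha = 1/\mathrm{factor}$. A standalone check:

```python
def courant_number(dx, vel, factor):
    # alpha = vel*dt/dx with dt = dx/(factor*vel); this simplifies to 1/factor
    dt = dx / (factor * vel)
    return vel * dt / dx

# With vel = 1 and factor = 10 as above, alpha = 0.1 <= 1.
assert courant_number(2.0 / 2000, 1.0, 10) <= 1.0
```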
u_old = 1 - abs(4*x)
u_old[x < -0.25] = 0
u_old[x > 0.25] = 0
u_w = u_old.copy()
fig1 = plt.figure(figsize=(5, 5))
ax1 = fig1.add_subplot(111)
ani1 = animation.FuncAnimation(fig1, wave_update, range(100), blit=False, repeat=False)
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the letters saved by the lettersketch app after converting into grayscale and resizing
Step2: Reshape the image arrays
Step3: Convert white into black background
Step4: Load the test data, Convert white into black background, Reshape the data, and Plot a sample
Step5: Create a label dictionary for the letters in the training set
Step6: Assign labels to the images in the training set
Step7: Vectorize the labels
Step8: Join data and data labels
Step12: <font color='green'>Utilities for working with a standard neural net</font>
Step13: <font color='green'>Fit the neural net to the input data</font>
Step14: Save the weights and biases in JSON
Step15: <font color='green'>Test how well the model is performing</font>
|
<ASSISTANT_TASK:>
Python Code:
dirName = "../lettersketch/assets/train_images/UpperCase/StraightLines/"
fileNames = []
fileLetters = []
for fileName in os.listdir(dirName):
if fileName.endswith(".png") and (not "__" in fileName):
fileNames.append(dirName+fileName)
letter = fileName.split("_")[1]
fileLetters.append(letter)
#print fileNames
#def rgb2gray(rgb):
# return np.dot(rgb[...,:3], [0.299, 0.587, 0.144])
#
#data = np.array([rgb2gray(mpimg.imread(fileName)) for fileName in fileNames], dtype = np.float64)
#data.shape
data = np.array([np.asarray(Image.open(fileName).convert('L').resize((28, 28), Image.NEAREST))
for fileName in fileNames], dtype = np.float64)
print data.shape
data = data.reshape(data.shape[0], data.shape[1]*data.shape[2])
print data.shape
for dt in np.nditer(data, op_flags=['readwrite']):
dt[...] = dt/255.0
if (dt == 1.0):
dt[...] = 0.0
testDirName = "../lettersketch/assets/test_images/"
testNames = []
for fileName in os.listdir(testDirName):
if fileName.endswith(".png") and ("__" in fileName):
testNames.append(testDirName+fileName)
testData = np.array([np.asarray(Image.open(fileName).convert('L').resize((28, 28), Image.NEAREST))
for fileName in testNames], dtype = np.float64)
print testData.shape
testData = testData.reshape(testData.shape[0], testData.shape[1]*testData.shape[2])
print testData.shape
for dt in np.nditer(testData, op_flags=['readwrite']):
dt[...] = dt/255.0
if (dt == 1.0):
dt[...] = 0.0
imgplot = plt.imshow(data[9].reshape(28,28), cmap="gray")
testplot = plt.imshow(testData[1].reshape(28,28), cmap ="gray")
labelDict = {'E':0, 'F':1, 'H':2, 'I':3, 'L':4, 'T':5}
print fileLetters
fileLabels = [labelDict[letter] for letter in fileLetters]
print fileLabels
def vectorizeLabels(label):
vector = np.zeros((6))
vector[label] = 1.0
return vector
dataLabels = np.array([vectorizeLabels(label) for label in fileLabels])
print dataLabels[0]
print data.shape
print dataLabels.shape
training_data = zip(data, dataLabels)
#print training_data[0]
%load nnutils.py
#---------------------------------------------------
# A neural net class
#---------------------------------------------------
class NeuralNet(object):
    # Constructor
# layers: vector of length numLayers containing the
# the number of neurons in each layer
# e.g., layers = (4, 3, 2) -> 4 input values, 3 neurons in hidden layer, 2 output values
# biases: initialized with random numbers except for first layer
# e.g., biases = [ [b11, b21, b31]^T,
# [b12, b22]^T ]
# weights: initialized with random numbers
# e.g., [ [[w111, w121, w131, w141],
# [w211, w221, w231, w241],
# [w311, w321, w331, w341]],
# [[w112, w122, w132],
# [w212, w222, w232]] ]
def __init__(self, layers):
self.numLayers = len(layers)
self.numNeurons = layers
self.biases = [random.randn(layer, 1) for layer in layers[1:]]
self.weights = [random.randn(layer2, layer1)
for layer1, layer2 in zip(layers[:-1], layers[1:])]
    # Batch stochastic gradient descent to find the minimum of the objective function
# training_data: [(x1,y1),(x2,y2),....]
# where x1, x2, x3, ... are input data vectors
# y1, y2, y3, ... are labels
# max_iterations: number of iterations
# batch_size: size of training batch
# learning_rate: gradient descent parameter
def batchStochasticGradientDescent(self, training_data, max_iterations, batch_size,
learning_rate):
# Get the number of training images
nTrain = len(training_data)
# Loop thru iterations
for it in xrange(max_iterations):
# Shuffle the training data
random.shuffle(training_data)
# Choose subsets of the training data
batches = [ training_data[start:start+batch_size]
for start in xrange(0, nTrain, batch_size) ]
# Loop thru subsets
for batch in batches:
self.updateBatch(batch, learning_rate)
#print "Iteration {0} complete".format(it)
#print "weights = ", self.weights
#print "biases = ", self.biases
    # Partial update of weights and biases using gradient descent
# with back propagation
def updateBatch(self, batch, learning_rate):
# Initialize gradC_w and gradC_b
gradC_w = [np.zeros(w.shape) for w in self.weights]
gradC_b = [np.zeros(b.shape) for b in self.biases]
# Loop through samples in the batch
for xx, yy in batch:
# Compute correction to weights & biases using forward and backprop
delta_gradC_w, delta_gradC_b = self.updateGradient(xx, yy)
# Update the gradients
gradC_w = [grad + delta_grad for grad, delta_grad in zip(gradC_w, delta_gradC_w)]
gradC_b = [grad + delta_grad for grad, delta_grad in zip(gradC_b, delta_gradC_b)]
# Update the weight and biases
self.weights = [ weight - (learning_rate/len(batch))*grad
for weight, grad in zip(self.weights, gradC_w) ]
self.biases = [ bias - (learning_rate/len(batch))*grad
for bias, grad in zip(self.biases, gradC_b) ]
# Forward and then backpropagation to compute the gradient of the objective function
def updateGradient(self, xx, yy):
# Reshape into column vectors
xx = np.reshape(xx, (len(xx), 1))
yy = np.reshape(yy, (len(yy), 1))
# Initialize gradC_w and gradC_b
gradC_w = [np.zeros(w.shape) for w in self.weights]
gradC_b = [np.zeros(b.shape) for b in self.biases]
# Compute forward pass through net
# Initial activation value = input value
activationValue = xx
activationValues = [xx]
layerOutputValues = []
# Loop through layers
for weight, bias in zip(self.weights, self.biases):
#print weight.shape
#print activationValue.shape
layerOutputValue = np.dot(weight, activationValue) + bias
layerOutputValues.append(layerOutputValue)
activationValue = self.activationFunction(layerOutputValue)
activationValues.append(activationValue)
# Compute backpropagation corrections
# Initial deltas
delta = self.derivOfCostFunction(activationValues[-1], yy) * self.derivActivationFunction(layerOutputValues[-1])
gradC_b[-1] = delta
gradC_w[-1] = np.dot(delta, activationValues[-2].transpose())
# Loop backward thru layers
for layer in xrange(2, self.numLayers):
layerOutputValue = layerOutputValues[-layer]
derivActivation = self.derivActivationFunction(layerOutputValue)
delta = np.dot(self.weights[-layer + 1].transpose(), delta)*derivActivation
gradC_b[-layer] = delta
gradC_w[-layer] = np.dot(delta, activationValues[-layer-1].transpose())
# Return updated gradients
return(gradC_w, gradC_b)
# The activation function
def activationFunction(self, xx):
return 1.0/(1.0 + np.exp(-xx))
# Derivative of activation function
def derivActivationFunction(self, xx):
return self.activationFunction(xx)*(1.0 - self.activationFunction(xx))
# Derivative of the cost function with respect to output values
def derivOfCostFunction(self, xx, yy):
return (xx - yy)
# The feedforward output computation for the network
# inputVector: (n, 1) array
# n = number of inputs to network
# outputVector: ????
def forwardCompute(self, inputVector):
for bias, weight in zip(self.biases, self.weights):
xx = np.dot(weight, inputVector) + bias
inputVector = self.activationFunction(xx)
return inputVector
net = NeuralNet([4, 3, 2])
net.weights
net.biases
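As a quick standalone sanity check (mirroring the constructor and activation above rather than calling the class), for layers (4, 3, 2) the weight matrices should be (3, 4) and (2, 3), the biases (3, 1) and (2, 1), and the logistic activation should satisfy $s(0)=0.5$, $s'(0)=0.25$:

```python
import numpy as np

layers = (4, 3, 2)
weights = [np.random.randn(l2, l1) for l1, l2 in zip(layers[:-1], layers[1:])]
biases = [np.random.randn(l, 1) for l in layers[1:]]
assert [w.shape for w in weights] == [(3, 4), (2, 3)]
assert [b.shape for b in biases] == [(3, 1), (2, 1)]

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
assert abs(sigmoid(0.0) - 0.5) < 1e-12
assert abs(sigmoid(0.0) * (1.0 - sigmoid(0.0)) - 0.25) < 1e-12
```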
testNN = NeuralNet([28*28, 49, 16, 6])
max_iterations = 100
batch_size = 1
learning_rate = 0.01
testNN.batchStochasticGradientDescent(training_data, max_iterations, batch_size,
learning_rate)
weightsList = testNN.weights
for weights in weightsList:
print weights.shape
biasesList = testNN.biases
for biases in biasesList:
print biases.shape
weightsFileName = "../lettersketch/assets/json/UpperCase_StraightLines_weights.json"
biasesFileName = "../lettersketch/assets/json/UpperCase_StraightLines_biases.json"
for weights in weightsList:
json.dump(weights.tolist(), codecs.open(weightsFileName, 'a', encoding='utf-8'),
separators=(',', ':'), sort_keys=True, indent=2)
for biases in biasesList:
json.dump(biases.tolist(), codecs.open(biasesFileName, 'a', encoding='utf-8'),
separators=(',', ':'), sort_keys=True, indent=2)
fig = plt.figure()
num_image_rows = np.ceil(np.sqrt(testData.shape[0]))
for i in range(0, testData.shape[0]):
a = fig.add_subplot(num_image_rows/2, num_image_rows*2, i+1)
a.set_title(i)
testplot = plt.imshow(testData[i].reshape(28,28), cmap ="gray")
plt.axis('off')
result = testNN.forwardCompute(np.reshape(testData[60], (28*28,1)))
letterIndex = np.argmax(result)
print letterIndex
print labelDict.keys()[labelDict.values().index(letterIndex)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can plot these roots as follows
Step2: Note that these roots are not equally spaced, which is best seen by plotting the difference between one root and the next
Step3: We see, however, that the difference $\alpha_{\nu,n+1}-\alpha_{\nu,n}$ tends to $\pi$ for large values of $n$.
Step4: In this case, the plot of the differences is also similar to the previous case
|
<ASSISTANT_TASK:>
Python Code:
for nu in range(5):
print('nu = '+str(nu)+': '+str(jn_zeros(nu,5)))
figure()
nmax = 20
numax = 10
for nu in range(numax):
scatter(nu*ones(nmax),jn_zeros(nu,nmax))
ylim(0,50)
xlim(-1,numax)
xlabel('$\\nu$', fontsize=15)
ylabel(r'$\alpha_{\nu,n}$', fontsize=15)
xticks(range(numax))
grid()
savefig('../figs/fig-Bessel-ceros-01.pdf')
figure()
nmax = 20
numax = 10
for nu in range(numax):
plot(range(nmax-1),diff(jn_zeros(nu,nmax)), marker='o', markersize=5, lw=1, label='$\\nu = %d$'%nu)
xlim(-1,nmax-1)
ylim(3,)
xlabel('$n$', fontsize=15)
ylabel('$\\alpha_{\\nu,n+1}-\\alpha_{\\nu,n}$', fontsize=15)
xticks(range(nmax))
legend()
grid()
savefig('../figs/fig-Bessel-ceros-02.pdf')
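The observed spacing of $\pi$ follows from McMahon's asymptotic expansion, $\alpha_{\nu,n} \approx (n + \nu/2 - 1/4)\pi$ for large $n$. A pure-Python check of the leading-order formula (an approximation of the zeros, not scipy's exact values):

```python
import math

nu = 0
approx_zeros = [(n + nu / 2.0 - 0.25) * math.pi for n in range(1, 21)]
gaps = [b - a for a, b in zip(approx_zeros, approx_zeros[1:])]
# At leading order consecutive approximate zeros differ by exactly pi.
assert all(abs(g - math.pi) < 1e-12 for g in gaps)
```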
for nu in range(5):
print('nu = '+str(nu)+': '+str(jnp_zeros(nu,5)))
figure()
nmax = 20
numax = 10
for nu in range(numax):
scatter(nu*ones(nmax),jnp_zeros(nu,nmax))
ylim(0,50)
xlim(-1,numax)
xlabel('$\\nu$', fontsize=15)
ylabel('$\\beta_{\\nu,n}$', fontsize=15)
xticks(range(numax))
grid()
figure()
nmax = 20
numax = 10
for nu in range(numax):
plot(range(nmax-1),diff(jnp_zeros(nu,nmax)), marker='o', markersize=5, lw=1, label='$\\nu = %d$'%nu)
xlim(-1,nmax-1)
ylim(3,)
xlabel('$n$', fontsize=15)
ylabel('$\\beta_{\\nu,n+1}-\\beta_{\\nu,n}$', fontsize=15)
xticks(range(nmax))
legend()
grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Curve data
Step2: You can also access curves by index (remembering that the first index is zero but the first curve is the depth). For example, the last twelve values
Step3: All the curves are stored as a two-dimensional ndarray under the data attribute
Step4: Header information
Step5: In order from top to bottom as they usually are in LAS files, the section attributes are
Step6: A simpler way of viewing this is to use the special _d attribute of each section's dictionary, but note that any changes to this dictionary do not have any effect
Step7: To change metadata information you have to edit the Metadata object's particular field explicitly
Step8: The other sections have slightly different classes for records
Step9: For convenience there's a special attribute which is an ordinary dictionary that brings together all the metadata in the file. Again, changes to this dictionary have no effect.
Step10: As per the standard, the ~Other section is free text. I know many companies put LAS-formatted metadata in here, but this isn't parsed yet.
Step11: Finally there is some useful information in the provenance attribute that depends on how you opened the file in the first place
Step12: Curves (more detail)
Step13: Also, the index of the LAS file can be accessed directly by the .index attribute
|
<ASSISTANT_TASK:>
Python Code:
import las_reader
l = las_reader.read("GT14574.LAS")
print(type(l))
print(l._text)
l.keys()
l['NEUT']
print(l['GCPS'][-12:])
print(l[1][-12:])
print(l['NEUT'][-12:])
print(l[-1][-12:])
print(l.data.shape)
print(l.data)
m = las_reader.las.Metadata(mnemonic="SPEED", unit='something?', value=1.5, descr='The speed of light')
m.value = 3 * 10 ** 8
m.unit = 'm/s'
print(m)
l.version
l.version['VERS'].value
l.well
l.well._d
l.well['SRVC'].value = "My logging company -- not anyone's"
l.well._d
l.params
l.metadata
print l.other
l.provenance
l.curves
neutron_log = l.get_curve_name("NEUT")
print(neutron_log)
neutron_log.data
print(l["DEPT"][:10])
print(l.index[:10])
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
%matplotlib inline
l2 = las_reader.read("GT1457B.LAS")
fig = plt.figure(figsize=(18, 5))
gs = gridspec.GridSpec(1, len(l2.curves)-1, wspace=0.2)
for i, curve in enumerate(l2.curves[1:]):
if i > 0:
shareax = ax
else:
shareax = False
ax = fig.add_subplot(gs[i], sharey=None)
ax.plot(curve.data, l2.index)
ax.set_title(curve.mnemonic)
if i > 0:
ax.set_yticklabels([])
ax.set_ylim(*ax.get_ylim()[::-1])
ax.set_xticklabels(["%.0f" % xi if xi in ax.get_xlim() else "" for xi in ax.get_xticks()], rotation=30)
print(l2._text)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Marginalisation
Step5: We now use pythons lambda functions to generate the specific prior and likelihoods
Step6: We now have a choice about how to initialise the walkers. In the emcee docs the
Step8: We can plot the progress of each walker during the MCMC exploration of
Step9: We now can plot the marginal distributions in a so-called triangle plot. First "burning"
|
<ASSISTANT_TASK:>
Python Code:
# Model parameters
A = 2.3
f = 1
X0 = 2.5
# Noise parameter
sigma = 0.6
N = 300
t = np.sort(np.random.uniform(0, 10, N))
X = X0 + A * np.sin(2 * np.pi * f * t) + np.random.normal(0, sigma, N)
plt.plot(t, X, "o")
plt.xlabel("$t$")
plt.ylabel(r"$X_{\mathrm{obs}}$")
plt.show()
def Generic_lnuniformprior(theta, theta_lims):
    """Generic uniform priors on theta

    Parameters
    ----------
    theta : array_like
        Value of the parameters
    theta_lims: array_like of shape (len(theta), 2)
        Array of pairs [min, max] for each theta parameter
    """
theta_lims = np.array(theta_lims)
if all(theta - theta_lims[:, 0] > 0 ) and all(theta_lims[:, 1] - theta > 0):
return np.prod(1.0/np.diff(theta_lims, axis=1))
return -np.inf
def Generic_lnlike(theta, t, x, model):
    """Generic likelihood function for a signal in Gaussian noise

    Parameters
    ----------
    theta : array_like
        Value of the parameters; the noise strength `sigma` should ALWAYS be
        the last element in the list
    t : array_like
        The independent variable
    x : array_like
        The observed dependent variable
    model : func
        Signal model; calling `model(theta[:-1], t)` should
        produce the corresponding value of the signal alone,
        without noise.
    """
sigma2 = theta[-1]**2
return -0.5*(np.sum((x-model(theta[:-1], t))**2 / sigma2 + np.log(2*np.pi*sigma2)))
# Prior
theta_lims = [[2.0, 2.5],
[0.99, 1.01],
[1.0, 5.0],
[0.3, 0.9]
]
lnprior = lambda theta: Generic_lnuniformprior(theta, theta_lims)
# Likelihood
def model(theta, t):
A, f, X0 = theta
return X0 + A * np.sin(2 * np.pi * f * t)
lnlike = lambda theta, x, y: Generic_lnlike(theta, x, y, model)
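A quick spot check of the signal model, using a standalone copy so the cell above stays untouched (illustrative, with the true parameter values): at $t=0$ the sine term vanishes so the model returns $X_0$, and at $t = 1/(4f)$ it returns $X_0 + A$.

```python
import numpy as np

def _model(theta, t):  # standalone copy of the signal model defined above
    A, f, X0 = theta
    return X0 + A * np.sin(2 * np.pi * f * t)

assert abs(_model([2.3, 1.0, 2.5], 0.0) - 2.5) < 1e-12   # sine term vanishes at t = 0
assert abs(_model([2.3, 1.0, 2.5], 0.25) - 4.8) < 1e-9   # peak at t = 1/(4f)
```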
# Tie the two together
def lnprob(theta, x, y):
    """Generic function that ties together the lnprior and lnlike"""
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(theta, x, y)
ndim = 4
nwalkers = 100
# Initialise the walkers
walker_pos = [[np.random.uniform(tl[0], tl[1]) for tl in theta_lims]
for i in range(nwalkers)]
# Run the sampler
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(t, X))
out = sampler.run_mcmc(walker_pos, 2000)
def PlotWalkers(sampler, symbols=None, alpha=0.4, color="k"):
    """Plot all the chains from a sampler"""
nwalkers, nsteps, ndim = sampler.chain.shape
fig, axes = plt.subplots(ndim, 1, sharex=True, figsize=(8, 9))
for i in range(ndim):
axes[i].plot(sampler.chain[:, :, i].T, color="k", alpha=alpha)
if symbols:
axes[i].set_xlabel(symbols[i])
symbols = ["$A$", "$f$", "$X_{0}$", r"$\sigma$"]
PlotWalkers(sampler, symbols=symbols)
plt.show()
samples = sampler.chain[:, 1000:, :].reshape((-1, ndim))
fig = triangle.corner(samples, labels=symbols,
truths=[A, f, X0, sigma]
)
axes = fig.get_axes()
ax = axes[0]
def prior(x):
return np.zeros(len(x)) + 7419.1
xlims = ax.get_xlim()
x = np.linspace(xlims[0], xlims[1], 100)
ax.plot(x, prior(x), "r", lw=10)
print(ax.get_ylim(), ax.get_xlim())
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to compute the adjustment of the economy when this tax goes back progressively from 10% to 0% over 10 periods.
|
<ASSISTANT_TASK:>
Python Code:
model.calibration
model.residuals()
exo_g = linspace(0.1,0,10) # this is a vector of size 10
exo_g = atleast_2d(exo_g).T # the solver expects a 10x1 column vector
print(exo_g.shape)
exo_g
# Let's solve for the optimal adjustment by assuming that the
# economy returns to steady-state after T=50 periods.
from dolo.algos.perfect_foresight import deterministic_solve
sim = deterministic_solve(model, shocks=exo_g, T=50)
display(sim) # it returns a timeseries object
model
plt.figure(figsize=(10,10))
plt.subplot(221)
plt.plot(sim['k'], label='capital')
plt.plot(sim['y'], label='production')
plt.legend()
plt.subplot(222)
plt.plot(sim['g'], label='gvt. spending')
plt.plot(sim['c'], label='consumption')
plt.legend()
plt.subplot(223)
plt.plot(sim['n'], label='work')
plt.plot(sim['i'], label='investment')
plt.legend()
plt.subplot(224)
plt.plot(sim['w'], label='wages')
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Encoding syllables
Step2: Once the syllabic segments have been marked as such in the phone inventory, the next step is to actually create the syllable
Step3: The encode_syllables function uses a maximum onset algorithm based on all existing word-initial sequences of phones not
Step4: Once pauses are encoded, the next step is to actually create the utterance annotations as follows
Step5: In many cases, it is desirable
Step6: Once enrichment is complete, we can then query information and extract information about these characteristics of speakers.
Step7: Following this enrichment step, words will have a type property of stress_pattern and syllables will have a token property
Step8: Next we will encode the number of syllables per word
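PolyglotDB's internals aren't shown here, but the maximum-onset idea behind encode_syllables (Step 3) can be sketched generically. Everything below (the function name, phone labels, and onset inventory) is an illustrative assumption, not the library's API:

```python
# Hypothetical sketch of maximum-onset syllabification (not PolyglotDB's
# actual code). Nuclei are the syllabic phones; each consonant cluster
# between two nuclei is split so that the following syllable takes the
# longest legal onset.
def syllabify(phones, syllabics, legal_onsets):
    nuclei = [i for i, p in enumerate(phones) if p in syllabics]
    bounds = [0]
    for prev, nxt in zip(nuclei, nuclei[1:]):
        cluster = phones[prev + 1:nxt]
        split = len(cluster)                 # default: whole cluster -> coda
        for k in range(len(cluster) + 1):    # earliest split = longest onset
            if tuple(cluster[k:]) in legal_onsets:
                split = k
                break
        bounds.append(prev + 1 + split)
    bounds.append(len(phones))
    return [phones[a:b] for a, b in zip(bounds, bounds[1:])]

onsets = {(), ('S',), ('T',), ('T', 'R')}
sylls = syllabify(['AE', 'S', 'T', 'R', 'OW'], {'AE', 'OW'}, onsets)
print(sylls)  # [['AE', 'S'], ['T', 'R', 'OW']]
```

Because ('T', 'R') is a legal onset while ('S', 'T', 'R') is not, the 'S' stays in the coda of the first syllable.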
|
<ASSISTANT_TASK:>
Python Code:
import os
from polyglotdb import CorpusContext
corpus_root = '/mnt/e/Data/pg_tutorial'
syllabics = ["ER0", "IH2", "EH1", "AE0", "UH1", "AY2", "AW2", "UW1", "OY2", "OY1", "AO0", "AH2", "ER1", "AW1",
"OW0", "IY1", "IY2", "UW0", "AA1", "EY0", "AE1", "AA0", "OW1", "AW0", "AO1", "AO2", "IH0", "ER2",
"UW2", "IY0", "AE2", "AH0", "AH1", "UH2", "EH2", "UH0", "EY1", "AY0", "AY1", "EH0", "EY2", "AA2",
"OW2", "IH1"]
with CorpusContext('pg_tutorial') as c:
c.encode_type_subset('phone', syllabics, 'syllabic')
with CorpusContext('pg_tutorial') as c:
c.encode_syllables(syllabic_label='syllabic')
pause_labels = ['<SIL>']
with CorpusContext('pg_tutorial') as c:
c.encode_pauses(pause_labels)
with CorpusContext('pg_tutorial') as c:
c.encode_utterances(min_pause_length=0.15)
speaker_enrichment_path = os.path.join(corpus_root, 'enrichment_data', 'speaker_info.csv')
with CorpusContext('pg_tutorial') as c:
c.enrich_speakers_from_csv(speaker_enrichment_path)
lexicon_enrichment_path = os.path.join(corpus_root, 'enrichment_data', 'iscan_lexicon.csv')
with CorpusContext('pg_tutorial') as c:
c.enrich_lexicon_from_csv(lexicon_enrichment_path)
with CorpusContext('pg_tutorial') as c:
c.encode_stress_from_word_property('stress_pattern')
with CorpusContext('pg_tutorial') as c:
c.encode_rate('utterance', 'syllable', 'speech_rate')
with CorpusContext('pg_tutorial') as c:
c.encode_count('word', 'syllable', 'num_syllables')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll also import our demand data
Step2: As we have fixed costs and variable costs, we'll need to model both production and the status of the factory, i.e. whether it is on or off.
Step3: Factory status is modelled as a binary variable. It will have a value of 1 if the factory is on and a value of 0 when the factory is off.
Step4: We instantiate our model and use LpMinimize as the aim is to minimise costs.
Step5: In our objective function we include our 2 costs
Step6: We build up our constraints
Step7: An issue we run into here is that in linear programming we can't use conditional constraints.
Step8: We then solve the model
Step9: Let's take a look at the optimal production schedule output for each month from each factory. For ease of viewing we'll output the data to a pandas DataFrame.
Step10: Notice above that the factory status is 0 when not producing and 1 when it is producing
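The linking used for Step 7's conditional constraints (production tied to the binary status via the capacity bounds) can be checked numerically; this toy feasibility function is my illustration, not part of the model:

```python
# The linearization used in the model: "produce either zero or within
# [min_cap, max_cap]" becomes
#     min_cap * status <= production <= max_cap * status
# for a binary status variable.
def feasible(production, status, min_cap=1000, max_cap=5000):
    return min_cap * status <= production <= max_cap * status

checks = [feasible(0, 0),       # off and idle          -> True
          feasible(3000, 1),    # on, within capacity   -> True
          feasible(500, 1),     # on, below min         -> False
          feasible(3000, 0)]    # off but producing     -> False
print(checks)  # [True, True, False, False]
```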
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import pulp
factories = pd.read_csv('csv/factory_variables.csv', index_col=['Month', 'Factory'])
factories
factories.index
demand = pd.read_csv('csv/monthly_demand.csv', index_col=['Month'])
demand
production = pulp.LpVariable.dicts("production",
((month, factory) for month, factory in factories.index),
lowBound=0,
cat='Integer')
production
factory_status = pulp.LpVariable.dicts("factory_status",
((month, factory) for month, factory in factories.index),
cat='Binary')
model = pulp.LpProblem("Cost minimising scheduling problem", pulp.LpMinimize)
model += pulp.lpSum(
[production[month, factory] * factories.loc[(month, factory), 'Variable_Costs'] for month, factory in factories.index]
+ [factory_status[month, factory] * factories.loc[(month, factory), 'Fixed_Costs'] for month, factory in factories.index]
)
# Production in any month must be equal to demand
months = demand.index
for month in months:
model += production[(month, 'A')] + production[(month, 'B')] == demand.loc[month, 'Demand']
# Production in any month must be between minimum and maximum capacity, or zero.
for month, factory in factories.index:
min_production = factories.loc[(month, factory), 'Min_Capacity']
max_production = factories.loc[(month, factory), 'Max_Capacity']
model += production[(month, factory)] >= min_production * factory_status[month, factory]
model += production[(month, factory)] <= max_production * factory_status[month, factory]
#model += production[(month, factory)] >= min_production
#model += production[(month, factory)] <= max_production
# Factory B is off in May
model += factory_status[5, 'B'] == 0
model += production[5, 'B'] == 0
model.solve()
pulp.LpStatus[model.status]
x=production[1, 'A']
type(x)
output = []
for month, factory in production:
var_output = {
'Month': month,
'Factory': factory,
'Production': production[(month, factory)].varValue,
'Factory Status': factory_status[(month, factory)].varValue
}
output.append(var_output)
output_df = pd.DataFrame.from_records(output).sort_values(['Month', 'Factory'])
output_df.set_index(['Month', 'Factory'], inplace=True)
output_df
# Print our objective function value (Total Costs)
print(pulp.value(model.objective))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: See UCI Data page
Step2: 3. Data Preparation
Step3: 4. Modeling
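As background for the modeling step, the coefficients that a least-squares fit produces have a closed form; this toy check of the normal equations is my own sketch, not part of the notebook:

```python
import numpy as np

# beta = (X^T X)^{-1} X^T y, with a column of ones for the intercept.
X = np.column_stack([np.ones(4), np.array([0., 1., 2., 3.])])
y = np.array([1., 3., 5., 7.])           # exactly y = 1 + 2x
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(np.round(beta, 6))  # [1. 2.]
```

The recovered intercept and slope match the generating line, which is what `lm.intercept_` and `lm.coef_` report below for the housing data.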
|
<ASSISTANT_TASK:>
Python Code:
## import statements
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
from sklearn import datasets
from sklearn.model_selection import cross_val_predict
from sklearn import linear_model
boston_cols = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df = pd.read_csv("data/boston/housing.data", sep=r'\s+', names=boston_cols)
df.head()
sns.pairplot(df[["CRIM", "AGE", "RM", "MEDV"]])
df.describe()
X = df
boston = datasets.load_boston()
y = boston.target
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X, y)
print('Estimated intercept coefficent:', lm.intercept_)
print('Number of coefficients:', len(lm.coef_))
# def predict_house(crim, zn, indus, chas, nox, rm, age, dis, rad, tax, ptratio, b, lstat):
# return lm.intercept_ + lm.coef_[0] * crim \
# + zn * lm.coef_[1] \
# + indus * lm.coef_[2] \
# + chas * lm.coef_[3] \
# + nox * lm.coef_[4] \
# + rm * lm.coef_[5] \
# + age * lm.coef_[6] \
# + dist * lm.coef_[7] \
# + rad * lm.coef_[8] \
# + tax * lm.coef_[9] \
# + ptratio * lm.coef_[10] \
# + b * lm.coef_[11] \
# + lstat * lm.coef_[12]
#print(X.iloc[[41]].values)
#print(y[41])
print("Predicted House price", lm.predict(X)[41] * 1000)
print("Equation house price", (lm.intercept_ + np.dot(lm.coef_, X.iloc[[41]].values[0])) * 1000)
print("Target price", y[41] * 1000)
print("Residual price", (lm.predict(X)[41] - y[41]) * 1000)
print("R^2 =", lm.score(X, y))
predicted = cross_val_predict(lm, boston.data, y, cv=10)
fig, ax = plt.subplots()
ax.scatter(y, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('Measured')
ax.set_ylabel('Predicted')
plt.show()
errors = {}
for feature in boston.feature_names:
lm = LinearRegression()
lm.fit(X[[feature]], y)
mean_squared_error = np.mean((y - lm.predict(X[[feature]])) ** 2)
errors[feature] = mean_squared_error
errors = sorted(errors.items(), key=lambda x: x[1])
print("Features that Minimize Error")
print(30 * "-")
for e in errors:
print(e[0], "\t", e[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Harmonic Minimization
Step2: Next we need to generate some random positions as well as particle sizes.
Step3: Then we need to construct our FIRE minimization function. Like all simulations in JAX MD, the FIRE optimizer is two functions
Step4: Now let's actually do minimization, keeping track of the energy and particle positions as we go.
Step5: Let's plot the nearest distance for different species pairs. We see that particles on average have neighbors that are the right distance apart.
Step6: Now let's plot the system. It's nice and minimized!
Step7: If we want, we can visualize the entire minimization.
Step8: Finally, let's plot the energy trajectory that we observe during FIRE minimization.
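The two-function (init/apply) pattern mentioned in Step 3 can be sketched outside JAX with a toy gradient-descent optimizer; the names and the toy energy below are my own, not JAX MD's API:

```python
# init builds a state from positions; apply maps state -> state.
def gd_init(position, lr=0.1):
    return {'position': position, 'lr': lr}

def gd_apply(state, grad_fn):
    p, lr = state['position'], state['lr']
    return {'position': p - lr * grad_fn(p), 'lr': lr}

grad = lambda x: 2.0 * x          # gradient of E(x) = x**2
state = gd_init(4.0)
for _ in range(100):
    state = gd_apply(state, grad)
print(round(state['position'], 6))  # 0.0
```

This is the same shape as the `fire_init` / `fire_apply` pair used in the code: construct a state once, then repeatedly step it.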
|
<ASSISTANT_TASK:>
Python Code:
#@title Imports & Utils
!pip install jax-md
import numpy as onp
import jax.numpy as np
from jax.config import config
config.update('jax_enable_x64', True)
from jax import random
from jax import jit
from jax_md import space, smap, energy, minimize, quantity, simulate
from jax_md.colab_tools import renderer
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
def format_plot(x, y):
plt.grid(True)
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
N = 1000
dimension = 2
box_size = quantity.box_size_at_number_density(N, 0.8, dimension)
displacement, shift = space.periodic(box_size)
key = random.PRNGKey(0)
R = box_size * random.uniform(key, (N, dimension), dtype=np.float32)
# The system ought to be a 50:50 mixture of two types of particles, one
# large and one small.
sigma = np.array([[1.0, 1.2], [1.2, 1.4]])
N_2 = int(N / 2)
species = np.where(np.arange(N) < N_2, 0, 1)
energy_fn = energy.soft_sphere_pair(displacement, species=species, sigma=sigma)
fire_init, fire_apply = minimize.fire_descent(energy_fn, shift)
fire_apply = jit(fire_apply)
fire_state = fire_init(R)
E = []
trajectory = []
for i in range(200):
fire_state = fire_apply(fire_state)
E += [energy_fn(fire_state.position)]
trajectory += [fire_state.position]
R = fire_state.position
trajectory = np.stack(trajectory)
metric = lambda R: space.distance(space.map_product(displacement)(R, R))
dr = metric(R)
plt.plot(np.min(dr[:N_2, :N_2] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{AA}$')
plt.plot(np.min(dr[:N_2, N_2:] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{AB}$')
plt.plot(np.min(dr[N_2:, N_2:] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{BB}$')
plt.legend()
format_plot('', 'min neighbor distance')
finalize_plot()
ms = 45
R_plt = onp.array(fire_state.position)
plt.plot(R_plt[:N_2, 0], R_plt[:N_2, 1], 'o', markersize=ms * 0.5)
plt.plot(R_plt[N_2:, 0], R_plt[N_2:, 1], 'o', markersize=ms * 0.7)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
diameter = np.where(species, 1.4, 1.0)
color = np.where(species[:, None],
np.array([[1.0, 0.5, 0.05]]),
np.array([[0.15, 0.45, 0.8]]))
renderer.render(box_size,
{ 'particles': renderer.Disk(trajectory, diameter, color)},
buffer_size=50)
plt.plot(E, linewidth=3)
format_plot('step', '$E$')
finalize_plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Cifar
Step2: Train Logistic Regression Model
Step3: Deploy Logistic Regression Model
Step4: Link your app to your model
Step5: You can view which models your app is linked to by running the code below.
Step7: Now that you've deployed and linked your model to your app, go ahead and check back on your running frontend application from part 1. You should see the accuracy rise from around 50% to the accuracy of your SKLearn model (68-74%), without having to stop or modify your application at all!
Step8: Deploy TensorFlow Model
Step9: Inspect Clipper Metrics
Step10: Congratulations! You've now successfully completed the tutorial. You started Clipper, created an application and queried it from a frontend client, and deployed two models trained in two different machine learning frameworks (Scikit-Learn and TensorFlow) to the running system.
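The TensorFlow model below is scored by taking the argmax over the two relevant class activations and comparing against the labels (see `tensorflow_score` in the code). A toy sketch of that accuracy computation, with made-up numbers:

```python
import numpy as np

logits = np.array([[0.2, 0.8],
                   [0.9, 0.1],
                   [0.4, 0.6]])
labels = np.array([1, 0, 0])
preds = np.argmax(logits, axis=1)                # [1, 0, 1]
acc = float(np.sum(preds == labels)) / len(labels)
print(acc)  # 0.6666666666666666
```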
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
from clipper_admin import Clipper
# Change the username if necessary
user = ""
# Set the path to the SSH key
key = ""
# Set the SSH host
host = ""
clipper = Clipper(host, user, key)
cifar_loc = ""
import cifar_utils
train_x, train_y = cifar_utils.filter_data(
*cifar_utils.load_cifar(cifar_loc, cifar_filename="cifar_train.data", norm=True))
test_x, test_y = cifar_utils.filter_data(
*cifar_utils.load_cifar(cifar_loc, cifar_filename="cifar_test.data", norm=True))
from sklearn import linear_model as lm
def train_sklearn_model(m, train_x, train_y):
m.fit(train_x, train_y)
return m
lr_model = train_sklearn_model(lm.LogisticRegression(), train_x, train_y)
print("Logistic Regression test score: %f" % lr_model.score(test_x, test_y))
model_name = "birds_vs_planes_classifier"
model_added = clipper.deploy_model(
model_name,
1,
lr_model,
"clipper/sklearn_cifar_container:latest",
"doubles",
num_containers=1
)
print("Model deploy successful? {success}".format(success=model_added))
clipper.link_model_to_app(app_name, model_name)
clipper.get_linked_models(app_name)
import os
import tensorflow as tf
import numpy as np
tf_cifar_model_path = os.path.abspath("tf_cifar_model/cifar10_model_full")
tf_session = tf.Session('', tf.Graph())
with tf_session.graph.as_default():
saver = tf.train.import_meta_graph("%s.meta" % tf_cifar_model_path)
saver.restore(tf_session, tf_cifar_model_path)
def tensorflow_score(session, test_x, test_y):
    """NOTE: This predict method expects pre-whitened (normalized) images"""
logits = session.run('softmax_logits:0',
feed_dict={'x:0': test_x})
relevant_activations = logits[:, [cifar_utils.negative_class, cifar_utils.positive_class]]
preds = np.argmax(relevant_activations, axis=1)
return float(np.sum(preds == test_y)) / float(len(test_y))
print("TensorFlow CNN test score: %f" % tensorflow_score(tf_session, test_x, test_y))
model_added = clipper.deploy_model(
model_name,
2,
os.path.abspath("tf_cifar_model"),
"clipper/tf_cifar_container:latest",
"doubles",
num_containers=1
)
print("Model deploy successful? {success}".format(success=model_added))
clipper.inspect_instance()
clipper.stop_all()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Acquiring and seeing trends in multidimensional data
Step2: FIGURE 3.1. For the Advertising data, the least squares fit for the regression of sales onto TV is shown. The fit is found by minimizing the sum of squared errors.
Step3: Let $\hat{y}_i = \hat{\beta_0} + \hat{\beta_1}x_i$ be the prediction for $Y$ based on the $i$-th value of $X$.
Step4: The residual is the difference between the model output and the observed output
Step5: The residual is not very useful directly because it balances over-estimates and under-estimates to produce the best overall estimate with the least error. We can understand the overall goodness of fit by the residual sum of squares - RSS.
Step6: With this, we can build an independent model for each of the inputs and look at the associated RSS.
Step7: We can look at how well each of these inputs predicts the output by visualizing the residuals.
Step8: Figure 3.2 - RSS and least squares regression
Step9: 3.1.2 Assessing the accuracy of coefficient estimates
Step10: "The property of unbiasedness holds for the least squares coefficient estimates given by (3.4) as well
Step11: 3.1.3 Assessing the Acurracy of the Model
Step12: The $R^2$ statistic provides an alternative measure of fit. It takes the form of a proportion, the proportion of variance explained, and so it always takes on a value between 0 and 1, and is independent of the scale of Y.
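The RSS and R² defined above can be verified on a toy dataset; this is a sketch of my own (the notebook's `RSS_model` / `R2_model` compute the same quantities on the advertising data):

```python
import numpy as np

x = np.array([0., 1., 2., 3.])
y = np.array([1., 3., 5., 7.])        # exactly y = 1 + 2x
b1, b0 = np.polyfit(x, y, 1)          # slope, intercept
yhat = b0 + b1 * x
rss = np.sum((y - yhat) ** 2)         # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)     # total sum of squares
r2 = 1.0 - rss / tss                  # proportion of variance explained
print(round(rss, 6), round(r2, 6))  # 0.0 1.0
```

A perfect linear relationship gives RSS of (numerically) zero and R² of 1, the upper end of the 0-to-1 range described above.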
|
<ASSISTANT_TASK:>
Python Code:
# HIDDEN
# For Tables reference see http://data8.org/datascience/tables.html
# This useful nonsense should just go at the top of your notebook.
from datascience import *
%matplotlib inline
import matplotlib.pyplot as plots
import numpy as np
from sklearn import linear_model
plots.style.use('fivethirtyeight')
plots.rc('lines', linewidth=1, color='r')
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# datascience version number of last run of this notebook
version.__version__
import sys
sys.path.append("..")
from ml_table import ML_Table
import locale
locale.setlocale( locale.LC_ALL, 'en_US.UTF-8' )
# Getting the data
advertising = ML_Table.read_table("./data/Advertising.csv")
advertising.relabel(0, "id")
ax = advertising.plot_fit_1d('Sales', 'TV', advertising.regression_1d('Sales', 'TV'))
_ = ax.set_xlim(-20,300)
lr = advertising.linear_regression('Sales', 'TV')
advertising.plot_fit_1d('Sales', 'TV', lr.model)
advertising.lm_summary_1d('Sales', 'TV')
# Get the actual parameters that are captured within the model
advertising.regression_1d_params('Sales', 'TV')
# Regression yields a model. The computational representation of a model is a function
# That can be applied to an input, 'TV' to get an estimate of an output, 'Sales'
advertise_model_tv = advertising.regression_1d('Sales', 'TV')
# Sales with no TV advertising
advertise_model_tv(0)
# Sales with 100 units TV advertising
advertise_model_tv(100)
# Here's the output of the model applied to the input data
advertise_model_tv(advertising['TV'])
residual = advertising['Sales'] - advertise_model_tv(advertising['TV'])
residual
# Residual Sum of Squares
RSS = sum(residual*residual)
RSS
# This is common enough that we have it provided as a method
advertising.RSS_model('Sales', advertising.regression_1d('Sales', 'TV'), 'TV')
# And we should move toward a general regression framework
advertising.RSS_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
advertising_models = Table().with_column('Input', ['TV', 'Radio', 'Newspaper'])
advertising_models['Model'] = advertising_models.apply(lambda i: advertising.regression_1d('Sales', i), 'Input')
advertising_models['RSS'] = advertising_models.apply(lambda i, m: advertising.RSS_model('Sales', m, i), ['Input', 'Model'])
advertising_models
advertising_models = Table().with_column('Input', ['TV', 'Radio', 'Newspaper'])
advertising_models['Model'] = advertising_models.apply(lambda i: advertising.linear_regression('Sales', i).model, 'Input')
advertising_models['RSS'] = advertising_models.apply(lambda i, m: advertising.RSS_model('Sales', m, i), ['Input', 'Model'])
advertising_models
for mode, mdl in zip(advertising_models['Input'], advertising_models['Model']) :
advertising.plot_fit_1d('Sales', mode, mdl)
# RSS at arbitrary point
res = lambda b0, b1: advertising.RSS('Sales', b0 + b1*advertising['TV'])
res(7.0325935491276965, 0.047536640433019729)
ax = advertising.RSS_contour('Sales', 'TV', sensitivity=0.2)
ax = advertising.RSS_wireframe('Sales', 'TV', sensitivity=0.2)
# The minima point
advertising.linear_regression('Sales', 'TV').params
# Some other points along the trough
points = [(0.042, 8.0), (0.044, 7.6), (0.050, 6.6), (0.054, 6.0)]
ax = advertising.RSS_contour('Sales', 'TV', sensitivity=0.2)
ax.plot([b0 for b1, b0 in points], [b1 for b1, b0 in points], 'ro')
[advertising.RSS('Sales', b0 + b1*advertising['TV']) for b1,b0 in points]
# Models as lines corresponding to points along the near-minimal vectors.
ax = advertising.plot_fit_1d('Sales', 'TV', advertising.linear_regression('Sales', 'TV').model)
for b1, b0 in points:
fit = lambda x: b0 + b1*x
ax.plot([0, 300], [fit(0), fit(300)])
_ = ax.set_xlim(-20,300)
def model (x):
return 3*x + 2
def population(n, noise_scale = 1):
sample = ML_Table.runiform('x', n, -2, 2)
noise = ML_Table.rnorm('e', n, sd=noise_scale)
sample['Y'] = sample.apply(model, 'x') + noise['e']
return sample
data = population(100, 2)
data.scatter('x')
ax = data.plot_fit('Y', data.linear_regression('Y').model)
data.linear_regression('Y').params
# A random sample of the population
sample = data.sample(10)
sample.plot_fit('Y', sample.linear_regression('Y').model)
nsamples = 5
ax = data.plot_fit('Y', data.linear_regression('Y').model, linewidth=3)
for s in range(nsamples):
fit = data.sample(10).linear_regression('Y').model
ax.plot([-2, 2], [fit(-2), fit(2)], linewidth=1)
adv_sigma = advertising.RSE_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
adv_sigma
b0, b1 = advertising.linear_regression('Sales', 'TV').params
b0, b1
advertising.RSS_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
SE_b0, SE_b1 = advertising.SE_1d_params('Sales', 'TV')
SE_b0, SE_b1
# b0 95% confidence interval
(b0-2*SE_b0, b0+2*SE_b0)
# b1 95% confidence interval
(b1-2*SE_b1, b1+2*SE_b1)
# t-statistic of the slope
b0/SE_b0
# t-statistics of the intercept
b1/SE_b1
# Similar to summary of a linear model in R
advertising.lm_summary_1d('Sales', 'TV')
# We can just barely reject the null hypothesis for Newspaper
# advertising effecting sales
advertising.lm_summary_1d('Sales', 'Newspaper')
adver_model = advertising.regression_1d('Sales', 'TV')
advertising.RSE_model('Sales', adver_model, 'TV')
advertising.R2_model('Sales', adver_model, 'TV')
# the other models of advertising suggest that there is more going on
advertising.R2_model('Sales', adver_model, 'Radio')
advertising.R2_model('Sales', adver_model, 'Newspaper')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building an Animation
Step2: You should see the box slide 1 meter to the right in the viewer. If you missed the animation, you can run it again from the viewer. Click "Open Controls", find the "Animations" section, and click "play".
Step3: To animate the camera, we just have to apply that same kind of set_transform call to individual frames in an animation
Step4: We can also animate object properties. For example, let's animate the camera's zoom property to smoothly zoom out and then back in. Note that to do this, we have to access a deeper path in the visualizer to get to the actual camera object. For more information, see
Step5: Recording an Animation
|
<ASSISTANT_TASK:>
Python Code:
import meshcat
from meshcat.geometry import Box
vis = meshcat.Visualizer()
## To open the visualizer in a new browser tab, do:
# vis.open()
## To open the visualizer inside this jupyter notebook, do:
# vis.jupyter_cell()
vis["box1"].set_object(Box([0.1, 0.2, 0.3]))
from meshcat.animation import Animation
import meshcat.transformations as tf
anim = Animation()
with anim.at_frame(vis, 0) as frame:
# `frame` behaves like a Visualizer, in that we can
# call `set_transform` and `set_property` on it, but
# it just stores information inside the animation
# rather than changing the current visualization
frame["box1"].set_transform(tf.translation_matrix([0, 0, 0]))
with anim.at_frame(vis, 30) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, 1, 0]))
# `set_animation` actually sends the animation to the
# viewer. By default, the viewer will play the animation
# right away. To avoid that, you can also pass `play=False`.
vis.set_animation(anim)
vis["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 1]))
anim = Animation()
with anim.at_frame(vis, 0) as frame:
frame["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 0]))
with anim.at_frame(vis, 30) as frame:
frame["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 1]))
# we can repeat the animation playback with the
# repetitions argument:
vis.set_animation(anim, repetitions=2)
anim = Animation()
camera_path = "/Cameras/default/rotated/<object>"
with anim.at_frame(vis, 0) as frame:
frame[camera_path].set_property("zoom", "number", 1)
with anim.at_frame(vis, 30) as frame:
frame[camera_path].set_property("zoom", "number", 0.5)
with anim.at_frame(vis, 60) as frame:
frame[camera_path].set_property("zoom", "number", 1)
# While we're animating the camera zoom, we can also animate any other
# properties we want. Let's simultaneously translate the box during
# the same animation:
with anim.at_frame(vis, 0) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, -1, 0]))
with anim.at_frame(vis, 60) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, 1, 0]))
vis.set_animation(anim)
from meshcat.animation import convert_frames_to_video
convert_frames_to_video("/home/rdeits/Downloads/meshcat_1528401494656.tar", overwrite=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Time and space discretization as a Taylor expansion.
Step2: Solve forward in time
Step3: Define the discrete model
Step4: Create functions for the PDE
Step5: A Forward propagation example
Step6: Adjoint test
Step7: Least square objective Gradient
Step8: Adjoint test for the gradient
Step9: Jacobian test
Step10: Gradient test
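Underlying all of the steps above is the centered second difference; a quick numerical check (a sketch of my own) that it is second-order accurate:

```python
import math

# (f(x-h) - 2 f(x) + f(x+h)) / h**2 approximates f''(x) with O(h**2) error,
# the same stencil sympy's as_finite_diff produces below.
f = math.sin
x, h = 0.5, 1e-4
d2 = (f(x - h) - 2 * f(x) + f(x + h)) / h ** 2
err = abs(d2 - (-math.sin(x)))        # exact second derivative of sin is -sin
print(err < 1e-6)  # True
```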
|
<ASSISTANT_TASK:>
Python Code:
p=Function('p')
b=Function('b')
m,s,h,r = symbols('m s h r')
m=M(x,y)
q=Q(x,y,t)
d=D(x,y,t)
e=E(x,y)
r=rho(x,y)
dtt=as_finite_diff(p(x,y,t).diff(t,t), [t-s,t, t+s])
dt=as_finite_diff(p(x,y,t).diff(t), [t-s, t+s])
# Spacial finite differences can easily be extended to higher order by increasing the list of sampling point in the next expression.
# Be sure to keep this stencil symmetric and everything else in the notebook will follow.
dxx=as_finite_diff(p(x,y,t).diff(x,x), [x-h,x, x+h])
dyy=as_finite_diff(p(x,y,t).diff(y,y), [y-h,y, y+h])
dy=as_finite_diff(p(x,y,t).diff(y), [y-h, y+h])
dx=as_finite_diff(p(x,y,t).diff(x), [x-h, x+h])
dyr=as_finite_diff(b(x,y).diff(y), [y-h, y+h])
dxr=as_finite_diff(b(x,y).diff(x), [x-h, x+h])
dtt,dxx,dyy,dt,dx,dy
# In practice, compute rho. grad(rho) upfront (this is a constant) and just compute the following expression, it avoids
# the recomputation of the gradient at every time step. int the following X and Y are the component of rho.grad(rho) in X and Y respectively
X, Y = symbols('X Y')  # components of rho*grad(rho), assumed precomputed
gradgrad = X*dx + Y*dy
expand(gradgrad)
lap=(dxx+dyy)
expand(lap + gradgrad)
# Forward wave equation
wave_equation = m*dtt- (dxx+dyy)- r*(dxr*dx+dyr*dy) - q + e*dt
stencil = solve(wave_equation,p(x,y,t+s))[0]
ts=lambdify((p(x,y,t-s),p(x-h,y,t), p(x,y,t), p(x+h,y,t),p(x,y-h,t), p(x,y+h,t),b(x-h,y), b(x,y), b(x+h,y),b(x,y-h), b(x,y+h), q , m,r, s, h,e),stencil,"numpy")
stencil
# Adjoint wave equation
wave_equationA = m*dtt- (dxx+dyy) - r*(dxr*dx+dyr*dy) - d - e*dt
stencilA = solve(wave_equationA,p(x,y,t-s))[0]
tsA=lambdify((p(x,y,t+s),p(x-h,y,t), p(x,y,t), p(x+h,y,t),p(x,y-h,t), p(x,y+h,t),b(x-h,y), b(x,y), b(x+h,y),b(x,y-h), b(x,y+h), d, m,r, s, h,e),stencilA,"numpy")
stencilA
import matplotlib.pyplot as plt
from matplotlib import animation
hstep=25 #space increment d = minv/(10*f0);
tstep=2 #time increment dt < .5 * hstep /maxv;
tmin=0.0 #initial time
tmax=600 #simulate until
xmin=-500.0 - 10*hstep #left bound
xmax=500.0 + 10*hstep #right bound...assume packet never reaches boundary
ymin=-500.0 - 10*hstep #left bound
ymax=500.0 + 10*hstep #right bound...assume packet never reaches boundary
f0=.010
t0=1/.010
nbpml=10
nx = int((xmax-xmin)/hstep) + 1 #number of points on x grid
ny = int((ymax-ymin)/hstep) + 1 #number of points on x grid
nt = int((tmax-tmin)/tstep) + 2 #number of points on t grid
xsrc=-400
ysrc=0.0
xrec = nbpml+4
#set source as Ricker wavelet for f0
def source(x,y,t):
r = (np.pi*f0*(t-t0))
val = (1-2.*r**2)*np.exp(-r**2)
if abs(x-xsrc)<hstep/2 and abs(y-ysrc)<hstep/2:
return val
else:
return 0.0
def dampx(x):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if x<nbpml:
return dampcoeff*((nbpml-x)/nbpml)**2
elif x>nx-nbpml-1:
return dampcoeff*((x-nx+nbpml)/nbpml)**2
else:
return 0.0
def dampy(y):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if y<nbpml:
return dampcoeff*((nbpml-y)/nbpml)**2
elif y>ny-nbpml-1:
return dampcoeff*((y-ny+nbpml)/nbpml)**2
else:
return 0.0
# Velocity and density models
def smooth10(vel,nx,ny):
out=np.ones((nx,ny))
out[:,:]=vel[:,:]
for a in range(5,nx-6):
out[a,:]=np.sum(vel[a-5:a+5,:], axis=0) /10
return out
# True velocity
vel=np.ones((nx,ny)) + 2.0
rho=np.ones((nx,ny))
vel[nx//2:nx,:]=4.5
rho[nx//2:nx,:]=2.0
rho=rho**-1
mt=vel**-2
# Smooth velocity
v0=smooth10(vel,nx,ny)
m0=v0**-2
dm=m0-mt
def Forward(nt,nx,ny,m,rho):
u=np.zeros((nt,nx,ny))
rec=np.zeros((nt,ny-2))
for ti in range(0,nt):
for a in range(1,nx-1):
for b in range(1,ny-1):
src = source(xmin+a*hstep,ymin+b*hstep,tstep*ti)
damp=dampx(a)+dampy(b)
if ti==0:
u[ti,a,b]=ts(0,0,0,0,0,0,rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
elif ti==1:
u[ti,a,b]=ts(0,u[ti-1,a-1,b],u[ti-1,a,b],u[ti-1,a+1,b],u[ti-1,a,b-1],u[ti-1,a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
else:
u[ti,a,b]=ts(u[ti-2,a,b],u[ti-1,a-1,b],u[ti-1,a,b],u[ti-1,a+1,b],u[ti-1,a,b-1],u[ti-1,a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
if a==xrec :
rec[ti,b-1]=u[ti,a,b]
return rec,u
def Adjoint(nt,nx,ny,m,rho,rec):
v=np.zeros((nt,nx,ny))
srca=np.zeros((nt))
for ti in range(nt-1, -1, -1):
for a in range(1,nx-1):
for b in range(1,ny-1):
if a==xrec:
resid=rec[ti,b-1]
else:
resid=0
damp=dampx(a)+dampy(b)
if ti==nt-1:
v[ti,a,b]=tsA(0,0,0,0,0,0,rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
resid,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
elif ti==nt-2:
v[ti,a,b]=tsA(0,v[ti+1,a-1,b],v[ti+1,a,b],v[ti+1,a+1,b],v[ti+1,a,b-1],v[ti+1,a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
resid,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
else:
v[ti,a,b]=tsA(v[ti+2,a,b],v[ti+1,a-1,b],v[ti+1,a,b],v[ti+1,a+1,b],v[ti+1,a,b-1],v[ti+1,a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
resid,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
if abs(xmin+a*hstep-xsrc)<hstep/2 and abs(ymin+b*hstep-ysrc)<hstep/2:
srca[ti]=v[ti,a,b]
return srca,v
def Gradient(nt,nx,ny,m,rho,rec,u):
v1=np.zeros((nx,ny))
v2=np.zeros((nx,ny))
v3=np.zeros((nx,ny))
grad=np.zeros((nx,ny))
for ti in range(nt-1,-1,-1):
for a in range(1,nx-1):
for b in range(1,ny-1):
if a==xrec:
resid=rec[ti,b-1]
else:
resid=0
damp=dampx(a)+dampy(b)
v3[a,b]=tsA(v1[a,b],v2[a-1,b],v2[a,b],v2[a+1,b],v2[a,b-1],v2[a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
resid,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
grad[a,b]=grad[a,b]-(v3[a,b]-2*v2[a,b]+v1[a,b])*(u[ti,a,b])
v1,v2,v3=v2,v3,v1
# No update inside the pml, only in the physical domain
# grad[0:nbpml-1,:]=0
# grad[nx-nbpml-1:nx-1,:]=0
# grad[:,0:nbpml-1]=0
# grad[:,ny-nbpml-1:ny-1]=0
return tstep**-2*grad
def Born(nt,nx,ny,m,rho,dm):
u1=np.zeros((nx,ny))
U1=np.zeros((nx,ny))
u2=np.zeros((nx,ny))
U2=np.zeros((nx,ny))
u3=np.zeros((nx,ny))
U3=np.zeros((nx,ny))
rec=np.zeros((nt,ny-2))
src2=0
for ti in range(0,nt):
for a in range(1,nx-1):
for b in range(1,ny-1):
damp=dampx(a)+dampy(b)
src = source(xmin+a*hstep,ymin+b*hstep,tstep*ti)
u3[a,b]=ts(u1[a,b],u2[a-1,b],u2[a,b],u2[a+1,b],u2[a,b-1],u2[a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
src2 = -tstep**-2*(u3[a,b]-2*u2[a,b]+u1[a,b])*dm[a,b]
U3[a,b]=ts(U1[a,b],U2[a-1,b],U2[a,b],U2[a+1,b],U2[a,b-1],U2[a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src2,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
if a==xrec :
rec[ti,b-1]=U3[a,b]
u1,u2,u3=u2,u3,u1
U1,U2,U3=U2,U3,U1
return rec
(rect,ut)=Forward(nt,nx,ny,mt,rho)
fig = plt.figure()
plts = [] # get ready to populate this list with the artists to be plotted
# note: plt.hold was removed in Matplotlib 3.0 and is no longer needed
for i in range(nt):
r = plt.imshow(ut[i,:,:]) # draw one frame of the forward wavefield
plts.append( [r] )
ani = animation.ArtistAnimation(fig, plts, interval=50, repeat = False) # run the animation
#plt.show()
fig2 = plt.figure()
shotrec = plt.imshow(rect,vmin=-10,vmax=10) # shot record for the true model
#plt.show()
(rec0,u0)=Forward(nt,nx,ny,m0,rho)
(srca,v)=Adjoint(nt,nx,ny,m0,rho,rec0)
plts = [] # get ready to populate this list with the artists to be plotted
for i in range(0,nt):
r = plt.imshow(v[i,:,:]) # draw one frame of the adjoint wavefield
plts.append( [r] )
ani = animation.ArtistAnimation(fig, plts, interval=50, repeat = False) # run the animation
#plt.show()
shotrec = plt.plot(srca) # this is how you'd plot a single line...
plt.show()
# Actual adjoint test
term1=0
for ti in range(0,nt):
term1=term1+srca[ti]*source(xsrc,ysrc,(ti)*tstep)
term2=LA.norm(rec0)**2
term1,term2,term1-term2,term1/term2
#if abs(term1/term2-1)<1e-9:
#    print('Adjoint test passed')
# Misfit
F0=.5*LA.norm(rec0-rect)**2
F0
Im1=Gradient(nt,nx,ny,m0,rho,rec0-rect,u0)
shotrec = plt.imshow(rec0-rect,vmin=-100,vmax=100) # data residual
#plt.show()
shotrec = plt.imshow(Im1,vmin=-100000,vmax=100000) # gradient image
#plt.show()
Im2=Gradient(nt,nx,ny,m0,rho,rec0-rect,u0)
du1=Born(nt,nx,ny,m0,rho,dm)
term11=np.dot((rec0-rect).reshape(-1),du1.reshape(-1))
term21=np.dot(Im2.reshape(-1),dm.reshape(-1))
term11,term21,term11-term21,term11/term21
#if abs(term11/term21-1)<1e-9:
#    print('Adjoint test passed')
shotrec = plt.imshow(Im2,vmin=-10000,vmax=10000) # gradient image
#plt.show()
H=[1,0.1,0.01,.001,0.0001,0.00001,0.000001]
(D1,u0)=Forward(nt,nx,ny,m0,rho)
dub=Born(nt,nx,ny,m0,rho,dm)
error1=np.zeros((7))
error2=np.zeros((7))
for i in range(0,7):
mloc=m0+H[i]*dm
(d,u)=Forward(nt,nx,ny,mloc,rho)
error1[i] = LA.norm(d - D1,ord=1)
error2[i] = LA.norm(d - D1 - H[i]*dub,ord=1)
hh=np.zeros((7))
for i in range(0,7):
hh[i]=H[i]*H[i]
shotrec = plt.loglog(H,error1,H,H) # this is how you'd plot a single line...
plt.show()
shotrec = plt.loglog(H,error2,H,hh) # this is how you'd plot a single line...
plt.show()
(DT,uT)=Forward(nt,nx,ny,mt,rho)
(D1,u0)=Forward(nt,nx,ny,m0,rho)
F0=.5*LA.norm(D1-DT)**2
g=Gradient(nt,nx,ny,m0,rho,D1-DT,u0)
G=np.dot(g.reshape(-1),dm.reshape(-1));
error21=np.zeros((7))
error22=np.zeros((7))
for i in range(0,7):
mloc=m0+H[i]*dm
(D,u)=Forward(nt,nx,ny,mloc,rho)
error21[i] = .5*LA.norm(D-DT)**2 -F0
error22[i] = .5*LA.norm(D-DT)**2 -F0 - H[i]*G
shotrec = plt.loglog(H,error21,H,H) # this is how you'd plot a single line...
plt.show()
shotrec = plt.loglog(H,abs(error22),H,hh) # this is how you'd plot a single line...
plt.show()
error22
<END_TASK>
|
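The two checks coded above — the adjoint (dot-product) test and the first/second-order gradient (Taylor) tests — can be reproduced on a toy linear operator, which makes the expected loglog slopes of 1 and 2 easy to see. The sketch below uses a small fixed matrix as a stand-in for the wave-equation forward map; all names and values are illustrative, not taken from the code above.

```python
import numpy as np

# Toy stand-in for the forward map: a small fixed matrix (illustrative only).
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])

# 1) Adjoint (dot-product) test: <A u, v> must equal <u, A^T v>.
u = np.array([1.0, 2.0])
v = np.array([1.0, 0.0, -1.0])
adjoint_ratio = np.dot(A @ u, v) / np.dot(u, A.T @ v)   # should be 1 up to round-off

# 2) Gradient (Taylor) test for f(m) = 0.5*||A m - d||^2 with g = A^T (A m - d):
#    f(m + h*dm) - f(m)              decays like O(h)
#    f(m + h*dm) - f(m) - h*<g, dm>  decays like O(h^2)
d = np.array([1.0, 2.0, 3.0])
m = np.array([0.5, -1.0])
dm = np.array([1.0, 1.0])
f = lambda m_: 0.5 * np.linalg.norm(A @ m_ - d) ** 2
g = A.T @ (A @ m - d)
H = np.logspace(-1, -6, 6)
err1 = np.array([abs(f(m + h * dm) - f(m)) for h in H])
err2 = np.array([abs(f(m + h * dm) - f(m) - h * np.dot(g, dm)) for h in H])
slope1 = np.polyfit(np.log(H), np.log(err1), 1)[0]   # ~1, as in the first loglog plot
slope2 = np.polyfit(np.log(H), np.log(err2), 1)[0]   # ~2, as in the second
```

Because this misfit is exactly quadratic, the second-order remainder is exactly `0.5 * h**2 * ||A @ dm||**2`, so the fitted slope comes out at 2 up to floating-point noise.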
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating synthetic data
Step3: Fit a line of best fit using least squares (not ODR)
Step6: Line of best fit using ODR
Step7: Let's do it!
Step9: Our results are now stored in an odr object. We can extract the fitted slope with odr.beta[0]; the model here has a single parameter (a slope through the origin), so there is no intercept term.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.optimize  # curve_fit lives in scipy.optimize; `import scipy` alone does not load it
import scipy.odr as odr
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import rc
# set to use tex, but make sure it is sans-serif fonts only
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{cmbright}')
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
# Magic function to make matplotlib inline;
# other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline.
# There is a bug, so uncomment if it works.
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 18,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style("dark")
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 14
n = 15
x = np.linspace(0, 10, n)
xerr = np.abs(np.random.normal(0, 2.5, n))
x = np.random.normal(x, xerr, n)
y = np.linspace(0, 20, n)
yerr = np.abs(np.random.normal(0, 2.5, n))
y = np.random.normal(y, yerr)
def line(x, a):
    """The line of best fit."""
# unpack the parameters:
y = a*x
return y
popt, pcov = scipy.optimize.curve_fit(line, x, y)
# unpack the parameters of the line of best fit:
a = popt
plt.errorbar(x, y, yerr=yerr, xerr=xerr, fmt='o')  # errorbar's positional order is (x, y, yerr, xerr)
plt.plot(x, line(x, a), label='fit')
plt.legend(title='Legend')
def odr_line(p, x):
    """The line of best fit."""
# unpack the parameters:
y = p*x
return y
def perform_odr(x, y, xerr, yerr):
    """Finds the ODR for data {x, y} and returns the result."""
linear = odr.Model(odr_line)
mydata = odr.Data(x, y, wd=1./xerr, we=1./yerr)
myodr = odr.ODR(mydata, linear, beta0=[0])
output = myodr.run()
return output
regression = perform_odr(x, y, xerr, yerr)
plt.errorbar(x, y, yerr=yerr, xerr=xerr, fmt='o', alpha=0.7)
plt.plot(x, line(x, a), label='least squares', lw=3, color='blue')
plt.plot(x, line(x, regression.beta[0]), label='ODR', lw=3, color='purple')
# plot the true line:
X = np.linspace(-10, 10, 10)
plt.plot(X, 2*X, label='True', lw=3, ls='--')
plt.legend(title='Legend', loc=(1, .5))
plt.xlim(np.min(x)-1, np.max(x)+1)
plt.ylim(np.min(y)-1, np.max(y)+1)
print("""The true model is y = 2.00x
Least squares returns y = {0:.2g}x
ODR returns y = {1:.2g}x""".format(a[0], regression.beta[0]))
<END_TASK>
|
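For the through-origin model y = a·x fitted above, the orthogonal-distance idea has a closed form that needs nothing beyond NumPy: the best orthogonal fit through the origin is the first principal direction of the stacked data. This is only a sketch of the geometry — it is not what scipy.odr does internally, and the helper names are made up.

```python
import numpy as np

def ols_slope(x, y):
    # Ordinary least squares minimizes vertical distances:
    # argmin_a sum (y - a*x)^2  ->  a = <x, y> / <x, x>
    return np.dot(x, y) / np.dot(x, x)

def tls_slope(x, y):
    # Orthogonal (total least squares) fit through the origin: the top
    # right-singular vector of the n-by-2 matrix [x | y] is the direction
    # minimizing the summed squared orthogonal distances.
    A = np.column_stack([x, y])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    return vt[0, 1] / vt[0, 0]

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x   # noise-free case: both estimators recover the slope exactly
```

With noise on both axes the two slopes separate, which is exactly the effect the scatter plot above visualizes.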
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulation
Step2: Experiment 1
Step3: Experiment 2
Step4: Experiment 3
Step5: Networkx
|
<ASSISTANT_TASK:>
Python Code:
import networkx as netx
import numpy as np
import matplotlib.pyplot as plt
import warnings
import random
import itertools
def power_law_graph(G):
histo = netx.degree_histogram(G)
_ = plt.loglog(histo, 'b-', marker='o')
_ = plt.ylabel("k(x)")
_ = plt.xlabel("k")
plt.show()
def plot(T,skew=True):
plt.axis('off')
pos=netx.spring_layout(T)
if skew:
D = dict(netx.degree(T))
sizes = [v * 200 for k,v in D.items()]
netx.draw_networkx(T, pos=pos, with_labels=True, nodelist=D.keys(), node_size=sizes)
else:
netx.draw_networkx(T, pos=pos, with_labels=True)
plt.show()
def hist(T,bins=25):
degree_lookup = dict(netx.degree(T))
degrees = list(degree_lookup.values())
_ = plt.title("Degree Distribution")
_ = plt.hist(degrees,bins=bins)
plt.show()
def generate(nodes):
n = 0
while n < nodes:
n = n + 1
yield n
# number of attachments per round
m = 1
# nodes at time t = 0
a = 1
t0 = m + a
# build graph
G = netx.Graph()
paths = list(generate(t0))
netx.add_path(G, paths)  # Graph.add_path was removed in NetworkX 2.x
plot(G)
## builds a distribution node sequence
def linear_distribute(G,disregard=[]):
for node in G.nodes(data = False):
if node in disregard:
continue
for edge in netx.edges(G,node):
yield node
# randomly gets attachments of the new node
def get_linear_attachments(G,m=1):
if m < 1:
return []
nodes = []
i = 0
while i < m:
distribution = list(linear_distribute(G,disregard=nodes))
nodes.append(random.choice(distribution))
i = i + 1
return nodes
def simulate(G,node,m=1,rounds=100):
while node <= rounds:
attachments = get_linear_attachments(G,m=m)
G.add_node(node)
for attachment in attachments:
G.add_edge(node,attachment)
node = node + 1
T = G.copy()
simulate(T,m+1,m,rounds=15)
plot(T,skew=True)
# ramp up...
T1 = G.copy()
simulate(T1,m+1,m,rounds=3000)
hist(T1)
power_law_graph(T1)
def draw(G,disregard=[],alpha=1):
nodes = { }
lookup = dict(G.degree())
denominator = 0.0
for k,v in lookup.items():
if k in disregard:
continue
nodes[k] = (v ** alpha)
denominator = denominator + nodes[k]
#print(np.array(list(nodes.values()))/denominator)
drawing = random.uniform(0,1)
key = None
bottom = 0.0
top = 0.0
for k,v in nodes.items():
key = k
top = bottom + (v / denominator)
if (drawing >= bottom) & (drawing < top):
key = k
break
bottom = top
# print(drawing, " ", bottom, "-", top, ":", key)
return key
# randomly gets attachments of the new node
def get_attachments(G,alpha=1,m=1):
if m < 1:
return []
nodes = []
i = 0
while i < m:
node = draw(G,disregard=nodes,alpha=alpha)
nodes.append(node)
i = i + 1
# print()
return nodes
def simulate2(G,node,alpha=1,m=1,rounds=100):
while node <= rounds:
attachments = get_attachments(G,alpha,m=m)
G.add_node(node)
for attachment in attachments:
G.add_edge(node,attachment)
node = node + 1
T2 = G.copy()
simulate2(T2,m+1,.5,m,3000)
hist(T2)
power_law_graph(T2)
T3 = G.copy()
simulate2(T3,m+1,3.5,m,3000)
hist(T3)
nodes = sorted(list(dict(T3.degree()).values()),reverse=True)
# print segment of list out ...
nodes[0:5]
power_law_graph(T3)
T4 = netx.generators.random_graphs.barabasi_albert_graph(8,1)
plot(T4)
scores = netx.preferential_attachment(G=T4)
for u,v,score in scores:
print('(%d, %d) -> score = %d' % (u, v, score))
len(list(T4.neighbors(1))) * len(list(T4.neighbors(7)))
<END_TASK>
|
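The draw() helper above implements roulette-wheel selection over nodes with weights proportional to degree**alpha. The same mechanism in isolation looks like this (an illustrative sketch; the function name is made up):

```python
import random

def roulette(weights, rng=random):
    # Pick an index with probability proportional to its weight.
    total = float(sum(weights))
    r = rng.uniform(0, 1)          # in [0, 1)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w / total
        if r < acc:
            return i
    return len(weights) - 1        # guard against floating-point round-off
```

With weights `degree**alpha`, `alpha < 1` flattens the preference for hubs and `alpha > 1` sharpens it, which is what the three experiments above vary.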
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The returned images NumPy array will have the shape $n \times m$, where $n$ is the number of samples, and $m$ is the number of features. The images in the MNIST dataset consist of $28 \times 28$ pixels, and each pixel is represented by a grayscale intensity value. Here, we unroll the $28 \times 28$ images into 1D row vectors, which represent the rows in our matrix; thus $m=784$.
Step2: To check if the pixels were retrieved correctly, let us print a few images
Step3: Lastly, we can save the NumPy arrays as CSV files for more convenient retrievel, however, I wouldn't recommend storing the 60,000 samples as CSV files due to the enormous file size.
|
<ASSISTANT_TASK:>
Python Code:
import os
import struct
import numpy as np
def load_mnist(path, which='train'):
if which == 'train':
labels_path = os.path.join(path, 'train-labels-idx1-ubyte')
images_path = os.path.join(path, 'train-images-idx3-ubyte')
elif which == 'test':
labels_path = os.path.join(path, 't10k-labels-idx1-ubyte')
images_path = os.path.join(path, 't10k-images-idx3-ubyte')
else:
raise AttributeError('`which` must be "train" or "test"')
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II', lbpath.read(8))
labels = np.fromfile(lbpath, dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, n, rows, cols = struct.unpack('>IIII', imgpath.read(16))
images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784)
return images, labels
X, y = load_mnist('/Users/sebastian/Desktop/mnist', which='train')
print('Labels: %d' % y.shape[0])
print('Rows: %d, columns: %d' % (X.shape[0], X.shape[1]))
import matplotlib.pyplot as plt
%matplotlib inline
def plot_digit(X, y, idx):
img = X[idx].reshape(28,28)
plt.imshow(img, cmap='Greys', interpolation='nearest')
plt.title('true label: %d' % y[idx])
plt.show()
for i in range(4):
plot_digit(X, y, i)
np.savetxt('train_img.csv', X[:3000, :], delimiter=',')
np.savetxt('train_labels.csv', y[:3000], delimiter=',')
X = np.genfromtxt('train_img.csv', delimiter=',')
y = np.genfromtxt('train_labels.csv', delimiter=',')
<END_TASK>
|
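The struct.unpack('>II', ...) calls above read the IDX file headers: big-endian ('>') unsigned 32-bit integers ('I'). A pack/unpack round trip makes the format explicit — 2049 is the conventional IDX magic number for label files, and the count here is just an example value:

```python
import struct

# Build a fake 8-byte IDX label-file header and parse it back.
header = struct.pack('>II', 2049, 60000)
magic, n = struct.unpack('>II', header)
```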
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
import matplotlib.pyplot as plt
fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(5,5))
for images, labels in ds.take(1):
for i in range(3):
for j in range(3):
ax[i][j].imshow(images[i*3+j].numpy().astype("uint8"))
ax[i][j].set_title(ds.class_names[labels[i*3+j]])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MMW staging (test) API Rapid Watershed Delineation (RWD) "watershed" endpoint
Step2: 2. Construct and issue the job request
Step3: 3. Fetch the job result once it's confirmed as done
Step4: 4. Examine the results
Step5: The results (result) are made up of two dictionary items
Step6: watershed
Step7: Render the geospatial results on an interactive map
|
<ASSISTANT_TASK:>
Python Code:
import json
import time
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
def requests_retry_session(
retries=3,
backoff_factor=0.3,
status_forcelist=(500, 502, 504),
session=None,
):
session = session or requests.Session()
retry = Retry(
total=retries,
read=retries,
connect=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
return session
s = requests.Session()
APIToken = 'Token YOURTOKEN' # Replace YOURTOKEN with your actual token string/key
s.headers.update({
'Authorization': APIToken,
'Content-Type': 'application/json'
})
post_url = 'https://modelmywatershed.org/api/watershed/'
payload = {
'location': [40.746054, -111.847987], # [latitude, longitude]
'snappingOn': True,
'dataSource': 'nhd'}
json_dat = json.dumps(payload)
post_req = requests_retry_session(session=s).post(post_url, data=json_dat)
json_out = json.loads(post_req.content)
json_out
get_url = 'https://modelmywatershed.org/api/jobs/{job}/'.format
result = ''
while not result:
    time.sleep(1)  # poll politely instead of busy-looping
    get_req = requests_retry_session(session=s).get(get_url(job=json_out['job']))
    result = json.loads(get_req.content)['result']
type(result), result.keys()
input_pt_geojson = result['input_pt']
input_pt_geojson
watershed_geojson = result['watershed']
watershed_geojson.keys(), watershed_geojson['geometry'].keys()
watershed_geojson['properties']
# watershed has just one feature -- a single polygon
print("Number of polygon features: {} \nFeature type: {} \nNumber of vertices in polygon: {}".format(
len(watershed_geojson['geometry']['coordinates']),
watershed_geojson['geometry']['type'],
len(watershed_geojson['geometry']['coordinates'][0])
))
%matplotlib inline
import folium
# Initialize Folium map
m = folium.Map(location=[input_pt_geojson['properties']['Lat'], input_pt_geojson['properties']['Lon']],
tiles='CartoDB positron', zoom_start=12)
# Add RWD watershed and drainage point onto map
folium.GeoJson(watershed_geojson).add_to(m);
folium.GeoJson(input_pt_geojson).add_to(m);
m
<END_TASK>
|
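The backoff_factor=0.3 passed to Retry above controls an exponential delay between retries. The commonly cited formula is backoff_factor * 2**(n-1) before the n-th retry, though the exact behavior (including whether the first retry sleeps at all) varies across urllib3 versions — treat this as a sketch of the arithmetic, with a made-up helper name:

```python
def backoff_schedule(retries, backoff_factor):
    # Delay (in seconds) before retry n = 1..retries, per the common formula.
    return [backoff_factor * (2 ** (n - 1)) for n in range(1, retries + 1)]

schedule = backoff_schedule(3, 0.3)   # the settings used in requests_retry_session above
```

So the three retries configured above would wait roughly 0.3 s, 0.6 s, and 1.2 s under this formula.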
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Choosing the class assignment objective
Step3: Balanced classes
Step4: Looking at the prediction results, we see that both class assignment objectives lead to qualitatively similar results.
Step5: For balanced classes both class assignment objectives achieve comparable accuracy of 90.5% and 90.9%.
Step6: We see that the results are qualitatively different. The model with the class assignment objective maximize_accuracy tends to predict most of the points as class 0 (blue). This misclassification is especially obvious for the individual data points outside of the "bulk" of the distribution. The model with the class assignment objective maximize_minimum_recall tends to predict those points correctly. However, it misclassifies the class 0 points in the middle.
Step7: For the unbalanced data set, maximize_accuracy leads to a higher overall accuracy (96% vs. 90%). However, the recall for class 1 is only 64%, since the model tends to classify most data points as class 0. For maximize_minimum_recall the recall of class 1 is 88%. The price we pay is that now even some points from class 0 are classified as class 1 (class 1 precision of 51%).
|
<ASSISTANT_TASK:>
Python Code:
# Some general notebook settings and the imports used throughout this notebook
import json
import time
import pprint
import requests
from requests.auth import HTTPBasicAuth
import pandas as pd
import eland as ed
import seaborn as sns
import matplotlib.pyplot as plt
from elasticsearch import Elasticsearch
from sklearn.datasets import make_classification
from IPython.display import display, Markdown
host = 'http://localhost:9200'
# IMPORTANT: create a file credentials.json with credentials for your Elasticsearch instance!
with open('credentials.json') as f:
data = json.load(f)
username = data['username']
password = data['password']
es = Elasticsearch(host, http_auth=(username, password))
def run_job(analysis_job_id: str, source_index: str, dest_index:str, class_assignment_objective:str):
    """Run a data frame analytics job to completion.

    Args:
        analysis_job_id (str): Base name of the analysis job
        source_index (str): Source index
        dest_index (str): Base name of the destination index
        class_assignment_objective (str): Class assignment objective. Can be 'maximize_accuracy'
            or 'maximize_minimum_recall'
    """
destination = "_".join([dest_index, class_assignment_objective])
job_id = "_".join([analysis_job_id, class_assignment_objective])
api = '/_ml/data_frame/analytics/{}'.format(job_id)
config = {
"description": "",
"source": {
"index": source_index,
"query": {
"match_all": {}
}
},
"dest": {
"index": destination,
"results_field": "ml"
},
"analysis": {
"classification": {
"dependent_variable": "target",
"training_percent": 85,
"randomize_seed": 42,
"class_assignment_objective": class_assignment_objective
}
},
"model_memory_limit": "1gb",
"max_num_threads": 1
}
requests.put(host+api, auth=HTTPBasicAuth(username, password), json=config).json()
api = "/_ml/data_frame/analytics/{}/_start".format(job_id)
requests.post(host+api, auth=HTTPBasicAuth(username, password)).json()
api = "/_ml/data_frame/analytics/{}/_stats".format(job_id)
while True:
results = requests.get(host+api, auth=HTTPBasicAuth(username, password),).json()
state = results['data_frame_analytics'][0]['state']
if state == 'stopped':
break
time.sleep(1)
def evaluate(dest_index: str, class_assignment_objective: str):
    """Evaluate prediction results from the data frame analytics job.

    Args:
        dest_index (str): Base name of the destination index
        class_assignment_objective (str): Class assignment objective. Can be 'maximize_accuracy'
            or 'maximize_minimum_recall'
    """
destination = "_".join([dest_index, class_assignment_objective])
evaluate_train_config = {
"index": destination,
"evaluation": {
"classification": {
"actual_field": "target",
"predicted_field": "ml.target_prediction",
"metrics": {
"accuracy": {},
"precision": {},
"recall": {}
}
}
}
}
api = "/_ml/data_frame/_evaluate"
results = requests.post(host+api, auth=HTTPBasicAuth(username, password), json=evaluate_train_config).json()
return results
analysis_job_id = 'classification_balanced'
dest_index = 'classification_balanced_result'
source_index = 'classification_balanced'
X_balanced,y_balanced = make_classification(n_samples=1000, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=2,
class_sep=0.5,flip_y=0,weights=[0.5,0.5], random_state=42)
balanced_pd_df = pd.DataFrame(X_balanced, columns=['x1','x2'])
balanced_pd_df['target'] = y_balanced
balanced_ed_df = ed.pandas_to_eland(pd_df=balanced_pd_df, es_client=es,
es_dest_index=source_index, es_if_exists='replace',
es_refresh=True)
# train models with different class assignment objectives
run_job(analysis_job_id=analysis_job_id, source_index=source_index,
dest_index=dest_index, class_assignment_objective='maximize_accuracy')
run_job(analysis_job_id=analysis_job_id, source_index=source_index,
dest_index=dest_index, class_assignment_objective='maximize_minimum_recall')
# plot results
results_balanced_maxacc = ed.DataFrame(es_client=es,
es_index_pattern="_".join([dest_index, 'maximize_accuracy']),
columns=['x1','x2','target','ml.target_prediction',
'ml.is_training', 'ml.prediction_probability', 'ml.prediction_score',
'ml.top_classes'])
results_balanced_maxminrecall = ed.DataFrame(es_client=es,
es_index_pattern="_".join([dest_index, 'maximize_minimum_recall']),
columns=['x1','x2','target','ml.target_prediction',
'ml.is_training', 'ml.prediction_probability', 'ml.prediction_score',
'ml.top_classes'])
results_balanced_maxacc = ed.eland_to_pandas(results_balanced_maxacc)
results_balanced_maxminrecall = ed.eland_to_pandas(results_balanced_maxminrecall)
f, (ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=3, figsize=(20,5))
sns.scatterplot(x=balanced_pd_df['x1'], y=balanced_pd_df['x2'],
hue=balanced_pd_df['target'], ax=ax1)
ax1.set_title('Original balanced data set')
sns.scatterplot(x=results_balanced_maxacc['x1'], y=results_balanced_maxacc['x2'],
hue=results_balanced_maxacc['ml.target_prediction'], ax=ax2)
ax2.set_title('Maximum accuracy classification')
sns.scatterplot(x=results_balanced_maxminrecall['x1'], y=results_balanced_maxminrecall['x2'],
hue=results_balanced_maxminrecall['ml.target_prediction'], ax=ax3)
ax3.set_title('Maximum minimum recall classification')
plt.show()
display(Markdown("**Evaluation results for maximize_accuracy**"))
pprint.pprint(evaluate(dest_index=dest_index, class_assignment_objective='maximize_accuracy'))
display(Markdown("**Evaluation results for maximize_minimum_recall**"))
pprint.pprint(evaluate(dest_index=dest_index, class_assignment_objective='maximize_minimum_recall'))
analysis_job_id = 'classification_unbalanced'
dest_index = 'classification_unbalanced_result'
source_index = 'classification_unbalanced'
X_unbalanced,y_unbalanced = make_classification(n_samples=1000, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=2,
class_sep=0.5,flip_y=0,weights=[0.9,0.1], random_state=42)
unbalanced_pd_df = pd.DataFrame(X_unbalanced, columns=['x1','x2'])
unbalanced_pd_df['target'] = y_unbalanced
unbalanced_ed_df = ed.pandas_to_eland(pd_df=unbalanced_pd_df, es_client=es,
es_dest_index=source_index, es_if_exists='replace',
es_refresh=True)
# train models with different class assignment objectives
run_job(analysis_job_id=analysis_job_id, source_index=source_index,
dest_index=dest_index, class_assignment_objective='maximize_accuracy')
run_job(analysis_job_id=analysis_job_id, source_index=source_index,
dest_index=dest_index, class_assignment_objective='maximize_minimum_recall')
# plot results
results_unbalanced_maxacc = ed.DataFrame(es_client=es,
es_index_pattern="_".join([dest_index, 'maximize_accuracy']),
columns=['x1','x2','target','ml.target_prediction',
'ml.is_training', 'ml.prediction_probability', 'ml.prediction_score',
'ml.top_classes'])
results_unbalanced_maxminrecall = ed.DataFrame(es_client=es,
es_index_pattern="_".join([dest_index, 'maximize_minimum_recall']),
columns=['x1','x2','target','ml.target_prediction',
'ml.is_training', 'ml.prediction_probability', 'ml.prediction_score',
'ml.top_classes'])
results_unbalanced_maxacc = ed.eland_to_pandas(results_unbalanced_maxacc)
results_unbalanced_maxminrecall = ed.eland_to_pandas(results_unbalanced_maxminrecall)
f, (ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=3, figsize=(20,5))
sns.scatterplot(x=unbalanced_pd_df['x1'], y=unbalanced_pd_df['x2'],
hue=unbalanced_pd_df['target'], ax=ax1)
ax1.set_title('Original unbalanced data set')
sns.scatterplot(x=results_unbalanced_maxacc['x1'], y=results_unbalanced_maxacc['x2'],
hue=results_unbalanced_maxacc['ml.target_prediction'], ax=ax2)
ax2.set_title('Maximum accuracy classification')
sns.scatterplot(x=results_unbalanced_maxminrecall['x1'], y=results_unbalanced_maxminrecall['x2'],
hue=results_unbalanced_maxminrecall['ml.target_prediction'], ax=ax3)
ax3.set_title('Maximum minimum recall classification')
plt.show()
display(Markdown("**Evaluation results for maximize_accuracy**"))
pprint.pprint(evaluate(dest_index=dest_index, class_assignment_objective='maximize_accuracy'))
display(Markdown("**Evaluation results for maximize_minimum_recall**"))
pprint.pprint(evaluate(dest_index=dest_index, class_assignment_objective='maximize_minimum_recall'))
display(Markdown('**Maximize accuracy result:**'))
pprint.pprint(results_unbalanced_maxacc.iloc[26]['ml.top_classes'])
display(Markdown('**Maximize minimum recall result:**'))
pprint.pprint(results_unbalanced_maxminrecall.iloc[26]['ml.top_classes'])
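To make the difference between the two objectives concrete, here is a small self-contained sketch (our addition; plain Python, no Elasticsearch or eland required) of per-class recall on an unbalanced toy example — the quantity `maximize_minimum_recall` optimizes:

```python
from collections import defaultdict

def per_class_recall(y_true, y_pred):
    # recall for class c = correct predictions of c / actual occurrences of c
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return {c: hits[c] / totals[c] for c in totals}

# An unbalanced toy example: class 1 is rare and never predicted.
y_true = [0] * 9 + [1]
y_pred = [0] * 10          # predict the majority class everywhere
recalls = per_class_recall(y_true, y_pred)
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# accuracy is high (0.9) but the minimum recall is 0 -- exactly the
# failure mode that 'maximize_minimum_recall' is meant to avoid.
```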
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computers by definition cannot generate truly random numbers. The Mersenne Twister is a widely-used algorithm for generating pseudo-random numbers from a deterministic process. That is, while the numbers generated by the algorithm are not random in the literal sense, they exhibit distributional qualities that make them indistinguishable from truly random numbers.
Step2: Example
Step3: The white noise process
Step4: Example
Step5: Notice that if $-1< \rho < 1$, then $\mu$ is the expected value of the process. That is, when $-1< \rho < 1$, the process will fluctuate around $\mu$. But if $\rho>1$ or $\rho<-1$, the process will explode away from $\mu$.
Step6: Example
Step7: The random walk process
|
<ASSISTANT_TASK:>
Python Code:
# Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5)
# Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5)
# Set the seed for the random number generator
np.random.seed(129)
# Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5)
# Set the seed for the random number generator
np.random.seed(129)
# Create two arrays:
# x: 500 draws from the normal(0,1) distribution
# y: 500 draws from the normal(0,2) distribution
x = np.random.normal(loc=0,scale=1,size=500)
y = np.random.normal(loc=0,scale=2,size=500)
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.plot(y,lw=3,alpha = 0.6,label='$\sigma=2$')
plt.grid(linestyle=':')
plt.legend(ncol=2,loc='lower right')
# Simulate an AR(1) process for 51 periods. Set the RNG seed to 129
np.random.seed(129)
T = 51
x0=0
mu=1
rho=0.5
sigma=1
x = np.zeros(T)
x[0] = x0
# draw random numbers for white noise process
eps= np.random.normal(loc=0,scale=sigma,size=T-1)
for t in range(T-1):
x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.grid(linestyle=':')
# Simulate an AR(1) process for 51 periods. Set the RNG seed to 129
np.random.seed(129)
T = 51
x0=0
mu=1
rho=1.5
sigma=1
x = np.zeros(T)
x[:] = np.NAN
x[0] = x0
# draw random numbers for white noise process
eps= np.random.normal(loc=0,scale=sigma,size=T-1)
for t in range(T-1):
x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.grid(linestyle=':')
def ar1(mu=0,rho=0,sigma=1,x0=0,T=25):
'''Function for simulating an AR(1) process for T periods
Args:
mu (float): mean of the AR(1) process
rho (float): autoregressive parameter
sigma (float): standard deviation of the white noise process
x0 (float): initial value of the process
T (int): number of periods to simulate
Returns:
numpy array
'''
# initialize x array
x = np.zeros(T)
x[0] = x0
# draw random numbers for white noise process
eps= np.random.normal(loc=0,scale=sigma,size=T-1)
for t in range(T-1):
x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
return x
fig = plt.figure(figsize=(12,8))
np.random.seed(129)
y = ar1(mu=0,rho=0,sigma=1,x0=0,T=51)
ax1 = fig.add_subplot(2,2,1)
ax1.plot(y,lw=3,alpha=0.7)
ax1.set_title('$X_t = \epsilon_t$')
ax1.grid()
np.random.seed(129)
y = ar1(mu=0,rho=0.5,sigma=1,x0=0,T=51)
ax2 = fig.add_subplot(2,2,2)
ax2.plot(y,lw=3,alpha=0.7)
ax2.set_title('$X_t = 0.5\cdot X_{t-1} + \epsilon_t$')
ax2.grid()
np.random.seed(129)
y = ar1(mu=0,rho=0.9,sigma=1,x0=0,T=51)
ax3 = fig.add_subplot(2,2,3)
ax3.plot(y,lw=3,alpha=0.7)
ax3.set_title('$X_t = 0.9\cdot X_{t-1} + \epsilon_t$')
ax3.grid()
np.random.seed(129)
y = ar1(mu=0,rho=-0.5,sigma=1,x0=0,T=51)
ax4 = fig.add_subplot(2,2,4)
ax4.plot(y,lw=3,alpha=0.7)
ax4.set_title('$X_t = -0.5\cdot X_{t-1} + \epsilon_t$')
ax4.grid()
np.random.seed(129)
for i in range(7):
plt.plot(ar1(rho=1,T=501))
plt.grid()
plt.title('Seven random walk processes')
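Not part of the original notebook — a quick standard-library sanity check that the sample mean of a long stationary AR(1) path lands near μ when |ρ| < 1 (the `simulate_ar1` helper here is our own, mirroring the `ar1` function above):

```python
import random

def simulate_ar1(mu, rho, sigma, x0, T, seed=0):
    # x_{t+1} = mu*(1 - rho) + rho*x_t + eps_t, with eps_t ~ N(0, sigma^2)
    rng = random.Random(seed)
    x = [x0]
    for _ in range(T - 1):
        x.append(mu * (1 - rho) + rho * x[-1] + rng.gauss(0, sigma))
    return x

path = simulate_ar1(mu=1.0, rho=0.5, sigma=1.0, x0=0.0, T=20000)
sample_mean = sum(path) / len(path)
# For a stationary AR(1) the long-run mean is mu, so the sample mean
# should land within a few standard errors of 1.0.
```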
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: LAB
Step2: How about in descending order?
|
<ASSISTANT_TASK:>
Python Code:
dinos.assign(Decimal = dinos.sha256.apply(lambda x: int(x, base=16)))
dinos.sort_values(by='sha256').head(10)
dinos.sort_values(by='sha256', ascending=False).head(10)
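Note that `sort_values` on the `sha256` column compares the digests as strings. For fixed-width, same-case hex strings this happens to coincide with numeric order, which is why it agrees with sorting the `Decimal` column. A small standard-library illustration (our addition):

```python
digests = ["ff00", "0a1b", "9c3d", "1234"]

lexicographic = sorted(digests)                      # plain string order
numeric = sorted(digests, key=lambda h: int(h, 16))  # order by hex value

# The two orders coincide because '0'..'9' sort before 'a'..'f' in ASCII,
# just as they do numerically -- provided width and case are uniform.
```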
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 4.5.1 UV coverage
Step3: Let's express the corresponding physical baseline in ENU coordinates.
Step4: Let's place the interferometer at a latitude $L_a=+45^\circ00'00''$.
Step5: Figure 4.5.1
Step6: 4.5.1.1.3 Computing of the projected baselines in ($u$,$v$,$w$) coordinates as a function of time
Step7: As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\S$ 4.2.2 ➞
Step8: We now have everything that describes the $uvw$-track of the baseline (over an 8-hour observational period). It is hard to predict which locus the $uvw$ track traverses given only the three mathematical equations from above. Let's plot it in $uvw$ space and its projection in $uv$ space.
Step9: Figure 4.5.2
Step10: Figure 4.5.3
Step11: Let's compute the $uv$ tracks of an observation of the NCP ($\delta=90^\circ$)
Step12: Let's compute the uv tracks when observing a source at $\delta=30^\circ$
Step13: Figure 4.5.4
Step14: Figure 4.5.5
Step15: <span style="background-color
Step16: We then convert the ($\alpha$,$\delta$) to $l,m$
Step17: The source and phase centre coordinates are now given in degrees.
Step18: Figure 4.5.6
Step19: We create the dimensions of our visibility plane.
Step20: We create our fully-filled visibility plane. With a "perfect" interferometer, we could sample the entire $uv$-plane. Since we only have a finite amount of antennas, this is never possible in practice. Recall that our sky brightness $I(l,m)$ is related to our visibilites $V(u,v)$ via the Fourier transform. For a bunch of point sources we can therefore write
Step21: Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.
Step22: Figure 4.5.7
Step23: Figure 4.5.8
Step24: Figure 4.5.9
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from mpl_toolkits.mplot3d import Axes3D
import plotBL
HTML('../style/code_toggle.html')
ant1 = np.array([-500e3,500e3,0]) # in m
ant2 = np.array([500e3,-500e3,+10]) # in m
b_ENU = ant2-ant1 # baseline
D = np.sqrt(np.sum((b_ENU)**2)) # |b|
print str(D/1000)+" km"
L = (np.pi/180)*(45+0./60+0./3600) # Latitude in radians
A = np.arctan2(b_ENU[0],b_ENU[1])
print "Baseline Azimuth="+str(np.degrees(A))+"°"
E = np.arcsin(b_ENU[2]/D)
print "Baseline Elevation="+str(np.degrees(E))+"°"
%matplotlib nbagg
plotBL.sphere(ant1,ant2,A,E,D,L)
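The azimuth/elevation formulas used above can be sanity-checked on trivial baselines (our addition; standard library only). A baseline pointing due east should give A = 90° and E = 0°, and a vertical one E = 90°:

```python
import math

def baseline_az_el(b_enu):
    # b_enu = (East, North, Up); azimuth measured from North towards East
    e, n, u = b_enu
    d = math.sqrt(e * e + n * n + u * u)
    azimuth = math.degrees(math.atan2(e, n))
    elevation = math.degrees(math.asin(u / d))
    return azimuth, elevation

az_east, el_east = baseline_az_el((100.0, 0.0, 0.0))   # due east
az_up, el_up = baseline_az_el((0.0, 0.0, 50.0))        # straight up
```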
# Observation parameters
c = 3e8 # Speed of light
f = 1420e6 # Frequency (HI line: 1420 MHz)
lam = c/f # Wavelength
dec = (np.pi/180)*(-30-43.0/60-17.34/3600) # Declination
time_steps = 600 # Time Steps
h = np.linspace(-4,4,num=time_steps)*np.pi/12 # Hour angle window
ant1 = np.array([25.095,-9.095,0.045])
ant2 = np.array([90.284,26.380,-0.226])
b_ENU = ant2-ant1
D = np.sqrt(np.sum((b_ENU)**2))
L = (np.pi/180)*(-30-43.0/60-17.34/3600)
A=np.arctan2(b_ENU[0],b_ENU[1])
print "Azimuth=",A*(180/np.pi)
E=np.arcsin(b_ENU[2]/D)
print "Elevation=",E*(180/np.pi)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
u = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
%matplotlib nbagg
plotBL.UV(u,v,w)
%matplotlib inline
from matplotlib.patches import Ellipse
# parameters of the UVtrack as an ellipse
a=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b=a*np.sin(dec) # minor axis
v0=Z/lam*np.cos(dec)/1e3 # center of ellipse
plotBL.UVellipse(u,v,w,a,b,v0)
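A quick numerical check (our addition; standard library only, with arbitrary illustrative values for X, Y, Z already in wavelengths) that the (u, v) samples really lie on the ellipse with semi-axes a = √(X²+Y²)/λ, b = a·sin δ and centre (0, v0 = Z cos δ/λ) used above:

```python
import math

# Illustrative equatorial baseline components, assumed already in wavelengths
X, Y, Z = 120.0, -45.0, 60.0
dec = math.radians(-30.0)

a = math.sqrt(X * X + Y * Y)        # semi-major axis
b = a * abs(math.sin(dec))          # semi-minor axis
v0 = Z * math.cos(dec)              # centre of the ellipse on the v axis

max_residual = 0.0
for k in range(360):
    h = math.radians(k)             # hour angle sweep
    u = math.sin(h) * X + math.cos(h) * Y
    v = (-math.sin(dec) * math.cos(h) * X
         + math.sin(dec) * math.sin(h) * Y
         + math.cos(dec) * Z)
    # ellipse equation: (u/a)^2 + ((v - v0)/b)^2 = 1 for every hour angle
    residual = abs((u / a) ** 2 + ((v - v0) / b) ** 2 - 1.0)
    max_residual = max(max_residual, residual)
```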
L=np.radians(90.)
ant1 = np.array([25.095,-9.095,0.045])
ant2 = np.array([90.284,26.380,-0.226])
b_ENU = ant2-ant1
D = np.sqrt(np.sum((b_ENU)**2))
A=np.arctan2(b_ENU[0],b_ENU[1])
print "Azimuth=",A*(180/np.pi)
E=np.arcsin(b_ENU[2]/D)
print "Elevation=",E*(180/np.pi)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
dec=np.radians(90.)
uNCP = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
vNCP = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
wNCP = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
# parameters of the UVtrack as an ellipse
aNCP=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
bNCP=aNCP*np.sin(dec) # minor axis
v0NCP=Z/lam*np.cos(dec)/1e3 # center of ellipse
dec=np.radians(30.)
u30 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v30 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w30 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
a30=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b30=a30*np.sin(dec) # minor axis
v030=Z/lam*np.cos(dec)/1e3 # center of ellipse
%matplotlib inline
plotBL.UVellipse(u30,v30,w30,a30,b30,v030)
plotBL.UVellipse(uNCP,vNCP,wNCP,aNCP,bNCP,v0NCP)
L=np.radians(90.)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
# At local zenith == Celestial Equator
dec=np.radians(0.)
uEQ = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
vEQ = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
wEQ = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
# parameters of the UVtrack as an ellipse
aEQ=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
bEQ=aEQ*np.sin(dec) # minor axi
v0EQ=Z/lam*np.cos(dec)/1e3 # center of ellipse
# Close to Zenith
dec=np.radians(10.)
u10 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v10 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w10 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
a10=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b10=a10*np.sin(dec) # minor axis
v010=Z/lam*np.cos(dec)/1e3 # center of ellipse
%matplotlib inline
plotBL.UVellipse(u10,v10,w10,a10,b10,v010)
plotBL.UVellipse(uEQ,vEQ,wEQ,aEQ,bEQ,v0EQ)
H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians
d = 100 #We assume that we have already divided by wavelength
delta = 60*(np.pi/180) #Declination in degrees
u_60 = d*np.cos(H)
v_60 = d*np.sin(H)*np.sin(delta)
RA_sources = np.array([5+30.0/60,5+32.0/60+0.4/3600,5+36.0/60+12.8/3600,5+40.0/60+45.5/3600])
DEC_sources = np.array([60,60+17.0/60+57.0/3600,61+12.0/60+6.9/3600,61+56.0/60+34.0/3600])
Flux_sources_labels = np.array(["","1 Jy","0.5 Jy","0.2 Jy"])
Flux_sources = np.array([1,0.5,0.1]) #in Jy
step_size = 200
print "Phase center Source 1 Source 2 Source3"
print "RA="+str(RA_sources)
print "DEC="+str(DEC_sources)
RA_rad = np.array(RA_sources)*(np.pi/12)
DEC_rad = np.array(DEC_sources)*(np.pi/180)
RA_delta_rad = RA_rad-RA_rad[0]
l = np.cos(DEC_rad)*np.sin(RA_delta_rad)
m = (np.sin(DEC_rad)*np.cos(DEC_rad[0])-np.cos(DEC_rad)*np.sin(DEC_rad[0])*np.cos(RA_delta_rad))
print "l=",l*(180/np.pi)
print "m=",m*(180/np.pi)
point_sources = np.zeros((len(RA_sources)-1,3))
point_sources[:,0] = Flux_sources
point_sources[:,1] = l[1:]
point_sources[:,2] = m[1:]
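A small sanity check of the (l, m) conversion above (our addition; standard library only): a source exactly at the phase centre must map to (l, m) = (0, 0), and a source due north of it must give l = 0 and m > 0:

```python
import math

def radec_to_lm(ra, dec, ra0, dec0):
    # ra/dec in radians; (ra0, dec0) is the phase centre
    dra = ra - ra0
    l = math.cos(dec) * math.sin(dra)
    m = (math.sin(dec) * math.cos(dec0)
         - math.cos(dec) * math.sin(dec0) * math.cos(dra))
    return l, m

ra0, dec0 = math.radians(82.5), math.radians(60.0)        # illustrative centre
l0, m0 = radec_to_lm(ra0, dec0, ra0, dec0)                # source at the centre
l1, m1 = radec_to_lm(ra0, math.radians(61.0), ra0, dec0)  # one degree north
```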
%matplotlib inline
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.xlabel("$l$ [degrees]")
plt.ylabel("$m$ [degrees]")
plt.plot(l[0],m[0],"bx")
plt.plot(l[1:]*(180/np.pi),m[1:]*(180/np.pi),"ro")
counter = 1
for xy in zip(l[1:]*(180/np.pi)+0.25, m[1:]*(180/np.pi)+0.25):
ax.annotate(Flux_sources_labels[counter], xy=xy, textcoords='offset points',horizontalalignment='right',
verticalalignment='bottom')
counter = counter + 1
plt.grid()
u = np.linspace(-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10, num=step_size, endpoint=True)
v = np.linspace(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10, num=step_size, endpoint=True)
uu, vv = np.meshgrid(u, v)
zz = np.zeros(uu.shape).astype(complex)
s = point_sources.shape
for counter in xrange(1, s[0]+1):
A_i = point_sources[counter-1,0]
l_i = point_sources[counter-1,1]
m_i = point_sources[counter-1,2]
zz += A_i*np.exp(-2*np.pi*1j*(uu*l_i+vv*m_i))
zz = zz[:,::-1]
u_track = u_60
v_track = v_60
z = np.zeros(u_track.shape).astype(complex)
s = point_sources.shape
for counter in xrange(1, s[0]+1):
A_i = point_sources[counter-1,0]
l_i = point_sources[counter-1,1]
m_i = point_sources[counter-1,2]
z += A_i*np.exp(-1*2*np.pi*1j*(u_track*l_i+v_track*m_i))
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(zz.real,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Real part of visibilities")
plt.subplot(122)
plt.imshow(zz.imag,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Imaginary part of visibilities")
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(z.real)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Real: sampled visibilities")
plt.subplot(122)
plt.plot(z.imag)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Imag: sampled visibilities")
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(abs(zz),
extent=[-1*(np.amax(np.abs(u_60)))-10,
np.amax(np.abs(u_60))+10,
-1*(np.amax(abs(v_60)))-10,
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Amplitude of visibilities")
plt.subplot(122)
plt.imshow(np.angle(zz),
extent=[-1*(np.amax(np.abs(u_60)))-10,
np.amax(np.abs(u_60))+10,
-1*(np.amax(abs(v_60)))-10,
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Phase of visibilities")
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(abs(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Abs: sampled visibilities")
plt.subplot(122)
plt.plot(np.angle(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Phase: sampled visibilities")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1
Step2: Random Forest
Step3: We take inspiration from the example Feature importances with forests of trees.
Step4: Linear model
Step5: TPOT
Step6: The module optimizes the hyperparameters, sometimes a little too aggressively judging by the poor performance obtained on the test set.
Step7: Feature importance for a single observation
Step8: Problem 2
Step9: Non-linear method
Step10: Text
Step11: From words to coordinates - tf-idf
Step12: LDA
Step13: We obtain ten vectors representing the ten clusters. Each dimension relates to whether or not the corresponding word belongs to the cluster.
Step14: Clustering
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import pandas
df = pandas.read_csv("data/housing.data", delim_whitespace=True, header=None)
df.head()
cols = "CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT MEDV".split()
df.columns = cols
df.head()
X = df.drop("MEDV", axis=1)
y = df["MEDV"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
from sklearn.ensemble import RandomForestRegressor
clr = RandomForestRegressor()
clr.fit(X_train, y_train)
importances = clr.feature_importances_
importances
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(12,4))
indices = np.argsort(importances)[::-1]
std = np.std([tree.feature_importances_ for tree in clr.estimators_],
axis=0)
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
xlabels = list(df.columns[:-1])
xlabels = [xlabels[i] for i in indices]
plt.xticks(range(X.shape[1]), xlabels)
plt.xlim([-1, X.shape[1]])
plt.show()
from sklearn.metrics import r2_score
r2_score(y_train, clr.predict(X_train))
r2_score(y_test, clr.predict(X_test))
import statsmodels.api as sm
model = sm.OLS(y_train, X_train)
results = model.fit()
results.params
results.summary()
model = sm.OLS(y,X.drop("LSTAT", axis=1))
results = model.fit()
results.summary()
try:
from tpot import TPOTRegressor
except ImportError:
# for sklearn 0.22
import sklearn.preprocessing
from sklearn.impute import SimpleImputer
sklearn.preprocessing.Imputer = SimpleImputer
from tpot import TPOTRegressor
tpot = TPOTRegressor(generations=2, population_size=50, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_boston_pipeline.py')
r2_score(y_train, tpot.predict(X_train))
r2_score(y_test, tpot.predict(X_test))
clr = RandomForestRegressor()
clr.fit(X_train, y_train)
from treeinterpreter import treeinterpreter as ti
prediction, bias, contributions = ti.predict(clr, X_test)
for i in range(min(2, X_train.shape[0])):
print("Instance", i)
print("Bias (trainset mean)", bias[i])
print("Feature contributions:")
for c, feature in sorted(zip(contributions[i], df.columns),
key=lambda x: -abs(x[0])):
print(feature, round(c, 2))
print( "-"*20)
import pandas
df = pandas.read_csv("data/multiTimeline.csv", skiprows=1)
df.columns= ["Semaine", "compte"]
df["SemaineDt"] = pandas.to_datetime(df.Semaine)
df=df.set_index("SemaineDt")
df["compte"] = df["compte"].astype(float)
df.head()
%matplotlib inline
df.plot()
from statsmodels.tsa.arima_model import ARIMA
arma_mod = ARIMA(df["compte"].values, order=(6 ,1, 1))
res = arma_mod.fit()
res.params
res.summary()
from statsmodels.tsa.tsatools import lagmat
lag = 8
X = lagmat(df["compte"], lag)
lagged = df.copy()
for c in range(1,lag+1):
lagged["lag%d" % c] = X[:, c-1]
pandas.concat([lagged.head(), lagged.tail()])
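The `lagmat` call above builds the lag matrix by shifting the series, with the first rows zero-padded (which is why the notebook later drops the initial rows via `lagged[10:]`). A minimal plain-list re-implementation (our addition; the padding convention here is assumed to mirror the statsmodels default) makes the alignment explicit:

```python
def make_lags(series, n_lags):
    # row t holds [x_{t-1}, x_{t-2}, ..., x_{t-n_lags}], zero-padded at the start
    rows = []
    for t in range(len(series)):
        row = []
        for k in range(1, n_lags + 1):
            row.append(series[t - k] if t - k >= 0 else 0.0)
        rows.append(row)
    return rows

lags = make_lags([1.0, 2.0, 3.0, 4.0, 5.0], n_lags=2)
# lags[3] is [x_2, x_1] = [3.0, 2.0]: lag1 is the previous value, lag2 the one before.
```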
xc = ["lag%d" % i for i in range(1,lag+1)]
split = 0.66
isplit = int(len(lagged) * split)
xt = lagged[10:][xc]
yt = lagged[10:]["compte"]
X_train, y_train, X_test, y_test = xt[:isplit], yt[:isplit], xt[isplit:], yt[isplit:]
from sklearn.ensemble import RandomForestRegressor
clr = RandomForestRegressor()
clr.fit(X_train, y_train)
from sklearn.metrics import r2_score
r2 = r2_score(y_test.values, clr.predict(X_test))
r2
plt.scatter(y_test.values, clr.predict(X_test));
from ensae_teaching_cs.data import twitter_zip
df = twitter_zip(as_df=True)
df.head(n=2).T
keep = df.text.dropna().index
dfnonan = df.iloc[keep, :]
dfnonan.shape
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, max_features=1000)
tfidf = tfidf_vectorizer.fit_transform(dfnonan["text"])
tfidf[:2, :]
tfidf[:2, :].todense()
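To make the tf-idf weighting concrete, here is a hand-rolled version on a three-document toy corpus (our addition; standard library only). Note that scikit-learn's `TfidfVectorizer` uses a smoothed idf plus L2 normalisation, so its numbers differ slightly from this textbook form idf = log(N/df):

```python
import math

docs = [
    "data science with python".split(),
    "python for data analysis".split(),
    "deep learning with python".split(),
]

def tfidf(term, doc, corpus):
    tf = doc.count(term) / len(doc)
    df = sum(term in d for d in corpus)       # number of documents containing term
    idf = math.log(len(corpus) / df)          # textbook (unsmoothed) idf
    return tf * idf

w_python = tfidf("python", docs[0], docs)     # appears in every document
w_science = tfidf("science", docs[0], docs)   # appears only in the first
# A term present in every document gets weight 0: it carries no
# discriminative information, which is the whole point of the idf factor.
```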
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=10, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
lda.fit(tfidf)
tf_feature_names = tfidf_vectorizer.get_feature_names()
tf_feature_names[100:103]
lda.components_.shape
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
print_top_words(lda, tf_feature_names, 10)
from sklearn.cluster import KMeans
km = KMeans(n_clusters=10)
km.fit(tfidf)
km.cluster_centers_.shape
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.cluster_centers_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
print_top_words(km, tf_feature_names, 10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualization
Step3: Classical Machine Learning
Step4: Logistic Regression
Step5: ElasticNet
Step6: Decision Tree
Step7: Random Forest
Step8: SVM
Step9: Results and save the model
|
<ASSISTANT_TASK:>
Python Code:
import os
import librosa
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import kurtosis
from scipy.stats import skew
import sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectFromModel
import lightgbm as lgbm
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
# Set the seed
np.random.seed(42)
gtzan_dir = '../data/genres/'
# Parameters
song_samples = 22050*30
genres = {'metal': 0, 'disco': 1, 'classical': 2, 'hiphop': 3, 'jazz': 4,
'country': 5, 'pop': 6, 'blues': 7, 'reggae': 8, 'rock': 9}
def get_features(y, sr, n_fft = 1024, hop_length = 512):
# Features to concatenate in the final dictionary
features = {'centroid': None, 'roloff': None, 'flux': None, 'rmse': None,
'zcr': None, 'contrast': None, 'bandwidth': None, 'flatness': None}
# Count silence
if 0 < len(y):
y_sound, _ = librosa.effects.trim(y, frame_length=n_fft, hop_length=hop_length)
features['sample_silence'] = len(y) - len(y_sound)
# Using librosa to calculate the features
features['centroid'] = librosa.feature.spectral_centroid(y, sr=sr, n_fft=n_fft, hop_length=hop_length).ravel()
features['roloff'] = librosa.feature.spectral_rolloff(y, sr=sr, n_fft=n_fft, hop_length=hop_length).ravel()
features['zcr'] = librosa.feature.zero_crossing_rate(y, frame_length=n_fft, hop_length=hop_length).ravel()
features['rmse'] = librosa.feature.rms(y, frame_length=n_fft, hop_length=hop_length).ravel()
features['flux'] = librosa.onset.onset_strength(y=y, sr=sr).ravel()
features['contrast'] = librosa.feature.spectral_contrast(y, sr=sr).ravel()
features['bandwidth'] = librosa.feature.spectral_bandwidth(y, sr=sr, n_fft=n_fft, hop_length=hop_length).ravel()
features['flatness'] = librosa.feature.spectral_flatness(y, n_fft=n_fft, hop_length=hop_length).ravel()
# MFCC treatment
mfcc = librosa.feature.mfcc(y, n_fft = n_fft, hop_length = hop_length, n_mfcc=13)
for idx, v_mfcc in enumerate(mfcc):
features['mfcc_{}'.format(idx)] = v_mfcc.ravel()
# Get statistics from the vectors
def get_moments(descriptors):
result = {}
for k, v in descriptors.items():
result['{}_max'.format(k)] = np.max(v)
result['{}_min'.format(k)] = np.min(v)
result['{}_mean'.format(k)] = np.mean(v)
result['{}_std'.format(k)] = np.std(v)
result['{}_kurtosis'.format(k)] = kurtosis(v)
result['{}_skew'.format(k)] = skew(v)
return result
dict_agg_features = get_moments(features)
dict_agg_features['tempo'] = librosa.beat.tempo(y, sr=sr)[0]
return dict_agg_features
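To see what one of these descriptors measures, here is a zero-crossing rate computed by hand on toy signals (our addition; standard library only). librosa frames the signal and reports a rate per frame, so its output is organised differently, but the per-sample idea is the same:

```python
def zero_crossing_rate(signal):
    # fraction of consecutive sample pairs whose signs differ
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(signal) - 1)

zcr_alternating = zero_crossing_rate([1.0, -1.0, 1.0, -1.0])  # flips every sample
zcr_constant = zero_crossing_rate([0.5, 0.5, 0.5, 0.5])       # never crosses zero
```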
def read_process_songs(src_dir, debug = True):
# Empty array of dicts with the processed features from all files
arr_features = []
# Read files from the folders
for x,_ in genres.items():
folder = src_dir + x
for root, subdirs, files in os.walk(folder):
for file in files:
# Read the audio file
file_name = folder + "/" + file
signal, sr = librosa.load(file_name)
# Debug process
if debug:
print("Reading file: {}".format(file_name))
# Append the result to the data structure
features = get_features(signal, sr)
features['genre'] = genres[x]
arr_features.append(features)
return arr_features
%%time
# Get list of dicts with features and convert to dataframe
features = read_process_songs(gtzan_dir, debug=False)
df_features = pd.DataFrame(features)
df_features.shape
df_features.head()
df_features.to_csv('../data/gtzan_features.csv', index=False)
X = df_features.drop(['genre'], axis=1).values
y = df_features['genre'].values
# Standartize the dataset
scale = StandardScaler()
x_scaled = scale.fit_transform(X)
# Use PCA only for visualization
pca = PCA(n_components=35, whiten=True)
x_pca = pca.fit_transform(x_scaled)
print("cumulative explained variance ratio = {:.4f}".format(np.sum(pca.explained_variance_ratio_)))
# Use LDA only for visualization
lda = LDA()
x_lda = lda.fit_transform(x_scaled, y)
# Using tsne
tsne = TSNE(n_components=2, verbose=1, learning_rate=250)
x_tsne = tsne.fit_transform(x_scaled)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.scatter(x_pca[:,0], x_pca[:,1], c=y)
plt.colorbar()
plt.title("Embedded space with PCA")
plt.subplot(132)
plt.scatter(x_lda[:,0], x_lda[:,1], c=y)
plt.colorbar()
plt.title("Embedded space with LDA")
plt.subplot(133)
plt.scatter(x_tsne[:,0], x_tsne[:,1], c=y)
plt.colorbar()
plt.title("Embedded space with TSNE")
plt.show()
# Helper to plot confusion matrix -- from Scikit-learn website
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
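The helper above only plots; the matrix itself comes from `sklearn.metrics.confusion_matrix`. A minimal version from scratch (our addition; standard library only) shows the convention: rows are true labels, columns are predictions:

```python
def confusion(y_true, y_pred, n_classes):
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1          # row = true class, column = predicted class
    return cm

cm = confusion(y_true=[0, 0, 1, 1, 2], y_pred=[0, 1, 1, 1, 0], n_classes=3)
# cm[0] == [1, 1, 0]: one class-0 sample classified correctly, one sent to class 1.
```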
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
params = {
"cls__penalty": ["l1", "l2"],
"cls__C": [0.5, 1, 2, 5],
"cls__max_iter": [500]
}
pipe_lr = Pipeline([
('scale', StandardScaler()),
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', LogisticRegression())
])
grid_lr = GridSearchCV(pipe_lr, params, scoring='accuracy', n_jobs=6, cv=5)
grid_lr.fit(X_train, y_train)
preds = grid_lr.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_lr.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
params = {
"cls__loss": ['log'],
"cls__penalty": ["elasticnet"],
"cls__l1_ratio": [0.15, 0.25, 0.5, 0.75],
}
pipe_en = Pipeline([
('scale', StandardScaler()),
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', SGDClassifier())
])
grid_en = GridSearchCV(pipe_en, params, scoring='accuracy', n_jobs=6, cv=5)
grid_en.fit(X_train, y_train)
preds = grid_en.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_en.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
params = {
"cls__criterion": ["gini", "entropy"],
"cls__splitter": ["best", "random"],
}
pipe_cart = Pipeline([
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', DecisionTreeClassifier())
])
grid_cart = GridSearchCV(pipe_cart, params, scoring='accuracy', n_jobs=6, cv=5)
grid_cart.fit(X_train, y_train)
preds = grid_cart.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_cart.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
params = {
"cls__n_estimators": [100, 250, 500, 1000],
"cls__criterion": ["gini", "entropy"],
"cls__max_depth": [5, 7, None]
}
pipe_rf = Pipeline([
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', RandomForestClassifier())
])
grid_rf = GridSearchCV(pipe_rf, params, scoring='accuracy', n_jobs=6, cv=5)
grid_rf.fit(X_train, y_train)
preds = grid_rf.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_rf.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
params = {
"cls__C": [0.5, 1, 2, 5],
"cls__kernel": ['rbf', 'linear', 'sigmoid'],
}
pipe_svm = Pipeline([
('scale', StandardScaler()),
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', SVC())
])
grid_svm = GridSearchCV(pipe_svm, params, scoring='accuracy', n_jobs=6, cv=5)
grid_svm.fit(X_train, y_train)
preds = grid_svm.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_svm.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
cm = confusion_matrix(y_test, preds)
classes = ['metal', 'disco', 'classical', 'hiphop', 'jazz', 'country', 'pop', 'blues', 'reggae', 'rock']
plt.figure(figsize=(10,10))
plot_confusion_matrix(cm, classes, normalize=True)
from sklearn.externals import joblib
joblib.dump(grid_svm, "../models/pipe_svm.joblib")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preparation (keras.datasets)
Step2: Training
Step3: Plotting Network Performance Trend
Step4: After 100 epochs, we get a 98.8% validation accuracy.
|
<ASSISTANT_TASK:>
Python Code:
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
from keras.optimizers import SGD
nb_classes = 10
# FC@512+relu -> DropOut(0.2) -> FC@512+relu -> DropOut(0.2) -> FC@nb_classes+softmax
# ... your Code Here
# Decomment and Execute this cell to get the solution
# %load solutions/sol_221_1.py
from keras.datasets import mnist
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
X_train /= 255
X_test /= 255
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
network_history = model.fit(X_train, Y_train, batch_size=128,
epochs=100, verbose=1, validation_data=(X_test, Y_test))
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(network_history.history['loss'])
plt.plot(network_history.history['val_loss'])
plt.legend(['Training', 'Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(network_history.history['acc'])
plt.plot(network_history.history['val_acc'])
plt.legend(['Training', 'Validation'], loc='lower right')
# %load solutions/sol_221_2.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The problem with the squared error is that it makes small errors (< 1.0) even smaller and large errors (> 1.0) disproportionately larger.
Step2: Datasets with large outliers show heavy tails in the residual distribution when trained using MSE as shown above.
Step3: MAE does not over-prioritize outliers and it is easy to interpret
Step4: Particularly interesting is that for $\delta=1.0$ Huber loss is asymptotically similar to MAE and behaves as MSE around 0. For other values of $\delta$, Huber loss still increases linearly for large errors and $\delta$ controls simply the slope of this growth.
Step5: Conclusion
Step6: The distribution of the values has a "bump" for smaller values and then a long tail with a small number of very large values.
Step7: Note that the scale of the y-axis in the bottom has been changed to be able to see those small residuals!
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from earlier cells of the original notebook
import numpy as np
import matplotlib.pylab as pl
from scipy.optimize import minimize
from scipy.stats import gaussian_kde
from scipy.ndimage import median_filter
from sklearn.metrics import mean_squared_error, mean_squared_log_error
from sympy import symbols, plot, Abs, Piecewise, sqrt

a = symbols('a') #actual value
p = symbols('p') #predicted value
mse = lambda a,p: (a-p)**2
mse_plot = plot(mse(0, p),(p, -3, 3), show=False, legend=True, line_color="red")
mse_plot[0].label='MSE'
mse_plot.show()
# Let's see how to identify
np.random.seed(1000)
N = 10000
w = np.ones(5)
X = np.random.randn(N, 5)
y = X.dot(w)
# The last 1000 target values get some serious noise
y[-1000:] += np.random.normal(0,100.0, 1000)
mse = lambda x: mean_squared_error(y, X.dot(x))
x0 = np.zeros(X.shape[1]) #initial guess
result_w = minimize(mse, x0, tol=1e-5)['x'] #MSE result
residuals_mse = y - X.dot(result_w)
# Plot the distribution of the residuals
kde = gaussian_kde(residuals_mse)
dist_space = np.linspace(min(residuals_mse), max(residuals_mse), 100)
_ = pl.plot(dist_space, kde(dist_space))
mae = lambda a,p: Abs(a-p)
mae_plot = plot(mae(0, p),(p, -3, 3), show=False, line_color="blue")
mae_plot[0].label = "MAE"
mse_plot.extend(mae_plot)
mse_plot.show()
huber = lambda delta,a,p: Piecewise((0.5*(a-p)**2, Abs(a-p) <= delta), (delta*Abs(a-p)-0.5*delta**2, True))
huber1_plot = plot(huber(1.0, 0, p),(p, -3, 3), show=False, line_color="green")
huber1_plot[0].label = "Huber $\delta=1.0$"
huber2_plot = plot(huber(2.0, 0, p),(p, -3, 3), show=False, line_color="turquoise")
huber2_plot[0].label = "Huber $\delta=2.0$"
mse_plot.extend(huber1_plot)
mse_plot.extend(huber2_plot)
mse_plot.show()
pseudo_huber = lambda delta,a,p: delta**2*(sqrt(1+((a-p)/delta)**2)-1)
pseudo_huber1 = plot(pseudo_huber(1.0, 0,p),(p, -3, 3), show=False, legend=True, line_color='lightgreen')
pseudo_huber1[0].label='Pseudo-Huber $\delta=1.0$'
huber1_plot.extend(pseudo_huber1)
huber1_plot.legend = True
huber1_plot.show()
from sympy.stats import LogNormal, density
z = symbols('z')
log_norm_1 = LogNormal("x", 0, 1.0)
log_norm_2 = LogNormal("x", 0, 0.5)
log_norm_3 = LogNormal("x", 0, 0.25)
pdf_plot1= plot(density(log_norm_1)(z), (z,0.01, 5), show=False, line_color="blue")
pdf_plot2= plot(density(log_norm_2)(z), (z,0.01, 5), show=False, line_color="green")
pdf_plot3= plot(density(log_norm_3)(z), (z,0.01, 5), show=False, line_color="red")
pdf_plot1.extend(pdf_plot2)
pdf_plot1.extend(pdf_plot3)
pdf_plot1.show()
# We generate random data and train a simple linear model to minimize
# MSE and MSLE loss functions.
np.random.seed(1000)
N = 10000
w = np.ones(5)
X = np.random.randn(N, 5)
y = X.dot(w)
yexp = np.exp(y)
mse = lambda x: mean_squared_error(yexp, X.dot(x))
def msle(x, t=1):
pred = X.dot(x)+t
pred[pred<0] = 0
return mean_squared_log_error(yexp+t, pred)
x0 = np.zeros(X.shape[1]) #initial guess
res_mse = minimize(mse, x0, tol=1e-5) #MSE residuals
results = {} #MSLE residuals for different offsets
T = [1.0, 10.0, 100.0]
for t in T:
results[t] = minimize(lambda x: msle(x, t), x0, tol=1e-5)
sorted_idx = np.argsort(yexp)
label_sorted = yexp[sorted_idx]
def get_sorted_residuals(results):
residuals = yexp - X.dot(results['x'])
return residuals[sorted_idx]
f, (axTop, axMiddle, axBottom) = pl.subplots(3, 1, sharex=True, figsize=(10, 15))
# plot the density distribution of the target values on top
kde = gaussian_kde(yexp)
yrange = np.logspace(min(y), max(y), 100, base=np.exp(1))
axTop.plot(yrange, kde(yrange))
axTop.set_ylabel('density')
# plot the magnitudes of residuals for MSE and MSLE with different offsets
for t in T:
# use a median filter to smooth residuals and see the trend.
axMiddle.plot(label_sorted, median_filter(get_sorted_residuals(results[t]), size=50), label='t={}'.format(t))
axBottom.plot(label_sorted, median_filter(get_sorted_residuals(results[t]), size=50), label='t={}'.format(t))
axMiddle.plot(label_sorted, median_filter(get_sorted_residuals(res_mse), size=50), label='mse')
axBottom.plot(label_sorted, median_filter(get_sorted_residuals(res_mse), size=50), label='mse')
axBottom.set_xscale('log')
axBottom.set_xlabel('target')
axBottom.set_xticks(np.logspace(-3,3,7))
axMiddle.set_ylabel('residual')
axBottom.set_ylabel('residual')
axMiddle.set_ylim(800,1000)
axBottom.set_ylim(-20,75)
axMiddle.legend()
pl.show()
quantiles = np.linspace(0, 1.0, 11)
qvalues = np.quantile(yexp, np.linspace(0, 1.0, 11))
for q, val in zip(quantiles, qvalues):
print("Percentile {:.0f}\tvalue {:.2f}".format(q*100, val))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Influence of temperature
Step2: Mole fractions
Step3: Surface Production Rates
Step4: Comparison between temperatures
Step5: Effect of catalyst loading
|
<ASSISTANT_TASK:>
Python Code:
import os
import matplotlib as mpl
mpl.rcParams['figure.dpi'] = 500
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
ls T-950K/
mole_df = pd.read_csv(os.path.join('T-950K','gas_mole_tr.csv'))
msdot_df = pd.read_csv(os.path.join('T-950K','gas_msdot_tr.csv'))
surf_cov_df = pd.read_csv(os.path.join('T-950K','surf_cov_tr.csv'))
ax = plt.subplot(1, 1, 1)
ax.plot('t(s)', 'H2', color='r', data=mole_df, marker='^', markersize=0.5, label="$H_2$")
ax.plot('t(s)', 'N2', color='g', data=mole_df, marker='v', markersize=0.5, label="$N_2$")
ax.plot('t(s)', 'NH3', color='b', data=mole_df, marker='*', markersize=0.5, label="$NH_3$")
ax.set_xlabel('Time (s)')
ax.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax.set_ylabel('Mole Fraction')
#ax.set_xlim([0,50])
# Change time limit to 1
ax.set_xlim([0,0.04])
ax.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.show()
#plt.savefig('Ammonia_gas_species_molefrac.png', dpi=500)
ax3 = plt.subplot(1, 1, 1)
ax3.plot('t(s)', 'H2', color='r', data=msdot_df, marker='^', markersize=0.5, label="$H_2$")
ax3.plot('t(s)', 'N2', color='g', data=msdot_df, marker='v', markersize=0.5, label="$N_2$")
ax3.plot('t(s)', 'NH3', color='b', data=msdot_df, marker='*', markersize=0.5, label="$NH_3$")
ax3.set_xlabel('Time (s)')
ax3.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax3.set_ylabel('Surface Production Rate')
#ax3.set_xlim([0,50])
ax3.set_xlim([0,0.001])
ax3.set_ylim([-1e-6,1e-6])
ax3.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.show()
#plt.savefig('Ammonia_gas_species_production_rate.png', dpi=500)
print(surf_cov_df.columns)
ax1 = plt.subplot(1, 1, 1)
ax1.plot('t(s)', 'RU(S1)', color='r', data=surf_cov_df, marker='^', markersize=0.5, label="$Ru*$")
ax1.plot('t(s)', 'H(S1)', color='g', data=surf_cov_df, marker='v', markersize=0.5, label="$H*$")
ax1.plot('t(s)', 'N(S1)', color='b', data=surf_cov_df, marker='*', markersize=0.5, label="$N*$")
ax1.set_xlabel('Time (s)')
ax1.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax1.set_ylabel('Surface Coverages')
ax1.set_xlim([0,5])
#ax1.set_xlim([0,0.04])
ax1.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.show()
#plt.savefig('Ammonia_surf_cov.png', dpi=500)
mole_df_950 = pd.read_csv(os.path.join('T-950K','gas_mole_tr.csv'))
msdot_df_950 = pd.read_csv(os.path.join('T-950K','gas_msdot_tr.csv'))
surf_cov_df_950 = pd.read_csv(os.path.join('T-950K','surf_cov_tr.csv'))
mole_df_900 = pd.read_csv(os.path.join('T-900K','gas_mole_tr.csv'))
msdot_df_900 = pd.read_csv(os.path.join('T-900K','gas_msdot_tr.csv'))
surf_cov_df_900 = pd.read_csv(os.path.join('T-900K','surf_cov_tr.csv'))
ax = plt.subplot(1, 1, 1)
ax.plot('t(s)', 'H2', '-', color='r', data=mole_df_950, marker='^', markersize=0.5, label="$H_2$ 950K")
ax.plot('t(s)', 'H2', '--', color='r', data=mole_df_900, marker='^', markersize=0.5, label="$H_2$ 900K")
ax.plot('t(s)', 'N2', '-', color='g', data=mole_df_950, marker='v', markersize=0.5, label="$N_2$ 950K")
ax.plot('t(s)', 'N2', '--', color='g', data=mole_df_900, marker='v', markersize=0.5, label="$N_2$ 900K")
ax.plot('t(s)', 'NH3', '-', color='b', data=mole_df_950, marker='*', markersize=0.5, label="$NH_3$ 950K")
ax.plot('t(s)', 'NH3', '--', color='b', data=mole_df_900, marker='*', markersize=0.5, label="$NH_3$ 900K")
ax.set_xlabel('Time (s)')
ax.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax.set_ylabel('Mole Fraction')
#ax.set_xlim([0,50])
# Change time limit to 1
ax.set_xlim([0,5])
ax.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.savefig('Ammonia_molefrac_Tcomp.png', dpi=500)
ax = plt.subplot(1, 1, 1)
ax.plot('t(s)', 'RU(S1)', '-', color='r', data=surf_cov_df_950, marker='^', markersize=0.5, label="$Ru*$ 950K")
ax.plot('t(s)', 'RU(S1)', '--', color='r', data=surf_cov_df_900, marker='^', markersize=0.5, label="$Ru*$ 900K")
ax.plot('t(s)', 'H(S1)', '-', color='g', data=surf_cov_df_950, marker='v', markersize=0.5, label="$H*$ 950K")
ax.plot('t(s)', 'H(S1)', '--', color='g', data=surf_cov_df_900, marker='v', markersize=0.5, label="$H*$ 900K")
ax.plot('t(s)', 'N(S1)', '-', color='b', data=surf_cov_df_950, marker='*', markersize=0.5, label="$N*$ 950K")
ax.plot('t(s)', 'N(S1)', '--', color='b', data=surf_cov_df_900, marker='*', markersize=0.5, label="$N*$ 900K")
ax.set_xlabel('Time (s)')
ax.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax.set_ylabel('Surface Coverages')
#ax.set_xlim([0,0.5])
ax.set_xlim([0,0.04])
ax.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.savefig('Ammonia_surf_cov_Tcomp.png', dpi=500)
mole_df_10 = pd.read_csv(os.path.join('AbyV-1000','gas_mole_tr.csv'))
msdot_df_10 = pd.read_csv(os.path.join('AbyV-1000','gas_msdot_tr.csv'))
surf_cov_df_10 = pd.read_csv(os.path.join('AbyV-1000','surf_cov_tr.csv'))
mole_df_15 = pd.read_csv(os.path.join('AbyV-1500','gas_mole_tr.csv'))
msdot_df_15 = pd.read_csv(os.path.join('AbyV-1500','gas_msdot_tr.csv'))
surf_cov_df_15 = pd.read_csv(os.path.join('AbyV-1500','surf_cov_tr.csv'))
ax = plt.subplot(1, 1, 1)
ax.plot('t(s)', 'H2', '-', color='r', data=mole_df_15, marker='^', markersize=0.5, label="$H_2$ 1.5k")
ax.plot('t(s)', 'H2', '--', color='r', data=mole_df_10, marker='^', markersize=0.5, label="$H_2$ 1.0K")
ax.plot('t(s)', 'N2', '-', color='g', data=mole_df_15, marker='v', markersize=0.5, label="$N_2$ 1.5k")
ax.plot('t(s)', 'N2', '--', color='g', data=mole_df_10, marker='v', markersize=0.5, label="$N_2$ 1.0k")
ax.plot('t(s)', 'NH3', '-', color='b', data=mole_df_15, marker='*', markersize=0.5, label="$NH_3$ 1.5k")
ax.plot('t(s)', 'NH3', '--', color='b', data=mole_df_10, marker='*', markersize=0.5, label="$NH_3$ 1.0k")
ax.set_xlabel('Time (s)')
ax.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax.set_ylabel('Mole Fraction')
#ax.set_xlim([0,50])
# Change time limit to 1
ax.set_xlim([0,5])
ax.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.show()
#plt.savefig('Ammonia_molefrac_catload_cmp.png', dpi=500)
ax = plt.subplot(1, 1, 1)
ax.plot('t(s)', 'RU(S1)', '-', color='r', data=surf_cov_df_15, marker='^', markersize=0.5, label="$Ru*$ 1.5k")
ax.plot('t(s)', 'RU(S1)', '--', color='r', data=surf_cov_df_10, marker='^', markersize=0.5, label="$Ru*$ 1.0k")
ax.plot('t(s)', 'H(S1)', '-', color='g', data=surf_cov_df_15, marker='v', markersize=0.5, label="$H*$ 1.5k")
ax.plot('t(s)', 'H(S1)', '--', color='g', data=surf_cov_df_10, marker='v', markersize=0.5, label="$H*$ 1.0k")
ax.plot('t(s)', 'N(S1)', '-', color='b', data=surf_cov_df_15, marker='*', markersize=0.5, label="$N*$ 1.5k")
ax.plot('t(s)', 'N(S1)', '--', color='b', data=surf_cov_df_10, marker='*', markersize=0.5, label="$N*$ 1.0k")
ax.set_xlabel('Time (s)')
ax.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax.set_ylabel('Surface Coverages')
#ax.set_xlim([0,0.5])
ax.set_xlim([0,0.04])
ax.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.show()
#plt.savefig('Ammonia_surf_cov_catload-comp.png', dpi=500)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the database
Step2: Explore contents of the database file
Step3: The content of the raw IMU file
Step4: Implemented analysis methods
Step5: Try an analysis
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import nvg.ximu.ximudata as ximudata
%matplotlib notebook
reload(ximudata)
dbfilename = "/home/kjartan/Dropbox/Public/nvg201209.hdf5"
db = ximudata.NVGData(dbfilename);
dbfile = db.hdfFile;
print "Subjects: ", dbfile.keys()
print "Trials: ", dbfile['S5'].keys()
print "IMUs: ", dbfile['S5/B'].keys()
print "Attributes of example trial", dbfile['S5/B'].attrs.keys()
print "Shape of example IMU data entry", dbfile['S5/B/N'].shape
db.plot_imu_data()
print [s for s in dir(db) if s.startswith("get")]
res = db.apply_to_all_trials(db.get_RoM_angle_to_vertical, {'imu':'N'},
subjlist=['S2', 'S3'], triallist=['B', 'N'])
res
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download MNIST
Step3: Conversions
Step4: Sweet!
Step5: Okay, cool.
Step6: Alternate Conversion Method
|
<ASSISTANT_TASK:>
Python Code:
from pathlib import Path
import numpy as np
import os
from urllib.request import urlretrieve
import matplotlib.pyplot as plt
%matplotlib inline
def download_mnist(path=Path('data/mnist')):
os.makedirs(path, exist_ok=True)
urls = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz',]
for url in urls:
fname = url.split('/')[-1]
if not os.path.exists(path/fname): urlretrieve(url, path/fname)
import struct
import gzip
import numpy as np
def read_IDX(fname):
"""see: https://gist.github.com/tylerneylon/ce60e8a06e7506ac45788443f7269e40"""
with gzip.open(fname) as f:
zero, data_type, dims = struct.unpack('>HBB', f.read(4))
shape = tuple(struct.unpack('>I', f.read(4))[0] for d in range(dims))
return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)
# See: https://docs.python.org/3/library/struct.html#byte-order-size-and-alignment
# IDX format assumes MSB (big-endian) ordering used by non-Intel machines.
# Therefore if you're using an Intel cpu specify '>' when unpacking to let
# Python know to handle the header bytes correctly
download_mnist()
fname = Path('data/mnist/t10k-images-idx3-ubyte.gz')
tmp = read_IDX(fname)
tmp.shape
plt.imshow(tmp[0])
tmp = None
fname = Path('data/mnist/t10k-labels-idx1-ubyte.gz')
tmp = read_IDX(fname)
tmp.shape
tmp
set(tmp)
def read_IDX(fname):
with gzip.open(fname) as f:
zero, data_type, dims = struct.unpack('>HBB', f.read(4))
import pdb; pdb.set_trace()
shape = tuple(struct.unpack('>I', f.read(4))[0] for d in range(dims))
return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)
# fname = Path('data/mnist/t10k-labels-idx1-ubyte.gz')
fname = Path('data/mnist/t10k-images-idx3-ubyte.gz')
tmp = read_IDX(fname)
## see: https://medium.com/@mannasiladittya/converting-mnist-data-in-idx-format-to-python-numpy-array-5cb9126f99f1
# 1. imports
import struct
import numpy as np
# 2. open IDX file in readable binary mode
fname = {'images': 'train-images-idx3-ubyte', 'labels':'train-labels-idx1-ubyte'}
fimg = open('data/mnist/' + fname['images'], 'rb')  # note: expects the decompressed IDX file, not the .gz saved by download_mnist()
# 3. read magic number
fimg.seek(0)
magic = struct.unpack('>4B', fimg.read(4))
# 4. read dims of image dataset
nimg = struct.unpack('>I', fimg.read(4))[0] # num images
nrow= struct.unpack('>I', fimg.read(4))[0] # num rows
ncol= struct.unpack('>I', fimg.read(4))[0] # num cols
# 5. create numpy array
img_arr = np.zeros((nimg, nrow, ncol))
# 6. read img data
n_bytes_total = nimg * nrow * ncol * 1 # since ea. pixel is 1 byte
img_arr = 255 - np.asarray(struct.unpack('>' + 'B' * n_bytes_total, fimg.read(n_bytes_total))
).reshape((nimg, nrow, ncol))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Demo
Step2: A SampleSource can be sampled to get a TSTData (two-sample test data) object. A TSTData object is just an encapsulation of two $n\times d$ numpy arrays, representing the two samples. The TSTData object will be fed to a testing algorithm to conduct the two-sample test. All two-sample test algorithms are implemented to take in a TSTData object as its input.
Step3: Note that in practice, we have access to only two data matrices
Step4: Now that we have tst_data, let us randomly split it into two disjoint halves of equal size
Step5: Optimize the test locations (features) and the Gaussian width
Step6: The optimization procedure returns back 3 things
Step7: Here we see that the test rejects the null hypothesis with a very small p-value.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import freqopttest.util as util
import freqopttest.data as data
import freqopttest.kernel as kernel
import freqopttest.tst as tst
import freqopttest.glo as glo
import theano
# Full sample size
n = 600
# Consider two dimensions here
dim = 2
seed = 15
# mean shift
my = 1.0
ss = data.SSGaussMeanDiff(dim, my=my)
#ss = data.SSGaussVarDiff(dim)
#ss = data.SSSameGauss(dim)
#ss = data.SSBlobs()
# draw n points from P, and n points from Q.
# tst_data is an instance of TSTData
tst_data = ss.sample(n, seed=seed)
# Let's plot the data
x, y = tst_data.xy()
plt.plot(x[:, 0], x[:, 1], 'b.', markeredgecolor='b', label='$X\sim P$', alpha=0.5)
plt.plot(y[:, 0], y[:, 1], 'r.', markeredgecolor='r', label='$Y\sim Q$', alpha=0.5)
plt.axis('equal')
plt.legend()
tr, te = tst_data.split_tr_te(tr_proportion=0.5, seed=2)
# J = number of test locations (features)
J = 1
# significance level of the test
alpha = 0.01
# These are options to the optimizer. Many have default values.
# See the code for the full list of options.
op = {
'n_test_locs': J, # number of test locations to optimize
'max_iter': 200, # maximum number of gradient ascent iterations
'locs_step_size': 1.0, # step size for the test locations (features)
'gwidth_step_size': 0.1, # step size for the Gaussian width
'tol_fun': 1e-4, # stop if the objective does not increase more than this.
'seed': seed+5, # random seed
}
# optimize on the training set
test_locs, gwidth, info = tst.MeanEmbeddingTest.optimize_locs_width(tr, alpha, **op)
# Construct a MeanEmbeddingTest object with the best optimized test features,
# and optimized Gaussian width
met_opt = tst.MeanEmbeddingTest(test_locs, gwidth, alpha)
# Do the two-sample test on the test data te.
# The returned test_result is a dictionary.
test_result = met_opt.perform_test(te)
test_result
# Plot evolution of the test locations, Gaussian width
# trajectories of the Gaussian width
gwidths = info['gwidths']
fig, axs = plt.subplots(2, 2, figsize=(10, 9))
axs[0, 0].plot(gwidths)
axs[0, 0].set_xlabel('iteration')
axs[0, 0].set_ylabel('Gaussian width')
axs[0, 0].set_title('Gaussian width evolution')
# evolution of objective values
objs = info['obj_values']
axs[0, 1].plot(objs)
axs[0, 1].set_title('Objective $\lambda(T)$')
# trajectories of the test locations
# iters x J. X Coordinates of all test locations
locs = info['test_locs']
for coord in [0, 1]:
locs_d0 = locs[:, :, coord]
J = locs_d0.shape[1]
axs[1, coord].plot(locs_d0)
axs[1, coord].set_xlabel('iteration')
axs[1, coord].set_ylabel('index %d of test_locs'%(coord))
axs[1, coord].set_title('evolution of %d test locations'%J)
print('optimized width: %.3f'%gwidth)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prioritize
Step2: Note
Step3: Some special objects
Step4: http
Step5: The above object was put as SAT==2, but is at very close z to AnaK...? r~=17.6 in DECALS. Marla says this is because it's too far from the host, so not worth special-izing
Step6: Risa suggests
Step7: This is not actually quite on the center-of-light in DECALS... but it might be an HII region?
Step12: That one seems good
Step13: Make the master catalogs
Step14: For the remainder, generate and visually inspect them one at a time. Edit the file to remove any that are not good sky positions
Step15: Actually generate the master catalogs
Step16: Observations
Step18: This function parses a dump of all the image headers from 'fitsheader ccd_?/.fits'
Step19: Night 1
Step21: Make observing log
Step22: Night 2
Step24: Make observing log
Step25: Nights 3+
Step26: NSA 145729 is being treated a bit oddly because there might be two back-to-back-ish observations. So we're making one config in advance for both the p0 and p1 configurations, and making fld's for config 2 assuming either of those.
Step27: Logs for observations
Step28: Utilities for during the night
Step29: Inspect guider fields as needed
Step30: Planning
Step31: Find possible targets
Step32: Airmass chart
Step33: Notes
Step34: Note
Step35: Basically there after 2 observations of Narnia and OBrother. AnaK and Dune 1 or maybe 2, Gilg/Ody 1.
Step36: Log from last run
|
<ASSISTANT_TASK:>
Python Code:
nmstoget = ('Dune', 'AnaK', 'Odyssey', 'Gilgamesh', 'OBrother', 'Narnia', 'Catch22')
hosts_to_target = [h for h in hsd.values() if h.name in nmstoget]
assert len(hosts_to_target)==len(nmstoget)
new_targets = [hosts.NSAHost(145729), hosts.NSAHost(21709)]
hosts_to_target.extend(new_targets)
# now set to the latest base catalogs
for h in hsd.values() + hosts_to_target:
h.fnsdss = 'SAGADropbox/base_catalogs/base_sql_nsa{0}.fits.gz'.format(h.nsaid)
h._cached_sdss = None
# make sure the catalogs are loaded
for h in hosts_to_target:
h.get_sdss_catalog()
hosts_to_target
# these are the already-observed objects
# we use "h" for this only as a convenience because the method adds some useful bits that aren't in the raw catalogs
spectra = h.load_and_reprocess_sdss_catalog('SAGADropbox/data/saga_spectra_clean.fits.gz')
for h in hosts_to_target:
print('On host', h.name)
cat = h.get_sdss_catalog()
r0 = cat['r'] - cat['Ar']
cutbyrawmagcut = (r0<20.75)&(cat['r']>21)
if np.sum(cutbyrawmagcut)>0:
print('WARNING: ', np.sum(cutbyrawmagcut), 'objects have rmags in selection but are extincted too much for r>21')
pris = np.ones(len(cat))
# remove list
tokeep = cat['REMOVE']==-1
pris[~tokeep] = -cat['REMOVE'][~tokeep] # sets the REMOVE objects to -their remove value
remmskval = np.min(pris)-1
# remove anything in the remove list online but not in the catalog as remove
pris[~targeting.remove_targets_with_remlist(cat, h, maskonly=True, verbose='warning')&(pris>-1)] = remmskval
if np.sum(pris==remmskval) > 0:
print('Removed', np.sum(pris==remmskval), 'due to online remove list. Remmsk val:', remmskval)
photgood = ((cat['r'] < 21.) & # this is to cut the numbers a bit - more stringent mag cuts below
(cat['fibermag_r']<23) &
(cat['phot_sg']=='GALAXY'))
nearish = (cat['RHOST_KPC']<450) # again, to limit the numbers
pris[~photgood|~nearish] = 0
# pris ~-100 are removed due to already-observed
nospec = cat['ZQUALITY']<3
pris[~nospec] = -100-cat['ZQUALITY'][~nospec]
brighter = r0 < 20
fainter = (r0 < 20.75)&~brighter
goodcolors = targeting.colorcut_mask(cat, {'g-r': (None, 0.85, 2), 'r-i': (None, 0.55, 2)},deredden=True)
okcolors = targeting.colorcut_mask(cat, {'g-r': (None, 1.2), 'r-i': (None, 0.7)}, deredden=False)
inside = cat['RHOST_KPC']<300
pris[(pris>0)&~inside&fainter&goodcolors] = np.max(pris) + 1
pris[(pris>0)&~inside&brighter&goodcolors] = np.max(pris) + 1
pris[(pris>0)&inside&fainter&okcolors] = np.max(pris) + 1
pris[(pris>0)&inside&brighter&okcolors] = np.max(pris) + 1
pris[(pris>0)&inside&fainter&goodcolors] = np.max(pris) + 1
pris[(pris>0)&inside&brighter&goodcolors] = np.max(pris) + 1
#everything left is in pri 1
# this *shouldn't* be necessary, as ZQUALITY should be in the base catalog.
# But as a sanity check we look to see if anything in the spectral catalog is still being included
spec_this_host = spectra[spectra['HOST_NSAID']==h.nsaid]
spec_this_host = spec_this_host[np.in1d(spec_this_host['OBJID'], cat['OBJID'])]
zq = cat['ZQUALITY'].copy()
for i, zqi in zip(spec_this_host['OBJID'], spec_this_host['ZQUALITY']):
zq[cat['OBJID']==i] = zqi
if np.any(pris[zq>2]>=0):
print('POSSIBLE PROBLEM: Found some objects in spectrum list that are *not* claimed '
'as having spectra in the base catalogs. Setting them to -11x:', dict(Counter(pris[pris<-110])))
pris[zq>2] = -110 - zq[zq>2]
#de-duplicate
if len(np.unique(cat['OBJID'])) != len(cat):
_, idxs = np.unique(cat['OBJID'], return_index=True)
msk = np.ones_like(cat, dtype=bool)
msk[idxs] = 0
pris[msk] = -1000
print('WARNING: some duplicate objid found. Setting', np.sum(pris==-1000), 'dupes to pri=-1000')
cat['aat_pris'] = pris
if 'aat_pris_unjiggered' in cat.colnames:
cat.remove_column('aat_pris_unjiggered')
#informational
counter = Counter(pris)
print('Rank counts:')
for k in reversed(sorted(counter)):
print(int(k), ':', counter[k])
for h in new_targets:
print(h.name)
cat = h.get_sdss_catalog()
if 'aat_pris_unjiggered' in cat.colnames:
print('Already rejiggered', h, 'skipping (but still reporting stats)...')
pris = cat['aat_pris']
print(dict(Counter(pris[pris>0])))
print(dict(Counter(pris[pris<=0])))
continue
p = cat['PROBABILITY_CLASS1']
if np.all(p<0):
print('WARNING: host', h, 'does not have probs, so not re-jiggering based on ML probs')
continue
pris = np.array(cat['aat_pris_unjiggered' if 'aat_pris_unjiggered' in cat.colnames else 'aat_pris'])
goodcolorbright = pris == 7
goodcolorfaint = pris == 6
okcolor = (pris==5) | (pris==4)
pris[okcolor] = -0.75
pris[goodcolorfaint] = 4
pris[goodcolorbright] = 5
pris[goodcolorfaint&(p>.001)] = 6
pris[goodcolorbright&(p>.001)] = 7
print(dict(Counter(pris[pris>0])))
print(dict(Counter(pris[pris<=0])))
cat['aat_pris_unjiggered'] = cat['aat_pris']
cat['aat_pris'] = pris
special_objids = []
possible_anak_sat = SkyCoord(353.7229, 1.2064, unit=u.deg)
spectrascs = SkyCoord(spectra['ra'], spectra['dec'], unit=u.deg)
row = spectra[np.argsort(possible_anak_sat.separation(spectrascs))[0]]
special_objids.append(row['OBJID'])
row
del special_objids[special_objids.index(row['OBJID'])]
possible_anak_sat = SkyCoord(354.286, 0.211, unit=u.deg)
anakcat = hsd['AnaK'].get_sdss_catalog()
anakscs = SkyCoord(anakcat['ra'], anakcat['dec'], unit=u.deg)
seps = possible_anak_sat.separation(anakscs)
closest = np.argsort(seps)[0]
print(seps[closest].arcsec)
row = anakcat[closest]
special_objids.append(row['OBJID'])
row
possible_anak_sat = SkyCoord(354.527, 0.533964, unit=u.deg)
seps = possible_anak_sat.separation(anakscs)
closest = np.argsort(seps)[0]
print(seps[closest].arcsec)
row = anakcat[closest]
special_objids.append(row['OBJID'])
row
def find_risa_objs(stringfromrisa, h):
risa_tab = Table.read(stringfromrisa, delimiter='\t', format='ascii', names=['unk', 'ra', 'dec', 'mag', 'cand'])
risa_sc = SkyCoord(risa_tab['ra'], risa_tab['dec'], unit=u.deg)
cat = h.get_sdss_catalog()
catsc = SkyCoord(cat['ra'], cat['dec'], unit=u.deg)
idx, d2d, _ = risa_sc.match_to_catalog_sky(catsc)
assertmsg = '{} matched of {}: {}'.format(np.sum(d2d < .1*u.arcsec),len(d2d), d2d[d2d > .1*u.arcsec])
assert np.all(d2d < .1*u.arcsec), assertmsg
return cat['OBJID'][idx]
risa_145729 = """
-1 224.56410 -1.0730193 19.7973 true
-1 224.59164 -1.1261357 17.9597 true
-1 224.58605 -1.1340797 19.9805 true
-1 224.65696 -1.1021396 17.4972 true
-1 224.54478 -1.1645862 19.4057 true
-1 224.50349 -1.1783027 17.7666 true
-1 224.61258 -1.2283750 20.1190 true
-1 224.66071 -1.2407656 20.3448 true
-1 224.58210 -1.2891033 20.1278 true
-1 224.68795 -0.82928863 18.7276 true
-1 224.46354 -0.85993860 20.6228 true
-1 224.43907 -1.3290346 20.2419 true
-1 224.27041 -1.0663297 19.6382 true
-1 224.92796 -1.0868430 19.9441 true
-1 224.95218 -1.2046019 20.2506 true
-1 224.98659 -1.0996963 19.2848 true
-1 224.95028 -1.2134533 19.1667 true
-1 224.56810 -0.71035594 19.5400 true
-1 224.56710 -0.71155324 18.3361 true
-1 224.63475 -0.76637428 20.5220 true
-1 224.79342 -0.82424335 19.5245 true
-1 224.26293 -1.3190454 19.7427 true
-1 224.34037 -1.3851494 19.5061 true
-1 224.67776 -1.3717603 18.9769 true
-1 224.30819 -0.89372642 19.5476 true
-1 224.95888 -1.0081097 19.6524 true
-1 225.01145 -1.2106150 19.6745 true
-1 224.27946 -0.80572367 19.0886 true
-1 224.44473 -0.64135326 18.2103 true
-1 224.59702 -0.60626247 19.4427 true
-1 224.98059 -1.2775413 20.7447 true
-1 224.25056 -1.3830278 20.6792 true
-1 224.03729 -1.0589027 20.7401 true
-1 224.94320 -0.87332390 19.9586 true
-1 224.12169 -1.2418469 18.6920 true
-1 225.09967 -1.2117895 19.8792 true
-1 224.28313 -0.67401930 20.1558 true
-1 224.18769 -0.79627184 19.9399 true
-1 224.23305 -0.67032897 20.2131 true
-1 225.00922 -0.80628957 20.5866 true
-1 224.32848 -1.5812675 18.2125 true
-1 224.27623 -1.5292467 18.4006 true
-1 224.70055 -1.6463751 18.5479 true
-1 225.06682 -1.2727903 20.5982 true
-1 224.89664 -1.5217602 19.0338 true
-1 225.02588 -1.4044669 20.2629 true
-1 224.98083 -1.4368200 20.4261 true
-1 225.07035 -0.95870445 19.6174 true
-1 224.14144 -0.71374995 20.3682 true
-1 224.18156 -0.65458846 19.8804 true
-1 224.03748 -0.86010033 20.2428 true
-1 224.29784 -1.5985485 19.0072 true
-1 224.30080 -1.5957333 20.6291 true
-1 224.65269 -1.6814901 20.2405 true
-1 224.18598 -1.4982058 19.6720 true
-1 225.18215 -0.98714751 20.2422 true
"""[1:-1]
special_objids.extend(find_risa_objs(risa_145729, [h for h in hosts_to_target if h.nsaid==145729][0]))
risa_obrother = """
1 335.97281 -3.4295662 17.7332 true
1 335.91314 -3.5404510 19.7043 true
-1 335.77781 -3.4459616 20.1615 true
-1 335.81490 -3.6025596 20.5123 true
1 336.04145 -3.2204987 20.5081 true
-1 336.09493 -3.4649021 19.7341 true
1 335.99401 -3.7007769 20.5931 true
1 336.12273 -3.5101925 20.4868 true
1 335.72556 -3.1674595 20.6372 true
1 335.84376 -3.0261104 20.7444 true
-1 336.23396 -3.1875586 20.7117 true
1 335.55249 -3.6052065 20.0624 true
1 335.65592 -3.6558837 20.5213 true
-1 335.57983 -3.6963397 19.3788 true
-1 336.30042 -3.4636766 19.8654 true
"""[1:-1]
special_objids.extend(find_risa_objs(risa_obrother, hsd['OBrother']))
risa_catch22="""
-1 348.62149 4.5071708 17.7932 true
-1 348.73347 4.5865011 19.4766 true
-1 348.67493 4.6123235 20.3472 true
-1 348.72361 4.4495699 20.6729 true
-1 348.72881 4.4323062 19.4804 true
-1 348.55899 4.5900220 20.2576 true
-1 348.64485 4.4040044 17.6392 true
-1 348.59640 4.3492465 20.6181 true
-1 348.68132 4.3095517 20.6868 true
-1 348.68817 4.3020869 20.7035 true
-1 348.89822 4.4892740 18.9281 true
-1 348.43132 4.7245873 19.3515 true
-1 348.51966 4.7464873 19.3880 true
-1 348.39920 4.5666321 18.2252 true
-1 348.99115 4.5918658 20.7471 true
-1 348.31622 4.3290159 19.6409 true
-1 348.87290 4.8064919 18.9324 true
-1 348.63961 4.2011104 20.5785 true
-1 348.98746 4.3261837 18.3678 true
-1 348.40085 4.8681252 17.4557 true
-1 348.64976 4.8775596 18.4048 true
-1 348.82739 4.9078081 19.3681 true
-1 348.68761 4.9226928 17.8673 true
-1 348.85846 4.8346772 19.6157 true
"""[1:-1]
special_objids.extend(find_risa_objs(risa_catch22, hsd['Catch22']))
risa_narnia="""
"""[1:-1]
special_objids.extend(find_risa_objs(risa_narnia, hsd['Narnia']))
for soid in special_objids:
for h in hosts_to_target:
cat = h.get_sdss_catalog()
if soid in cat['OBJID']:
print('Found oid', soid, 'in', h.name, 'so setting to pri=9')
pris = cat['aat_pris']
if np.any(pris[np.in1d(cat['OBJID'], soid)]<1):
print('WARNING: ', np.sum(pris[np.in1d(cat['OBJID'], soid)]<1), 'special objects in',
h.name, 'are pri<0... skipping')
cat['aat_pris'][np.in1d(cat['OBJID'], soid)&(pris>0)] = 9
break
else:
print('Could not find oid', soid, 'anywhere')
# commented because it should only be run once
!ls aat_targets_jun2015/*_sky.dat
#!cp aat_targets_jun2015/*_sky.dat aat_targets_jul2016/
# Identify sky regions for each host and write out to separate files
from os.path import exists
for h in hosts_to_target:
outfn = 'aat_targets_jul2016/' + h.name.replace(' ','_') + '_sky.dat'
if exists(outfn):
print(outfn, 'exists, not overwriting')
else:
try:
h.get_usnob_catalog()
usnocat = None
except IOError:
# this is currently hanging as the usno server is down...
#print('Downloading USNO B1 catalog for', h.name)
#h.usnob_environs_query(dl=True)
#usnocat = None
usnocat = False
aat.select_sky_positions(h, nsky=100, outfn=outfn, rad=1*u.deg, usnocat=usnocat)
aat.imagelist_fld_targets(outfn, ttype='sky', n=np.inf)
!subl $outfn
raw_input('Wrote ' + outfn + '. Press enter to continue onto the next one once '
'you remove bad entries from this file.')
for h in hosts_to_target:
cat = h.get_sdss_catalog()
pris = cat['aat_pris']
guides = aat.select_guide_stars_sdss(cat)
calibs = aat.select_flux_stars(cat, onlyoutside=300*u.kpc)
skyradec = 'aat_targets_jul2016/{0}_sky.dat'.format(h.name)
aat.produce_master_fld(h, datetime.date(2016, 7, 28), cat[pris>0], pris=pris[pris>0].astype(int),
guidestars=guides, fluxstars=calibs,skyradec=skyradec,
outfn='aat_targets_jul2016/{}_master.fld'.format(h.name),
randomizeorder=True, inclhost=False)
def do_subsampling(h, finum, n_in_pri, **kwargs):
assert h in hosts_to_target
n_in_pri = dict(n_in_pri) #copy just in case
for i in range(9):
if i+1 not in n_in_pri:
n_in_pri[i+1] = np.inf
fnbase = 'aat_targets_jul2016/' + h.name
fnmaster = fnbase + '_master.fld'
fnconfig = kwargs.pop('fnconfig', fnbase + '_{0}.fld'.format(finum))
print('Writing', fnconfig, 'from master', fnmaster)
kwargs.setdefault('nflux', 5)
kwargs.setdefault('nguides', 30)
kwargs.setdefault('fieldname', str(finum))
if 'listorem' not in kwargs:
listorem = []
for i in range(1, finum):
globpat = fnbase + '_' + str(i) + '*.lis'
listorem.extend(glob(globpat))
assertmsg = 'Only got ' + str(len(listorem)) +' .lis file(s):'+str(listorem)+', expected ' + str(finum-1)
assert len(listorem) == (finum-1), assertmsg
kwargs['listorem'] = listorem
aat.subsample_from_master_fld(fnmaster, fnconfig, n_in_pri, **kwargs)
return fnconfig
def make_logtab(logs, skip_res=[]):
"""Generates a table with the AAT observing logs.
``logs`` can be a filename or list of lines from fitsheader
"""
logtab = Table()
logtab['num'] = [1]
logtab['ccd'] = [2]
logtab['UTDATE'] = ['2015:06:19']
logtab['UTSTART'] = ['08:56:19']
logtab['OBJECT'] = ['a'*100]
logtab['TOTALEXP'] = ['a'*20]
logtab['RUNCMD'] = ['a'*20]
logtab['GRATID'] = ['a'*20]
logtab['SOURCE'] = ['plate 0 ']
logtab['MEANRA'] = [1.1]
logtab['MEANDEC'] = [1.1]
logtab['CFG_FILE'] = ['Odyssey_1p0xxxxxxxxxx.sds']
logtab = logtab[1:]
def add_logcol(accumulated_lines):
l = accumulated_lines[0]
hdunum = int(l.split()[2])
hdupath = l.split()[4][:-1]
if hdunum != 0:
return
for rex in skip_res:
if re.match(rex, hdupath):
print('Skipping', hdupath, 'because it has', rex)
return
items = {'num': int(hdupath.split('/')[-1].split('.')[0][5:]),
'ccd': int(hdupath.split('/')[-2][4:])}
for l in accumulated_lines[1:]:
if '=' in l:
nm, data = l.split('=')
nm = nm.strip()
data = data.split('/')[:2]
if len(data)> 1:
comment = data[1]
data = data[0].replace("'", '').strip()
if nm in logtab.colnames:
items[nm] = data
logtab.add_row(items)
accumulated_lines = None
if isinstance(logs, basestring):
with open(logs) as f:
loglines = list(f)
else:
loglines = logs
for l in loglines:
if l.startswith('# HDU'):
if accumulated_lines is not None:
add_logcol(accumulated_lines)
accumulated_lines = [l]
elif l.strip() == '':
continue
elif accumulated_lines is not None:
accumulated_lines.append(l)
return logtab
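`make_logtab` splits each header card on `=` and then on `/` to separate the value from its trailing comment. As an illustration of that parsing rule in isolation (the card text below is a made-up example, not taken from the actual logs):

```python
def parse_card(line):
    # Split a FITS-header-style card "KEY = 'value' / comment"
    # into (name, value), dropping the trailing comment.
    name, rest = line.split('=', 1)
    value = rest.split('/')[0]            # discard "/ comment"
    return name.strip(), value.replace("'", '').strip()

print(parse_card("OBJECT  = 'Dune_1  '           / Object name"))
```

Unlike the loop above, `split('=', 1)` also guards against a second `=` appearing inside the comment text.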
sshtarget = 'visitor3@aatlxa'
fnconfig = do_subsampling(hsd['Dune'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Odyssey'], 1, {1:200, 2:200, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['OBrother'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Catch22'], 1, {1:100, 2:50, 3:100})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['AnaK'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
# use this to get the log data from the remote machine
rem_fitsheader_loc = '/home/visitor3/miniconda2/bin/fitsheader'
datadir = '/data_lxy/aatobs/OptDet_data/160728'
fitsheader_output = !ssh $sshtarget "$rem_fitsheader_loc $datadir/ccd_?/*.fits"
# use this to use the local file to make the log
fitsheader_output = !fitsheader aat_data/28jul2016/ccd_?/*.fits
logtab = make_logtab(fitsheader_output, skip_res=['.*temp.*'])
logtab[logtab['ccd']==1].show_in_notebook(display_length=20)
comments_jul28 = """
* At start of night weather clear, but high humidity and possible fog
* The Dune_1 observations have the blue side centered 20 angstroms further to the red than the remaining
observations. The initial setup provides a bit more overlap but after inspecting a bit we concluded that it
would be better to try to make sure 3727 falls into the image at the blue side
* start of Dune_1 : seeing ~1.5-2"
* start of Odyssey_1 : seeing ~2-3"
* start of OBrother_1: seeing ~4-5"
* near end of OBrother_1: seeing ~2.5". Sudden improvement in weather, humidity fell/fog disappeared,
* start of Catch22_1: seeing ~3"
* start of AnaK_1: seeing ~1.9"
"""[1:-1]
display.Markdown(comments_jul28)
logfn = 'aat_data/28jul2016/log_jul28'
logtab.write(logfn,format='ascii')
with open(logfn, 'a') as f:
f.write('\nComments:\n')
f.write(comments_jul28)
fnconfig = do_subsampling(hsd['Dune'], 2, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Gilgamesh'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['OBrother'], 2, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Catch22'], 2, {1:50, 2:50, 3:200})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Narnia'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
# use this to use the local file to make the log
fitsheader_output = !fitsheader aat_data/29jul2016/ccd_?/*.fits
logtab = make_logtab(fitsheader_output, skip_res=['.*temp.*'])
logtab[logtab['ccd']==1].show_in_notebook(display_length=20)
comments_jul29 = """
* start of Dune_2 : seeing ~1.3"
* start of Gilgamesh_1 : seeing ~1.6"
* start of Odyssey_1a : seeing ~1.4"
* start of OBrother_2 : seeing ~1.7"
* temperature rose dramatically near end of OBrother_2 - might lead to variable seeing
* start of Catch22_2: seeing ~2.3"
* start of Narnia_1: seeing ~1.5"
"""[1:-1]
display.Markdown(comments_jul29)
logfn = 'aat_data/29jul2016/log_jul29'
logtab.write(logfn,format='ascii')
with open(logfn, 'a') as f:
f.write('\nComments:\n')
f.write(comments_jul29)
hs = [h for h in hosts_to_target if h.nsaid==145729]
assert len(hs) == 1
fnconfig = do_subsampling(hs[0], 1, {1:25, 2:25, 3:50, 4:100})
!scp $fnconfig "$sshtarget:configure/"
hs = [h for h in hosts_to_target if h.nsaid==145729]
assert len(hs) == 1
fnconfig = do_subsampling(hs[0], 2, {1:25, 2:25, 3:50, 4:100},
listorem=['aat_targets_jul2016/NSA145729_1_p0.lis'],
fnconfig='aat_targets_jul2016/NSA145729_2_after1p0.fld')
!scp $fnconfig "$sshtarget:configure/"
hs = [h for h in hosts_to_target if h.nsaid==145729]
assert len(hs) == 1
fnconfig = do_subsampling(hs[0], 2, {1:25, 2:25, 3:50, 4:100},
listorem=['aat_targets_jul2016/NSA145729_1_p1.lis'],
fnconfig='aat_targets_jul2016/NSA145729_2_after1p1.fld')
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Catch22'], 3, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Narnia'], 2, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
hs = [h for h in hosts_to_target if h.nsaid==21709]
assert len(hs) == 1
fnconfig = do_subsampling(hs[0], 1, {1:25, 2:25, 3:50, 4:100})
!scp $fnconfig "$sshtarget:configure/"
# fitsheader run on the output downloaded to another computer
with open('/Users/erik/tmp/fitsheader_jul30') as f:
fitsheader_output = list(f)
logtab = make_logtab(fitsheader_output, skip_res=['.*temp.*'])
logtab[logtab['ccd']==1].show_in_notebook(display_length=20)
logfn = 'aat_data/30jul2016/log_jul30'
logtab.write(logfn,format='ascii')
!scp "$sshtarget:configure/*.sds" aat_targets_jul2016/
# *_* because otherwise the collision matrix list comes through too
!scp "$sshtarget:configure/*_*.lis" aat_targets_jul2016/
listab, lisscs, lisheader = aat.load_lis_file('aat_targets_jul2016/Narnia_1_p1.lis')
guidemsk = listab['codes']=='F'
names = ['{0[ids]}_f#={0[fibnums]}'.format(row) for row in listab[guidemsk]]
print(targeting.sampled_imagelist(lisscs[guidemsk], None, names=names))
import astroplan
#if online
ufo = urllib2.urlopen('https://docs.google.com/spreadsheet/ccc?key=1b3k2eyFjHFDtmHce1xi6JKuj3ATOWYduTBFftx5oPp8&output=csv')
hosttab = QTable.read(ufo.read(), format='csv')
ufo.close()
#if offline
#hosttab = Table.read('SAGADropbox/hosts/host_catalog_flag0.csv')
hostscs = SkyCoord(u.Quantity(hosttab['RA'], u.deg),
u.Quantity(hosttab['Dec'], u.deg),
distance=u.Quantity(hosttab['distance'], u.Mpc))
allspec = Table.read('/Users/erik/Dropbox/SAGA/data/allspectaken_v5.fits.gz')
sagaobsed = allspec[~((allspec['TELNAME']=='NSA')|
(allspec['TELNAME']=='SDSS')|
(allspec['TELNAME']=='GAMA'))]
sagaobsed_nsaids = np.unique(sagaobsed['HOST_NSAID'])
#UTC time from 8:35-19:35 is AAT 18 deg window
nighttimes = Time('2016-7-28 8:45:00') + np.arange(12)*u.hour
aao = EarthLocation(lon='149:3:57.9', lat='-31:16:37.3')
aao_frame = AltAz(obstime=nighttimes, location=aao)
sunaao = get_sun(nighttimes).transform_to(aao_frame)
np.max(sunaao.alt.value)
seczs = []
for sc in hostscs:
az = sc.transform_to(aao_frame)
seczs.append(az.secz)
seczs = np.array(seczs)
hrsvis = np.sum((1<=seczs)&(seczs<1.75),axis=1)
visenough = hrsvis>2
aaoobs = astroplan.Observer(aao, 'Australia/NSW')
midnight = aaoobs.midnight(Time('2016-7-28'))
up_times = {}
hoststoshow = hosttab[visenough]
hosts_to_target_nsaids = []
for h in hosts_to_target:
hosts_to_target_nsaids.append(h.nsaid)
if h.nsaid not in hoststoshow['NSAID']:
print('adding', h)
hoststoshow.add_row(None)
hoststoshow['NSAID'][-1] = h.nsaid
hoststoshow['RA'][-1] = h.ra
hoststoshow['Dec'][-1] = h.dec
with open('aat_targets_jul2016/aattargs_iobserve.dat', 'w') as f:
for host in hoststoshow:
already_obs = host['NSAID'] in sagaobsed_nsaids
name = 'NSA'+str(host['NSAID'])
for nm, val in hsd.items():
if val.nsaid == host['NSAID']:
name = nm
if nm.startswith('NSA'):
name = name+'_obsed'
break
f.write(name.replace(' ','_'))
if already_obs:
f.write('-observed')
f.write(' ')
f.write(str(host['RA']) + ' ')
f.write(str(host['Dec']) + '\n')
targ = astroplan.FixedTarget(SkyCoord(host['RA'], host['Dec'], unit=u.deg), name)
tpl = (name, host['NSAID'], host['RA'], host['Dec'])
transit = aaoobs.target_meridian_transit_time(midnight, targ)
up_times[transit.jd] = tpl
timestoplot = transit + np.linspace(-6, 6, 100)*u.hour
taa = aaoobs.altaz(timestoplot, targ)
msk = taa.secz >=1
color = 'g' if already_obs else 'k'
color = 'r' if host['NSAID'] in hosts_to_target_nsaids else color
plt.plot(timestoplot.plot_date[msk], taa.secz[msk], c=color)
plt.text(transit.plot_date, aaoobs.altaz(transit, targ).secz, name, ha='center', color=color)
t0 = aaoobs.sun_rise_time(midnight, 'previous')
t1 = aaoobs.sun_set_time(midnight, 'previous')
t2 = aaoobs.twilight_evening_civil(midnight, 'previous')
t3 = aaoobs.twilight_evening_nautical(midnight, 'previous')
t4 = aaoobs.twilight_evening_astronomical(midnight, 'previous')
t5 = aaoobs.twilight_morning_astronomical(midnight, 'next')
t6 = aaoobs.twilight_morning_nautical(midnight, 'next')
t7 = aaoobs.twilight_morning_civil(midnight, 'next')
t8 = aaoobs.sun_rise_time(midnight, 'next')
t9 = aaoobs.sun_set_time(midnight, 'next')
plt.fill_between([t0.plot_date,t1.plot_date],1,3, lw=0, facecolor='y', alpha=.9)
plt.fill_between([t1.plot_date,t2.plot_date],1,3, lw=0, facecolor='y', alpha=.75)
plt.fill_between([t2.plot_date,t3.plot_date],1,3, lw=0, facecolor='y', alpha=.5)
plt.fill_between([t3.plot_date,t4.plot_date],1,3, lw=0, facecolor='y', alpha=.25)
plt.fill_between([t5.plot_date,t6.plot_date],1,3, lw=0, facecolor='y', alpha=.25)
plt.fill_between([t6.plot_date,t7.plot_date],1,3, lw=0, facecolor='y', alpha=.5)
plt.fill_between([t7.plot_date,t8.plot_date],1,3, lw=0, facecolor='y', alpha=.75)
plt.fill_between([t8.plot_date,t9.plot_date],1,3, lw=0, facecolor='y', alpha=.9)
plt.axvline(midnight.plot_date, ls=':', c='k')
plt.gca().xaxis_date(aaoobs.timezone)
plt.xlim(t1.plot_date-.05, t8.plot_date+.05)
plt.axhline(1/np.cos(65*u.deg), c='k', ls='--') #~AAT limit
plt.ylim(2.5,1)
plt.tight_layout()
#NSA urls
for jd in sorted(up_times):
name, nsaid, ra, dec = up_times[jd]
print(name, 'http://www.nsatlas.org/getAtlas.html?search=nsaid&nsaID={}&submit_form=Submit'.format(nsaid))
# DECALS URLs
for jd in sorted(up_times):
name, nsaid, ra, dec = up_times[jd]
print(name, 'http://legacysurvey.org/viewer?ra={}&dec={}&zoom=8'.format(ra, dec))
sagaobsed[sagaobsed['HOST_NSAID']==150307].show_in_notebook()
for h in hosts_to_target:
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 1
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
print('')
for h in hosts_to_target:
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 2
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
print('')
for h in hosts_to_target:
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 3
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
print('')
h = hsd['Catch22']
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 2
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
h = hsd['Catch22']
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 3
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
h = hsd['Catch22']
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 4
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
logtab2015 = make_logtab('aat_targets_jul2016/alljun_2015_aaomega_headers')
logtab2015[(logtab2015['ccd']==1)].show_in_notebook(display_length=20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then instantiate a problem class; we'll name it "My LP Problem", and since we're looking for an optimal maximum we use LpMaximize
Step2: We then model our decision variables using the LpVariable class. In our example, x had a lower bound of 0 and y had a lower bound of 2.
Step3: The objective function and constraints are added using the += operator to our model.
Step4: We have now constructed our problem and can have a look at it.
Step5: PuLP supports open source linear programming solvers such as CBC and GLPK, as well as commercial solvers such as Gurobi and IBM's CPLEX.
Step6: We have also checked the status of the solver, there are 5 status codes
|
<ASSISTANT_TASK:>
Python Code:
import pulp
my_lp_problem = pulp.LpProblem("My LP Problem", pulp.LpMaximize)
x = pulp.LpVariable('x', lowBound=0, cat='Continuous')
y = pulp.LpVariable('y', lowBound=2, cat='Continuous')
# Objective function
my_lp_problem += 4 * x + 3 * y, "Z"
# Constraints
my_lp_problem += 2 * y <= 25 - x
my_lp_problem += 4 * y >= 2 * x - 8
my_lp_problem += y <= 2 * x - 5
my_lp_problem
my_lp_problem.solve()
pulp.LpStatus[my_lp_problem.status]
for variable in my_lp_problem.variables():
print("{} = {}".format(variable.name, variable.varValue))
print(pulp.value(my_lp_problem.objective))
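The description above mentions five solver status codes. They are mirrored here as a plain dict so this sketch runs even without PuLP installed; in PuLP itself this lookup table is `pulp.LpStatus`:

```python
# Solver status integers and their labels (mirrors pulp.LpStatus).
LP_STATUS = {
    1: 'Optimal',      # an optimal solution was found
    0: 'Not Solved',   # solve() has not produced a result yet
    -1: 'Infeasible',  # the constraints cannot all be satisfied
    -2: 'Unbounded',   # the objective can grow without limit
    -3: 'Undefined',   # the solver ended in an undefined state
}
print(LP_STATUS[1])
```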
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That's not very helpful to look at. One billionaire ended up squeezing everybody else into a single line in my histogram. Plus it skewed my mean income significantly
Step2: It's important to dig into what is causing your outliers, and understand where they are coming from. You also need to think about whether removing them is a valid thing to do, given the spirit of what it is you're trying to analyze. If I know I want to understand more about the incomes of "typical Americans", filtering out billionaires seems like a legitimate thing to do.
Step3: That looks better. And, our mean is more, well, meaningful now as well
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
incomes = np.random.normal(27000, 15000, 10000)
incomes = np.append(incomes, [1000000000])
import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show()
incomes.mean()
def reject_outliers(data):
u = np.median(data)
s = np.std(data)
filtered = [e for e in data if (u - 2 * s < e < u + 2 * s)]
return filtered
filtered = reject_outliers(incomes)
plt.hist(filtered, 50)
plt.show()
np.mean(filtered)
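`reject_outliers` keeps points within two standard deviations of the median; note that the standard deviation itself is still inflated by the outlier. A common alternative (an illustrative variant, not part of the original exercise) is Tukey's interquartile-range rule, which is robust to the outlier:

```python
import numpy as np

def reject_outliers_iqr(data, k=1.5):
    # Keep points inside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences).
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    return [e for e in data if q1 - k * iqr <= e <= q3 + k * iqr]

sample = list(range(10)) + [1000]     # one extreme value
print(reject_outliers_iqr(sample))    # the 1000 is dropped
```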
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We display the historical series
Step2: Regression between the inflation rate and the GDP growth rate
Step3: Regression between the interest rate and inflation
Step4: Note that inflation correlates positively with the interest rate, but this is explained by the fact that rates are raised in response to inflation
Step5: Regression between the interest rate and inflation in the following year.
Step6: Sanity check
Step7: Note that there is no statistical evidence that higher interest rates caused a reduction in inflation in the following year!
|
<ASSISTANT_TASK:>
Python Code:
df = pd.read_csv("../data/brazil_gdp_inflation_ir.csv", dtype={'Time':np.float64, 'GDP':np.float64, 'Inflation':np.float64, 'Interest':np.float64}, index_col="Time")
df['RInterest'] = df.Interest - df.Inflation
brazil = df.sort()
brazil
filter = brazil.Inflation < 11.0
sns.set(style="darkgrid")
plt.title(u"GDP vs inflation, up to the 10% range")
ax = sns.regplot(brazil[filter].Inflation, brazil[filter].GDP, robust=True, n_boot=500)
ax.set_xlabel(u"Inflation")
ax.set_ylabel(u"GDP")
sns.coefplot("GDP ~ Inflation", data=brazil[filter])
sns.residplot(brazil[brazil.Inflation < 11.0].Inflation, brazil[brazil.Inflation < 11.0].GDP, lowess=True)
filter = brazil.Inflation < 11.0
color = sns.color_palette()[2]
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
f.set_figwidth(12)
sns.regplot(brazil[filter].Interest, brazil[filter].Inflation, robust=True, n_boot=500, ax=ax1)
sns.regplot(brazil[filter].RInterest, brazil[filter].Inflation, robust=True, n_boot=500, ax=ax2).set_ylabel("")
ax1.set_xlabel(u"Nominal interest rate")
ax2.set_xlabel(u"Real interest rate")
ax1.set_ylabel(u"Inflation")
f.tight_layout()
f = plt.figure()
filter = brazil.index > 1998
l1, = plt.plot(brazil[filter].index, brazil[filter].Interest, figure=f, label=u"Nominal interest rate")
l2, = plt.plot(brazil[filter].index, brazil[filter].Inflation, figure=f, label=u"Inflation")
l3, = plt.plot(brazil[filter].index, brazil[filter].RInterest, figure=f, label=u"Real interest rate")
f.legend(handles=[l1,l2,l3], labels=[u"Nominal interest rate", u"Inflation", u"Real interest rate"])
plt.title(u"Historical series under analysis")
#f.tight_layout()
brazil[filter].describe()
del filter
brazil_ir_s1 = brazil.copy()
brazil_ir_s1.Interest = brazil.Interest.shift(1)
brazil_ir_s1.RInterest = brazil.RInterest.shift(1)
brazil_ir_s1
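`shift(1)` pairs each year's inflation with the previous year's interest rate, leaving NaN in the first row. The same lagging idea in bare NumPy, with made-up numbers:

```python
import numpy as np

def lag(series, k=1):
    # Shift values forward by k positions, padding the front with NaN
    # (the NumPy analogue of pandas Series.shift(k)).
    x = np.asarray(series, dtype=float)
    return np.concatenate([np.full(k, np.nan), x[:-k]])

rates = [10.0, 12.0, 9.0, 11.0]
print(lag(rates))   # first entry is NaN; the rest are shifted by one year
```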
color = sns.color_palette()[2]
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
f.set_figwidth(12)
sns.regplot(brazil_ir_s1.Interest, brazil_ir_s1.Inflation, robust=True, n_boot=500, ax=ax1)
sns.regplot(brazil_ir_s1.RInterest, brazil_ir_s1.Inflation, robust=True, n_boot=500, ax=ax2).set_ylabel("")
ax1.set_xlabel(u"Previous year's nominal interest rate")
ax2.set_xlabel(u"Previous year's real interest rate")
ax1.set_ylabel(u"Inflation")
f.tight_layout()
sns.coefplot("Inflation ~ RInterest", data=brazil_ir_s1, intercept=False)
model = ols("Inflation ~ RInterest + 1", brazil_ir_s1).fit()
model.summary()
sns.residplot(brazil_ir_s1.RInterest, brazil_ir_s1.Inflation)
model = rlm("Inflation ~ RInterest + 1", brazil_ir_s1).fit()
model.summary2()
brazil_ir_s1.describe()
brazil_ir_s2 = brazil.copy()
brazil_ir_s2.Interest = brazil.Interest.shift(2)
brazil_ir_s2.RInterest = brazil.RInterest.shift(2)
sns.set(style="darkgrid")
color = sns.color_palette()[2]
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
f.set_figwidth(12)
sns.regplot(brazil_ir_s2.Interest, brazil_ir_s2.Inflation, robust=True, n_boot=500, ax=ax1)
sns.regplot(brazil_ir_s2.RInterest, brazil_ir_s2.Inflation, robust=True, n_boot=500, ax=ax2).set_ylabel("")
ax1.set_xlabel(u"Nominal interest rate from two years earlier")
ax2.set_xlabel(u"Real interest rate from two years earlier")
ax1.set_ylabel(u"Inflation")
f.tight_layout()
sns.coefplot("Inflation ~ RInterest", data=brazil_ir_s2)
model = ols("Inflation ~ RInterest + 1", brazil_ir_s2).fit()
model.summary()
sns.residplot(brazil_ir_s2.RInterest, brazil_ir_s2.Inflation)
model = rlm("Inflation ~ RInterest + 1", brazil_ir_s2).fit()
model.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kind of a line, kind of straight
Step3:
Step4: <img src="http
|
<ASSISTANT_TASK:>
Python Code:
plt.plot(h1_phi - h2_phi[:3], h2_phi[:3])
plt.show()
from sklearn.linear_model import LinearRegression
LR = LinearRegression()
LR.fit(h2_phi[:3].reshape(3, 1), (h1_phi - h2_phi[:3]).reshape(3, 1))
h1_phi_4_predicted_delta = LR.predict([[h2_phi[3]]])
h1_phi_4_predicted = h2_phi[3] + h1_phi_4_predicted_delta
h1_phi_4_predicted # fitted the difference from the optimal curve, then extrapolated
h1_phi_new = np.array(list(h1_phi) + [h1_phi_4_predicted])
h1_phi_new
len(ne_phi), len(ne_lambda)
len(hg_phi), len(hg_lambda)
X = np.hstack([ne_phi, hg_phi])
y = np.hstack([ne_lambda, hg_lambda])
f = interp1d(X, y, kind='quadratic')
plt.figure(figsize=(15, 15))
plt.plot(ne_phi, ne_lambda)
plt.scatter(ne_phi, ne_lambda)
plt.plot(hg_phi, hg_lambda)
plt.scatter(hg_phi, hg_lambda)
#plt.plot(h1_phi, h1_lambda)
#plt.scatter(h1_phi, h1_lambda)
grid = np.linspace(X.min(), X.max(), 1000)
plt.plot(grid, f(grid), color="red")
plt.grid()
plt.show()
plt.figure(figsize=(15, 15))
plt.scatter(ne_phi, ne_lambda, marker="+", s=200, label="Neon", color="black")
plt.scatter(hg_phi, hg_lambda, marker="x", s=200, label="Mercury", color="black")
plt.scatter(h1_phi_new, f(h1_phi_new), marker=4, s=150, label="Hydrogen", color="black")
plt.scatter(h1_phi_new, f(h1_phi_new+2), marker=4, s=150, label="Hydrogen", color="black")
plt.scatter(h1_phi_new, f(h1_phi_new-2), marker=4, s=150, label="Hydrogen", color="black")
grid = np.linspace(X.min(), X.max(), 1000)
plt.plot(grid, f(grid), color="red", label="Interpolation curve", lw=2)
plt.xlabel("Angles", fontsize=15)
plt.ylabel("$\lambda$", fontsize=15)
plt.legend()
xticklocs = np.linspace(X.min(), X.max(), 20)
yticklocs = np.linspace(y.min(), y.max(), 20)
plt.xticks(xticklocs)
plt.yticks(yticklocs)
plt.grid(lw=2)
plt.savefig(fname="2.2_Hydrogen_Spectre.pdf", dpi=900, format="pdf", papertype="a4")
plt.savefig(fname="2.2_Hydrogen_Spectre.png", format="png")
plt.show()
h_lambda_predicted_lo = f(h1_phi_new - delta_phi) # interval for the uncertainties
h_lambda_predicted_hi = f(h1_phi_new + delta_phi)
h_lambda_predicted = (h_lambda_predicted_lo + h_lambda_predicted_hi) / 2
for i in range(4):
error = (h_lambda_predicted_hi[i] - h_lambda_predicted_lo[i]) / 2
print("h_lambda_{} = ".format(i+3), labwork.sciRoundD(h_lambda_predicted[i],
error,
"Angstrom"
))
labwork.sciPrintD(h_lambda_predicted[i],
error,
"h_lambda_{} = ".format(i+3))
h_lambda_predicted, (h_lambda_predicted_hi - h_lambda_predicted_lo) / 2
n = 2
R = np.array([((n*m)**2) / (h_lambda_predicted[m-3] * (m*m - n*n)) for m in range(3, 7)])
R # theory: R = 109677.6 cm^(-1)
R.mean() # the wavelengths were in angstroms -> need to multiply by 10^10
R.mean() * 10 ** 10 # this is in m^(-1)
R_predicted = R.mean() * 10 ** 8 # this is in cm^(-1)
R_predicted
R * 10 ** 8 # theory: R = 109677.6 cm^(-1)
labwork.sciPrintR(R_predicted, R.std(ddof=1) / (4)**0.5 / R.mean(), "R = ") # with its uncertainty
print("R = ", labwork.sciRoundR(R_predicted, R.std(ddof=1) / (4)**0.5 / R.mean(), "cm^(-1) "))
R_theory = 109677.6
(R_theory - R_predicted) / R_theory # deviation from theory, as a fraction
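The expression for R used above is the Rydberg formula 1/λ = R(1/n² − 1/m²) solved for R. A quick pure-Python sanity check against the textbook value (no lab data involved):

```python
R_THEORY = 109677.6                     # Rydberg constant, cm^-1
n, m = 2, 3                             # Balmer series, first line (H-alpha)

# Wavelength predicted by 1/lambda = R * (1/n**2 - 1/m**2), in cm.
lam_cm = 1.0 / (R_THEORY * (1.0 / n**2 - 1.0 / m**2))
lam_angstrom = lam_cm * 1e8             # 1 cm = 1e8 Angstrom

# Inverting the relation, as the code above does, recovers R.
R_back = (n * m) ** 2 / (lam_cm * (m * m - n * n))
print(round(lam_angstrom, 1))           # about 6564.7, the H-alpha line
```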
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start by parsing the .mat files from the matched controls
Step2: Clean up - remove nan entries as these cause hddm to fail
Step3: Merge patient and matched control data to single .csv file
Step4: Build HDDM model
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%cd ..
import warnings; warnings.filterwarnings('ignore')
from utils import matparser, data_compiler
import glob
data_dir = 'data/controls/'
matparser.parse_dir(data_dir)
out_dir = "data/controls.csv"
data_compiler.compile_dir(data_dir, out_dir)
!cat data/controls.csv | grep -v nan > data/controls_clean.csv
!cat data/patients_clean.csv | sed 1d | awk -v d="patient" -F"," 'BEGIN { OFS = "," } {$5=d; print}' | tr -d $'\r' > data/patients_tagged.csv
!cat data/controls_clean.csv | sed 1d | awk -v d="control" -F"," 'BEGIN { OFS = "," } {$5=d; print}' | tr -d $'\r' > data/controls_tagged.csv
!echo "response,rt,subj_idx,stim,subj_type" > data/combined_clean.csv
!cat data/patients_tagged.csv >> data/combined_clean.csv; sed 1d data/controls_tagged.csv >> data/combined_clean.csv
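The awk one-liners above overwrite field 5 (`subj_type`) of every row. The same transformation, sketched in plain Python for readers less familiar with awk (the column order is assumed to match the `response,rt,subj_idx,stim,subj_type` header written above):

```python
def tag_rows(lines, tag):
    # Set the 5th comma-separated field of each CSV row to `tag`,
    # mirroring awk's `$5=d; print` with OFS=",".
    out = []
    for line in lines:
        fields = line.rstrip('\r\n').split(',')
        fields[4] = tag                  # subj_type column
        out.append(','.join(fields))
    return out

print(tag_rows(['0.8,1.23,s01,LL,x'], 'patient'))  # ['0.8,1.23,s01,LL,patient']
```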
import hddm
data = hddm.load_csv('data/combined_clean.csv')
model = hddm.HDDM(data, depends_on={'v': ['stim', 'subj_type'], 'a': 'subj_type'})
model.find_starting_values()
model.sample(6000, burn=20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load dataset
Step2: Utilities function
Step3: Extract data
Step4: Modified imputation method using MLPRegressor
Step5: Feature Augmentation method from Bestagini
Step6: Neural Network
Step7: Validation with Leave One Well Out on Training Dataset
Step8: Applying to Test Dataset
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(1337)
import warnings
warnings.filterwarnings("ignore")
import time as tm
import pandas as pd
from keras.models import Sequential, Model
from keras.constraints import maxnorm
from keras.layers import Dense, Dropout, Activation
from keras.utils import np_utils
from sklearn.metrics import f1_score, recall_score, accuracy_score, confusion_matrix
from sklearn.model_selection import LeaveOneGroupOut
from sklearn import preprocessing
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
training_data = pd.read_csv('../data/facies_vectors.csv')
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0, 2], [1], [4], [3, 5], [4, 6, 7], [5, 7], [5, 6, 8], [6, 7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
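A small worked check of the two metrics above on a toy 2-class confusion matrix (the adjacency list here is arbitrary, chosen only for the illustration; `np.trace` is equivalent to the diagonal-summing loop in `accuracy`):

```python
import numpy as np

conf = np.array([[2, 1],
                 [0, 3]])                # rows: true class, cols: predicted
adjacent = [[1], [0]]                    # each class treats the other as adjacent

plain = np.trace(conf) / conf.sum()      # 5 correct out of 6
adj = sum(conf[i][i] + sum(conf[i][j] for j in adjacent[i])
          for i in range(2)) / conf.sum()
print(plain, adj)                        # adjacent-accuracy counts all 6
```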
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '.g')
ax[1].plot(logs.ILD_log10, logs.Depth, '.')
ax[2].plot(logs.DeltaPHI, logs.Depth, '.', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '.', color='r')
ax[4].plot(logs.PE, logs.Depth, '.', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
X = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values
y = training_data['Facies'].values - 1
wells = training_data["Well Name"].values
from sklearn.neural_network import MLPRegressor
reg = MLPRegressor()
DataImpAll = training_data.drop(['Formation', 'Well Name', 'Depth', 'FaciesLabels'], axis=1).copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
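On a toy single-feature log (values 1, 2, 3 with one neighbour on each side) the zero padding and window concatenation behave as follows. This is a minimal re-implementation for checking the idea, not the function above:

```python
import numpy as np

def window_augment(X, n):
    # Pad with n zero rows on each side, then concatenate each row
    # with its n neighbours on either side.
    pad = np.zeros((n, X.shape[1]))
    Xp = np.vstack([pad, X, pad])
    return np.hstack([Xp[i:i + len(X)] for i in range(2 * n + 1)])

X_win = window_augment(np.array([[1.], [2.], [3.]]), 1)
# rows: [0, 1, 2], [1, 2, 3], [2, 3, 0]
```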
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
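A tiny worked case (hypothetical depths 1, 2, 4 and a single feature) shows the gradient convention, including the zero row appended for the last sample:

```python
import numpy as np

depth = np.array([1., 2., 4.])
X = np.array([[1.], [3.], [7.]])
d_diff = np.diff(depth).reshape((-1, 1))            # [[1.], [2.]]
X_grad = np.diff(X, axis=0) / d_diff                # [[2.], [2.]]
X_grad = np.concatenate((X_grad, np.zeros((1, 1))))  # pad last sample with 0
```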
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
well = training_data['Well Name'].values
depth = training_data['Depth'].values
X, padded_rows = augment_features(X, well, depth, N_neig=1)
def fDNN(in_dim, out_dim):
# Model
model = Sequential()
model.add(Dense(152, input_dim=in_dim, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(out_dim, activation='softmax'))
# Compilation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
logo = LeaveOneGroupOut()
nb_classes = 9
epoch = 10
bats = 20
t0 = tm.time()
f1s_ls = []
acc_ls = []
adj_ls = []
from scipy.signal import medfilt
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
# Scaling
scaler = preprocessing.MinMaxScaler().fit(X)
X_tr = scaler.transform(X[train])
X_te = scaler.transform(X[test])
Y_tr = np_utils.to_categorical(y[train], nb_classes)
in_dim = len(X_tr[0])
# Method initialization
mlp = fDNN(in_dim, nb_classes)
# Training
mlp.fit(X_tr, Y_tr, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_hat = mlp.predict_classes(X_te, verbose=0)
y_hat = medfilt(y_hat, kernel_size=5)
try:
f1s = f1_score(y[test], y_hat, average="weighted", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
f1s = 0
try:
conf = confusion_matrix(y[test], y_hat, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
acc = f1_score(y[test], y_hat, average="micro", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
acc = 0
try:
acc_adj = accuracy_adjacent(conf, adjacent_facies)
except:
acc_adj = 0
f1s_ls += [f1s]
acc_ls += [acc]
adj_ls += [acc_adj]
print("{:>20s} f1w:{:.3f} | f1m:{:.3f} | acc_adj:{:.3f}".format(well_name, f1s, acc, acc_adj))
t1 = tm.time()
print("Avg F1w", np.average(f1s_ls)*100, "Avg F1m", np.average(acc_ls)*100, "Avg Adj", np.average(adj_ls)*100)
print((t1-t0), "seconds")
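The `medfilt` post-processing above suppresses isolated single-sample facies flips in the predicted sequence. A dependency-free sketch of a kernel-3 median filter (illustrative only — unlike scipy's `medfilt`, this version pads edges by repetition rather than with zeros):

```python
import numpy as np

def medfilt3(y):
    # Median of each sample with its two neighbours; edges padded by repetition.
    yp = np.concatenate(([y[0]], y, [y[-1]]))
    return np.array([np.median(yp[i:i + 3]) for i in range(len(y))])

labels = np.array([1, 1, 5, 1, 1, 2, 2])
smoothed = medfilt3(labels)   # the isolated 5 is voted out
```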
blind_data = pd.read_csv('../data/nofacies_data.csv')
X_blind = blind_data.drop(['Formation', 'Well Name', 'Depth'], axis=1).values
well_blind = blind_data['Well Name'].values
depth_blind = blind_data['Depth'].values
X_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)
# Scaling
scl = preprocessing.MinMaxScaler().fit(X)
X_train = scl.transform(X)
X_blind = scl.transform(X_blind)
Y_train = np_utils.to_categorical(y, nb_classes)
# Method initialization
model = fDNN(in_dim, nb_classes)
# Training
model.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_blind = model.predict_classes(X_blind, verbose=0)
y_blind = medfilt(y_blind, kernel_size=5)
blind_data["Facies"] = y_blind + 1 # return the original value (1-9)
blind_data.to_csv("PA_Team_Submission_2.csv")
make_facies_log_plot(
blind_data[blind_data['Well Name'] == 'STUART'],
facies_colors)
make_facies_log_plot(
blind_data[blind_data['Well Name'] == 'CRAWFORD'],
facies_colors)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Making an interpolation from TRIGDAT
Step2: By default, GetGBMData will grad data from all detectors. However, one can set a subset to retrieve different data types
Step3: Interpolating
Step4: Single detector
Step5: We can also go back into the GBMFrame
Step6: Earth Centered Coordinates
Step7: Working with the GBM class
Step8: Plotting
Step9: In Fermi GBM coodinates
Step10: Capturing points on the sky
Step11: Looking at Earth occulted points on the sky
Step12: Source/Detector Separation
Step13: Examining Legal Detector Pairs
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from astropy.coordinates import SkyCoord
import astropy.coordinates as coord
import astropy.units as u
from gbmgeometry import *
data = GetGBMData("080916009")
data.set_destination("") # You can enter a folder here. If you want the CWD, you do not have to set it.
data.get_trigdat()
data.select_detectors('n1','n2','b0')
data.get_rsp_cspec()
interp = PositionInterpolator(trigdat="glg_trigdat_all_bn080916009_v02.fit")
# In trigger times
print "Quaternions"
print interp.quaternion(0)
print interp.quaternion(10)
print
print "SC XYZ"
print interp.sc_pos(0)
print interp.sc_pos(10)
na = NaIA(interp.quaternion(0))
print na.get_center()
print na.get_center().icrs #J2000
print na.get_center().galactic # Galactic
print
print "Changing in time"
na.set_quaternion(interp.quaternion(100))
print na.get_center()
print na.get_center().icrs #J2000
print na.get_center().galactic # Galactic
center_j2000 = na.get_center().icrs
center_j2000
center_j2000.transform_to(GBMFrame(quaternion=interp.quaternion(100.)))
na = NaIA(interp.quaternion(0),interp.sc_pos(0)*u.km)
na.get_center()
myGBM = GBM(interp.quaternion(0),sc_pos=interp.sc_pos(0)*u.km)
myGBM.get_centers()
[x.icrs for x in myGBM.get_centers()]
myGBM.detector_plot(radius=60)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=40)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=0,lat_0=40,fignum=2)
myGBM.detector_plot(radius=60,fermi_frame=True)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=20,fermi_frame=True)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=200,fermi_frame=True,fignum=3)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=0,lat_0=40,fignum=2,fermi_frame=True)
grb = SkyCoord(ra=130.,dec=-45 ,frame='icrs', unit='deg')
myGBM.detector_plot(radius=60,
projection='moll',
good=True, # only plot NaIs that see the GRB
point=grb,
lon_0=110,lat_0=-0)
myGBM.detector_plot(radius=60,
projection='ortho',
good=True, # only plot NaIs that see the GRB
point=grb,
lon_0=180,lat_0=-40,fignum=2)
myGBM.detector_plot(radius=10,show_earth=True,lon_0=90)
myGBM.detector_plot(radius=10,lon_0=100,show_earth=True,projection='ortho')
myGBM.detector_plot(radius=10,show_earth=True,lon_0=120,lat_0=-30,fermi_frame=True,projection='ortho')
seps = myGBM.get_separation(grb)
seps.sort("Separation")
seps
get_legal_pairs()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Merge the 2 arrays into 1 array
Step2: Extract the 2 arrays out
Step3: Shuffle and see the output.
|
<ASSISTANT_TASK:>
Python Code:
a = np.array([[[ 0., 1., 2.],
[ 3., 4., 5.]],
[[ 6., 7., 8.],
[ 9., 10., 11.]],
[[ 12., 13., 14.],
[ 15., 16., 17.]]])
b = np.array([[ 0., 1.],
[ 2., 3.],
[ 4., 5.]])
print a.shape
print b.shape
c = np.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
print c
a2 = c[:, :a.size//len(a)].reshape(a.shape)
b2 = c[:, a.size//len(a):].reshape(b.shape)
print a2
print b2
np.random.shuffle(c)
print a2
print b2
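Because `a2` and `b2` are here views into `c`'s memory, shuffling `c`'s rows permutes both arrays while keeping each (a, b) pair aligned. A self-contained sketch of the same trick on tiny hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.arange(6.).reshape(3, 2)          # "features", 3 samples
b = np.array([[10.], [20.], [30.]])      # "labels", 3 samples
c = np.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
a2 = c[:, :a.size // len(a)].reshape(a.shape)
b2 = c[:, a.size // len(a):].reshape(b.shape)
rng.shuffle(c)   # permutes rows of c in place, hence of a2 and b2 as well
# each b2 row still matches the a2 row it was paired with originally
```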
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constants
Step2: Operations
Step3: Placeholders
Step4: Variables
Step5: Classification using the MNIST dataset
Step6: The MNIST dataset contains 55,000 training images. The dimensions of each image are 28-by-28. Each vector has 784 elements because 28*28=784.
Step7: Before we begin, we specify three parameters
Step8: Network parameters
Step9: Define the multilayer perceptron network function
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
tf.__version__
h = tf.constant('Hello World')
h
h.graph is tf.get_default_graph()
x = tf.constant(100)
x
# Create Session object in which we can run operations.
# A session object encapsulates the environment in which
# operations are executed. Tensor objects are evaluated
# by operations.
session = tf.Session()
session.run(h)
session.run(x)
type(session.run(x))
type(session.run(h))
a = tf.constant(2)
b = tf.constant(3)
with tf.Session() as session:
print('Addition: {}'.format(session.run(a + b)))
print('Subtraction: {}'.format(session.run(a - b)))
print('Multiplication: {}'.format(session.run(a * b)))
print('Division: {}'.format(session.run(a / b)))
e = np.array([[5., 5.]])
f = np.array([[2.], [2.]])
e
f
# Convert numpy arrays to TensorFlow objects
ec = tf.constant(e)
fc = tf.constant(f)
matrix_mult_op = tf.matmul(ec, fc)
with tf.Session() as session:
print('Matrix Multiplication: {}'.format(session.run(matrix_mult_op)))
c = tf.placeholder(tf.int32)
d = tf.placeholder(tf.int32)
add_op = tf.add(c, d)
sub_op = tf.subtract(c, d)
mult_op = tf.multiply(c, d)
div_op = tf.divide(c, d)
with tf.Session() as session:
input_dict = {c: 11, d: 10}
print('Addition: {}'.format(session.run(add_op, feed_dict=input_dict)))
print('Subtraction: {}'.format(session.run(sub_op, feed_dict=input_dict)))
print('Multiplication: {}'.format(session.run(mult_op, feed_dict=input_dict)))
print('Division: {}'.format(session.run(div_op, feed_dict={c:11, d:11})))
var2 = tf.get_variable('var2', [2])
var2
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data', one_hot=True)
type(mnist)
mnist.train.images
mnist.train.images.shape
# Convert the vector to a 28x28 matrix
sample_img = mnist.train.images[0].reshape(28, 28)
# Show the picture
plt.imshow(sample_img, cmap='Greys')
learning_rate = 0.001
training_epochs = 15
batch_size = 100
# Number of classes is 10 because we have 10 digits
n_classes = 10
# Number of training examples
n_samples = mnist.train.num_examples
# The flatten array of the 28x28 image matrix contains 784 elements
n_input = 784
# Number of neurons in the hidden layers. For image data, 256 neurons
# is common because we have 256 intensity values (8-bit).
# In this example, we only use 2 hidden layers. The more hidden
# layers we use, the longer the model takes to train, but more
# layers can potentially make the model more accurate.
n_hidden_1 = 256
n_hidden_2 = 256
def multilayer_perceptron(x, weights, biases):
'''
x: Placeholder for the data input
weights: Dictionary of weights
biases: Dictionary of bias values
'''
# First hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Second hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer
layer_out = tf.add(tf.matmul(layer_2, weights['out']), biases['out'])
return layer_out
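Before wiring this into TensorFlow, the shape bookkeeping of the two hidden layers can be checked with a plain-NumPy forward pass (random weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1337)
n_in, n_h1, n_h2, n_out = 784, 256, 256, 10

def relu(z):
    return np.maximum(z, 0)

x = rng.normal(size=(5, n_in))            # batch of 5 flattened images
W1, b1 = rng.normal(size=(n_in, n_h1)), np.zeros(n_h1)
W2, b2 = rng.normal(size=(n_h1, n_h2)), np.zeros(n_h2)
Wo, bo = rng.normal(size=(n_h2, n_out)), np.zeros(n_out)

h1 = relu(x @ W1 + b1)       # (5, 256)
h2 = relu(h1 @ W2 + b2)      # (5, 256)
logits = h2 @ Wo + bo        # (5, 10), one score per class
```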
weights = {
'h1': tf.Variable(tf.random_normal(shape=[n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal(shape=[n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal(shape=[n_hidden_2, n_classes]))
}
tf.random_normal(shape=(n_input, n_hidden_1))
#tf.Session().run(weights['h1'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As in previous examples, all data is contained in a 2-D feature matrix data.data, where the rows represent data samples, and the columns are the feature values
Step2: With a look at the provided feature names, we recognize some that we mentioned above
Step3: Since this is a binary classification task, we expect to find exactly two target names
Step4: Let's split the dataset into training and test sets using a healthy 80-20 split
Step5: Building the decision tree
Step6: Since we did not specify any pre-pruning parameters, we would expect this decision tree to grow quite large and result in a perfect score on the training set
Step7: However, to our surprise, the test error is not too shabby, either
Step8: Now we want to do some model exploration. For example, we mentioned above that the depth of a tree influences its performance. If we wanted to study this dependency more systematically, we could repeat building the tree for different values of max_depth
Step9: For each of these values, we want to run the full model cascade from start to finish. We also want to record the train and test scores. We do this in a for loop
Step10: We can plot the scores as a function of the tree depth using Matplotlib
Step11: Let's do one more. What about the minimum numbers of samples required to make a node a leaf node?
Step12: This leads to a plot that looks quite different from the one before
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
data = datasets.load_breast_cancer()
data.data.shape
data.feature_names
data.target_names
import sklearn.model_selection as ms
X_train, X_test, y_train, y_test = ms.train_test_split(data.data, data.target, test_size=0.2, random_state=42)
X_train.shape, X_test.shape
from sklearn import tree
dtc = tree.DecisionTreeClassifier(random_state=42)
dtc.fit(X_train, y_train)
dtc.score(X_train, y_train)
dtc.score(X_test, y_test)
with open("tree.dot", 'w') as f:
f = tree.export_graphviz(dtc, out_file=f,
feature_names=data.feature_names,
class_names=data.target_names)
import numpy as np
max_depths = np.array([1, 2, 3, 5, 7, 9, 11])
train_score = []
test_score = []
for d in max_depths:
dtc = tree.DecisionTreeClassifier(max_depth=d, random_state=42)
dtc.fit(X_train, y_train)
train_score.append(dtc.score(X_train, y_train))
test_score.append(dtc.score(X_test, y_test))
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.figure(figsize=(10, 6))
plt.plot(max_depths, train_score, 'o-', linewidth=3, label='train')
plt.plot(max_depths, test_score, 's-', linewidth=3, label='test')
plt.xlabel('max_depth')
plt.ylabel('score')
plt.ylim(0.85, 1.1)
plt.legend()
train_score = []
test_score = []
min_samples = np.array([2, 4, 8, 16, 32])
for s in min_samples:
dtc = tree.DecisionTreeClassifier(min_samples_leaf=s, random_state=42)
dtc.fit(X_train, y_train)
train_score.append(dtc.score(X_train, y_train))
test_score.append(dtc.score(X_test, y_test))
plt.figure(figsize=(10, 6))
plt.plot(min_samples, train_score, 'o-', linewidth=3, label='train')
plt.plot(min_samples, test_score, 's-', linewidth=3, label='test')
plt.xlabel('min_samples_leaf')
plt.ylabel('score')
plt.ylim(0.9, 1)
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parsing job titles
Step2: Check for duplicates in results
Step3: Save invalid titles
Step4: Rm dups due to invalid titles and replace NAs by empty strings (to avoid NAs destroying standard titles later)
Step5: Standardize Job Titles
Step6: The following helps to detect the confusing problem
Step7: It shows that the problem is due to NA parts, e.g. sec-func, which add spaces (almost invisible to naked eyes) to the std-ized version. Thus we need a better join which combines only valid parts. That made me create joinValidParts().
Step8: From now on, title means standard title.
Step9: Standardizing Job Titles of Posts
Step10: Sanity check if the std-ation works
Step11: Re-query job titles with at least 2 posts
Step12: Domains in Job Titles
Step13: Though the total no. of domains is large, we actually only interested in domains with at least $2$ job titles. The reason is the domains with only $1$ title give us no pairwise similarity score.
Step14: Primary Functions in Job Titles
Step15: Test parser web service (src by Koon Han)
|
<ASSISTANT_TASK:>
Python Code:
df = pd.read_csv(DATA_DIR + 'doc_index_filter.csv')
titles = list(df['title'].unique())
n_title = len(titles)
print('# unique titles: %d' % n_title)
title_stats = pd.read_csv(DATA_DIR + 'stats_job_titles.csv')
def parseBatch(b, start=None, end=None):
'''
@brief: parse a batch of 100 titles
@return: DF containing results of parsing the 100 titles in batch b
'''
print('Parsing batch {}...'.format(b))
if (not start) and (not end): start = 100*b; end = start + 100
batch_titles = titles[start:end]
frames = [parse(t) for t in batch_titles]
res = pd.concat(frames)
res.reset_index(inplace=True); del res['index']
time.sleep(3)
# defensive save result of the batch
fname = DATA_DIR + 'tmp/b{}.csv'.format(b)
res.to_csv(fname, index=False)
print('\t saved result to tmp file')
return res
n_batch = n_title/100; remainder = n_title % 100
frames = [parseBatch(b) for b in range(n_batch)]
rem_titles = titles[-remainder:]
last_frame = pd.concat([parse(t) for t in rem_titles])
frames.append(last_frame)
res = pd.concat(frames)
print res.shape
print('# dups in parsing result: %d' %sum(res.duplicated('title')))
res[res.duplicated('title')]
invalid_titles = res.title[res.duplicated('title')].unique()
print('# invalid titles: %d' %len(invalid_titles))
pd.DataFrame({'title': invalid_titles}).to_csv(DATA_DIR + 'invalid_titles.csv', index=False)
res = res.drop_duplicates('title')
print res.shape
res.domain.fillna('', inplace=True); res.position.fillna('', inplace=True)
res.pri_func.fillna('', inplace=True); res.sec_func.fillna('', inplace=True)
res.to_csv(DATA_DIR + 'parsed_titles.csv', index=False)
def camelCase(s):
return s.title()
def joinValidParts(row):
'''
@brief: Only join valid parts with data, parts which are empty str are removed
'''
parts = [row.position, row.domain, row.sec_func, row.pri_func]
valid_parts = [p for p in parts if p != '']
# print('# valid parts: %d' %len(valid_parts))
return ' '.join(valid_parts)
# This naive join will cause confusing results
res['std_title'] = res.position + ' ' + res.domain + ' ' + res.sec_func + ' ' + res.pri_func
row = res.query('non_std_title != std_title').iloc[0]
my_util.tagChanges(row.non_std_title, row.title)
row = res.query('non_std_title == "Site Engineer"').iloc[0]
print joinValidParts(row)
' '.join([row.position, row.domain, row.sec_func, row.pri_func])
res['std_title'] = res.apply(joinValidParts, axis=1)
res.std_title = res.std_title.apply(camelCase)
res.rename(columns={'std_title': 'title'}, inplace=True)
res.to_csv(DATA_DIR + 'parsed_titles.csv', index=False)
stdForm = dict(zip(res.non_std_title, res.title))
print('# non-std titles which can be standardized: %d' %len(stdForm.keys()))
uniq_std_titles = np.unique((stdForm.values()))
print('# unique standard titles: %d' % len(uniq_std_titles))
def toStd(t):
if t not in stdForm.keys():
return t
else:
if stdForm[t] == '': # for titles which could not be parsed
return t
else:
return stdForm[t]
# return stdForm[t] if t in stdForm.keys() else t
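The fallback behaviour — keep a title unchanged when it is unknown or when parsing produced an empty standard form — can be demonstrated with a small hypothetical mapping:

```python
# Hypothetical standardization mapping, for illustration only.
demo_map = {'Sr Sw Engineer': 'Senior Software Engineer',
            'Unparseable Title': ''}

def to_std_demo(t, mapping):
    # Same logic as toStd above, but against an explicit mapping.
    std = mapping.get(t, t)
    return std if std else t

out1 = to_std_demo('Sr Sw Engineer', demo_map)     # mapped to the standard form
out2 = to_std_demo('Unparseable Title', demo_map)  # empty std -> keep original
out3 = to_std_demo('Data Scientist', demo_map)     # unknown -> keep original
```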
df = pd.read_csv(DATA_DIR + 'doc_index_filter.csv')
df.columns
uniq_non_std_titles = df['non_std_title'].unique()
# sum([t not in stdForm.keys() for t in uniq_non_std_titles])
tmp = [t for t in uniq_non_std_titles if t not in stdForm.keys()]
tmp[:5]
del df['title']
print("# unique non-std titles in data: %d" % df.non_std_title.nunique())
df['title'] = map(toStd, df['non_std_title'])
df.query('non_std_title != title')[['non_std_title', 'title']].head()
any(df.title == '')
df.to_csv(DATA_DIR + 'doc_index_filter.csv')
by_title_agg = df.groupby('title').agg({'job_id': 'nunique'})
by_title_agg.rename(columns={'job_id': 'n_post'}, inplace=True)
by_title_agg.reset_index(inplace=True)
by_title_agg.head()
titles2_ = by_title_agg.query('n_post >= 2')
print('# titles with >= 2 posts: %d' %titles2_.shape[0])
title_df = pd.merge(titles2_, res)
print(title_df.shape[0])
# res.columns
title_df.columns
title_df.to_csv(DATA_DIR + 'new_titles_2posts_up.csv', index=False)
by_domain = res.groupby('domain')
print('# domains: {}'.format(by_domain.ngroups))
by_domain_agg = by_domain.agg({'title': len})
by_domain_agg = by_domain_agg.add_prefix('n_').reset_index()
by_domain_agg.sort_values('n_title', ascending=False, inplace=True)
by_domain_agg.describe().round(1)
domains_2 = by_domain_agg.query('n_title >= 2')
print('# domains with at least 2 job titles: %d' %len(domains_2))
by_domain_agg.to_csv(DATA_DIR + 'stats_domains.csv', index=False)
by_pri_func = res.groupby('pri_func')
print('# primary functions: {}'.format(by_pri_func.ngroups))
by_pri_func_agg = by_pri_func.agg({'title': len})
by_pri_func_agg = by_pri_func_agg.add_prefix('n_').reset_index()
by_pri_func_agg.sort_values('n_title', ascending=False, inplace=True)
by_pri_func_agg.to_csv(DATA_DIR + 'stats_pri_funcs.csv', index=False)
by_pri_func_agg.describe().round(1)
by_pri_func_agg.head(10)
# r0 = requests.post(parse_url, auth=(user, pwd),
# json={"job_title":"accountant", "verbose": False})
# print r0.status_code
# print r0.json()['output']
r1 = requests.post(parse_url, auth=(user, pwd),
json={"job_title":"software developer", "verbose": False})
print r1.status_code
print r1.json()['output']
# r2 = requests.post(parse_url, auth=(user, pwd),
# json={"job_title":"pre-school teacher", "verbose": False})
# print r2.status_code
# print r2.json()['output']
# r3 = requests.post(parse_url, auth=(user, pwd),
# json={"job_title":"Assistant Civil and Structural Engineer",
# "verbose": False})
r4 = requests.post(parse_url, auth=(user, pwd),
json={"job_title": "Security Analyst, Information Technology",
"verbose": False})
print r4.json()['output']
r5 = requests.post(parse_url, auth=(user, pwd),
json={"job_title": "Data analyst and scientist",
"verbose": False})
print r5.json()['output']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kvswap
Step2: Examples
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
!pip install --quiet -U apache-beam
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Garden plants' >> beam.Create([
            ('🍓', 'Strawberry'),
            ('🥕', 'Carrot'),
            ('🍆', 'Eggplant'),
            ('🍅', 'Tomato'),
            ('🥔', 'Potato'),
])
| 'Key-Value swap' >> beam.KvSwap()
| beam.Map(print)
)
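`KvSwap` simply turns each (key, value) pair into (value, key); the equivalent transformation in plain Python, with no Beam runner needed:

```python
pairs = [('🍓', 'Strawberry'), ('🥕', 'Carrot'), ('🍆', 'Eggplant')]
swapped = [(v, k) for k, v in pairs]
```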
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Finding HIF-1 direct target candidates
Step3: As a safety check, let's make a qPCR like plot to visualize our genes, and let's make sure they have the behavior we want
Step4: vhl-1 dependent, hif-1-independent, genes
Step5: Plot vhl-1-dependent, hif-1-independent genes
|
<ASSISTANT_TASK:>
Python Code:
# important stuff:
import os
import pandas as pd
import numpy as np
# morgan
import tissue_enrichment_analysis as tea
import epistasis as epi
import genpy
import gvars
# Graphics
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import rc
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{cmbright}')
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
# Magic function to make matplotlib inline;
%matplotlib inline
# This enables SVG graphics inline.
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 18,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style("dark")
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 14
q = 0.1
genvar = gvars.genvars()
tissue_df = tea.fetch_dictionary()
phenotype_df = pd.read_csv('../input/phenotype_ontology.csv')
go_df = pd.read_csv('../input/go_dictionary.csv')
tidy_data = pd.read_csv('../output/temp_files/DE_genes.csv')
tidy_data.sort_values('target_id', inplace=True)
tidy_data.dropna(subset=['ens_gene'], inplace=True)
tidy_data['fancy genotype'] = tidy_data.code.map(genvar.fancy_mapping)
tidy_data = tidy_data[tidy_data.genotype != 'fog-2']
tidy_data.head()
def collate(x):
    '''For a vector `x`, find what elements in x are contained in
    tidy_data.target_id.
    '''
    return tidy_data.target_id.isin(x)
hif_genes = pd.read_csv('../output/temp_files/hypoxia_response.csv')
n = len(hif_genes[hif_genes.b > 0].ens_gene.unique())
message = 'There are {0} unique genes that' +\
' are candidates for HIF-1 direct binding'
print(message.format(n))
ids = hif_genes[hif_genes.b > 0].target_id
hypoxia_direct_targets = tidy_data[tidy_data.target_id.isin(ids)]
names = hypoxia_direct_targets.sort_values('qval').target_id.unique()[0:10]
name_sort = {}
for i, name in enumerate(names):
name_sort[name] = i+1
plot_df = tidy_data[tidy_data.target_id.isin(names)].copy()
plot_df['order'] = plot_df.target_id.map(name_sort)
plot_df.sort_values('order', inplace=True)
plot_df.reset_index(inplace=True)
genpy.qPCR_plot(plot_df, genvar.plot_order, genvar.plot_color,
clustering='fancy genotype', plotting_group='target_id',
rotation=90)
res = tea.enrichment_analysis(hypoxia_direct_targets.ens_gene.unique(),
tissue_df, show=False)
res['logQ'] = -res['Q value'].apply(np.log10)
tea.plot_enrichment_results(res, analysis='tissue', y='logQ')
res = tea.enrichment_analysis(hypoxia_direct_targets.ens_gene.unique(),
phenotype_df, show=False)
res['logQ'] = -res['Q value'].apply(np.log10)
tea.plot_enrichment_results(res, analysis='phenotype', y='logQ')
res = tea.enrichment_analysis(hypoxia_direct_targets.ens_gene.unique(),
go_df, show=False)
res['logQ'] = -res['Q value'].apply(np.log10)
tea.plot_enrichment_results(res, analysis='go', y='logQ')
positive = tidy_data[(tidy_data.qval < q) & (tidy_data.b > 0)]
negative = tidy_data[(tidy_data.qval < q) & (tidy_data.b < 0)]
# find the genes that overlap between vhl1 and egl-9vhl-1 and change in
# same directiom
vhl_pos = epi.find_overlap(['d', 'a'], positive)
vhl_neg = epi.find_overlap(['d', 'a'], negative)
vhl = list(set(vhl_pos + vhl_neg))
# find genes that change in the same direction in vhl(-) and
# vhl(+ datasets)
same_vhl = []
for genotype in ['b', 'e', 'f', 'c']:
same_vhl += epi.find_overlap(['d', 'a', genotype], positive)
same_vhl += epi.find_overlap(['d', 'a', genotype], negative)
# put it all together:
ind = (collate(vhl)) & (~collate(same_vhl))
vhl_regulated = tidy_data[ind & (tidy_data.code == 'd')]
n = len(vhl_regulated.ens_gene.unique())
message = 'There are {0} genes that appear to be ' + \
'regulated in a hif-1-independent, vhl-1-dependent manner.'
print(message.format(n))
# begin plotting
names = vhl_regulated.sort_values('qval').target_id.unique()[0:10]
name_sort = {}
for i, name in enumerate(names):
name_sort[name] = i+1
plot_df = tidy_data[tidy_data.target_id.isin(names)].copy()
plot_df['order'] = plot_df.target_id.map(name_sort)
plot_df.sort_values('order', inplace=True)
plot_df.reset_index(inplace=True)
genpy.qPCR_plot(plot_df, genvar.plot_order, genvar.plot_color,
clustering='fancy genotype', plotting_group='target_id',
rotation=90)
# save to file
cols = ['ext_gene', 'ens_gene', 'target_id', 'b', 'qval']
vhl_regulated[cols].to_csv('../output/temp_files/vhl_1_regulated_genes.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the Apache Beam Orchestrator for TFX
Step2: Import packages
Step3: Check the versions
Step4: Download example data
Step5: Take a quick look at the CSV file
Step6: Set up pipeline paths
Step7: Define some constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook.
Step10: Now define a module containing the preprocessing_fn() function that we will pass to the Transform component.
Step17: Create a Python module containing a trainer_fn function, which must return an estimator. If you prefer creating a Keras model, you can do so and then convert it to an estimator using keras.model_to_estimator().
Step19: Create the Pipeline
Step20: Run the pipeline
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q -U \
tensorflow==2.0.0 \
tfx==0.15.0rc0
import os
import tempfile
import urllib
import tensorflow as tf
import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.example_validator.component import ExampleValidator
from tfx.components.model_validator.component import ModelValidator
from tfx.components.pusher.component import Pusher
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.proto import evaluator_pb2
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner
from tfx.utils.dsl_utils import external_input
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
# Download the example data.
DATA_PATH = 'https://raw.githubusercontent.com/ageron/open-datasets/master/' \
'online_news_popularity_for_course/online_news_popularity_for_course.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
!head {_data_filepath}
# Set up paths.
_constants_module_file = 'online_news_constants.py'
_transform_module_file = 'online_news_transform.py'
_trainer_module_file = 'online_news_trainer.py'
_serving_model_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/online_news_simple')
%%writefile {_constants_module_file}
DENSE_FLOAT_FEATURE_KEYS = [
"timedelta", "n_tokens_title", "n_tokens_content",
"n_unique_tokens", "n_non_stop_words", "n_non_stop_unique_tokens",
"n_hrefs", "n_self_hrefs", "n_imgs", "n_videos", "average_token_length",
"n_keywords", "kw_min_min", "kw_max_min", "kw_avg_min", "kw_min_max",
"kw_max_max", "kw_avg_max", "kw_min_avg", "kw_max_avg", "kw_avg_avg",
"self_reference_min_shares", "self_reference_max_shares",
"self_reference_avg_shares", "is_weekend", "global_subjectivity",
"global_sentiment_polarity", "global_rate_positive_words",
"global_rate_negative_words", "rate_positive_words", "rate_negative_words",
"avg_positive_polarity", "min_positive_polarity", "max_positive_polarity",
"avg_negative_polarity", "min_negative_polarity", "max_negative_polarity",
"title_subjectivity", "title_sentiment_polarity", "abs_title_subjectivity",
"abs_title_sentiment_polarity"]
VOCAB_FEATURE_KEYS = ["data_channel"]
BUCKET_FEATURE_KEYS = ["LDA_00", "LDA_01", "LDA_02", "LDA_03", "LDA_04"]
CATEGORICAL_FEATURE_KEYS = ["weekday"]
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [6]
#UNUSED: date, slug
LABEL_KEY = "n_shares_percentile"
VOCAB_SIZE = 10
OOV_SIZE = 5
FEATURE_BUCKET_COUNT = 10
def transformed_name(key):
return key + '_xf'
%%writefile {_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
from online_news_constants import *
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=VOCAB_SIZE,
num_oov_buckets=OOV_SIZE)
for key in BUCKET_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), FEATURE_BUCKET_COUNT,
always_return_num_quantiles=False)
for key in CATEGORICAL_FEATURE_KEYS:
outputs[transformed_name(key)] = _fill_in_missing(inputs[key])
# How popular is this article?
outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
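For intuition, here is a pure-Python analog of the fill-in behaviour (this is not the TensorFlow implementation above, just the same idea on plain lists; the helper name is made up):

```python
def fill_in_missing_py(values, is_string=True):
    # Replace missing entries (None) with '' for strings or 0 for numbers,
    # mirroring the default_value choice in _fill_in_missing.
    default = '' if is_string else 0
    return [default if v is None else v for v in values]

print(fill_in_missing_py(['a', None, 'b']))              # ['a', '', 'b']
print(fill_in_missing_py([1.5, None], is_string=False))  # [1.5, 0]
```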
%%writefile {_trainer_module_file}
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from online_news_constants import *
def transformed_names(keys):
return [transformed_name(key) for key in keys]
# Tf.Transform considers these features as "raw"
def _get_raw_feature_spec(schema):
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _build_estimator(config, hidden_units=None, warm_start_from=None):
"""Build an estimator for predicting the popularity of online news articles.
Args:
config: tf.estimator.RunConfig defining the runtime environment for the
estimator (including model_dir).
hidden_units: [int], the layer sizes of the DNN (input layer first)
warm_start_from: Optional directory to warm start from.
Returns:
The estimator that will be used for training and eval.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in transformed_names(DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=VOCAB_SIZE + OOV_SIZE, default_value=0)
for key in transformed_names(VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=FEATURE_BUCKET_COUNT, default_value=0)
for key in transformed_names(BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES)
]
return tf.estimator.DNNLinearCombinedRegressor(
config=config,
linear_feature_columns=categorical_columns,
dnn_feature_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25],
warm_start_from=warm_start_from)
def _example_serving_receiver_fn(tf_transform_output, schema):
"""Build the serving inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
Tensorflow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
"""Build everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- Tensorflow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
features = serving_input_receiver.features.copy()
transformed_features = tf_transform_output.transform_raw_features(features)
# NOTE: Model is driven by transformed features (since training works on the
# materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=serving_input_receiver.receiver_tensors,
labels=transformed_features[transformed_name(LABEL_KEY)])
def _input_fn(filenames, tf_transform_output, batch_size=200):
"""Generates features and labels for training or evaluation.
Args:
filenames: [str] list of CSV files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: int First dimension size of the Tensors returned by input_fn
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
filenames, batch_size, transformed_feature_spec, reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
# TFX will call this function
def trainer_fn(hparams, schema):
"""Build the estimator using the high level API.
Args:
hparams: Holds hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
train_batch_size = 40
eval_batch_size = 40
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=train_batch_size)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=eval_batch_size)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('online-news', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='online-news-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=999, keep_checkpoint_max=1)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = _build_estimator(
# Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
],
config=run_config,
warm_start_from=hparams.warm_start_from)
# Create an input receiver for TFMA processing
receiver_fn = lambda: _eval_input_receiver_fn(
tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
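As a quick standalone check of the layer-size computation in trainer_fn (plain Python, no TFX needed), the exponential-decay list comes out non-increasing and bounded below by 2:

```python
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7

# Same comprehension as in trainer_fn: each layer shrinks by dnn_decay_factor,
# but never below 2 units.
hidden_units = [max(2, int(first_dnn_layer_size * dnn_decay_factor ** i))
                for i in range(num_dnn_layers)]
print(hidden_units)
```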
_pipeline_name = 'online_news_beam'
_pipeline_root = tempfile.mkdtemp(prefix='tfx-pipelines')
_pipeline_root = os.path.join(_pipeline_root, 'pipelines', _pipeline_name)
# Sqlite ML-metadata db path.
_metadata_root = tempfile.mkdtemp(prefix='tfx-metadata')
_metadata_path = os.path.join(_metadata_root, 'metadata.db')
def _create_pipeline(pipeline_name, pipeline_root, data_root,
transform_module_file, trainer_module_file,
serving_model_dir, metadata_path):
"""Implements the online news pipeline with TFX."""
input_data = external_input(data_root)
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = CsvExampleGen(input=input_data)
# Computes statistics over data for visualization and example validation.
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
# Generates schema based on statistics files.
infer_schema = SchemaGen(
statistics=statistics_gen.outputs['statistics'])
# Performs anomaly detection based on statistics and data schema.
validate_stats = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=infer_schema.outputs['schema'])
# Performs transformations and feature engineering in training and serving.
transform = Transform(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
module_file=transform_module_file)
# Uses user-provided Python function that implements a model using
# TensorFlow's Estimators API.
trainer = Trainer(
module_file=trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000))
# Uses TFMA to compute evaluation statistics over features of a model.
model_analyzer = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
feature_slicing_spec=evaluator_pb2.FeatureSlicingSpec(specs=[
evaluator_pb2.SingleSlicingSpec(
column_for_slicing=['weekday'])
]))
# Performs quality validation of a candidate model (compared to a baseline).
model_validator = ModelValidator(
examples=example_gen.outputs['examples'], model=trainer.outputs['model'])
# Checks whether the model passed the validation steps and pushes the model
# to a file destination if check passed.
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=model_validator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=serving_model_dir)))
return pipeline.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=[
example_gen, statistics_gen, infer_schema, validate_stats, transform,
trainer, model_analyzer, model_validator, pusher
],
enable_cache=True,
metadata_connection_config=metadata.sqlite_metadata_connection_config(
metadata_path),
additional_pipeline_args={},
)
online_news_pipeline = _create_pipeline(
pipeline_name=_pipeline_name,
pipeline_root=_pipeline_root,
data_root=_data_root,
transform_module_file=_transform_module_file,
trainer_module_file=_trainer_module_file,
serving_model_dir=_serving_model_dir,
metadata_path=_metadata_path)
BeamDagRunner().run(online_news_pipeline)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hyperparameters
Step2: An optimizer can be seen as a function that takes a gradient, obtained by backpropagation, and returns an update to be applied to the current parameters. Other optimizers can be found in
Step3: Dataset
Step4: The pre-processing consists in retrieving the list of symbols occurring in the text and converting each of them into a unique index. This index will be used to create a one-hot representation of the symbol that will be the input of the model.
Step6: The following auxiliary function creates a minibatch in a 3D tensor (batch_size, SEQ_LENGTH, vocab_size).
Step8: Model definition
Step9: Optimization
Step11: The next function generates text given a phrase of length at least SEQ_LENGTH.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import theano
import theano.tensor as T
import lasagne
seed = 1
lasagne.random.set_rng(np.random.RandomState(seed))
# Sequence Length
SEQ_LENGTH = 20
# Number of units in the hidden (LSTM) layers
DEPTH = 2
N_HIDDEN = 512
NON_LINEARITY = lasagne.nonlinearities.rectify
# All gradients above this will be clipped
GRAD_CLIP = 100
# Number of epochs to train the net
NUM_EPOCHS = 50
# Optimization learning rate
LEARNING_RATE = 0.01
# Batch Size
BATCH_SIZE = 128
# How often should we check the output?
PRINT_FREQ = 500
my_optimizer = lambda loss, params: lasagne.updates.nesterov_momentum(
loss, params, learning_rate=LEARNING_RATE, momentum=0.9)
import urllib.request #For downloading the sample text file. You won't need this if you are providing your own file.
try:
in_text = urllib.request.urlopen('https://s3.amazonaws.com/text-datasets/nietzsche.txt').read()
#in_text = open('your_file.txt', 'r').read()
in_text = in_text.decode("utf-8")
print(in_text[:250])
except Exception as e:
print("Please verify the location of the input file/URL.")
print("A sample txt file can be downloaded from https://s3.amazonaws.com/text-datasets/nietzsche.txt")
raise IOError('Unable to Read Text')
chars = list(set(in_text))
data_size, vocab_size = len(in_text), len(chars)
char_to_ix = {ch:i for i,ch in enumerate(chars)}
ix_to_char = {i:ch for i,ch in enumerate(chars)}
print('Number of unique symbols: {}'.format(vocab_size))
print('Number of symbols in the dataset: {}'.format(data_size))
input_shape = (None, None, vocab_size)
def iterate_minibatch(
p,
batch_size = BATCH_SIZE,
data=in_text,
return_target=True):
"""Return a minibatch compatible with the input of the model and the associated targets.
:type p: int
:param p: the index of the character at which to begin reading
:type batch_size: int
:param batch_size: the number of datapoints in the current batch
:type data: str
:param data: the whole text
:type return_target: bool
:param return_target: whether to create the targets (next character) associated with the sequences
"""
x = np.zeros((batch_size,SEQ_LENGTH,vocab_size))
y = np.zeros(batch_size)
for n in range(batch_size):
ptr = n
for i in range(SEQ_LENGTH):
x[n,i,char_to_ix[data[p+ptr+i]]] = 1.
if(return_target):
y[n] = char_to_ix[data[p+ptr+SEQ_LENGTH]]
return x, np.array(y,dtype='int32')
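The one-hot construction inside iterate_minibatch can be sketched in pure Python (hypothetical helper, for illustration only):

```python
def one_hot_sequence(text, char_to_ix):
    # Encode each character as a one-hot row of length vocab_size,
    # like the x[n, i, char_to_ix[...]] = 1. assignments above.
    vocab_size = len(char_to_ix)
    rows = []
    for ch in text:
        row = [0.0] * vocab_size
        row[char_to_ix[ch]] = 1.0
        rows.append(row)
    return rows

mapping = {'a': 0, 'b': 1, 'c': 2}
encoded = one_hot_sequence('abca', mapping)
print(encoded[0])  # [1.0, 0.0, 0.0]
```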
def create_lstm(
vocab_size,
input_shape,
input_var = None,
nonlinearity = lasagne.nonlinearities.tanh,
depth=2,
n_hidden=800,
grad_clip=100):
"""A generic function for creating a LSTM neural network.
:type vocab_size: int
:param vocab_size: number of elements in the dictionary
:type input_shape: tuple
:param input_shape: a tuple containing the shape of the input
:type input_var: theano.tensor.var.TensorVariable
:param input_var: a theano symbolic variable, created automatically if None
:type nonlinearity: lasagne.nonlinearities
:param nonlinearity: a nonlinearity function that follows all dense layers
:type depth: int
:param depth: the depth of the LSTM
:type n_hidden: int or list
:param n_hidden: number of hidden units per LSTM cells (if int, the same for all layers)
:type grad_clip: float
:param grad_clip: threshold for the gradient in the LSTM
"""
# First, we build the network, starting with an input layer
# Recurrent layers expect input of shape
# (batch size, SEQ_LENGTH, num_features)
network = lasagne.layers.InputLayer(
shape=input_shape,
input_var=input_var
)
# We now build the LSTM layer
# We clip the gradients at GRAD_CLIP to prevent the problem of exploding gradients.
for _ in range(depth-1):
network = lasagne.layers.LSTMLayer(
network,
num_units=n_hidden,
grad_clipping=grad_clip,
nonlinearity=nonlinearity)
network = lasagne.layers.LSTMLayer(
network,
num_units=n_hidden,
grad_clipping=grad_clip,
nonlinearity=nonlinearity,
only_return_final=True)
# The output the previous module, with shape (batch_size, N_HIDDEN),
# is then passed through the softmax nonlinearity to
# create a probability distribution over the dictionary.
# The output of this stage is (batch_size, vocab_size).
network = lasagne.layers.DenseLayer(network, num_units=vocab_size, W = lasagne.init.Normal(), nonlinearity=lasagne.nonlinearities.softmax)
return network
# Theano tensor for the targets
input_var = T.tensor3('inputs')
target_values = T.ivector('target_output')
network = create_lstm(
vocab_size,
input_shape,
input_var,
nonlinearity=NON_LINEARITY,
depth=DEPTH,
n_hidden=N_HIDDEN)
# lasagne.layers.get_output produces a variable for the output of the net
network_output = lasagne.layers.get_output(network)
# The loss function is calculated as the mean of the (categorical) cross-entropy between the prediction and target.
loss = lasagne.objectives.categorical_crossentropy(network_output, target_values).mean()
# Retrieve all the parameters of the models and create the optimizer
params = lasagne.layers.get_all_params(network,trainable=True)
updates = my_optimizer(loss, params)
# Theano functions for training and computing cost
print("Compiling functions ...")
train = theano.function([input_var, target_values], loss, updates=updates, allow_input_downcast=True)
compute_loss = theano.function([input_var, target_values], loss, allow_input_downcast=True)
# In order to generate text from the network, we need the probability distribution of the next character given
# the state of the network and the input (a seed).
# In order to produce the probability distribution of the prediction, we compile a function called probs.
probs = theano.function([input_var],network_output,allow_input_downcast=True)
generation_phrase = "The meaning of life is " #This phrase will be used as seed to generate text.
def predict(N=200):
"""Output a sequence of characters of length N according to the current model.
:type N: int
:param N: the number of characters to generate
"""
assert(len(generation_phrase)>=SEQ_LENGTH)
sample_ix = []
x,_ = iterate_minibatch(len(generation_phrase)-SEQ_LENGTH, 1, generation_phrase,0)
for i in range(N):
# Pick the character that got assigned the highest probability
# ix = np.argmax(probs(x).ravel())
# Alternatively, to sample from the distribution instead:
ix = np.random.choice(np.arange(vocab_size), p=probs(x).ravel())
sample_ix.append(ix)
x[:,0:SEQ_LENGTH-1,:] = x[:,1:,:]
x[:,SEQ_LENGTH-1,:] = 0
x[0,SEQ_LENGTH-1,sample_ix[-1]] = 1.
random_snippet = generation_phrase + ''.join(ix_to_char[ix] for ix in sample_ix)
print("----\n %s \n----" % random_snippet)
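The window update in predict() — shift the sequence left and append the newly sampled symbol — can be illustrated with plain lists (hypothetical helper, illustration only):

```python
def slide_window(window, new_index, vocab_size):
    # Drop the oldest one-hot row and append a fresh one for the sampled
    # character, like the x[:,0:SEQ_LENGTH-1,:] = x[:,1:,:] update above.
    new_row = [0.0] * vocab_size
    new_row[new_index] = 1.0
    return window[1:] + [new_row]

window = [[1.0, 0.0], [0.0, 1.0]]   # SEQ_LENGTH = 2, vocab_size = 2
window = slide_window(window, 0, 2)
print(window)  # [[0.0, 1.0], [1.0, 0.0]]
```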
print("Training ...")
print("Seed used for text generation is: " + generation_phrase)
p = 0
for it in range(int(data_size * NUM_EPOCHS / BATCH_SIZE)):
predict() # Generate text using the p^th character as the start.
avg_cost = 0;
for i in range(PRINT_FREQ):
x,y = iterate_minibatch(p)
p += SEQ_LENGTH + BATCH_SIZE - 1
if(p+BATCH_SIZE+SEQ_LENGTH >= data_size):
print('Carriage Return')
p = 0;
avg_cost += train(x, y)
print("Epoch {} average loss = {}".format(it*1.0*PRINT_FREQ/data_size*BATCH_SIZE, avg_cost / PRINT_FREQ))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using interact for animation with data
Step2: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step3: Compute a 2d NumPy array called phi
Step4: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step5: Use interact to animate the plot_soliton_data function versus time.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
from math import sqrt
def soliton(x, t, c, a):
# make x and t arrays (if not already)
if type(t) != np.ndarray:
t = np.array([t])
if type(x) != np.ndarray:
x = np.array([x])
#set "xct" to x-ct
#because x and t are both arrays, use the sum of meshgid to "add" them without worrying about array size
#this will give us a 1D array if t or x is a single-element array
xct = sum(np.meshgrid(-1*c*t, x))
#set a to be the same array size as xct
a = a*np.ones_like(xct)
# z is the array that represents the bracketed expression
z = (sqrt(c))/2*(xct-a)
# sech = 2/e**z + e**-z use np.exp to input each element
sech2z = (2/(np.exp(z) + np.exp(-z)))**2
answer = 1/2*c*sech2z
return(answer)
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
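A scalar sanity check of the same formula with only the standard-library math module (no NumPy): at x = t = 0 with a = 0 we have sech(0) = 1, so phi = c/2.

```python
import math

def soliton_scalar(x, t, c, a):
    # phi(x, t) = (c/2) * sech^2( sqrt(c)/2 * (x - c*t - a) )
    z = math.sqrt(c) / 2 * (x - c * t - a)
    sech = 2 / (math.exp(z) + math.exp(-z))
    return 0.5 * c * sech ** 2

print(soliton_scalar(0.0, 0.0, 1.0, 0.0))  # 0.5
# The wave only depends on x - c*t, so it translates without changing shape:
print(soliton_scalar(5.0, 5.0, 1.0, 0.0))  # also 0.5
```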
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
phi = soliton(x, t, c, a)
phi.shape
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
def plot_soliton_data(i=0):
#set the t to vary with [i]
#note t is set at 0 (i=0) so this plot is a graph of Phi v X with t=0
plt.plot(x, soliton(x, t[i], c, a))
#Graph styling
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xlabel("X", fontsize = 14)
plt.ylabel(r'$\phi$ (x, %s)' %(t[i]), fontsize = 16)
plt.title("Korteweg de Vries Equation", fontsize = 16)
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
interact(plot_soliton_data, i=(0,99));
assert True # leave this for grading the interact with plot_soliton_data cell
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: reading configuration
Step2: connecting, retrieving/sending messages
Step3: 1and1
Step4: Gmail
Step5: iTSCOM
Step6: WIP
|
<ASSISTANT_TASK:>
Python Code:
import maildaemon
cfg = maildaemon.load_config()
connections = cfg['connections']
from maildaemon.imap_daemon import IMAPDaemon
from maildaemon.smtp_connection import SMTPConnection
from maildaemon.pop_daemon import POPDaemon
one_and_one_imap = IMAPDaemon.from_dict(connections['1and1-imap'])
one_and_one_imap.connect()
one_and_one_imap.retrieve_folders_with_flags()
one_and_one_imap.retrieve_messages([1, 2, 3])
one_and_one_imap.disconnect()
one_and_one_smtp = SMTPConnection.from_dict(connections['1and1-smtp'])
#one_and_one_pop = POPDaemon.from_dict(connections['1and1-pop'])
one_and_one_smtp.connect()
#one_and_one_pop.connect()
one_and_one_smtp.disconnect()
#one_and_one_pop.disconnect()
gmail_imap = IMAPDaemon.from_dict(connections['gmail-imap'])
gmail_imap.connect()
gmail_imap.retrieve_folders_with_flags()
gmail_imap.open_folder()
gmail_imap.open_folder('[Gmail]/Sent Mail')
gmail_imap.open_folder('[Gmail]/All Mail')
gmail_imap.retrieve_message_ids()
gmail_imap.retrieve_message_parts(1, ['UID', 'FLAGS'])
gmail_imap.retrieve_messages_parts(range(1,100), ['ENVELOPE'], '[Gmail]/All Mail')
gmail_imap.retrieve_messages_parts(range(1,100), ['ENVELOPE'])
message_parts = gmail_imap.retrieve_message_parts(10, ['ENVELOPE'])
message_parts
gmail_imap.move_messages([10], '[Gmail]/Important')
def enable(self):
self._link.enable('MOVE')
enable(gmail_imap)
message = gmail_imap.retrieve_message(1)
message
gmail_imap.disconnect()
gmail_smtp = SMTPConnection.from_dict(connections['gmail-smtp'])
gmail_smtp.connect()
gmail_smtp.disconnect()
#itscom_smtp = SMTPConnection.from_dict(connections['itscom-smtp'])
itscom_pop = POPDaemon.from_dict(connections['itscom-pop'])
itscom_pop.connect()
itscom_pop.retrieve_message_ids()
itscom_pop.disconnect()
import email
envelope, body = message_parts
email_message = email.message_from_bytes(body) # type: email.message.Message
print(email_message.as_string())
email_message
email_message.defects
parser = email.parser.BytesParser()
msg = parser.parsebytes(body, headersonly=False)
print(msg.as_string())
msg.items() == email_message.items()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To use Numpy arrays we need to import the module.
Step2: Hm, we created the plot but where is it? Unlike matplotlib, PyNGL can't display inline plots, but IPython provides a solution for us.
Step3: That's really sparse. Next, we want to add a title above the plot and add the axis titles, too.
Step4: Display the grid lines of the coordinate system, too.
Step5: Change the line settings
Step6: To display two datasets in one xy-plot.
Step7: Uh, that's not what we want! We want to have two lines with different colors, lets say red and blue. And while we're at it, they should have two different dash pattern types.
Step8: NOTICE
Step9: We can distinguish them now but don't know which line is y1, y2, or y3. It is always good to have a legend and that's what we want to do next.
Step10: That is the default. Doesn't look very nice, does it? So, we should fix it up a bit.
Step11: The next example shows how to read a dataset from file.
Step12: The variable time is x and and the variable tsurf is y.
Step13: Hm, I would like to have the x-axis labels as dates and not as indices.
|
<ASSISTANT_TASK:>
Python Code:
import Ngl
wks = Ngl.open_wks('png', 'plot_xy')
import numpy as np
x = np.arange(0,5)
y = np.arange(0,10,2)
plot = Ngl.xy(wks, x, y)
from IPython.display import Image
Image(filename='plot_xy.png', retina=True)
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.tiXAxisString = 'x-axis title string'
res.tiYAxisString = 'y-axis title string'
plot = Ngl.xy(wks, x, y, res)
Image(filename='plot_xy.png', retina=True)
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.tiXAxisString = 'x-axis title string'
res.tiYAxisString = 'y-axis title string'
res.tmXMajorGrid = True
res.tmYMajorGrid = True
plot = Ngl.xy(wks, x, y, res)
Image(filename='plot_xy.png', retina=True)
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.tiXAxisString = 'x-axis title string'
res.tiYAxisString = 'y-axis title string'
res.tmXMajorGrid = True
res.tmYMajorGrid = True
res.xyLineColor = 'red'
res.xyDashPattern = 3 # -- - -- - --
res.xyLineThicknessF = 5
plot = Ngl.xy(wks, x, y, res)
Image(filename='plot_xy.png', retina=True)
y1 = np.array([0,3,6,1,4])
y2 = np.array([2,5,3,2,7])
data = np.array([y1,y2])
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.xyLineColors = ['red','blue']
res.xyDashPatterns = [3,16]
res.xyLineThicknesses = [5,3]
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
y1 = np.array([0,3,6,1,4])
y2 = np.array([2,5,3,2,7])
y3 = np.array([1,1,2,3,5])
data = np.array([y1,y2,y3])
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.xyLineColors = ['red','blue','green']
res.xyDashPatterns = [3,16,0]
res.xyLineThicknesses = [5,3,5]
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.xyLineColors = ['red','blue','green']
res.xyDashPatterns = [3,16,0]
res.xyLineThicknesses = [5,3,5]
res.pmLegendDisplayMode = "Always" #-- turn on the drawing
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.xyLineColors = ['red','blue','green']
res.xyDashPatterns = [3,16,0]
res.xyLineThicknesses = [5,3,5]
res.xyExplicitLegendLabels = ["y1","y2","y3"]
res.pmLegendDisplayMode = "Always" #-- turn on the legend drawing
res.pmLegendOrthogonalPosF = -1.0 #-- move the legend upward
res.pmLegendParallelPosF = 0.17 #-- move the legend rightward
res.pmLegendWidthF = 0.15 #-- increase width
res.pmLegendHeightF = 0.10 #-- increase height
res.lgPerimOn = False #-- turn off the perimeter
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
import xarray as xr
ds = xr.open_dataset('./data/tsurf_fldmean.nc')
tsurf = ds.tsurf
time = np.arange(0,len(ds.time),1)
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy_tsurf.png')
res = Ngl.Resources()
res.tiMainString = 'Variable tsurf'
res.tiXAxisString = 'time'
res.tiYAxisString = tsurf.long_name
plot = Ngl.xy(wks, time, tsurf[:,0,0].values, res)
Image(filename='plot_xy_tsurf.png', retina=True)
import datetime
ntime = len(ds.time)
years = ds.time.dt.year.values
months = ds.time.dt.month.values
days = ds.time.dt.day.values
date_labels = [datetime.date(years[i],months[i],days[i]) for i in range(0,ntime)]
date_labels = list(np.array(date_labels,dtype='str'))
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy_tsurf.png')
res = Ngl.Resources()
res.tiMainString = 'Variable tsurf'
res.tiXAxisString = 'time'
res.tiYAxisString = tsurf.long_name
res.tmXBMode = 'Explicit' #-- use explicit values
res.tmXBValues = time[::4] #-- use the new x-values array
res.tmXBLabels = date_labels[::4] #-- use the new x-values array as labels
res.tmXBLabelFontHeightF = 0.008
res.tmXBLabelAngleF = 45
res.tmXBMinorOn = False #-- turn off minor tickmark
plot = Ngl.xy(wks, time, tsurf[:,0,0].values, res)
Image(filename='plot_xy_tsurf.png', retina=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: sorted() is not limited to lists; it builds and returns a new sorted list, leaving the original object unchanged:
Step2: Although this is not in-place sorting, a generator passed in will still be exhausted by the iteration:
Step3: Key
Step4: Alternatively, when the elements of the iterable are more complex, you can sort by just some of their attributes:
Step5: Python's standard operator module provides operator-related helpers that make it easier to fetch an element's items or attributes:
Step6: After the key transformation, pairs of elements are compared with the < operator. In Python 2.7, sorted() additionally accepted a cmp argument to take over the < comparison, but Python 3.5 removed this entirely, both the cmp parameter of sorted() and the __cmp__ method on objects. cmp is only likely to be needed in Python 3.5 for backward compatibility, and its replacement is:
|
<ASSISTANT_TASK:>
Python Code:
from random import randrange
lst = [randrange(1, 100) for _ in range(10)]
print(lst)
lst.sort()
print(lst)
lst = [randrange(1, 100) for _ in range(10)]
tup = tuple(lst)
print(sorted(tup)) # return List
print(tup)
tup = (randrange(1, 100) for _ in range(10))
print(sorted(tup))
for i in tup:
print(i)
lst = [randrange(-10, 10) for _ in range(10)]
print(lst)
print(sorted(lst, key=abs))
lst = list(zip("hello world hail python".split(), [randrange(1, 10) for _ in range(4)]))
print(lst)
print(sorted(lst, key=lambda item: item[1]))
from operator import itemgetter, attrgetter
print(lst)
print(sorted(lst, key=itemgetter(1)))
# everything here is just functions
fitemgetter = lambda ind: lambda item: item[ind]
print(sorted(lst, key=fitemgetter(1)))
class P(object):
def __init__(self, w, n):
self.w = w
self.n = n
def __repr__(self):
return "{}=>{}".format(self.w, self.n)
ps = [P(i[0], i[1]) for i in lst]
print(sorted(ps, key=attrgetter('n')))
from functools import cmp_to_key as new_cmp_to_key
# new_cmp_to_key works like this
def cmp_to_key(mycmp):
'Convert a cmp= function into a key= function'
class K:
def __init__(self, obj, *args):
self.obj = obj
def __lt__(self, other):
return mycmp(self.obj, other.obj) < 0
return K
def reverse_cmp(x, y):
return y[1] - x[1]
sorted(lst, key=cmp_to_key(reverse_cmp))
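# Quick standalone sanity check (illustrative; the underscore-prefixed names are
# hypothetical, not part of the original task): the stdlib functools.cmp_to_key
# produces the same ordering as the hand-rolled K class above for a descending
# sort on the second tuple item.
from functools import cmp_to_key as _std_cmp_to_key
_pairs = [('a', 3), ('b', 1), ('c', 2)]
_by_second_desc = sorted(_pairs, key=_std_cmp_to_key(lambda x, y: y[1] - x[1]))
# _by_second_desc == [('a', 3), ('c', 2), ('b', 1)]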
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With Spaces
Step2: Do the loop and calculate the scores
Step3: Now without spaces
Step4: Do the loop and calculate the scores
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn import svm, cross_validation
import h5py
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import sys
sys.path.append("../")
# Data to use
Ndata = 10000
# First we load the file
file_location = '../results_database/text_wall_street_columns.hdf5'
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
targets = []
for index in range(Ndata):
letter_index = index // 10
targets.append(letters_sequence[letter_index])
# Transform to array
targets = np.array(targets)
# Calculate the predictions
Ntime_clusters_set = np.arange(3, 50, 3)
scores_mixed = []
scores_indp = []
# Nexa parameters
Nspatial_clusters = 3
Nembedding = 3
for Ntime_clusters in Ntime_clusters_set:
print(Ntime_clusters)
# Here calculate the scores for the mixes
run_name = '/test'
f = h5py.File(file_location, 'r')
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
code_vectors_softmax = np.array(nexa['code-vectors-softmax'])
# Now we need to classify
X = code_vectors_softmax[:Ndata]
y = targets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
scores_mixed.append(score)
# Here calculate the scores for the independent
run_name = '/indep'
f = h5py.File(file_location, 'r')
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
code_vectors_softmax = np.array(nexa['code-vectors-softmax'])
# Now we need to classify
X = code_vectors_softmax[:Ndata]
y = targets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
scores_indp.append(score)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(Ntime_clusters_set, scores_indp, 'o-', label='independent', lw=2, markersize=10)
ax.plot(Ntime_clusters_set, scores_mixed, 'o-', label='mixed', lw=2, markersize=10)
ax.set_ylim(0, 105)
ax.set_ylabel('Accuracy')
ax.set_xlabel('Number of Data Clusters')
ax.set_title('Accuracy vs Number of Data Clusters for different features')
ax.legend()
# Data to use
Ndata = 10000
# First we load the file
file_location = '../results_database/text_wall_street_columns_spaces.hdf5'
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters_spaces.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
targets = []
for index in range(Ndata):
letter_index = index // 10
targets.append(letters_sequence[letter_index])
# Transform to array
targets = np.array(targets)
# Calculate the predictions
Ntime_clusters_set = np.arange(3, 50, 3)
scores_mixed_wspaces = []
scores_indp_wspaces = []
# Nexa parameters
Nspatial_clusters = 3
Nembedding = 3
for Ntime_clusters in Ntime_clusters_set:
print(Ntime_clusters)
# Here calculate the scores for the mixes
run_name = '/test'
f = h5py.File(file_location, 'r')
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
code_vectors_softmax = np.array(nexa['code-vectors-softmax'])
# Now we need to classify
X = code_vectors_softmax[:Ndata]
y = targets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
scores_mixed_wspaces.append(score)
# Here calculate the scores for the independent
run_name = '/indep'
f = h5py.File(file_location, 'r')
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
code_vectors_softmax = np.array(nexa['code-vectors-softmax'])
# Now we need to classify
X = code_vectors_softmax[:Ndata]
y = targets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
scores_indp_wspaces.append(score)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(Ntime_clusters_set, scores_indp_wspaces, 'o-', label='independent', lw=2, markersize=10)
ax.plot(Ntime_clusters_set, scores_mixed_wspaces, 'o-', label='mixed', lw=2, markersize=10)
ax.set_ylim(0, 105)
ax.set_ylabel('Accuracy')
ax.set_xlabel('Number of Data Clusters')
ax.set_title('Accuracy vs Number of Data Clusters for different features (Without Spaces)')
ax.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/divina_commedia.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
#text = text[81:]
view_sentence_range = (0, 10)
# DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
words_ordered = sorted(set(text))
# TODO: Implement Function
vocab_to_int = {word: index for index, word in enumerate(words_ordered)}
int_to_vocab = {index: word for index, word in enumerate(words_ordered)}
return vocab_to_int, int_to_vocab
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
# TODO: Implement Function
token_dict = dict()
token_dict['.'] = "||Period||"
token_dict[','] = "||Comma||"
token_dict['"'] = "||Quotation_Mark||"
token_dict[';'] = "||Semicolon||"
token_dict['!'] = "||Exclamation_Mark||"
token_dict['?'] = "||Question_Mark||"
token_dict['('] = "||Left_Parentheses||"
token_dict[')'] = "||Right_Parentheses||"
token_dict['--'] = "||Dash||"
token_dict['\n'] = "||Return||"
return token_dict
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
# DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
# DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=(None, None), name="input")
targets = tf.placeholder(tf.int32, shape=(None,None), name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return inputs, targets, learning_rate
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
# TODO: Implement Function
lstm_layers = 2 #Need to pass test?! (otherwise final_state shape will be wrong)
lstm = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# print(initial_state)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
# TODO: Implement Function
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, ids=input_data)
return embed
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
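# Standalone numpy analogue of tf.nn.embedding_lookup (illustrative sketch, not
# part of the graded code; the underscore-prefixed names are hypothetical):
# indexing a (vocab_size, embed_dim) matrix with integer token ids selects one
# embedding row per token, which is all the lookup does.
import numpy as np
_emb_matrix = np.arange(12.0).reshape(4, 3)   # vocab_size=4, embed_dim=3
_token_ids = np.array([2, 0])
_looked_up = _emb_matrix[_token_ids]          # shape (2, 3): row 2, then row 0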
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
# TODO: Implement Function
#print(cell)
#print(inputs)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
# Shape is lstm_layers x 2 (inputs and targets) x None (batch_size) x lstm_units
#print(final_state)
return outputs, final_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim=embed_dim)
# outputs shape is batch_size x seq_len x lstm_units
outputs, final_state = build_rnn(cell, inputs=embed)
#print(outputs.shape)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
# logits shape is batch_size x seq_len x vocab_size
#print(logits.shape)
return logits, final_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
#print("Batch_size: " + str(batch_size))
#print("Seq length: " + str(seq_length))
# Consider that targets is shifted by 1
num_batches = len(int_text)//(batch_size * seq_length + 1)
#print("Num batches: " + str(num_batches))
#print("Text length: " + str(len(int_text)))
batches = np.zeros(shape=(num_batches, 2, batch_size, seq_length), dtype=np.int32)
#print(batches.shape)
# TODO: Add a smarter check
for batch_index in range(0, num_batches):
for in_batch_index in range(0, batch_size):
start_x = (batch_index * seq_length) + (seq_length * num_batches * in_batch_index)
start_y = start_x + 1
x = int_text[start_x : start_x + seq_length]
y = int_text[start_y : start_y + seq_length]
#print("batch_index: " + str(batch_index))
#print("in_batch_index: " + str(in_batch_index))
#print("start_x: " + str(start_x))
#print(x)
batches[batch_index][0][in_batch_index] = np.asarray(x)
batches[batch_index][1][in_batch_index] = np.asarray(y)
#print(batches)
return batches
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# FINAL LOSS: 0.213 - Seq length 20, LR 0.001, Epochs 250
# Number of Epochs
num_epochs = 250
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 99
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
# DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
# probs shape is batch_size x seq_len x vocab_size
probs = tf.nn.softmax(logits, name='probs')
#print(probs.shape)
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
# DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
# x and y shapes are batch_size x seq_len
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
# DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
# DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
# TODO: Implement Function
return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), \
loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0")
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
#print(vocab_to_int)
gen_length = 200
#prime_word = 'Inferno: Canto I'
prime_word = 'vuolsi'
prime_word = str.lower(prime_word)
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = prime_word.split()
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hat potential
Step2: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$
Step3: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
def hat(x,a,b):
return -a*x**2+b*x**4
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
a = 5.0
b = 1.0
x=np.linspace(-3,3,100)
plt.figure(figsize=(9,6))
plt.xlabel('Range'), plt.ylabel('V(x)'), plt.title('Hat Potential')
plt.plot(x, hat(x,a,b))
plt.box(False)
plt.grid(True)
plt.tick_params(axis='x', top='off', direction='out')
plt.tick_params(axis='y', right='off', direction='out');
assert True # leave this to grade the plot
res1 = opt.minimize_scalar(hat, bounds=(-3,0), args=(a,b), method='bounded')
res2 = opt.minimize_scalar(hat, bounds=(0,3), args=(a,b), method='bounded')
print('Local minima: %f, %f' % (res1.x, res2.x))
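# Analytic cross-check (illustrative, not required by the assignment): setting
# dV/dx = -2*a*x + 4*b*x**3 = 0 gives nonzero roots x = +/- sqrt(a/(2*b)), so
# with a=5.0, b=1.0 the minima sit at +/- sqrt(2.5) ~= +/-1.5811, which the
# bounded minimize_scalar results above should reproduce.
_x_min_analytic = (5.0 / (2.0 * 1.0)) ** 0.5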
plt.figure(figsize=(9,6))
plt.xlabel('Range'), plt.ylabel('V(x)')
plt.plot(x, hat(x,a,b), label="Potential")
plt.scatter(res1.x, res1.fun, marker="o", color="r")
plt.scatter(res2.x, res2.fun, marker="o", color="r")
plt.title('Finding Local Minima of Hat Potential')
plt.box(False), plt.grid(True), plt.xlim(-2.5,2.5), plt.ylim(-8,4)
plt.tick_params(axis='x', top='off', direction='out')
plt.tick_params(axis='y', right='off', direction='out')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.);
assert True # leave this for grading the plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To create a transmission line of 100um
Step2: More detailed examples illustrating how to create various kinds of Media
Step3: For the purpose of microwave network analysis, the defining properties of a (single moded) transmission line are it's characteristic impedance and propagation constant. These properties return complex numpy.ndarray's, A port impedance is also needed when different networks are connected.
Step4: The port impedance is given by z0 (lower z). Which we set to 1, just to illustrate how this works. The port impedance is used to compute impedance mismatched if circuits of different port impedance are connected.
Step5: The propagation constant is given by gamma
Step6: Lets take a look at some other Media's
Step7: Simulate a 1cm slab of Si in half-space,
Step8: Rectangular Waveguide
Step9: The z0 argument in the Rectangular Waveguide constructor is used
Step10: Because the wave quantities are dynamic they change when the attributes
Step11: Network Synthesis
Step12: Or to create a $90^{\circ}$ section of cpw line,
Step13: Building Circuits
Step14: When Networks with more than 2 ports need to be connected together, use
Step15: Adding networks in shunt is pretty common, so there is a Media.shunt() function to do this,
Step16: If a specific circuit is created frequently, it may make sense to
Step17: A more useful example may be to create a function for a shunt-stub tuner,
Step18: This approach lends itself to design optimization.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import skrf as rf
rf.stylely()
from skrf import Frequency
from skrf.media import CPW
freq = Frequency(75,110,101,'ghz')
cpw = CPW(freq, w=10e-6, s=5e-6, ep_r=10.6)
cpw
cpw.line(100*1e-6, name = '100um line')
freq = Frequency(75,110,101,'ghz')
cpw = CPW(freq, w=10e-6, s=5e-6, ep_r=10.6, z0 =1)
cpw
cpw.Z0[:3]
cpw.z0[:3]
cpw.gamma[:3]
from skrf.media import Freespace
freq = Frequency(10,20,101,'ghz')
air = Freespace(freq)
air
air.z0[:2] # 377ohm baby!
# plane wave in Si
si = Freespace(freq, ep_r = 11.2)
si.z0[:3] # ~110ohm
slab = air.thru() ** si.line(1, 'cm') ** air.thru()
slab.plot_s_db(n=0)
from skrf.media import RectangularWaveguide
freq = Frequency(75,110,101,'ghz')
wg = RectangularWaveguide(freq, a=100*rf.mil, z0=50) # see note below about z0
wg
air = Freespace(freq)
from matplotlib import pyplot as plt
air.plot(air.gamma.imag, label='Freespace')
wg.plot(wg.gamma.imag, label='WR10')
plt.ylabel('Propagation Constant (rad/m)')
plt.legend()
for ep_r in [9,10,11]:
cpw.ep_r = ep_r
cpw.frequency.plot(cpw.beta, label='er=%.1f'%ep_r)
plt.xlabel('Frequency [GHz]')
plt.ylabel('Propagation Constant [rad/m]')
plt.legend()
wg.short(name = 'short')
cpw.line(d=90,unit='deg', name='line')
delay_short = wg.line(d=90,unit='deg') ** wg.short()
delay_short.name = 'delay short'
delay_short
tee = cpw.tee()
delay_open = cpw.delay_open(40,'deg')
shunt_open = rf.connect(tee,1,delay_open,0)
cpw.shunt(delay_open)
delay_short = lambda d: wg.line(d,'deg')**wg.short()
delay_short(90)
def shunt_stub(med, d0, d1):
return med.line(d0,'deg')**med.shunt_delay_open(d1,'deg')
shunt_stub(cpw,10,90)
from scipy.optimize import fmin
# the load we are trying to match
load = cpw.load(.2+.2j)
# single stub circuit generator function
def shunt_stub(med, d0, d1):
return med.line(d0,'deg')**med.shunt_delay_open(d1,'deg')
# define the cost function we want to minimize (this uses sloppy namespace)
def cost(d):
# prevent negative length lines, returning high cost
if d[0] <0 or d[1] <0:
return 1e3
return (shunt_stub(cpw,d[0],d[1]) ** load)[100].s_mag.squeeze()
# initial guess of optimal delay lengths in degrees
d0 = 120,40 # initial guess
#determine the optimal delays
d_opt = fmin(cost,(120,40))
d_opt
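# Standalone reminder of the matching math behind the cost function
# (illustrative sketch; the _gamma helper is hypothetical, not part of skrf's
# API): for a load impedance Z_L on a port of impedance Z0, the reflection
# coefficient is Gamma = (Z_L - Z0) / (Z_L + Z0), and the stub-tuner
# optimization above is driving |Gamma| toward zero at the design frequency.
def _gamma(z_load, z0=1.0):
    return (z_load - z0) / (z_load + z0)
# A matched load (Z_L == Z0) reflects nothing: _gamma(1.0) == 0.0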
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
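Every documentation cell in this notebook follows the same two-call pattern: `DOC.set_id(...)` selects the CMIP6 property, then `DOC.set_value(...)` records an entry for it (properties marked "VALUE(S)" accept repeated calls). A minimal stand-in class, purely for illustration — the real `NotebookOutput` lives in `pyesdoc.ipython.model_topic` and additionally validates values against the controlled vocabulary — sketches how the recorded values accumulate:

```python
class MockOutput:
    """Illustrative stand-in for pyesdoc's NotebookOutput (not the real API)."""

    def __init__(self):
        self._current_id = None   # property selected by the last set_id call
        self.values = {}          # property id -> list of recorded values

    def set_id(self, property_id):
        # Select which CMIP6 property the following set_value call(s) document.
        self._current_id = property_id

    def set_value(self, value):
        # Multi-valued properties are filled by calling set_value repeatedly,
        # so accumulate values in a list per property id.
        self.values.setdefault(self._current_id, []).append(value)


doc = MockOutput()
doc.set_id('cmip6.ocnbgchem.key_properties.model_type')
doc.set_value("NPZD")  # one of the valid choices listed in the cell above
```

The real class also enforces that quoted values match the listed valid choices exactly (including their spelling), which is why the choice strings in the comments should be copied verbatim.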
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>