| text_prompt (string, 168 to 30.3k chars) | code_prompt (string, 67 to 124k chars) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lab Task #1
Step2: Check data exists
Step3: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
Step4: We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.
Step5: In the same way, we can write the model that we developed in the previous notebooks to the file model.py.
Step6: Train locally
Step7: Training on Cloud AI Platform
Step8: The training job should complete within 10 to 15 minutes. You do not need to wait for this training job to finish before moving forward in the notebook, but you will need a trained model to complete the next lab.
Step9: Build and push container image to repo
Step10: Note
Step11: Kindly ignore the incompatibility errors.
Step12: Lab Task #5
Step13: When I ran it, I used train_examples=2000000. When training finished, I filtered the Stackdriver logs for the word "dict" and saw that the last line was
Step14: Repeat training
|
<ASSISTANT_TASK:>
Python Code:
import os
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "${PROJECT}
# TODO: Change these to try this notebook out
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
os.environ["PYTHONVERSION"] = "3.7"
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*000000000000.csv
%%bash
mkdir -p babyweight/trainer
touch babyweight/trainer/__init__.py
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from trainer import model
import tensorflow as tf
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk"
)
parser.add_argument(
"--train_data_path",
help="GCS location of training data",
required=True
)
parser.add_argument(
"--eval_data_path",
help="GCS location of evaluation data",
required=True
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True
)
parser.add_argument(
"--batch_size",
help="Number of examples to compute gradient over.",
type=int,
default=512
)
# TODO: Add nnsize argument
# TODO: Add nembeds argument
# TODO: Add num_epochs argument
# TODO: Add train_examples argument
# TODO: Add eval_steps argument
# Parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# Unused args provided by service
arguments.pop("job_dir", None)
arguments.pop("job-dir", None)
# Modify some arguments
arguments["train_examples"] *= 1000
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
arguments["output_dir"] = os.path.join(
arguments["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
# Run the training job
model.train_and_evaluate(arguments)
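# For orientation only: a minimal sketch of the TODO arguments above. The names
# match how they are consumed in model.py and in the gcloud calls below, but the
# defaults are illustrative assumptions, not the graded lab solution. Note that
# the train_examples argument must exist for the *= 1000 line above to run
# without a KeyError.
#
#     parser.add_argument(
#         "--nnsize", help="Hidden layer sizes for the DNN",
#         nargs="+", type=int, default=[64, 32])
#     parser.add_argument(
#         "--nembeds", help="Embedding size for categorical crosses",
#         type=int, default=3)
#     parser.add_argument(
#         "--num_epochs", help="Number of epochs to train for",
#         type=int, default=10)
#     parser.add_argument(
#         "--train_examples",
#         help="Number of examples (in thousands) to train on",
#         type=int, default=5000)
#     parser.add_argument(
#         "--eval_steps", help="Number of evaluation batches to run",
#         type=int, default=None)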
%%writefile babyweight/trainer/model.py
import datetime
import os
import shutil
import numpy as np
import tensorflow as tf
# Determine CSV, label, and key columns
# TODO: Add CSV_COLUMNS and LABEL_COLUMN
# Set default values for each CSV column.
# Treat is_male and plurality as strings.
# TODO: Add DEFAULTS
def features_and_labels(row_data):
# TODO: Add your code here
pass
def load_dataset(pattern, batch_size=1, mode='eval'):
# TODO: Add your code here
pass
def create_input_layers():
# TODO: Add your code here
pass
def categorical_fc(name, values):
# TODO: Add your code here
pass
def create_feature_columns(nembeds):
# TODO: Add your code here
pass
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
# TODO: Add your code here
pass
def rmse(y_true, y_pred):
# TODO: Add your code here
pass
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
# TODO: Add your code here
pass
def train_and_evaluate(args):
model = build_wide_deep_model(args["nnsize"], args["nembeds"])
print("Here is our Wide-and-Deep architecture so far:\n")
print(model.summary())
trainds = load_dataset(
args["train_data_path"],
args["batch_size"],
'train')
evalds = load_dataset(
args["eval_data_path"], 1000, 'eval')
if args["eval_steps"]:
evalds = evalds.take(count=args["eval_steps"])
num_batches = args["batch_size"] * args["num_epochs"]
steps_per_epoch = args["train_examples"] // num_batches
checkpoint_path = os.path.join(args["output_dir"], "checkpoints/babyweight")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path, verbose=1, save_weights_only=True)
history = model.fit(
trainds,
validation_data=evalds,
epochs=args["num_epochs"],
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[cp_callback])
EXPORT_PATH = os.path.join(
args["output_dir"], datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
%%bash
OUTDIR=babyweight_trained
rm -rf ${OUTDIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python3 -m trainer.task \
--job-dir=./tmp \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--batch_size=# TODO: Add batch size
--num_epochs=# TODO: Add the number of epochs to train for
--train_examples=# TODO: Add the number of examples to train each epoch for
--eval_steps=# TODO: Add the number of evaluation batches to run
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
gcloud ai-platform jobs submit training ${JOBID} \
--region=${REGION} \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=${OUTDIR} \
--staging-bucket=gs://${BUCKET} \
--master-machine-type=n1-standard-8 \
--scale-tier=CUSTOM \
--runtime-version=${TFVERSION} \
--python-version=${PYTHONVERSION} \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=10000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
%%writefile babyweight/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY trainer /babyweight/trainer
RUN apt update && \
apt install --yes python3-pip && \
pip3 install --upgrade --quiet tensorflow==2.1 && \
pip3 install --upgrade --quiet cloudml-hypertune
ENV PYTHONPATH ${PYTHONPATH}:/babyweight
ENTRYPOINT ["python3", "babyweight/trainer/task.py"]
%%writefile babyweight/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
export IMAGE_URI=gcr.io/${PROJECT_ID}/${IMAGE_REPO_NAME}
echo "Building $IMAGE_URI"
docker build -f Dockerfile -t ${IMAGE_URI} ./
echo "Pushing $IMAGE_URI"
docker push ${IMAGE_URI}
%%bash
cd babyweight
bash push_docker.sh
%%bash
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
export IMAGE_URI=gcr.io/${PROJECT_ID}/${IMAGE_REPO_NAME}
echo "Running $IMAGE_URI"
docker run ${IMAGE_URI} \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=gs://${BUCKET}/babyweight/trained_model \
--batch_size=10 \
--num_epochs=10 \
--train_examples=1 \
--eval_steps=1
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
# gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBID} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=# TODO: Add path to training data in GCS
--eval_data_path=# TODO: Add path to evaluation data in GCS
--output_dir=${OUTDIR} \
--num_epochs=# TODO: Add the number of epochs to train for
--train_examples=# TODO: Add the number of examples to train each epoch for
--eval_steps=# TODO: Add the number of evaluation batches to run
--batch_size=# TODO: Add batch size
--nembeds=# TODO: Add number of embedding dimensions
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: # TODO: Add metric we want to optimize
goal: # TODO: MAXIMIZE or MINIMIZE?
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: # TODO: What datatype?
minValue: # TODO: Choose a min value
maxValue: # TODO: Choose a max value
scaleType: # TODO: UNIT_LINEAR_SCALE or UNIT_LOG_SCALE?
- parameterName: nembeds
type: # TODO: What datatype?
minValue: # TODO: Choose a min value
maxValue: # TODO: Choose a max value
scaleType: # TODO: UNIT_LINEAR_SCALE or UNIT_LOG_SCALE?
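# One plausible completion of the TODOs above (illustrative assumptions, not
# the graded answer): the model reports rmse, which we want to MINIMIZE, and
# batch_size/nembeds are INTEGER parameters, e.g.
#   hyperparameterMetricTag: rmse
#   goal: MINIMIZE
#   - parameterName: batch_size
#     type: INTEGER
#     minValue: 8
#     maxValue: 512
#     scaleType: UNIT_LOG_SCALE
#   - parameterName: nembeds
#     type: INTEGER
#     minValue: 3
#     maxValue: 30
#     scaleType: UNIT_LINEAR_SCALE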
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBNAME}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBNAME} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
--# TODO: Add config for hyperparam.yaml
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBNAME}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBNAME} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Still Python, but inside a Cython cell
Step2: Cython
Step3: Example of a while loop
Step6: Application
Step7: Quick demo
Step8: Speed test - compare Cython and NumPy
|
<ASSISTANT_TASK:>
Python Code:
def p(n, m):
output = 0
for i in range(n):
output += i % m
return output
%timeit p(1000000, 42)
%%cython
def f(n, m):
output = 0
for i in range(n):
output += i % m
return output
%timeit f(1000000, 42)
%%cython
def c(int n):
cdef int i, output = 0  # initialize output; C ints are not auto-zeroed
for i in range(n):
output += i % 42
return output
%timeit c(1000000)
%%cython
def w(int n):
cdef int i = 0, output = 0  # initialize both; C ints are not auto-zeroed
while i < n:
output += i % 42
i += 1
return output
%timeit w(1000000)
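# Quick sanity check (illustrative): all four variants compute the same sum
assert p(10000, 42) == f(10000, 42) == c(10000) == w(10000)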
%%cython
cimport cython
# This code was translated to Cython from a
# wikipedia article at:
#
# https://en.wikipedia.org/wiki/Xorshift
#
# "Xorshift random number generators are a
# class of pseudorandom number generators that
# was discovered by George Marsaglia.[1] They
# generate the next number in their sequence
# by repeatedly taking the exclusive or of a
# number with a bit shifted version of itself.
# This makes them extremely fast on modern
# computer architectures"
#
# "A naive C implementation of a xorshift+ generator
# that passes all tests from the BigCrush suite
# (with an order of magnitude fewer failures than
# Mersenne Twister or WELL) typically takes fewer
# than 10 clock cycles on x86 to generate a random
# number, thanks to instruction pipelining."
cdef unsigned long long s[2] # Seed: initialize to nonzero
cdef inline unsigned long long xorshiftplus():
"""Direct translation from the Wikipedia page."""
cdef unsigned long long x = s[0]
cdef unsigned long long y = s[1]
s[0] = y
x ^= x << 23 # a
x ^= x >> 17 # b
x ^= y^(y>>26) # c
s[1] = x
return x + y
@cython.boundscheck(False)
def random_array(unsigned long long[:] output):
"""Array must already be sized."""
s[0] = 1 # Set the seed
s[1] = 2
cdef int i, n = output.shape[0]
for i in range(n):
output[i] = xorshiftplus()
# Create storage for our 8-byte random numbers
import numpy
output = numpy.zeros(10, dtype=numpy.uint64)
random_array(output)
print(output)
n = int(1e8) # 100 million random numbers
output = numpy.zeros(n, dtype=numpy.uint64)
%timeit random_array(output)
%timeit y = numpy.random.randint(low=0, high=2**31 - 1, size=n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1 (clustering)
Step1: Do some preprocessing to group the data by 'Anon Student Id' and extract features for further analysis
Step2: Note to reviewers
Step3: Reading from scratch instead
Step4: Making a backup of stud_data in HDF5 format
Step5: Clustering analysis
Step6: However, the benchmark model chooses only 14 students (out of 8,980) for a "gaming" cluster
Step7: Clustering
Step8: Choose the pair of columns with the best score
Step9: Visualising the KMeans clusters
Step10: Then, consider adding one more column to further increase the score
Step11: As expected, the pair ['frac_1s_hints', 'max_probl_views'] still gives the best score.
Step12: GaussianMixture
Step13: AgglomerativeClustering
Step14: Visualising the AgglomerativeClustering clusters
Step15: Further clustering of the obtained KMeans groups
Step16: Visualise the obtained clusters
Step17: As we see, group 01 contains more students with "gaming" behaviour, so I proceed with group 00
Step18: So, there is a subgroup 001 of 1109 students that do not use many hints. What about the rest (000, 6186 students)?
Step19: Splitting group 0000 (students with large 'num_sess' and 'num_probs')
Step20: As we see, these two groups represent students with "intermediate experience" (00000) and "largest experience" (00001).
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import time
import gzip
import shutil
import seaborn as sns
from collections import Counter
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans, MeanShift, estimate_bandwidth, AgglomerativeClustering
from sklearn.metrics import silhouette_score #, make_scorer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.neighbors import kneighbors_graph
def hdf_fixed_write_compress(df):
df.to_hdf('data1-step1.hdf','test',mode='w',complib='blosc')
return
def hdf_fixed_read_compress():
df = pd.read_hdf('data.hdf','test')
return df
with gzip.open('data1.hdf.gz', 'rb') as f_in, open('data.hdf', 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
!ls -lh data.hdf
data = hdf_fixed_read_compress()
data.head()
def prepare_stud_data_new(df):
start_time = time.time()
stud_list = df['Anon Student Id'].unique()
cols=['num_sess', \
'num_days', \
'num_probs', \
'num_atts', \
'num_hints', \
'frac_corr_atts', \
'frac_3s_atts', \
'frac_1s_hints', \
'time_atts', \
'time_hints', \
'max_probl_views', \
'max_atts']
numbers = []
#stud_data = pd.DataFrame(columns=cols)
stud_info_df = pd.DataFrame()
i = 0
for stud_name in stud_list:
stud_info_df = df[df['Anon Student Id'] == stud_name].copy()
# total number of days loading the system
num_days = len(set(stud_info_df['Day']))
# total number of sessions opened
num_sessions = len(set(stud_info_df['Session Id']))
# total number of problems entered
num_problems = len(set(stud_info_df['Problem Name']))
# total number of attempts made by the student
num_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0].shape[0]
# total number of hints made by the student
num_hints = stud_info_df[stud_info_df['Student Response Type'] == 1].shape[0]
# fraction of short attemps (with time <= 3 sec)
if (num_attempts > 0):
frac_3s_atts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Duration (sec)'] <= 3.0)].shape[0] / num_attempts
else:
frac_3s_atts = 0
# fraction of short hints (with time <= 1 sec)
if (num_hints > 0):
frac_1s_hints = stud_info_df[(stud_info_df['Student Response Type'] == 1) & (stud_info_df['Duration (sec)'] <= 1.0)].shape[0] / num_hints
else:
frac_1s_hints = 0
# fraction of correct attempts
if (num_attempts > 0):
fraction_correct_attempts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Outcome'] == 0)].shape[0] / num_attempts
else:
fraction_correct_attempts = 0
# total number of time spent for attempts (in seconds)
total_time_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0]['Duration (sec)'].sum()
# total number of time spent for hints (in seconds)
total_time_hints = stud_info_df[stud_info_df['Student Response Type'] == 1]['Duration (sec)'].sum()
# averaged maximal numbers of 'Problem View'
avg_max_problem_views = stud_info_df[['Problem Name', 'Problem View']].groupby(['Problem Name']).agg(np.max).mean()[0]
# averaged maximal number of attempts ('x')
avg_max_attempts = stud_info_df[['Problem Name', 'x']].groupby(['Problem Name']).agg(np.max).mean()[0]
stud_name = i # assign unique numerical ID to each student
if num_attempts != 0:
avg_time_att = total_time_attempts / num_attempts
else:
avg_time_att = 0
if num_hints != 0:
avg_time_hint = total_time_hints / num_hints
else:
avg_time_hint = 0
numbers.append([num_sessions, \
num_days, \
num_problems, \
num_attempts, \
num_hints, \
fraction_correct_attempts, \
frac_3s_atts, \
frac_1s_hints, \
total_time_attempts, \
total_time_hints, \
avg_max_problem_views, \
avg_max_attempts])
print("\r\t>>> Progress\t:{:.4%}".format((i + 1)/len(stud_list)), end='')
i += 1
stud_data = pd.DataFrame(data=numbers, columns=cols)
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
return stud_data
#stud_data = prepare_stud_data_new(data.head(500000).copy())
#stud_data = prepare_stud_data_new(data.copy())
stud_data = pd.read_hdf('stud_data.hdf','test')
#stud_data.to_hdf('stud_data.hdf','test',mode='w',complib='blosc')
stud_data.shape
stud_data.describe()
def calculate_gaming_score():
i = 0
start_time = time.time()
best_score = -1 # mininal silhouette_score
best_atts_threshold = 0.001
best_hints_threshold = 0.001
n_bins = 50
for atts_threshold in np.linspace(0.001, 0.999, n_bins):
for hints_threshold in np.linspace(0.001, 0.999, n_bins):
#print(atts_threshold, hints_threshold)
gaming_preds = np.array((stud_data['frac_3s_atts'] > atts_threshold) | (stud_data['frac_1s_hints'] > hints_threshold), dtype=int)
gaming_score = silhouette_score(log_scaled_data, gaming_preds)
if best_score < gaming_score:
best_score = gaming_score
best_atts_threshold = atts_threshold
best_hints_threshold = hints_threshold
print("\r\t>>> Progress\t:{:.4%}".format((i + 1)/(n_bins**2)), end='')
i += 1
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
return best_score, best_atts_threshold, best_hints_threshold
#calculate_gaming_score()
print("The calculate_gaming_score() runs ~45 minutes, so I give the final score:\n \
(0.4473167950112576, 0.89716326530612245, 0.73422448979591837)")
#gaming_preds = np.array((stud_data['frac_3s_atts'] > 0.2) | (stud_data['frac_1s_hints'] > 0.2), dtype=int)
#gaming_score = silhouette_score(log_scaled_data, gaming_preds)
#print(gaming_score)
stud_data[(stud_data['frac_3s_atts'] > 0.89716326530612245) | (stud_data['frac_1s_hints'] > 0.73422448979591837)].shape[0]
# old name: process_data
def transform_data(selected_columns, data):
'''
Apply log-transform and MinMaxScaler() to the selected data columns which are not fractions (frac_*)
Parameters
==========
selected_columns : list
list of columns to leave in processed data
data : pandas.DataFrame
data to process (note that data should contain all selected_columns)
Returns
=======
log_scaled_data : pandas.DataFrame
log-transformed and scaled data selected by selected_columns
'''
data.reset_index(drop=True, inplace=True)
log_data = data[selected_columns].copy()
skewed = log_data.columns.tolist()
skewed = [item for item in skewed if not item.startswith('frac_')]
log_data[skewed] = log_data[skewed].apply(lambda x: np.log10(x + 1))
scaler = MinMaxScaler().fit(log_data)
log_scaled_data = scaler.transform(log_data)
log_scaled_data = pd.DataFrame(log_scaled_data, columns=log_data.columns)
return log_scaled_data
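# Example usage (illustrative):
#     log_scaled = transform_data(['num_sess', 'frac_1s_hints'], stud_data)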
def replace_group_numbers(best_preds):
'''
Replace group numbers in best_preds with sorting by group size
(so that the largest group is 0, the second largest is 1 etc.)
Parameters
==========
best_preds : numpy array
unsorted array of predictions
Returns
=======
best_preds_sorted : numpy array
sorted array of predictions
'''
pp = pd.DataFrame(best_preds, columns = ["old_group"])
dict_pp = {item[0]: i for i, item in enumerate(Counter(best_preds).most_common())}
pp['new_group'] = pp['old_group'].replace(dict_pp)
best_preds_sorted = np.array(pp['new_group'])
return best_preds_sorted
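# Example (illustrative): for best_preds = [2, 2, 0, 2, 1], label 2 is the most
# common and becomes 0, so the result is [0, 0, 1, 0, 2] (ties between equally
# sized groups keep first-encountered order, per Counter.most_common).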
def kmeans(log_scaled_data):
'''
Apply KMeans clustering algorithm with 2 <= cluster_number <= 6 to log_scaled_data
(transformed and scaled by transform_data() function)
Parameters
==========
log_scaled_data : pandas.DataFrame
data log-transformed and MinMaxScaler()-ed for KMeans clustering
Returns
=======
best_clusterer : sklearn Model
clustering algorithm with the largest Silhouette Coefficient
best_score : float
the largest value of the Silhouette Coefficient
best_preds_sorted : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
'''
best_score = 0
for n_clusters in range(2,6):
clusterer = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
clusterer.fit(log_scaled_data)
preds = clusterer.predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
if best_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = best_clusterer.predict(log_scaled_data)
best_score = score
best_clusters = n_clusters
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, best_score, best_preds_sorted
all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', 'num_hints', 'frac_corr_atts', \
'frac_3s_atts', 'frac_1s_hints', 'time_atts', 'time_hints', 'max_probl_views', 'max_atts']
def choose_pair_columns_kmeans(all_columns, log_scaled_all_data):
'''
Selects the pair of columns in the data that produces clusters with the largest score.
In this function, only the KMeans clustering algorithm is used.
Parameters
==========
all_columns : list
list of columns to look for the pair with the largest score
log_scaled_all_data : pandas.DataFrame
properly scaled DataFrame with all columns
Returns
=======
best_columns : list
pair of data columns with the largest score
best_score : float
the largest value of the score
best_clusterer : sklearn Model
clustering algorithm with the largest score
best_preds : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
'''
best_score = 0
best_columns = []
j = 0
l = len(all_columns)
num_pairs = (l-1)*l/2
for column in all_columns:
selected_columns = [column]
columns_to_add = [a for a in all_columns if (a not in selected_columns)]
for column1 in columns_to_add:
if all_columns.index(column) < all_columns.index(column1):
selected_columns = [column, column1]
print("\r\t>>> Progress\t:{:.4%}".format((j+1)/num_pairs), end='')
j += 1
#log_scaled_data = transform_data(selected_columns, stud_data)
clusterer, score, preds = kmeans(log_scaled_all_data[selected_columns])
if score > best_score:
best_score = score
best_clusterer = clusterer
best_preds = preds
best_columns = selected_columns.copy()
return best_columns, best_score, best_clusterer, best_preds
start_time = time.time()
# consider skipping the step below because it takes some time (~8.5 minutes)
log_scaled_all_data = transform_data(all_columns, stud_data)
#best_columns, best_kmeans_score, best_kmeans_clusterer, best_kmeans_preds = choose_pair_columns_kmeans(all_columns, log_scaled_all_data)
# Instead run it single time (6 seconds only)
best_columns = ['frac_1s_hints', 'max_probl_views']
best_kmeans_clusterer, best_kmeans_score, best_kmeans_preds = kmeans(log_scaled_all_data[best_columns])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_columns)
print("\t>>> Best score:", best_kmeans_score)
print("\t>>> Best clusterer:", best_kmeans_clusterer)
print("\t>>> Best preds:", best_kmeans_preds)
def preds_to_indices(preds):  # takes a 0/1 array and returns the indices of the 1s
new_list = []
for i, val in enumerate(preds):
if val == 1:
new_list.append(i)
return np.array(new_list)
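# Example (illustrative): preds_to_indices(np.array([0, 1, 1, 0, 1])) -> array([1, 2, 4]),
# i.e. equivalent to np.where(preds == 1)[0]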
log_scaled_all_data.describe()
best_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)
log_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
log_scaled_all_data_kmeans_1 = log_scaled_all_data.copy()[log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
plt.scatter(log_scaled_all_data_kmeans_0['frac_1s_hints'], \
log_scaled_all_data_kmeans_0['max_probl_views'], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_1['frac_1s_hints'], \
log_scaled_all_data_kmeans_1['max_probl_views'], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.69, s='Group 0', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel('frac_1s_hints', size=14)
ax.set_ylabel('max_probl_views (log-transformed, scaled)', size=14)
plt.plot((0.14, 0.14), (0.001, 0.399), 'k--', c='blue')
plt.show()
def cols_iterate_kmeans(selected_columns, best_score, best_clusterer, best_preds):
all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', \
'num_hints', 'frac_corr_atts', 'frac_3s_atts', 'frac_1s_hints', \
'time_atts', 'time_hints', 'max_probl_views', 'max_atts']
columns_to_add = [a for a in all_columns if (a not in selected_columns)]
#print(columns_to_add)
for column in columns_to_add:
print("*"*40)
print("*** Trying to add column", column)
print("*"*40)
selected_columns.append(column)
log_scaled_data = transform_data(selected_columns, stud_data)
clusterer, score, preds = kmeans(log_scaled_data)
if score > best_score:
print("!!! Success !!!")
best_score = score
best_clusterer = clusterer
best_preds = preds
print("!!! New score is", best_score)
print("!!! New best clusterer is", best_clusterer)
print("!!! New best selected_columns are", selected_columns)
columns_to_add.remove(column)
else:
print("!!! Last score is equal or worse then our best one")
print("!!! According to Occam's razor, remove the column", column)
selected_columns.remove(column)
print("!!! Still the best selected columns are", selected_columns)
return selected_columns, best_score, best_clusterer, best_preds
# Just skip this step, it does not give new results:
# kmeans_clusterer = best_kmeans_clusterer
# kmeans_score = best_kmeans_score
# kmeans_preds = best_kmeans_preds
# selected_columns = best_columns # ['frac_1s_hints', 'max_probl_views']
# new_columns, new_kmeans_score, new_kmeans_clusterer, new_kmeans_preds = cols_iterate_kmeans(selected_columns, kmeans_score, kmeans_clusterer, kmeans_preds)
# if new_kmeans_score > kmeans_score:
# print("+++ SUCCESS")
# selected_columns = new_columns
# best_kmeans_score = new_kmeans_score
# best_kmeans_clusterer = new_kmeans_clusterer
# best_kmeans_preds = new_kmeans_preds
# else:
# print("--- GIVE UP")
def largest_cluster_fraction(preds):
'''
calculates the fraction of students that are in the largest group
Parameters
==========
preds : list
list of predictions
Returns
=======
fraction : float
largest fraction of students
best_i : integer
number of the largest group
'''
fraction = 0
ll = len(preds)
for i in np.unique(preds):
frac = len(preds[preds == i])/ll
if frac > fraction:
fraction = frac
best_i = i
return fraction, best_i
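# Example (illustrative): largest_cluster_fraction(np.array([0, 0, 0, 1])) -> (0.75, 0)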
# Rewrite similar to kmeans procedure !!!
def meanshift(log_scaled_data):
'''
Apply MeanShift clustering algorithm to log_scaled_data
(transformed and scaled by transform_data() function)
Number of clusters is selected according to the estimate_bandwidth procedure
with quantiles in np.linspace(0.01, 0.99, 99)
Parameters
==========
log_scaled_data : pandas.DataFrame
data log-transformed and MinMaxScaler()-ed for KMeans clustering
Returns
=======
best_clusterer : sklearn Model
clustering algorithm with the largest Silhouette Coefficient
best_score : float
the largest value of the Silhouette Coefficient
best_preds_sorted : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
cluster_frac : float
fraction of students inside the largest group
'''
start_time = time.time()
best_score = 0
best_cluster_frac = 0
for alpha in np.linspace(0.01, 0.99, 99):
bandwidth = estimate_bandwidth(log_scaled_data, quantile=alpha, n_samples=None, random_state=0)
clusterer = MeanShift(bandwidth=bandwidth, bin_seeding=True)
clusterer.fit(log_scaled_data)
preds = clusterer.fit_predict(log_scaled_data)
cluster_frac = largest_cluster_fraction(preds)[0]
# Calculate the mean silhouette coefficient for the number of clusters chosen
try:
score = silhouette_score(log_scaled_data, preds)
except ValueError:
score = 0
print(alpha, clusterer.cluster_centers_.shape[0], score, cluster_frac)
# setting cluster_frac > 0.85, the value obtained in KMeans algorithm for ['frac_1s_hints', 'max_probl_views']
if (best_score < score) and (cluster_frac < 0.85):
best_clusterer = clusterer
best_preds = preds
best_score = score
best_clusters = clusterer.cluster_centers_.shape[0]
best_cluster_frac = cluster_frac
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", best_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
cluster_frac = best_cluster_frac
return best_clusterer, best_score, best_preds_sorted, cluster_frac
# Running MeanShift is too slow: it runs about 12 min for 1 pair,
# and produces poor results (largest score = 0.55 for reasonable max_fractions < 0.85)
# start_time = time.time()
# log_scaled_data = transform_data(best_columns, stud_data)
# best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds, _ = meanshift(log_scaled_data)
# print(best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds)
# end_time = time.time()
# print("Running time is {}s".format(end_time-start_time))
def gaussmix(log_scaled_data): # GaussianMixture
start_time = time.time()
max_score = 0
for n_clusters in range(2,6):
clusterer = GaussianMixture(random_state=0, n_init=50, n_components=n_clusters).fit(log_scaled_data)
preds = clusterer.predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
print("For our model with", clusterer.n_components, "clusters, the sihlouette score is", score)
if max_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = best_clusterer.predict(log_scaled_data)
max_score = score
best_clusters = n_clusters
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", max_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, max_score, best_preds_sorted
def run_clustering_gaussmix(log_scaled_data):
best_score = 0
print(">>> GaussianMixture:")
clusterer, score, preds = gaussmix(log_scaled_data)
if score > best_score:
best_clusterer = clusterer
best_score = score
best_preds = preds
print("Best clusterer is", best_clusterer)
print("Max score is", best_score)
print("Best preds is", best_preds)
return best_clusterer, best_score, best_preds
# ~0.6 min running time but very small score (~0.15)
# start_time = time.time()
# log_scaled_data = transform_data(best_columns, stud_data)
# gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds = run_clustering_gaussmix(log_scaled_data)
# print(gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds)
# end_time = time.time()
# print("Running time is {}s".format(end_time-start_time))
def agglom(log_scaled_data): # AgglomerativeClustering with 'ward' connectivity
start_time = time.time()
max_score = 0
for n_clusters in range(2,3): # use only 2 clusters
connectivity = kneighbors_graph(log_scaled_data, n_neighbors=100, include_self=False)
# make connectivity symmetric
connectivity = 0.5 * (connectivity + connectivity.T)
clusterer = AgglomerativeClustering(n_clusters=n_clusters, \
linkage='ward', \
connectivity=connectivity)
preds = clusterer.fit_predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
print("For our model with", clusterer.n_clusters, "clusters, and the sihlouette score is", score)
if max_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = preds
max_score = score
best_clusters = n_clusters
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", max_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, max_score, best_preds_sorted
def run_clustering_agglom(log_scaled_data):
best_score = 0
print(">>> AgglomerativeClustering:")
clusterer, score, preds = agglom(log_scaled_data)
if score > best_score:
best_clusterer = clusterer
best_score = score
best_preds = preds
print("Best clusterer is", best_clusterer)
print("Max score is", best_score)
print("Best preds is", best_preds)
return best_clusterer, best_score, best_preds
# Gives results very similar to KMeans but takes ~4 times more running time
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
best_agglom_clusterer, best_agglom_score, best_agglom_preds = run_clustering_agglom(log_scaled_data)
print(best_agglom_clusterer, best_agglom_score, best_agglom_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
best_agglom_preds_mask = preds_to_indices(best_agglom_preds)
log_scaled_data_agglom_0 = log_scaled_data.copy()[~log_scaled_data.index.isin(best_agglom_preds_mask)]
log_scaled_data_agglom_1 = log_scaled_data.copy()[log_scaled_data.index.isin(best_agglom_preds_mask)]
plt.scatter(log_scaled_data_agglom_0['frac_1s_hints'], \
log_scaled_data_agglom_0['max_probl_views'], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_data_agglom_1['frac_1s_hints'], \
log_scaled_data_agglom_1['max_probl_views'], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.69, s='Group 0', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel('frac_1s_hints', size=14)
ax.set_ylabel('max_probl_views (log-transformed, scaled)', size=14)
plt.plot((0.145, 0.145), (0.001, 0.399), 'k--', c='blue')
plt.show()
best_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)
log_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
# In this particular splitting, take drop=False to save the initial index
# (simplifying students recovery for step 2)
log_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=False)
log_scaled_all_data_kmeans_0.index
start_time = time.time()
# best_kmeans_columns_0, \
# best_kmeans_score_0, \
# best_kmeans_clusterer_0, \
# best_kmeans_preds_0 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0)
best_kmeans_columns_0 = ['frac_3s_atts', 'max_probl_views']
best_kmeans_clusterer_0, best_kmeans_score_0, best_kmeans_preds_0 = kmeans(log_scaled_all_data_kmeans_0[best_kmeans_columns_0])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_0)
print("\t>>> Best score:", best_kmeans_score_0)
print("\t>>> Best clusterer:", best_kmeans_clusterer_0)
print("\t>>> Best preds:", best_kmeans_preds_0)
print(sum(best_kmeans_preds_0), len(best_kmeans_preds_0), len(best_kmeans_preds_0[best_kmeans_preds_0 == 0]))
log_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=True)
best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0)
log_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
log_scaled_all_data_kmeans_01 = log_scaled_all_data_kmeans_0.copy()[log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
plt.scatter(log_scaled_all_data_kmeans_00[best_kmeans_columns_0[0]], \
log_scaled_all_data_kmeans_00[best_kmeans_columns_0[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_01[best_kmeans_columns_0[0]], \
log_scaled_all_data_kmeans_01[best_kmeans_columns_0[1]], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_0[0], size=14)
ax.set_ylabel(best_kmeans_columns_0[1], size=14)
plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
len(best_kmeans_preds_0)
#best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0) # already implemented during group0 visualisation
log_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
log_scaled_all_data_kmeans_00.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_00.index
start_time = time.time()
# best_kmeans_columns_00, \
# best_kmeans_score_00, \
# best_kmeans_clusterer_00, \
# best_kmeans_preds_00 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_00)
best_kmeans_columns_00 = ['frac_3s_atts', 'time_hints']
best_kmeans_clusterer_00, \
best_kmeans_score_00, \
best_kmeans_preds_00 = kmeans(log_scaled_all_data_kmeans_00[best_kmeans_columns_00])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_00)
print("\t>>> Best score:", best_kmeans_score_00)
print("\t>>> Best clusterer:", best_kmeans_clusterer_00)
print("\t>>> Best preds:", best_kmeans_preds_00)
print(sum(best_kmeans_preds_00), len(best_kmeans_preds_00), len(best_kmeans_preds_00[best_kmeans_preds_00 == 0]))
best_kmeans_preds_mask_00 = preds_to_indices(best_kmeans_preds_00)
log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
log_scaled_all_data_kmeans_001 = log_scaled_all_data_kmeans_00.copy()[log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
plt.scatter(log_scaled_all_data_kmeans_000[best_kmeans_columns_00[0]], \
log_scaled_all_data_kmeans_000[best_kmeans_columns_00[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_001[best_kmeans_columns_00[0]], \
log_scaled_all_data_kmeans_001[best_kmeans_columns_00[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_00[0], size=14)
ax.set_ylabel(best_kmeans_columns_00[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
log_scaled_all_data_kmeans_000.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_000.index
start_time = time.time()
# best_kmeans_columns_000, \
# best_kmeans_score_000, \
# best_kmeans_clusterer_000, \
# best_kmeans_preds_000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_000)
best_kmeans_columns_000 = ['num_sess', 'num_probs']
best_kmeans_clusterer_000, \
best_kmeans_score_000, \
best_kmeans_preds_000 = kmeans(log_scaled_all_data_kmeans_000[best_kmeans_columns_000])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_000)
print("\t>>> Best score:", best_kmeans_score_000)
print("\t>>> Best clusterer:", best_kmeans_clusterer_000)
print("\t>>> Best preds:", best_kmeans_preds_000)
print(sum(best_kmeans_preds_000), len(best_kmeans_preds_000), len(best_kmeans_preds_000[best_kmeans_preds_000 == 0]))
best_kmeans_preds_mask_000 = preds_to_indices(best_kmeans_preds_000)
log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
log_scaled_all_data_kmeans_0001 = log_scaled_all_data_kmeans_000.copy()[log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
plt.scatter(log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[0]], \
log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[0]], \
log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[1]], \
alpha=0.6, s=15, c='grey')
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_000[0], size=14)
ax.set_ylabel(best_kmeans_columns_000[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
log_scaled_all_data_kmeans_0000.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_0000.index
start_time = time.time()
# best_kmeans_columns_0000, \
# best_kmeans_score_0000, \
# best_kmeans_clusterer_0000, \
# best_kmeans_preds_0000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0000)
best_kmeans_columns_0000 = ['num_sess', 'num_probs']
best_kmeans_clusterer_0000, \
best_kmeans_score_0000, \
best_kmeans_preds_0000 = kmeans(log_scaled_all_data_kmeans_0000[best_kmeans_columns_0000])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_0000)
print("\t>>> Best score:", best_kmeans_score_0000)
print("\t>>> Best clusterer:", best_kmeans_clusterer_0000)
print("\t>>> Best preds:", best_kmeans_preds_0000)
print(sum(best_kmeans_preds_0000), \
len(best_kmeans_preds_0000), \
len(best_kmeans_preds_0000[best_kmeans_preds_0000 == 0]))
best_kmeans_preds_mask_0000 = preds_to_indices(best_kmeans_preds_0000)
log_scaled_all_data_kmeans_00000 = log_scaled_all_data_kmeans_0000.copy()[~log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]
log_scaled_all_data_kmeans_00001 = log_scaled_all_data_kmeans_0000.copy()[log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]
plt.scatter(log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[0]], \
log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[0]], \
log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_0000[0], size=14)
ax.set_ylabel(best_kmeans_columns_0000[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
group1_index = np.array(log_scaled_all_data_kmeans_1.index)
len(group1_index)
group2_index = np.array(log_scaled_all_data_kmeans_01['index'])
len(group2_index)
group3_index = np.array(log_scaled_all_data_kmeans_001['index'])
len(group3_index)
group4_index = np.array(log_scaled_all_data_kmeans_0001['index'])
len(group4_index)
group5_index = np.array(log_scaled_all_data_kmeans_00000['index'])
len(group5_index)
group6_index = np.array(log_scaled_all_data_kmeans_00001['index'])
len(group6_index)
def create_joint_cluster_index():
'''
Saves group index files into cluster_index.csv for further analysis
'''
cluster_index_lst = []
for i in range(len(stud_data)):  # one entry per student
if i in group1_index:
cluster_index_lst.append(1)
elif i in group2_index:
cluster_index_lst.append(2)
elif i in group3_index:
cluster_index_lst.append(3)
elif i in group4_index:
cluster_index_lst.append(4)
elif i in group5_index:
cluster_index_lst.append(5)
elif i in group6_index:
cluster_index_lst.append(6)
print(Counter(cluster_index_lst))
cluster_index = pd.Series(cluster_index_lst, dtype=int)
cluster_index.to_csv('cluster_index.csv')
return
create_joint_cluster_index()
! ls -lh cluster_index.csv
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DO NOT MODIFY the following constants as they include filepaths used in this notebook and data that is shared during training and inference.
Step2: Setup Environment
Step3: DELETE any old data from previous runs
Step4: Clone the TensorFlow GitHub repository, which contains the relevant code required to run this tutorial.
Step5: Load TensorBoard to visualize the accuracy and loss as training proceeds.
Step6: Training
Step7: Skipping the training
Step8: Generate a TensorFlow Model for Inference
Step9: Generate a TensorFlow Lite Model
Step10: Testing the TensorFlow Lite model's accuracy
Step11: Generate a TensorFlow Lite for MicroControllers Model
Step12: Deploy to a Microcontroller
|
<ASSISTANT_TASK:>
Python Code:
# A comma-delimited list of the words you want to train for.
# The options are: yes,no,up,down,left,right,on,off,stop,go
# All the other words will be used to train an "unknown" label and silent
# audio data with no spoken words will be used to train a "silence" label.
WANTED_WORDS = "yes,no"
# The number of steps and learning rates can be specified as comma-separated
# lists to define the rate at each stage. For example,
# TRAINING_STEPS=12000,3000 and LEARNING_RATE=0.001,0.0001
# will run 15,000 training loops in total, with a rate of 0.001 for the first
# 12,000, and 0.0001 for the final 3,000.
TRAINING_STEPS = "12000,3000"
LEARNING_RATE = "0.001,0.0001"
# Calculate the total number of steps, which is used to identify the checkpoint
# file name.
TOTAL_STEPS = str(sum(map(int, TRAINING_STEPS.split(","))))
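# e.g. TRAINING_STEPS = "12000,3000" gives TOTAL_STEPS = "15000"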
# Print the configuration to confirm it
print("Training these words: %s" % WANTED_WORDS)
print("Training steps in each stage: %s" % TRAINING_STEPS)
print("Learning rate in each stage: %s" % LEARNING_RATE)
print("Total number of training steps: %s" % TOTAL_STEPS)
# Calculate the percentage of 'silence' and 'unknown' training samples required
# to ensure that we have equal number of samples for each label.
number_of_labels = WANTED_WORDS.count(',') + 1
number_of_total_labels = number_of_labels + 2 # for 'silence' and 'unknown' label
equal_percentage_of_training_samples = int(100.0/(number_of_total_labels))
SILENT_PERCENTAGE = equal_percentage_of_training_samples
UNKNOWN_PERCENTAGE = equal_percentage_of_training_samples
# Constants which are shared during training and inference
PREPROCESS = 'micro'
WINDOW_STRIDE = 20
MODEL_ARCHITECTURE = 'tiny_conv' # Other options include: single_fc, conv,
# low_latency_conv, low_latency_svdf, tiny_embedding_conv
# Constants used during training only
VERBOSITY = 'WARN'
EVAL_STEP_INTERVAL = '1000'
SAVE_STEP_INTERVAL = '1000'
# Constants for training directories and filepaths
DATASET_DIR = 'dataset/'
LOGS_DIR = 'logs/'
TRAIN_DIR = 'train/' # for training checkpoints and other files.
# Constants for inference directories and filepaths
import os
MODELS_DIR = 'models'
if not os.path.exists(MODELS_DIR):
os.mkdir(MODELS_DIR)
MODEL_TF = os.path.join(MODELS_DIR, 'model.pb')
MODEL_TFLITE = os.path.join(MODELS_DIR, 'model.tflite')
FLOAT_MODEL_TFLITE = os.path.join(MODELS_DIR, 'float_model.tflite')
MODEL_TFLITE_MICRO = os.path.join(MODELS_DIR, 'model.cc')
SAVED_MODEL = os.path.join(MODELS_DIR, 'saved_model')
QUANT_INPUT_MIN = 0.0
QUANT_INPUT_MAX = 26.0
QUANT_INPUT_RANGE = QUANT_INPUT_MAX - QUANT_INPUT_MIN
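# For intuition only (mirrors the manual quantization in the accuracy test
# further below): with inputs in [QUANT_INPUT_MIN, QUANT_INPUT_MAX] mapped onto
# int8, the converter picks roughly
#     scale = QUANT_INPUT_RANGE / 255.0   # ~0.102
# and a zero point, and inputs are quantized as
#     q = round(real_value / scale + zero_point)
# The exact scale and zero_point are read from the converted model's input details.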
%tensorflow_version 1.x
import tensorflow as tf
!rm -rf {DATASET_DIR} {LOGS_DIR} {TRAIN_DIR} {MODELS_DIR}
!git clone -q --depth 1 https://github.com/tensorflow/tensorflow
%load_ext tensorboard
%tensorboard --logdir {LOGS_DIR}
!python tensorflow/tensorflow/examples/speech_commands/train.py \
--data_dir={DATASET_DIR} \
--wanted_words={WANTED_WORDS} \
--silence_percentage={SILENT_PERCENTAGE} \
--unknown_percentage={UNKNOWN_PERCENTAGE} \
--preprocess={PREPROCESS} \
--window_stride={WINDOW_STRIDE} \
--model_architecture={MODEL_ARCHITECTURE} \
--how_many_training_steps={TRAINING_STEPS} \
--learning_rate={LEARNING_RATE} \
--train_dir={TRAIN_DIR} \
--summaries_dir={LOGS_DIR} \
--verbosity={VERBOSITY} \
--eval_step_interval={EVAL_STEP_INTERVAL} \
--save_step_interval={SAVE_STEP_INTERVAL}
#!curl -O "https://storage.googleapis.com/download.tensorflow.org/models/tflite/speech_micro_train_2020_05_10.tgz"
#!tar xzf speech_micro_train_2020_05_10.tgz
!rm -rf {SAVED_MODEL}
!python tensorflow/tensorflow/examples/speech_commands/freeze.py \
--wanted_words=$WANTED_WORDS \
--window_stride_ms=$WINDOW_STRIDE \
--preprocess=$PREPROCESS \
--model_architecture=$MODEL_ARCHITECTURE \
--start_checkpoint=$TRAIN_DIR$MODEL_ARCHITECTURE'.ckpt-'{TOTAL_STEPS} \
--save_format=saved_model \
--output_file={SAVED_MODEL}
import sys
# We add this path so we can import the speech processing modules.
sys.path.append("/content/tensorflow/tensorflow/examples/speech_commands/")
import input_data
import models
import numpy as np
SAMPLE_RATE = 16000
CLIP_DURATION_MS = 1000
WINDOW_SIZE_MS = 30.0
FEATURE_BIN_COUNT = 40
BACKGROUND_FREQUENCY = 0.8
BACKGROUND_VOLUME_RANGE = 0.1
TIME_SHIFT_MS = 100.0
DATA_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz'
VALIDATION_PERCENTAGE = 10
TESTING_PERCENTAGE = 10
model_settings = models.prepare_model_settings(
len(input_data.prepare_words_list(WANTED_WORDS.split(','))),
SAMPLE_RATE, CLIP_DURATION_MS, WINDOW_SIZE_MS,
WINDOW_STRIDE, FEATURE_BIN_COUNT, PREPROCESS)
audio_processor = input_data.AudioProcessor(
DATA_URL, DATASET_DIR,
SILENT_PERCENTAGE, UNKNOWN_PERCENTAGE,
WANTED_WORDS.split(','), VALIDATION_PERCENTAGE,
TESTING_PERCENTAGE, model_settings, LOGS_DIR)
with tf.Session() as sess:
float_converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL)
float_tflite_model = float_converter.convert()
float_tflite_model_size = open(FLOAT_MODEL_TFLITE, "wb").write(float_tflite_model)
print("Float model is %d bytes" % float_tflite_model_size)
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_input_type = tf.lite.constants.INT8
converter.inference_output_type = tf.lite.constants.INT8
def representative_dataset_gen():
for i in range(100):
data, _ = audio_processor.get_data(1, i*1, model_settings,
BACKGROUND_FREQUENCY,
BACKGROUND_VOLUME_RANGE,
TIME_SHIFT_MS,
'testing',
sess)
flattened_data = np.array(data.flatten(), dtype=np.float32).reshape(1, 1960)
yield [flattened_data]
converter.representative_dataset = representative_dataset_gen
tflite_model = converter.convert()
tflite_model_size = open(MODEL_TFLITE, "wb").write(tflite_model)
print("Quantized model is %d bytes" % tflite_model_size)
# Helper function to run inference
def run_tflite_inference(tflite_model_path, model_type="Float"):
# Load test data
np.random.seed(0) # set random seed for reproducible test results.
with tf.Session() as sess:
test_data, test_labels = audio_processor.get_data(
-1, 0, model_settings, BACKGROUND_FREQUENCY, BACKGROUND_VOLUME_RANGE,
TIME_SHIFT_MS, 'testing', sess)
test_data = np.expand_dims(test_data, axis=1).astype(np.float32)
# Initialize the interpreter
interpreter = tf.lite.Interpreter(tflite_model_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
# For quantized models, manually quantize the input data from float to integer
if model_type == "Quantized":
input_scale, input_zero_point = input_details["quantization"]
test_data = test_data / input_scale + input_zero_point
test_data = test_data.astype(input_details["dtype"])
correct_predictions = 0
for i in range(len(test_data)):
interpreter.set_tensor(input_details["index"], test_data[i])
interpreter.invoke()
output = interpreter.get_tensor(output_details["index"])[0]
top_prediction = output.argmax()
correct_predictions += (top_prediction == test_labels[i])
print('%s model accuracy is %f%% (Number of test samples=%d)' % (
model_type, (correct_predictions * 100) / len(test_data), len(test_data)))
# Compute float model accuracy
run_tflite_inference(FLOAT_MODEL_TFLITE)
# Compute quantized model accuracy
run_tflite_inference(MODEL_TFLITE, model_type='Quantized')
# Install xxd if it is not available
!apt-get update && apt-get -qq install xxd
# Convert to a C source file
!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}
# Update variable names
REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')
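# e.g. 'models/model.tflite' becomes 'models_model_tflite'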
!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}
# Print the C source file
!cat {MODEL_TFLITE_MICRO}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1h', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load taxifare dataset
Step2: Let's check that the files were copied correctly and look like we expect them to.
Step3: Create an input pipeline
Step4: Create a Baseline DNN Model in Keras
Step5: We'll build our DNN model and inspect the model architecture.
Step6: Train the model
Step7: Visualize the model loss curve
Step8: Predict with the model locally
Step9: Improve Model Performance Using Feature Engineering
Step10: Geolocation/Coordinate Feature Columns
Step11: Scaling latitude and longitude
Step12: Next, we create a function named 'scale_latitude', where we pass in all the latitudinal values and subtract 37 from each value. Note that the latitudes in our data range from 37 to 45, so 37 is the minimum latitudinal value and the delta between 37 and 45 is 8. We subtract 37 from each latitudinal value and then divide by 8 to return a scaled value in [0, 1].
Step13: Putting it all together
Step14: Next, we'll create our DNN model now with the engineered features. We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude.
Step15: Let's see how our model architecture has changed now.
Step16: As before, let's visualize the DNN model layers.
Step17: Let's make a prediction with this new model with engineered features on the example we had above.
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from tensorflow.keras import models
# set TF error log verbosity
logging.getLogger("tensorflow").setLevel(logging.ERROR)
print(tf.version.VERSION)
if not os.path.isdir("../data"):
os.makedirs("../data")
!gsutil cp gs://cloud-training-demos/feat_eng/data/*.csv ../data
!ls -l ../data/*.csv
!head ../data/*.csv
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
# A function to define features and labesl
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# A utility method to create a tf.data dataset from a Pandas Dataframe
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(pattern,
batch_size,
CSV_COLUMNS,
DEFAULTS)
dataset = dataset.map(features_and_labels) # features, label
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred): # Root mean square error
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
# feature_columns
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Constructor for DenseFeatures takes a list of numeric columns
dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [32, 8] just in like the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 59621 * 5
NUM_EVALS = 5
NUM_EVAL_EXAMPLES = 14906
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/taxi-valid*',
1000,
tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
# TODO 1a
# TODO 1b
# TODO 1c
# TODO 2
def scale_longitude(lon_column):
return (lon_column + 78)/8.
def scale_latitude(lat_column):
return (lat_column - 37)/8.
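# Quick sanity check of the scalers on typical NYC coordinates (illustrative
# values, not part of the original lab): scale_longitude(-73.98) ~= 0.50 and
# scale_latitude(40.74) ~= 0.47, i.e. both land inside [0, 1].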
def transform(inputs, numeric_cols, string_cols, nbuckets):
print("Inputs before features transformation: {}".format(inputs.keys()))
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in numeric_cols
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
scale_longitude,
name="scale_{}".format(lon_col))(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
scale_latitude,
name='scale_{}'.format(lat_col))(inputs[lat_col])
    # The TODOs below are filled in with one possible completion (a sketch of
    # the intended feature engineering; the original cell left these as
    # student TODOs, so this is not necessarily the official lab solution).
    # add Euclidean distance
    def euclidean(params):
        lon1, lat1, lon2, lat2 = params
        londiff = lon2 - lon1
        latdiff = lat2 - lat1
        return tf.sqrt(londiff*londiff + latdiff*latdiff)
    transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
        inputs['pickup_longitude'], inputs['pickup_latitude'],
        inputs['dropoff_longitude'], inputs['dropoff_latitude']])
    feature_columns['euclidean'] = fc.numeric_column('euclidean')
    # create bucketized features from the scaled coordinates (range [0, 1])
    buckets = np.linspace(0, 1, nbuckets).tolist()
    b_plat = fc.bucketized_column(feature_columns['pickup_latitude'], buckets)
    b_dlat = fc.bucketized_column(feature_columns['dropoff_latitude'], buckets)
    b_plon = fc.bucketized_column(feature_columns['pickup_longitude'], buckets)
    b_dlon = fc.bucketized_column(feature_columns['dropoff_longitude'], buckets)
    # create crossed columns: pickup cell, dropoff cell, and the pickup/dropoff pair
    ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
    dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
    pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
    # create embedding columns
feature_columns['pickup_and_dropoff'] = fc.embedding_column(pd_pair, 100)
print("Transformed features: {}".format(transformed.keys()))
print("Feature columns: {}".format(feature_columns.keys()))
return transformed, feature_columns
NBUCKETS = 10
# DNN MODEL
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer is all float except for pickup_datetime which is a string
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(inputs,
numeric_cols=NUMERIC_COLS,
string_cols=STRING_COLS,
nbuckets=NBUCKETS)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
# two hidden layers of [32, 8] just in like the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# Compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
tf.keras.utils.plot_model(model, 'dnn_model_engineered.png', show_shapes=False, rankdir='LR')
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/taxi-valid*',
1000,
tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS+3,
steps_per_epoch=steps_per_epoch)
plot_curves(history, ['loss', 'mse'])
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2: Write a function that returns the sum of the first n terms of 1 - 1/3 + 1/5 - 1/7... In the main program, set n=1000 and n=100000 and print 4 times the value returned by the function.
Step2: Exercise 3: Rewrite Exercise 1 and Exercise 4 from task3 as functions and call them.
Step3: TASK3 Exercise 4: Singular-to-plural conversion for English words. Given an English word in singular form as input, output its plural form, or give a suggestion for forming the plural (hint: some_string.endswith(some_letter) checks the ending characters of a string; try running 'myname'.endswith('me') and 'liupengyuan'.endswith('n')).
Step4: Challenge exercise: Write a program that computes the cumulative sum from integer m to integer n with step k. Implement the summation as a function; in the main program, have the user enter m, n, and k and call the function to verify correctness.
|
<ASSISTANT_TASK:>
Python Code:
def total (m):
i=0
result=1
while i<m:
i+=1
result*=i
return result
m=int(input('please enter an integer. '))
n=int(input('please enter an integer. '))
k=int(input('please enter an integer. '))
print('The result of ',m,'!+',n,'!+',k,'! is :',total(m)+total(n)+total(k))
def sum (n):
i=0
total=0
while i<n:
i+=1
if i%2==0:
total-=1/(2*i-1)
else:
total+=1/(2*i-1)
return total
n=int(input('please enter an integer. '))
m=int(input('please enter an integer. '))
print('result is :',4*sum(n))
print('result is :',4*sum(m))
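# Note: 4*sum(n) is the Leibniz series approximation of pi;
# n=1000 gives ~3.1406 and n=100000 gives ~3.14158.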
def find_star ( n,name):
    if 321<=n<=419 :
        print('Mr',name,'you are Aries')
    elif 420<=n<=520 :
        print('Mr',name,'you are a very distinctive Taurus!')
    elif 521<=n<=621 :
        print('Mr',name,'you are Gemini')
    elif 622<=n<=722 :
        print('Mr',name,'you are Cancer')
    elif 723<=n<=822 :
        print('Mr',name,'you are Leo')
    elif 823<=n<=922 :
        print('Mr',name,'you are Virgo')
    elif 923<=n<=1023 :
        print('Mr',name,'you are Libra')
    elif 1024<=n<=1122 :
        print('Mr',name,'you are Scorpio')
    elif 1123<=n<=1221 :
        print('Mr',name,'you are Sagittarius')
    elif 1222<=n<=1231 or 101<=n<=119 :
        print('Mr',name,'you are Capricorn')
    elif 120<=n<=218 :
        print('Mr',name,'you are Aquarius')
    else :
        print('Mr',name,'you are Pisces')
print('what is your name ?')
name=input()
print('when is your birthday? e.g. for March 4, enter: 304')
n=int(input())
find_star(n,name)
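# Example: name 'Li' with birthday entered as 304 (March 4) prints
# "Mr Li you are Pisces"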
def change(word):
if word.endswith('sh') or word.endswith('ch') or word.endswith('x') or word.endswith('s'):
print(word,'es',sep='')
else:
print(word,'s',sep='')
word=input('please enter a word :')
change(word)
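# Example runs: 'church' -> 'churches', 'box' -> 'boxes', 'book' -> 'books'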
def sum (m,n,k):
i=m
total=m
while i<n and i+k<=n :
i+=k
total+=i
return total
m=int(input('please enter the from integer :'))
n=int(input('please enter the to integer : '))
k=int(input('please enter the gap :'))
print('The result :',sum(m,n,k))
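# Quick check: m=1, n=10, k=2 gives 1+3+5+7+9 = 25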
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we create a request for the "warning" data type
Step2: Now loop through each record and plot it as either Polygon or MultiPolygon, with appropriate colors
|
<ASSISTANT_TASK:>
Python Code:
from awips.dataaccess import DataAccessLayer
from awips.tables import vtec
from datetime import datetime
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
from cartopy.feature import ShapelyFeature,NaturalEarthFeature
from shapely.geometry import MultiPolygon,Polygon
def warning_color(phensig):
return vtec[phensig]['color']
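# e.g. warning_color('TO.W') should return 'red' for a Tornado Warning,
# assuming the standard phensig color mapping shipped in awips.tables.vtec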
def make_map(bbox, projection=ccrs.PlateCarree()):
fig, ax = plt.subplots(figsize=(20,12),
subplot_kw=dict(projection=projection))
ax.set_extent(bbox)
gl = ax.gridlines(draw_labels=True)
gl.xlabels_top = gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
return fig, ax
DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")
request = DataAccessLayer.newDataRequest()
request.setDatatype("warning")
request.setParameters('phensig')
times = DataAccessLayer.getAvailableTimes(request)
# Get records for last 50 available times
response = DataAccessLayer.getGeometryData(request, times[-50:-1])
print("Using " + str(len(response)) + " records")
# Each record will have a numpy array the length of the number of "parameters"
# Default is 1 (request.setParameters('phensig'))
parameters = {}
for x in request.getParameters():
parameters[x] = np.array([])
print(parameters)
%matplotlib inline
bbox=[-127,-64,24,49]
fig, ax = make_map(bbox=bbox)
siteids=np.array([])
periods=np.array([])
reftimes=np.array([])
for ob in response:
poly = ob.getGeometry()
site = ob.getLocationName()
pd = ob.getDataTime().getValidPeriod()
ref = ob.getDataTime().getRefTime()
# do not plot if phensig is blank (SPS)
if ob.getString('phensig'):
phensigString = ob.getString('phensig')
siteids = np.append(siteids,site)
periods = np.append(periods,pd)
reftimes = np.append(reftimes,ref)
for parm in parameters:
parameters[parm] = np.append(parameters[parm],ob.getString(parm))
if poly.geom_type == 'MultiPolygon':
geometries = np.array([])
geometries = np.append(geometries,MultiPolygon(poly))
geom_count = ", " + str(len(geometries)) +" geometries"
else:
geometries = np.array([])
geometries = np.append(geometries,Polygon(poly))
geom_count=""
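        # for each geometry, clip every intersecting geometry to its bounds;
        # the resulting pieces are what the ShapelyFeature below draws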
for geom in geometries:
bounds = Polygon(geom)
intersection = bounds.intersection
geoms = (intersection(geom)
for geom in geometries
if bounds.intersects(geom))
#print(vtec[phensigString]['hdln']
# + " (" + phensigString + ") issued at " + str(ref)
# + " ("+str(poly.geom_type) + geom_count + ")")
color = warning_color(phensigString)
shape_feature = ShapelyFeature(geoms,ccrs.PlateCarree(),
facecolor=color, edgecolor=color)
ax.add_feature(shape_feature)
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
political_boundaries = cfeature.NaturalEarthFeature(category='cultural',
name='admin_0_boundary_lines_land',
scale='50m', facecolor='none')
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(states_provinces, edgecolor='black')
ax.add_feature(political_boundaries, edgecolor='black')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Study of Repository Creations
Step2: 2. Timeline Of Activity (active repos, actors, and number of events)
Step3: 3. Productive Bursts
Step4: 4. Actor Contributions
Step5: Misc
Step6: Contribution Activity
Step7: Distribution of Contributions
|
<ASSISTANT_TASK:>
Python Code:
project_dir = "/home/ubuntu/github/AstroWeekStudy/python_code/"
import sys
sys.path.append("/home/ubuntu/github/AstroWeekStudy/python_code/")
try:
reload(loadData)
reload(astroWeekLib)
except:
import loadData
import astroWeekLib
from loadData import *
from astroWeekLib import *
df,df2014,df2015,resampled = build_main_df()
df_users_created = prepareUserDf(df)
df_repos_created = build_df_repos_created(df2014)
#df_repos_created[(df_repos_created.index >= '2014-02-10') & (df_repos_created.index < '2014-02-25')].sort()
#wk_repos = test[(test.index >= '2014-12-15') & (test.index < '2015-03-15')]
repos = df_repos_created[(df_repos_created.index >= '2014-09-15') & (df_repos_created.index < '2014-11-10')].repo_url.values
tseries = df.type[df['repo_url'].isin(repos)].resample("1D",how="count")
t3 = test[(test.index >= '2014-03-15') & (test.index < '2014-12-10')]
median_t3 = np.median(t3)
plot(t3)
plot(np.zeros_like(t3)+median_t3)
pl.xlabel("Times since t0 [days]")
pl.ylabel("Count Repositories Created")
y = tseries.values
x = np.arange(1,len(y)+1)
c = ( y > 0 ) * (x < 50)
lx = np.log10(x[c])
ly = np.log10(y[c])
fit = S.linregress(lx,ly)
print fit
plot(lx,ly)
plot(lx,lx*fit[0]+fit[1])
xlabel("Time from t0 [days]")
ylabel("Count of Daily Repo Creations")
print test.index[0]
c = test.values > 0
x = np.arange(1,len(test.values[c])+1)
lx = np.log10(x)
ly = np.log10(test.values[c])
fit = S.linregress(x,ly)
print fit
pl.plot(x,ly)
pl.plot(x,x*fit[0]+fit[1],'k-')
pl.xlabel("Time [weeks since 06/22/2008]")
pl.ylabel("log10(Number of Repos Created) \n [repos/week]")
pl.ylim(ymin=0)
astro_dt = [257]
event_types = np.unique(df.type.values)
event_types_short = ['CreateEvent','PushEvent','PullRequestEvent','CommitCommentEvent','IssuesEvent']#,'IssueCommentEvent','ForkEvent','WatchEvent']
color = ['yellow','cyan','green','lime','red']#,'pink','black','goldenrod']
ax = pl.figure(1,(20,5))
y_old = np.zeros_like(resampled['event_types']['PushEvent'].values)
x = np.arange(len(y_old))
#x_tstamp = resampled['event_types']['all'].index
countAllEvents = np.array(map(float,resampled['event_types']['all'].values))
#event_types_short = event_types_short[::-1]
ratio = False
for i,e in enumerate(event_types_short):
print i,e
if ratio:
y = resampled['event_types'][e].values / countAllEvents
pl.bar(x,y,width=1,bottom=y_old,lw=0.1,color=color[i],label=e)
y_old = y_old + y
pl.ylim(ymax=2)
else:
y = resampled['event_types'][e].values
#y[y < 1] = 1
#ly = np.log10(y)
pl.bar(x,y,width=1,bottom=y_old,lw=0.1,color=color[i],label=e)
y_old = y_old + y
#pl.ylim(ymax=10)
pl.bar(astro_dt[0],160,width=5,color="darkblue",alpha=0.3)
#pl.yscale('log')
y = resampled['event_types']['all'].values
y = (y)/np.std(y)
#pl.plot(x,y,label="all")
pl.xlim(astro_dt[0]-100,astro_dt[0]+5+100)
pl.ylim(ymax=130)
#pl.legend(["all"] + event_types_short ,loc=0)
pl.legend(event_types_short[::-1] ,loc=2)
pl.ylabel("Count Events")
#pl.xlabel("Time [days since Jan. 1, 2014]")
pl.savefig(project_dir + "figures/events_day.eps")
graph_param = {"events" : {"color": "blue"},
"actors": {"color": "red"},
"repos" : {"color": "green"}
}
pl.figure(1,(20,3))
#show the region corresponding to astroweek
y_old = np.zeros_like(resampled['activity']['repos'].values)
opacity = 0.6
types = ["actors","repos"]
for k in types:
y = y_old + resampled['activity'][k].values
pl.bar(resampled['activity'][k].index,y,bottom=y_old,width=1,lw=0.1,color=graph_param[k]['color'],label=k)
y_old = y
#if k in ["actors","repos"]:
# x = resampled['activity'][k].index
# for p in [50,75,90]:
# perc = np.percentile(resampled['activity'][k].values,p)
# pl.plot(x,np.zeros(len(x))+perc,'-',color=graph_param[k]['color'])
pl.bar(timestamp_astroweek[0],70,width=5,color="darkblue",alpha=0.3)
t = resampled['activity'][k].index
pl.xlim(t[astro_dt[0]-100],t[astro_dt[0]+5+100])
pl.ylim(ymax=70)
pl.xlabel("Timeline [days]")
pl.ylabel("Daily Activity")
pl.legend(loc=2)
pl.savefig(project_dir + "figures/activity_day.eps")
f, (ax1, ax2) = pl.subplots(1, 2, sharey=True)
ax1.plot(x, y)
ax1.set_title('Sharing Y axis')
ax2.scatter(x, y)
activity_type = ['events','actors','repos']
colors = ['blue','red','green']
tseries = {}
for i,at in enumerate(activity_type):
print "%s activity"%at
test = resampled['activity'][at]
before = test[test.index >= astroweek[0]].values
during = test[(test.index >= astroweek[0]) & (test.index < astroweek[1])].values
after = test[(test.index > astroweek[1]) & (test.index <= time_boundaries[1])].values
#after_start = test[(test.index >= astroweek[0]) & (test.index <= time_boundaries[1])].values
after_start = test[(test.index >= astroweek[0]) & (test.index <= "2015-01-01")].values
tseries[at] = after_start
first_moment = np.mean
print "\tbefore: %s (%s)"%(first_moment(before), np.std(before))
print "\tduring: %s (%s)"%(first_moment(during), np.std(during))
print "\tafter: %s (%s)"%(first_moment(after), np.std(after))
print "\tafter start: %s (%s)"%(first_moment(after_start), np.std(after_start))
print "\tincrease rate (during-before)/before %s"%((first_moment(during)-first_moment(before))/first_moment(before))
pl.figure(1,(18,6))
pl.subplot(121)
pl.plot(after_start/float(after_start[0]),'-',label=at,color=colors[i])
pl.xlabel("Time since start of AstroWeek [days]")
pl.ylabel("Level of activity [count/day]")
pl.legend(loc=0)
pl.subplot(122)
y = tseries[at]
x = np.arange(1,len(y)+1)
ly = np.log10(y)
lx = np.log10(x)
fit = S.linregress(lx,ly)
print "\n\tpower law decay fit : %.2f (r = %.2f, p = %.2f, se= %.2f)\n"%(fit[0],fit[2],fit[3],fit[4])
pl.plot(lx,ly,'-',ms =3,label=at,color=colors[i])
pl.plot(lx,lx*fit[0]+fit[1],color=colors[i])
pl.subplot(122)
ly2 = repo_decay_y#/float(max(repo_decay_y))
pl.plot(repo_decay_x,ly2,'ko',label="repos_created")
fit = S.linregress(repo_decay_x,ly2)
print "repo creations: "
print "\n\tpower law decay fit : %.2f (r = %.2f, p = %.2f, se= %.2f)\n"%(fit[0],fit[2],fit[3],fit[4])
pl.plot(lx,lx*fit[0]+fit[1],'k-',label="fit_repos_created")
pl.xlabel("Time since start of AstroWeek \n [log10(days)]")
pl.ylabel("Level of activity [log10(count/day)]")
pl.xlim(xmax=2.1)
#pl.legend(loc=0)
pl.savefig(project_dir + "figures/relaxation.eps")
df_repo_created_aw = None
for r in repo_created_during_astroweek:
try:
df_repo_created_aw = pandas.concat([df_repo_created_aw,df[df['repo_url']==r]])
except:
df_repo_created_aw = df[df['repo_url']==r]
df_repo_created_aw.sort(inplace=True)
df_repo_created_aw = df_repo_created_aw[(df_repo_created_aw['repo_created_at'] >= astroweek[0]) & (df_repo_created_aw['repo_created_at'] < astroweek[1])]
#df_repo_created_aw = df[(df['repo_created_at'] >= astroweek[0]) & (df['repo_created_at'] < astroweek[1])]
def resample_df(df,t_resol="1D"):
event_types = np.unique(df.type.values)
event_dic = {}
event_dic['all'] = df.type.resample(t_resol,how='count')
event_count = df.type.resample(t_resol,how='count')
for e in event_types:
event_dic[e] = df[df['type']==e].type.resample(t_resol,how='count')
if len(event_dic[e]) < len(event_count):
event_dic[e] = fill_ommitted_resample(event_dic[e],event_count)
resampled = {"activity" :
{'events' : event_count,
'actors' : df.actor.resample(t_resol,how=countUnique),
'repos' : df.repo.resample(t_resol,how=countUnique)
},
'event_types' : event_dic
}
return resampled
resampled_repo_created_aw = resample_df(df_repo_created_aw)
activity_type = ['events','actors','repos']
colors = ['blue','red','green']
markers = ['o','s','d']
figure(1,(9,6))
for i,ix in enumerate(activity_type):
y = resampled_repo_created_aw['activity'][ix]
x = np.arange(1,len(y)+1)
c = (y>0)
lx = np.log10(x[c])
ly = np.log10(y[c])
fit = S.linregress(lx,ly)
print fit
plot(lx,ly,marker=markers[i],lw=0.8,color=colors[i])
#pl.bar(lx,ly,width=lx2[1:]-lx2[:-1],color=colors[i])
plot(lx,lx*fit[0]+fit[1],'-',color=colors[i])
pl.xlabel("log10(days)")
pl.ylabel("log10(activity)")
pl.ylim(ymin=0)
pl.savefig(project_dir + "figures/relaxation_aw_repos.eps")
def burstPlot(x,y):
c = (y >0)*(x>0)
lx = np.log10(x[c])
ly = np.log10(y[c])
fit = S.linregress(lx,ly)
print fit
plot(lx,ly,'o')
plot(lx,lx*fit[0]+fit[1],'k-')
return fit
x = resampled['activity']['repos'].values
y = resampled['activity']['events'].values
slope = burstPlot(x,y)[0]
## Simple consistency test: the composed exponent sAR*sRE (actors->repos,
## then repos->events) should be close to the directly fitted
## actor->event exponent sAE
sAE = 1.14
sAR = 1.05
sRE = 1.07
print sAR*sRE
activity_type = ['events','actors','repos']
tseries = {}
for at in activity_type:
print "%s activity"%at
test = resampled['activity'][at]
before = test[test.index >= astroweek[0]].values
during = test[(test.index >= astroweek[0]) & (test.index < astroweek[1])].values
after = test[(test.index > astroweek[1]) & (test.index <= time_boundaries[1])].values
after_start = test[(test.index >= astroweek[0]) & (test.index <= "2015-04-20")].values
tseries[at] = {'before' : before, 'during' : during, 'after' : after, 'after_start' : after_start}
period = 'before'
x_type = 'actors'
y_type = 'events'
pl.figure(1,(10,9))
burstPlot(tseries[x_type][period],tseries[y_type][period])
xlabel("log10(%s)"%x_type)
ylabel("log10(%s)"%y_type)
pl.savefig(project_dir + "figures/productive_burst_before.eps")
ec_before = df_users_created['event_count_before'].values
ec_during = df_users_created['event_count_during'].values
ec_after = df_users_created['event_count_after'].values
c = (ec_during > 0)*(ec_before > 0)*(ec_after > 0)
logBefore = np.log10(ec_before[c])
logDuring = np.log10(ec_during[c])
logAfter = np.log10(ec_after[c])
figure(1,(10,9))
fit = S.linregress(logBefore,logDuring)
print fit
pl.plot(logBefore,logDuring,'s',ms=8,color="darkblue")#,label="During Astro Hack Week")
pl.plot(logBefore,logAfter,'ro',ms=8)#,label="Period : After Astro Hack Week")
pl.plot(logBefore,logBefore*fit[0]+fit[1],'-',color="darkblue",label="During Astro Hack Week")#,label="Scaling Exp. = %.2f (std. err.=%.2f) (During AstroWeek)"%(fit[0],fit[4]
fit2 = S.linregress(logBefore,logAfter)
print fit2
pl.plot(logBefore,logBefore*fit2[0]+fit2[1],'r-',label="After Astro Hack Week")#%(fit2[0],fit2[4]))
pl.plot(logBefore,logBefore+fit2[1],'k-',label="Linearity")
pl.xlabel("log10(Event Count Before) [per actor]")
pl.ylabel("log10(Event Count Period) [per actor]")
pl.xlim(xmin=0.25)
pl.legend(loc=2)
pl.savefig(project_dir + "figures/tradeoff_senior_contributors.eps")
repo_timestamp = np.array([datetime.strptime(dt,"%Y-%m-%d %H:%M:%S") for dt in df_repo_created_aw['repo_created_at']])
df_repo_created_aw['repo_timestamp'] = repo_timestamp
diff_timestamp = df_repo_created_aw.timestamp - df_repo_created_aw.repo_timestamp
df_repo_created_aw['diff_timestamp'] = diff_timestamp
df_repo_created_aw['diff_hours'] = [x.astype('timedelta64[h]')/np.timedelta64(1, 'h') for x in df_repo_created_aw['diff_timestamp']]
r = np.unique(df_repo_created_aw['repo'].values,return_counts=True)
o = np.argsort(r[1])
for X in r[0][o]:
x,y = rankorder(df_repo_created_aw[df_repo_created_aw['repo']==X]['diff_hours'])
loglog(x/24.,y,'o')
x,y = rankorder(df_repo_created_aw['diff_hours'])
loglog(x/24.,y,'ko-')
'''Non-parametric daily detrending: divide each day's count by the mean activity level for that weekday'''
test = resampled['repos']
correctedTimeSeries = test.values.copy()
wkd = test.index.weekday
dailyActivity = {}
correctedTimeSeries = np.array(map(float,test.values.copy()))
for wday in np.unique(wkd):
dailyActivity[wday] = np.mean(test.values[wkd==wday])
print dailyActivity[wday],np.std(test.values[wkd==wday])
#print correctedTimeSeries[wkd==wday]
correctedTimeSeries[wkd==wday] = correctedTimeSeries[wkd==wday]/float(dailyActivity[wday])
#print correctedTimeSeries[wkd==wday]
figure(1,(15,7))
plot((test.values-np.mean(test.values))/np.std(test.values),'k-')
plot(correctedTimeSeries,'r-')
user_df = prepareUserDf(df)
pl.figure(1,(13,9))
periods = ['event_count_before','event_count_during','event_count_after']
colors = {'event_count_before': 'blue',
'event_count_during': 'green',
'event_count_after' :'red'}
day_count = [184.,5.,182.]
#days_before = 184
#days_after = 182
minx = 150
dicMinCCDF = {"event_count_before" : minx ,
"event_count_during": 5,
"event_count_after": minx
}
for i,p in enumerate(periods):
print p
data = user_df[p].values
data = data[data>0]
x,y = rankorder(data)
x = np.array(x) / day_count[i]
lx = np.log10(x)
ly = np.log10(y)
ly = ly/float(ly[-1])
#lxmin = lx[x> dicMinCCDF[p]][-1]
#lymin = ly[x> dicMinCCDF[p]][-1]
pl.plot(lx,ly,'o',label=p,color=colors[p])
#fit = pwlaw_fit_disc(x,dicMinCCDF[p])
#print lymin-fit[0]*lx
#pl.plot(lx,-fit[0]*lx + (lymin +fit[0]*lxmin) ,'k',color=colors[p])
#print fit
pl.ylim(0,1)
#pl.xlim(0,1.5)
pl.legend(loc=0)
pl.xlabel("Count Events (per day)")
pl.ylabel("CCDF")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up
Step2: Initial condition
Step3: Run the model
Step4: Snapshots
Step5: pyqg has a built-in method that computes the vertical modes.
Step6: We can project the solution onto the modes
Step7: Diagnostics
Step8: By default the modal KE and PE spectra are also calculated
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from numpy import pi
from matplotlib import pyplot as plt
%matplotlib inline
import pyqg
from pyqg import diagnostic_tools as tools
L = 1000.e3 # length scale of box [m]
Ld = 15.e3 # deformation scale [m]
kd = 1./Ld # deformation wavenumber [m^-1]
Nx = 64 # number of grid points
H1 = 500. # layer 1 thickness [m]
H2 = 1750. # layer 2
H3 = 1750. # layer 3
U1 = 0.05 # layer 1 zonal velocity [m/s]
U2 = 0.025 # layer 2
U3 = 0.00 # layer 3
rho1 = 1025.
rho2 = 1025.275
rho3 = 1025.640
rek = 1.e-7 # linear bottom drag coeff. [s^-1]
f0 = 0.0001236812857687059 # coriolis param [s^-1]
beta = 1.2130692965249345e-11 # planetary vorticity gradient [m^-1 s^-1]
Ti = Ld/(abs(U1)) # estimate of most unstable e-folding time scale [s]
dt = Ti/200. # time-step [s]
tmax = 300*Ti # simulation time [s]
m = pyqg.LayeredModel(nx=Nx, nz=3, U = [U1,U2,U3],V = [0.,0.,0.],L=L,f=f0,beta=beta,
H = [H1,H2,H3], rho=[rho1,rho2,rho3],rek=rek,
dt=dt,tmax=tmax, twrite=5000, tavestart=Ti*10)
sig = 1.e-7
qi = sig*np.vstack([np.random.randn(m.nx,m.ny)[np.newaxis,],
np.random.randn(m.nx,m.ny)[np.newaxis,],
np.random.randn(m.nx,m.ny)[np.newaxis,]])
m.set_q(qi)
m.run()
plt.figure(figsize=(18,4))
plt.subplot(131)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,(m.q[0,]+m.Qy[0]*m.y)/(U1/Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('Layer 1 PV')
plt.subplot(132)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,(m.q[1,]+m.Qy[1]*m.y)/(U1/Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('Layer 2 PV')
plt.subplot(133)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,(m.q[2,]+m.Qy[2]*m.y)/(U1/Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('Layer 3 PV')
print "The first baroclinic deformation radius is", m.radii[1]/1.e3, "km"
print "The second baroclinic deformation radius is", m.radii[2]/1.e3, "km"
pn = m.modal_projection(m.p)
plt.figure(figsize=(18,4))
plt.subplot(131)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,pn[0]/(U1*Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('Barotropic streamfunction')
plt.subplot(132)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,pn[1]/(U1*Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('1st baroclinic streamfunction')
plt.subplot(133)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,pn[2]/(U1*Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('2nd baroclinic streamfunction')
kr, kespec_1 = tools.calc_ispec(m,m.get_diagnostic('KEspec')[0])
_, kespec_2 = tools.calc_ispec(m,m.get_diagnostic('KEspec')[1])
_, kespec_3 = tools.calc_ispec(m,m.get_diagnostic('KEspec')[2])
plt.loglog( kr, kespec_1, '.-' )
plt.loglog( kr, kespec_2, '.-' )
plt.loglog( kr, kespec_3, '.-' )
plt.legend(['layer 1','layer 2', 'layer 3'], loc='lower left')
plt.ylim([1e-14,1e-6]); plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Kinetic Energy Spectrum');
kr, modal_kespec_1 = tools.calc_ispec(m,m.get_diagnostic('KEspec_modal')[0])
_, modal_kespec_2 = tools.calc_ispec(m,m.get_diagnostic('KEspec_modal')[1])
_, modal_kespec_3 = tools.calc_ispec(m,m.get_diagnostic('KEspec_modal')[2])
_, modal_pespec_2 = tools.calc_ispec(m,m.get_diagnostic('PEspec_modal')[0])
_, modal_pespec_3 = tools.calc_ispec(m,m.get_diagnostic('PEspec_modal')[1])
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.loglog( kr, modal_kespec_1, '.-' )
plt.loglog( kr, modal_kespec_2, '.-' )
plt.loglog( kr, modal_kespec_3, '.-' )
plt.legend(['barotropic ','1st baroclinic', '2nd baroclinic'], loc='lower left')
plt.ylim([1e-14,1e-6]); plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Kinetic Energy Spectra');
plt.subplot(122)
plt.loglog( kr, modal_pespec_2, '.-' )
plt.loglog( kr, modal_pespec_3, '.-' )
plt.legend(['1st baroclinic', '2nd baroclinic'], loc='lower left')
plt.ylim([1e-14,1e-6]); plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Potential Energy Spectra');
_, APEgenspec = tools.calc_ispec(m,m.get_diagnostic('APEgenspec'))
_, APEflux = tools.calc_ispec(m,m.get_diagnostic('APEflux'))
_, KEflux = tools.calc_ispec(m,m.get_diagnostic('KEflux'))
_, KEspec = tools.calc_ispec(m,m.get_diagnostic('KEspec')[1]*m.M**2)
ebud = [ APEgenspec,
APEflux,
KEflux,
-m.rek*(m.Hi[-1]/m.H)*KEspec ]
ebud.append(-np.vstack(ebud).sum(axis=0))
ebud_labels = ['APE gen','APE flux div.','KE flux div.','Diss.','Resid.']
[plt.semilogx(kr, term) for term in ebud]
plt.legend(ebud_labels, loc='upper right')
plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Spectral Energy Transfers');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields AOD
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
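# Example (hypothetical text -- substitute your model's actual overview):
# DOC.set_value("Two-moment modal aerosol scheme coupled to the host GCM")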
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
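# Example (hypothetical selection from the choices above):
# DOC.set_value("whole atmosphere")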
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Continuous variables
Step2: Binary classification
Step3: Run the Binary Classifier metrics function and view the True Positive Rate and the False Positive Rate
Step4: View all metrics at a given threshold value
Step5: Run the Area Under ROC curve function
Step6: Multi-class classification
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
# %sql postgresql://gpdbchina@10.194.10.68:55000/madlib
%sql postgresql://fmcquillan@localhost:5432/madlib
%sql select madlib.version();
%%sql
DROP TABLE IF EXISTS test_set;
CREATE TABLE test_set(
pred FLOAT8, -- predicted values
obs FLOAT8 -- actual observed values
);
INSERT INTO test_set VALUES
(37.5,53.1), (12.3,34.2), (74.2,65.4), (91.1,82.1);
SELECT * FROM test_set;
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.mean_abs_error( 'test_set', 'table_out', 'pred', 'obs');
SELECT * FROM table_out;
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.mean_abs_perc_error( 'test_set', 'table_out', 'pred', 'obs');
SELECT * FROM table_out;
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.mean_perc_error( 'test_set', 'table_out', 'pred', 'obs');
SELECT * FROM table_out;
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.mean_squared_error( 'test_set', 'table_out', 'pred', 'obs');
SELECT * FROM table_out;
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.r2_score( 'test_set', 'table_out', 'pred', 'obs');
SELECT * FROM table_out;
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.adjusted_r2_score( 'test_set', 'table_out', 'pred', 'obs', 3, 100);
SELECT * FROM table_out;
%%sql
DROP TABLE IF EXISTS test_set;
CREATE TABLE test_set AS
SELECT ((a*8)::integer)/8.0 pred, -- prediction probability TRUE
((a*0.5+random()*0.5)>0.5) obs -- actual observations
FROM (select random() as a from generate_series(1,100)) x;
SELECT * FROM test_set;
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.binary_classifier( 'test_set', 'table_out', 'pred', 'obs');
SELECT threshold, tpr, fpr FROM table_out ORDER BY threshold;
%%sql
SELECT * FROM table_out WHERE threshold=0.5;
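%%sql
-- Illustrative follow-up using only the columns shown above: pick the
-- threshold that maximizes Youden's J statistic (tpr - fpr).
SELECT threshold, tpr - fpr AS youden_j
FROM table_out
ORDER BY youden_j DESC
LIMIT 1;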
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.area_under_roc( 'test_set', 'table_out', 'pred', 'obs');
SELECT * FROM table_out;
%%sql
DROP TABLE IF EXISTS test_set;
CREATE TABLE test_set AS
SELECT (x+y)%5+1 AS pred,
(x*y)%5 AS obs
FROM generate_series(1,5) x,
generate_series(1,5) y;
SELECT * FROM test_set;
%%sql
DROP TABLE IF EXISTS table_out;
SELECT madlib.confusion_matrix( 'test_set', 'table_out', 'pred', 'obs');
SELECT * FROM table_out ORDER BY class;
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we have category_lines, a dictionary mapping each category (language) to a list of lines (names). We also kept track of all_categories (just a list of languages) and n_categories for later reference.
Step2: Turning Names into Tensors
Step3: Creating the Network
Step4: Manually testing the network
Step5: To run a step of this network we need to pass an input (in our case, the Tensor for the current letter) and a previous hidden state (which we initialize as zeros at first). We'll get back the output (probability of each language) and a next hidden state (which we keep for the next step).
Step6: For the sake of efficiency we don't want to be creating a new Tensor for every step, so we will use line_to_tensor instead of letter_to_tensor and use slices. This could be further optimized by pre-computing batches of Tensors.
Step7: As you can see the output is a <1 x n_categories> Tensor, where every item is the likelihood of that category (higher is more likely).
Step8: We will also want a quick way to get a training example (a name and its language)
Step9: Training the Network
Step10: We will also create an "optimizer" which updates the parameters of our model according to its gradients. We will use the vanilla SGD algorithm with a low learning rate.
Step11: Each loop of training will
Step12: Now we just have to run that with a bunch of examples. Since the train function returns both the output and loss we can print its guesses and also keep track of loss for plotting. Since there are 1000s of examples we print only every print_every time steps, and take an average of the loss.
Step13: Plotting the Results
Step14: Evaluating the Results
Step15: You can pick out bright spots off the main axis that show which languages it guesses incorrectly, e.g. Chinese for Korean, and Spanish for Italian. It seems to do very well with Greek, and very poorly with English (perhaps because of overlap with other languages).
|
<ASSISTANT_TASK:>
Python Code:
import glob
all_filenames = glob.glob('../data/names/*.txt')
print(all_filenames)
import unicodedata
import string
all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
print(unicode_to_ascii('Ślusàrski'))
# Build the category_lines dictionary, a list of names per language
category_lines = {}
all_categories = []
# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
return [unicode_to_ascii(line) for line in lines]
for filename in all_filenames:
category = filename.split('/')[-1].split('.')[0]
all_categories.append(category)
lines = readLines(filename)
category_lines[category] = lines
n_categories = len(all_categories)
print('n_categories =', n_categories)
print(category_lines['Italian'][:5])
import torch
# Just for demonstration, turn a letter into a <1 x n_letters> Tensor
def letter_to_tensor(letter):
tensor = torch.zeros(1, n_letters)
letter_index = all_letters.find(letter)
tensor[0][letter_index] = 1
return tensor
# Turn a line into a <line_length x 1 x n_letters>,
# or an array of one-hot letter vectors
def line_to_tensor(line):
tensor = torch.zeros(len(line), 1, n_letters)
for li, letter in enumerate(line):
letter_index = all_letters.find(letter)
tensor[li][0][letter_index] = 1
return tensor
print(letter_to_tensor('J'))
print(line_to_tensor('Jones').size())
import torch.nn as nn
from torch.autograd import Variable
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(input_size + hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def init_hidden(self):
return Variable(torch.zeros(1, self.hidden_size))
n_hidden = 128
rnn = RNN(n_letters, n_hidden, n_categories)
input = Variable(letter_to_tensor('A'))
hidden = rnn.init_hidden()
output, next_hidden = rnn(input, hidden)
print('output.size =', output.size())
input = Variable(line_to_tensor('Albert'))
hidden = Variable(torch.zeros(1, n_hidden))
output, next_hidden = rnn(input[0], hidden)
print(output)
def category_from_output(output):
top_n, top_i = output.data.topk(1) # Tensor out of Variable with .data
category_i = top_i[0][0]
return all_categories[category_i], category_i
print(category_from_output(output))
import random
def random_training_pair():
category = random.choice(all_categories)
line = random.choice(category_lines[category])
category_tensor = Variable(torch.LongTensor([all_categories.index(category)]))
line_tensor = Variable(line_to_tensor(line))
return category, line, category_tensor, line_tensor
for i in range(10):
category, line, category_tensor, line_tensor = random_training_pair()
print('category =', category, '/ line =', line)
criterion = nn.NLLLoss()
learning_rate = 0.005 # If you set this too high, it might explode. If too low, it might not learn
optimizer = torch.optim.SGD(rnn.parameters(), lr=learning_rate)
def train(category_tensor, line_tensor):
rnn.zero_grad()
hidden = rnn.init_hidden()
for i in range(line_tensor.size()[0]):
output, hidden = rnn(line_tensor[i], hidden)
loss = criterion(output, category_tensor)
loss.backward()
optimizer.step()
    return output, loss.item()
import time
import math
n_epochs = 100000
print_every = 5000
plot_every = 1000
# Keep track of losses for plotting
current_loss = 0
all_losses = []
def time_since(since):
now = time.time()
s = now - since
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
start = time.time()
for epoch in range(1, n_epochs + 1):
# Get a random training input and target
category, line, category_tensor, line_tensor = random_training_pair()
output, loss = train(category_tensor, line_tensor)
current_loss += loss
# Print epoch number, loss, name and guess
if epoch % print_every == 0:
guess, guess_i = category_from_output(output)
        correct = '✓' if guess == category else '✗ (%s)' % category
print('%d %d%% (%s) %.4f %s / %s %s' % (epoch, epoch / n_epochs * 100, time_since(start), loss, line, guess, correct))
# Add current loss avg to list of losses
if epoch % plot_every == 0:
all_losses.append(current_loss / plot_every)
current_loss = 0
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
plt.figure()
plt.plot(all_losses)
# Keep track of correct guesses in a confusion matrix
confusion = torch.zeros(n_categories, n_categories)
n_confusion = 10000
# Just return an output given a line
def evaluate(line_tensor):
hidden = rnn.init_hidden()
for i in range(line_tensor.size()[0]):
output, hidden = rnn(line_tensor[i], hidden)
return output
# Go through a bunch of examples and record which are correctly guessed
for i in range(n_confusion):
category, line, category_tensor, line_tensor = random_training_pair()
output = evaluate(line_tensor)
guess, guess_i = category_from_output(output)
category_i = all_categories.index(category)
confusion[category_i][guess_i] += 1
# Normalize by dividing every row by its sum
for i in range(n_categories):
confusion[i] = confusion[i] / confusion[i].sum()
# Set up plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)
# Force label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def predict(input_line, n_predictions=3):
print('\n> %s' % input_line)
output = evaluate(Variable(line_to_tensor(input_line)))
# Get top N categories
topv, topi = output.data.topk(n_predictions, 1, True)
predictions = []
for i in range(n_predictions):
value = topv[0][i]
category_index = topi[0][i]
print('(%.2f) %s' % (value, all_categories[category_index]))
predictions.append([value, all_categories[category_index]])
predict('Dovesky')
predict('Jackson')
predict('Satoshi')
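# Optional (illustrative): save the trained weights so the classifier can be
# reloaded later with rnn.load_state_dict(); the filename is an arbitrary choice.
torch.save(rnn.state_dict(), 'char_rnn_classification.pt')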
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Preprocessing
Step2: We proceed to run Paolo Bestagini's routine to include a small window of values to account for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
Step3: Data Analysis
Step4: Prediction
|
<ASSISTANT_TASK:>
Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install tpot
from __future__ import print_function
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier
from scipy.signal import medfilt
#Load Data
data = pd.read_csv('../facies_vectors.csv')
# Parameters
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
print(data.head())
data['PE'] = data.groupby("Facies").PE.transform(lambda x: x.fillna(x.mean()))
# Store features and labels
X = data[feature_names].values
y = data['Facies'].values
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
X_aug, padded_rows = augment_features(X, well, depth)
# Initialize model selection methods
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
def preprocess():
# Preprocess data to use in model
X_train_aux = []
X_test_aux = []
y_train_aux = []
y_test_aux = []
# For each data split
split = split_list[5]
# Remove padded rows
split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
X_train_aux.append( X_tr )
X_test_aux.append( X_v )
y_train_aux.append( y_tr )
y_test_aux.append ( y_v )
X_train = np.concatenate( X_train_aux )
X_test = np.concatenate ( X_test_aux )
y_train = np.concatenate ( y_train_aux )
y_test = np.concatenate ( y_test_aux )
return X_train , X_test , y_train , y_test
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = preprocess()
tpot = TPOTClassifier(generations=5, population_size=20,
verbosity=2,max_eval_time_mins=20,
max_time_mins=100,scoring='f1_micro',
random_state = 17)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('FinalPipeline_LM_mean_per_facies.py')
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
#clf = make_pipeline(make_union(VotingClassifier([("est", ExtraTreesClassifier(criterion="gini", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))
#clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights="distance") )
#clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion="entropy", max_features=0.0001, n_estimators=500))
# * clf = make_pipeline( make_union(VotingClassifier([("est", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))
clf = make_pipeline ( XGBClassifier(learning_rate=0.12, max_depth=3, min_child_weight=10, n_estimators=150, seed = 17, colsample_bytree = 0.9) )
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
#Load testing data
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Prepare training data
X_tr = X
y_tr = y
# Augment features
X_tr, padded_rows = augment_features(X_tr, well, depth)
# Removed padded rows
X_tr = np.delete(X_tr, padded_rows, axis=0)
y_tr = np.delete(y_tr, padded_rows, axis=0)
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment features
X_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts)
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('Prediction_XX_Final.csv')
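# Quick sanity check (illustrative): how the predicted facies labels are
# distributed across the test wells.
print(test_data['Facies'].value_counts().sort_index())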
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since we are running on separate test data, we don't need to do a train_test_split here. But we will scale the data. Need to remember to scale the test data later!
Step2: Applying to Quasar Candidates
Step3: If you want to compare ZSPEC to ZPHOT, use the cells below for the test set
Step4: Scale the test data
Step5: Not currently executing the next 2 cells, but putting the code here in case we want to do it later.
Step6: Instantiate Photo-z Algorithm of Choice
Step7: Apply Photo-z Algorithm(s)
Step8: Nadaraya-Watson
Step9: Only need this if Xtest is too big
|
<ASSISTANT_TASK:>
Python Code:
## Read in the Training Data and Instantiate the Photo-z Algorithm
%matplotlib inline
from astropy.table import Table
import numpy as np
import matplotlib.pyplot as plt
#data = Table.read('GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean.fits')
#JT PATH ON TRITON to training set after classification
#data = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Training_set/GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean_with_shenlabel.fits')
data = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Training_set/GTR-ADM-QSO-Trainingset-with-McGreer-VVDS-DR12Q_splitlabel_VCVcut_best.fits')
#JT PATH HOME USE SHEN ZCUT
#data = Table.read('/home/john/Catalogs/QSO_Candidates/Training_set/GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean_with_shenlabel.fits')
#data = data.filled()
# Remove stars
qmask = (data['zspec']>0)
qdata = data[qmask]
print len(qdata)
# X is in the format need for all of the sklearn tools, it just has the colors
#Xtrain = np.vstack([ qdata['ug'], qdata['gr'], qdata['ri'], qdata['iz'], qdata['zs1'], qdata['s1s2']]).T
Xtrain = np.vstack([np.asarray(qdata[name]) for name in ['ug', 'gr', 'ri', 'iz', 'zs1', 's1s2']]).T
#y = np.array(data['labels'])
ytrain = np.array(qdata['zspec'])
# For algorithms that need scaled data:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(Xtrain) # Don't cheat - fit only on training data
#testdata = Table.read('GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits')
# TEST DATA USING 3.5<z<5 zrange ON TRITON
#testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Final_S82_candidates_full/GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits')
# TEST DATA USING 2.9<z<5.4 zrange ON HOME
#testdata = Table.read('/Users/johntimlin/Catalogs/QSO_Candidates/photoz/SpIES_SHELA_Quasar_Canidates_Shen_zrange_JTmultiproc.fits')
#testdata = Table.read('./catalogs/HZ_forphotoz.fits')
testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/New_training_candidates/Test_point_source_classifier/Final_sets/HZLZ_combined_all_wphotoz_alldata_allclassifiers.fits')
#Limit to objects that have been classified as quasars
#qsocandmask = ((testdata['ypredRFC']==0) | (testdata['ypredSVM']==0) | (testdata['ypredBAG']==0))
testdatacand = testdata#[qsocandmask]
print len(testdata),len(testdatacand)
## Test zspec objects with zspec >=2.9 and see how well the zphot matches with zspec
#testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Final_S82_candidates_full/QSOs_S82_wzspec_wcolors.fits')
#Limit to objects that have been classified as quasars
#qsocandmask = ((testdata['ypredRFC']==0) | (testdata['ypredSVM']==0) | (testdata['ypredBAG']==0))
#qsocandmask = (testdata['ZSPEC'] >= 2.9)
#testdatacand = testdata#[qsocandmask]
#print len(testdata),len(testdatacand)
#Xtest = np.vstack([ testdatacand['ug'], testdatacand['gr'], testdatacand['ri'], testdatacand['iz'], testdatacand['zs1'], testdatacand['s1s2']]).T
Xtest = np.vstack([np.asarray(testdatacand[name]) for name in ['ug', 'gr', 'ri', 'iz', 'zs1', 's1s2']]).T
XStest = scaler.transform(Xtest) # apply same transformation to test data
# Read in KDE candidates
dataKDE = Table.read('GTR-ADM-QSO-ir-testhighz_kdephotoz_lup_2016_quasar_candidates.dat', format='ascii')
print dataKDE.keys()
XKDE = np.vstack([ dataKDE['ug'], dataKDE['gr'], dataKDE['ri'], dataKDE['iz'], dataKDE['zch1'], dataKDE['ch1ch2'] ]).T
print len(XKDE)
# Read in RF candidates
dataRF = Table.read('GTR-ADM-QSO-ir_good_test_2016_out.fits')
print dataRF.keys()
print len(dataRF)
# Canidates only
maskRF = (dataRF['ypred']==0)
dataRF = dataRF[maskRF]
print len(dataRF)
# X is in the format need for all of the sklearn tools, it just has the colors
XRF = np.vstack([ dataRF['ug'], dataRF['gr'], dataRF['ri'], dataRF['iz'], dataRF['zs1'], dataRF['s1s2']]).T
import numpy as np
from astroML.linear_model import NadarayaWatson
model = NadarayaWatson('gaussian', 0.05)
model.fit(Xtrain,ytrain)
from sklearn.ensemble import RandomForestRegressor
modelRF = RandomForestRegressor()
modelRF.fit(Xtrain,ytrain)
zphotRF = modelRF.predict(Xtest)
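# Illustrative: inspect which colors drive the random forest regression.
for name, imp in zip(['ug', 'gr', 'ri', 'iz', 'zs1', 's1s2'],
                     modelRF.feature_importances_):
    print('%s: %.3f' % (name, imp))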
zphotNW = model.predict(Xtest)
from dask import compute, delayed
def process(Xin):
return model.predict(Xin)
# Create dask objects
dobjs = [delayed(process)(x.reshape(1,-1)) for x in Xtest]
import dask.threaded
ypred = compute(*dobjs, get=dask.threaded.get)
# The dask output needs to be reformatted.
zphotNW = np.array(ypred).reshape(1,-1)[0]
testdatacand['zphotNW'] = zphotNW
testdatacand['zphotRF'] = zphotRF
#TRITON PATH
#testdatacand.write('/Users/johntimlin/Catalogs/QSO_candidates/photoz/Candidates_photoz_S82_shenzrange.fits', format='fits')
#HOME PATH
#testdatacand.write('/home/john/Catalogs/QSO_Candidates/photoz/Candidates_photoz_S82_shenzrange.fits', format='fits')
testdatacand.write('./HZLZ_combined_all_hzclassifiers_wphotoz.fits')
from densityplot import *
from pylab import *
fig = plt.figure(figsize=(5,5))
hex_scatter(testdatacand['zphotNW'],testdatacand['ug'], min_cnt=10, levels=2, std=True, smoothing=1,
hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues},
skwargs={'color': 'k'})
plt.xlabel('zphot')
plt.ylabel('u-g')
#plt.xlim([-0.1,5.5])
#plt.ylim([-0.1,5.5])
plt.show()
from astroML.plotting import hist as fancyhist
fancyhist(testdatacand['zphotRF'], bins="freedman", histtype="step")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Jax
Step2: Section 1
Step3: So we see that JAX gets the same answer by computing the derivative of the composite or by multiplying together the two component derivatives.
Step4: Multivariate
Step5: Now we can compute the partial
Step6: Let's compare to the right-hand-side of the partial formula
Step7: Same results!
Step8: Example
Step9: Exercise
Step10: Now let's compare the derivatives. We want to see that the chain rule holds
Step11: So we see that the chain rule holds for our test point.
Step12: Section 2
Step13: Let's use squared error as the loss for a data point $(x, y)$
Step14: We could then use these partial derivatives to update our weights, and iterate
Step15: Now we construct neural networks to fit to this data.
Step16: We'll use cross entropy for the loss function.
Step17: For different choices of parameters we obtain different network architectures. For example we may want one or more hidden inner layers, or more neurons per hidden layer.
Step18: We can use grad to compute the derivatives.
Step19: First, we'll try to fit XOR with a one-layer network (i.e, no hidden layers) with gradient descent.
Step20: Plotting the predictions of the network on a grid, we find that we can't fit XOR. In hindsight, this is forseeable because the classes (0 and 1) are not linearly separable.
Step21: What about a two-layer network? With a two-layer neural network, we add a "hidden" layer between the input and output. The hidden layer applies an affine function to the input followed by a non-linear "activation" function (in this case, hyperbolic tangent).
Step22: Again, we plot the predictions of the network and find that this time we're able to correctly classify the points.
|
<ASSISTANT_TASK:>
Python Code:
#@title Python imports
import collections
import datetime
from functools import partial
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from scipy import stats
import seaborn as sns
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from IPython.display import display, Math
import sympy
# Make colab plots larger
plt.rcParams['figure.figsize'] = [8, 6]
plt.rcParams['figure.dpi'] = 100
# Functions to display matrices nicely
def matrix_to_latex(a):
return sympy.latex(sympy.Matrix(np.round(a, 3)))
def display_latex(s, newline=True):
display(Math(s))
if newline:
print()
import jax
import jax.numpy as jnp
from jax import jit, grad, vmap, api, random, jacfwd, jacrev
from jax.experimental import optimizers, stax
# Define our functions
def f(x):
return jnp.sqrt(x)
def g(x):
return x*x + 1
def y(x):
return f(g(x))
# We'll use x0 as our point so as not to over use the letter x
x0 = 1.
# Compute the gradient functions.
dy_dx = jacfwd(y)
dg_dx = jacfwd(g)
df_dg = jacfwd(f)
# Compute the values and print
display_latex(r"\frac{{d y}}{{d x}}({}) = {}".format(x0, dy_dx(x0)))
display_latex(r"\frac{{dg}}{{dx}}({}) = {}".format(x0, dg_dx(x0)))
display_latex(r"\frac{{\partial f}}{{dx}} ({}) = {}".format(g(x0), df_dg(g(x0))))
display_latex(r"{} * {} = {}".format(dg_dx(x0), df_dg(g(x0)), np.dot(dg_dx(x0), df_dg(g(x0)))))
# Experiments here!
def f(x):
# Your outer function
pass
def g(x):
# Your inner function
pass
def y(x):
# The composite
return f(g(x))
# Copy the derivatives code from the above cells
# @title Solution (double click to show)
def f(x):
return 1. / (1 + jnp.exp(x))
def g(x):
# Your inner function
return -2 * x
def y(x):
# The composite
return f(g(x))
# We'll use x0 as our point so as not to over use the letter x
x0 = 0.
# Compute the gradient functions.
dy_dx = jacfwd(y)
dg_dx = jacfwd(g)
df_dg = jacfwd(f)
# Compute the values and print
display_latex(r"\frac{{d y}}{{d x}}({}) = {}".format(x0, dy_dx(x0)))
display_latex(r"\frac{{dg}}{{dx}}({}) = {}".format(x0, dg_dx(x0)))
display_latex(r"\frac{{\partial f}}{{dx}} ({}) = {}".format(g(x0), df_dg(g(x0))))
display_latex(r"{} * {} = {}".format(dg_dx(x0), df_dg(g(x0)), np.dot(dg_dx(x0), df_dg(g(x0)))))
def f(x, y):
return jnp.exp((x*x - y*y)/4)
def g(s, t):
return s + t
def h(s, t):
return s - t
# We'll use z for the composite so as not to define f twice in code
def z(s, t):
return f(g(s, t), h(s, t))
# Use JAX to get the Jacobian, which in this case is again a function that returns
# a scalar value
# argnums=0 gives us the partial with respect to the first variable
dz_ds = jacfwd(z, argnums=0)
for s, t in [(1., 0.), (0., 1.), (1., 1.)]:
display_latex(r"\frac{{\partial z}}{{\partial s}}({}, {}) = {}".format(s, t, dz_ds(s, t)))
dz_dx = jacfwd(f, argnums=0)
dx_ds = jacfwd(g, argnums=0)
dz_dy = jacfwd(f, argnums=1)
dy_ds = jacfwd(h, argnums=0)
for s, t in [(1., 0.), (0., 1.), (1., 1.)]:
x_ = g(s, t)
y_ = h(s, t)
dz_ds = dz_dx(x_, y_) * dx_ds(s, t) + dz_dy(x_, y_) * dy_ds(s, t)
display_latex(r"\frac{{\partial z}}{{\partial s}}({}, {}) = {}".format(s, t, dz_ds))
# Your code here (copy and modify the above code cells)
# @title Solution (double click to show)
# Use JAX to get the Jacobian, which in this case is a function that returns
# a scalar value
# argnums=0 gives us the partial with respect to the first variable
dz_dt = jacfwd(z, argnums=1)
for s, t in [(1., 0.), (0., 1.), (1., 1.)]:
display_latex(r"\frac{{\partial z}}{{\partial s}}({}, {}) = {}".format(t, s, dz_dt(s, t)))
dz_dx = jacfwd(f, argnums=0)
dx_dt = jacfwd(g, argnums=1)
dz_dy = jacfwd(f, argnums=1)
dy_dt = jacfwd(h, argnums=1)
for s, t in [(1., 0.), (0., 1.), (1., 1.)]:
x = g(s, t)
y = h(s, t)
dz_dt = dz_dx(x, y) * dx_dt(s, t) + dz_dy(x, y) * dy_dt(s, t)
display_latex(r"\frac{{\partial z}}{{\partial s}}({}, {}) = {}".format(s, t, dz_dt))
# Fix the dimension
n = 4
def f(x):
return jnp.sum(x * x)
def g(t):
return t * np.array([i for i in range(1, n+1)])
def h(t):
return f(g(t))
# Compute the partials
t = 1.
dz_dx = jacfwd(f)
dx_dt = jacfwd(g)
dh_dt = jacfwd(h)
display_latex(r"\frac{{d f}}{{d t}}({}) = {}".format(t, dh_dt((t))))
display_latex(r"\frac{{\partial \mathbf{{x}}}}{{\partial t}}({}) = {}".format(t, dx_dt(t)))
display_latex(r"\frac{{\partial f}}{{\partial \mathbf{{x}} }} ({}) = {}".format(g(t), dz_dx(g(t))))
display_latex(r"{} * {} = {}".format(dx_dt(t), dz_dx(g(t)), np.dot(dx_dt(t), dz_dx(g(t)))))
# Dimensions of our matrices, n x m and m x k
n = 3
m = 4
k = 2
# Two random matrices
A_1 = np.random.rand(n, m)
A_2 = np.random.rand(m, k)
display_latex(r"A_1 = {}".format(matrix_to_latex(A_1)))
display_latex(r"A_2 = {}".format(matrix_to_latex(A_2)))
def f(x):
return jnp.dot(A_1, x)
def g(x):
return jnp.dot(A_2, x)
def h(x):
return f(g(x))
test_point = jnp.array([1.]*k)
display_latex(r"\mathbf{{x}} = {}".format(matrix_to_latex(test_point)))
display_latex(r"g(\mathbf{{x}}) = {}".format(matrix_to_latex(g(test_point))))
display_latex(r"f(g(\mathbf{{x}})) = {}".format(matrix_to_latex(f(g(test_point)))))
display_latex(r"h(\mathbf{{x}}) = {}".format(matrix_to_latex(h(test_point))))
# This time we'll go ahead and evaluate
dh_dx = jacfwd(h)(test_point)
dg_dx = jacfwd(g)(test_point)
df_dg = jacfwd(f)(g(test_point))
display_latex(r"\frac{{\partial f}}{{\partial g}} = {}".format(matrix_to_latex(df_dg)))
display_latex(r"\frac{{\partial g}}{{\partial x}} = {}".format(matrix_to_latex(dg_dx)))
display_latex(r"\frac{{\partial h}}{{\partial x}} = {}".format(matrix_to_latex(dh_dx)))
display_latex(r"\frac{{\partial f}}{{\partial g}} \frac{{\partial g}}{{\partial x}} = {} {} = {}".format(
matrix_to_latex(df_dg),
matrix_to_latex(dg_dx),
matrix_to_latex(np.dot(df_dg, dg_dx))))
test_point = jnp.array(np.identity(k))
## Take the 0-th index to collapse the outer array layer
dh_dx = jacfwd(h)(test_point[0])
dg_dx = jacfwd(g)(test_point[0])
df_dg = jacfwd(f)(jnp.array(np.identity(m))[0])
display_latex(r"\frac{{\partial f}}{{\partial g}} = {} = A_1".format(matrix_to_latex(df_dg)))
display_latex(r"\frac{{\partial g}}{{\partial x}} = {} = A_2".format(matrix_to_latex(dg_dx)))
display_latex(r"\frac{{\partial h}}{{\partial x}} = {} = A_1 A_2".format(matrix_to_latex(dh_dx)))
display_latex(r"A_1 A_2 = {}".format(matrix_to_latex(np.dot(A_1, A_2))))
def activation(x):
return jax.nn.sigmoid(x)
# return 1. / (1. + jnp.exp(-x))
def g(W, b, x):
# vmap applies the activation component-wise, in case
# the output is a vector
return vmap(activation)(jnp.dot(W, x) + b)
# Initialize W and b
# Choose their sizes
n = 1
m = 3
# Pick two random matrices
W = np.random.rand(n, m)
b = np.random.rand(n, 1)
print("Random initial parameter values")
display_latex(r"W = {}".format(matrix_to_latex(W)))
display_latex(r"b = {}".format(matrix_to_latex(b)))
def loss(y, W, b, x):
return (y - g(W, b, x))**2
# Compute the derivative of the loss function at an initial point
# from our "data"
y = np.random.rand(n, 1)
x_0 = np.random.rand(m, 1)
# Uncomment if you want to see the values
# print(x_0)
# print(y)
print("Partials for weights ({} x {} values):\n".format(n, m))
dloss_W = jacrev(loss, argnums=1)(y, W, b, x_0)
print(dloss_W)
print("\nPartials for bias ({} values):\n".format(n))
dloss_b = jacrev(loss, argnums=2)(y, W, b, x_0)
print(dloss_b)
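# A minimal gradient-descent step (illustrative). The per-output losses are
# reduced to a scalar so the gradients match the shapes of W and b; the
# learning rate alpha is an arbitrary choice.
def scalar_loss(W_, b_):
    return jnp.sum(loss(y, W_, b_, x_0))

dW = grad(scalar_loss, argnums=0)(W, b)
db = grad(scalar_loss, argnums=1)(W, b)
alpha = 0.1
W_new, b_new = W - alpha * dW, b - alpha * db
print("\nLoss before step: {:.5f}, after step: {:.5f}".format(
    float(scalar_loss(W, b)), float(scalar_loss(W_new, b_new))))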
## Data representing XOR
data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
labels = np.array([0, 1, 1, 0], dtype=np.float32)
plt.scatter(data[:,0], data[:,1], c=labels, cmap='viridis', s=100)
plt.show()
# Initialize a fully-connected network
def init_params(key, in_dim, out_dims: list):
params = {}
num_layers = len(out_dims)
# Build up the layers iteratively
for layer in range(num_layers):
out_dim = out_dims[layer]
key, subkey1, subkey2 = random.split(key, num=3)
W = jax.nn.initializers.glorot_normal()(subkey1, (in_dim, out_dim))
b = jax.nn.initializers.normal()(subkey2, (out_dim,))
params["layer_{}".format(layer)] = {'W': W, 'b': b}
in_dim = out_dim
return params
@jit
def predict(params, inputs):
# This is the forward pass where we calculate the outputs of each layer
n_layers = len(params.keys())
for i, layer in enumerate(params.keys()):
W, b = params[layer].values()
        if i == n_layers - 1:  # output layer: affine only (sigmoid applied below)
outputs = jnp.dot(inputs, W) + b
# Apply this activation to the hidden layers
else:
outputs = jax.nn.tanh(jnp.dot(inputs, W) + b)
inputs = outputs
return jax.nn.sigmoid(outputs)
@jit
def cross_entropy(y_hat, y):
return -y * jnp.log(y_hat) - (1 - y) * jnp.log(1 - y_hat)
@jit
def loss_fn(params, data, labels):
predictions = predict(params, data)
return jnp.mean(cross_entropy(predictions, labels))
@jit
def update(params, grads, alpha=0.1):
# Gradient descent update
return jax.tree_multimap(lambda p, g: p - alpha * g, params, grads)
# Network with no hidden layers
key = random.PRNGKey(0)
params = init_params(key, in_dim=2, out_dims=[1])
params
# Parameters for a network with one hidden layer with 10 neurons
key = random.PRNGKey(0)
params = init_params(key, in_dim=2, out_dims=[10, 1])
params
# per example gradients
key = random.PRNGKey(0)
params = init_params(key, in_dim=2, out_dims=[1])
vmap(partial(grad(loss_fn), params))(data, labels)
key = random.PRNGKey(0)
params = init_params(key, in_dim=2, out_dims=[1])
loss_grad_fn = grad(loss_fn)
losses = []
iterations = 500
for i in range(iterations):
grads = loss_grad_fn(params, data, labels)
losses.append(loss_fn(params, data, labels))
params = update(params, grads, alpha=.1)
plt.plot(losses)
plt.xlabel("Iteration")
plt.ylabel("Cross entropy loss")
fig, ax = plt.subplots()
xx, yy = np.mgrid[-.1:1.1:.01, -.1:1.1:.01]
grid = np.c_[xx.ravel(), yy.ravel()]
probs = predict(params, grid).reshape(xx.shape)
contour = ax.contourf(xx, yy, probs, levels=50)
fig.colorbar(contour)
ax.scatter(data[:,0], data[:,1], c=labels, s=100, cmap='viridis')
plt.show()
key = random.PRNGKey(9)
params = init_params(key, in_dim=2, out_dims=[10, 1])
loss_grad_fn = grad(loss_fn)
losses = []
iterations = 100
for i in range(iterations):
grads = loss_grad_fn(params, data, labels)
losses.append(loss_fn(params, data, labels))
params = update(params, grads, alpha=.8)
plt.plot(losses)
plt.xlabel("Iteration")
plt.ylabel("Cross entropy loss")
fig, ax = plt.subplots()
xx, yy = np.mgrid[-.1:1.1:.01, -.1:1.1:.01]
grid = np.c_[xx.ravel(), yy.ravel()]
probs = predict(params, grid).reshape(xx.shape)
contour = ax.contourf(xx, yy, probs, levels=50)
fig.colorbar(contour)
ax.scatter(data[:,0], data[:,1], c=labels, s=100, cmap='viridis')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here I'll create a model like normal, using the same one from my solution for part 4.
Step2: The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are precision and recall and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
Step3: With the probabilities, we can get the most likely class using the ps.topk method. This returns the $k$ highest values. Since we just want the most likely class, we can use ps.topk(1). This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
Step4: Now we can check if the predicted classes match the labels. This is simple to do by equating top_class and labels, but we have to be careful of the shapes. Here top_class is a 2D tensor with shape (64, 1) while labels is 1D with shape (64). To get the equality to work out the way we want, top_class and labels must have the same shape.
Step5: Now we need to calculate the percentage of correct predictions. equals has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to torch.mean. If only it was that simple. If you try torch.mean(equals), you'll get an error
Step6: The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using torch.no_grad()
Step7: Overfitting
Step8: Inference
|
<ASSISTANT_TASK:>
Python Code:
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5),
(0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/',
download=True,
train=True,
transform=transform)
trainloader = torch.utils.data.DataLoader(dataset=trainset,
batch_size=64,
shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/',
download=True,
train=False,
transform=transform)
testloader = torch.utils.data.DataLoader(dataset=testset,
batch_size=64,
shuffle=True)
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
equals = top_class == labels.view(*top_class.shape)
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(params=model.parameters(),
lr=0.003)
epochs = 50
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
## TODO: Implement the validation pass and print out the validation accuracy
### Resetting the validation values each epoch
test_loss = 0
accuracy = 0
with torch.no_grad():
for images, labels in testloader:
# Calculate the log probabilities
log_probs = model(images)
# Add the loss of the batch to the total loss
test_loss += criterion(log_probs, labels)
ps = torch.exp(log_probs)
# Choose the predicted class with the highest probability
top_p, top_class = ps.topk(k=1, dim=1)
# Calculate number of correct guesses for the batch
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss / len(trainloader))
test_losses.append(test_loss / len(testloader))
print("Epoch: {}/{}.. ".format(e + 1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss / len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss / len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy / len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
## TODO: Define your model with dropout added
class MyModelWithDropout(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
        # Dropout with 0.2 probability of dropping a value
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
x = F.log_softmax(self.fc4(x), dim=1)
return x
## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy
model = MyModelWithDropout()
criterion = nn.NLLLoss()
optimizer = optim.Adam(params=model.parameters(),
lr=0.003)
epochs = 50
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
## TODO: Implement the validation pass and print out the validation accuracy
### Resetting the validation values each epoch
test_loss = 0
accuracy = 0
with torch.no_grad():
for images, labels in testloader:
# Calculate the log probabilities
log_probs = model(images)
# Add the loss of the batch to the total loss
test_loss += criterion(log_probs, labels)
# Obtaining softmax from log_softmax last layer activation
ps = torch.exp(log_probs)
# Choose the predicted class with the highest probability
top_p, top_class = ps.topk(k=1, dim=1)
# Calculate number of correct guesses for the batch
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss / len(trainloader))
test_losses.append(test_loss / len(testloader))
print("Epoch: {}/{}.. ".format(e + 1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss / len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss / len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy / len(testloader)))
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
# Import helper module
import helper
# Evaluation
model.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring the data
Step2: Each image is 28 x 28 pixels and represents a digit from 0 to 9. These images are black and white, so each pixel is a value from 0 (white) to 255 (black). Raw numbers can be hard to interpret sometimes, so we can plot the values to see the handwritten digit as an image.
Step3: Define the model
Step5: Write Input Functions
Step6: Time to train the model! The original MNIST linear classifier had an error rate of 12%. Let's use that to sanity check that our model is learning.
Step7: Evaluating Predictions
Step8: It's understandable why the poor computer would have some trouble. Some of these images are difficult for even humans to read. In fact, we can see what the computer thinks each digit looks like.
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.layers import Dense, Flatten, Softmax
print(tf.__version__)
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
HEIGHT, WIDTH = x_train[0].shape
NCLASSES = tf.size(tf.unique(y_train).y)
print("Image height x width is", HEIGHT, "x", WIDTH)
tf.print("There are", NCLASSES, "classes")
IMGNO = 12
# Uncomment to see raw numerical values.
# print(x_test[IMGNO])
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
print("The label for image number", IMGNO, "is", y_test[IMGNO])
def linear_model():
    # One possible implementation: flatten the pixels, then a single dense layer
    # with a softmax over the ten classes (multinomial logistic regression).
    model = Sequential([Flatten(), Dense(NCLASSES), Softmax()])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
BUFFER_SIZE = 5000
BATCH_SIZE = 100
def scale(image, label):
    # Scale pixel values from [0, 255] into [0.0, 1.0]
    image = tf.cast(image, tf.float32) / 255.0
    return image, label
def load_dataset(training=True):
Loads MNIST dataset into a tf.data.Dataset
(x_train, y_train), (x_test, y_test) = mnist
x = x_train if training else x_test
y = y_train if training else y_test
    # One-hot encode the classes, apply the `scale` function, and create the dataset
    y = tf.one_hot(y, NCLASSES)
    dataset = tf.data.Dataset.from_tensor_slices((x, y)).map(scale)
    if training:
        dataset = dataset.shuffle(BUFFER_SIZE).repeat()
    dataset = dataset.batch(BATCH_SIZE)
    return dataset
def create_shape_test(training):
dataset = load_dataset(training=training)
data_iter = dataset.__iter__()
(images, labels) = data_iter.get_next()
expected_image_shape = (BATCH_SIZE, HEIGHT, WIDTH)
expected_label_ndim = 2
assert(images.shape == expected_image_shape)
assert(labels.numpy().ndim == expected_label_ndim)
test_name = 'training' if training else 'eval'
print("Test for", test_name, "passed!")
create_shape_test(True)
create_shape_test(False)
NUM_EPOCHS = 10
STEPS_PER_EPOCH = 100
model = linear_model()
train_data = load_dataset()
validation_data = load_dataset(training=False)
OUTDIR = "mnist_linear/"
checkpoint_callback = ModelCheckpoint(
OUTDIR, save_weights_only=True, verbose=1)
tensorboard_callback = TensorBoard(log_dir=OUTDIR)
history = model.fit(
    train_data,
    validation_data=validation_data,
    epochs=NUM_EPOCHS,
    steps_per_epoch=STEPS_PER_EPOCH,
verbose=2,
callbacks=[checkpoint_callback, tensorboard_callback]
)
BENCHMARK_ERROR = .12
BENCHMARK_ACCURACY = 1 - BENCHMARK_ERROR
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
assert(accuracy[-1] > BENCHMARK_ACCURACY)
assert(val_accuracy[-1] > BENCHMARK_ACCURACY)
print("Test to beat benchmark accuracy passed!")
assert(accuracy[0] < accuracy[1])
assert(accuracy[1] < accuracy[-1])
assert(val_accuracy[0] < val_accuracy[1])
assert(val_accuracy[1] < val_accuracy[-1])
print("Test model accuracy is improving passed!")
assert(loss[0] > loss[1])
assert(loss[1] > loss[-1])
assert(val_loss[0] > val_loss[1])
assert(val_loss[1] > val_loss[-1])
print("Test loss is decreasing passed!")
image_numbers = range(0, 10, 1) # Change me, please.
def load_prediction_dataset():
dataset = (x_test[image_numbers], y_test[image_numbers])
dataset = tf.data.Dataset.from_tensor_slices(dataset)
dataset = dataset.map(scale).batch(len(image_numbers))
return dataset
predicted_results = model.predict(load_prediction_dataset())
for index, prediction in enumerate(predicted_results):
predicted_value = np.argmax(prediction)
actual_value = y_test[image_numbers[index]]
if actual_value != predicted_value:
print("image number: " + str(image_numbers[index]))
print("the prediction was " + str(predicted_value))
print("the actual label is " + str(actual_value))
print("")
bad_image_number = 8
plt.imshow(x_test[bad_image_number].reshape(HEIGHT, WIDTH));
DIGIT = 0 # Change me to be an integer from 0 to 9.
LAYER = 1 # Layer 0 flattens image, so no weights
WEIGHT_TYPE = 0 # 0 for variable weights, 1 for biases
dense_layer_weights = model.layers[LAYER].get_weights()
digit_weights = dense_layer_weights[WEIGHT_TYPE][:, DIGIT]
plt.imshow(digit_weights.reshape((HEIGHT, WIDTH)))
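# Illustrative extension (not part of the original lab): visualize the learned weight
# "template" for every digit at once, using only variables defined above.
fig, axes = plt.subplots(2, 5, figsize=(12, 5))
for digit, ax in enumerate(axes.ravel()):
    ax.imshow(dense_layer_weights[0][:, digit].reshape((HEIGHT, WIDTH)))
    ax.set_title(str(digit))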
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The significance probability (p-value) is high at 34%, so the null hypothesis cannot be rejected. Therefore $\theta=0.5$.
Step2: The p-value is high at 92%, so the null hypothesis cannot be rejected. Therefore $\theta=0.5$.
Step3: The p-value is low at 0.018%, so the null hypothesis can be rejected. Therefore $\theta \neq 0.5$.
Step4: The p-value is high at 17.8%, so the null hypothesis cannot be rejected. Therefore $\theta_0=(0.25, 0.25, 0.25, 0.25)$.
Step5: The p-value is 0.087%, so the null hypothesis can be rejected. Therefore $\theta \neq (0.25, 0.25, 0.25, 0.25)$.
Step6: The p-value is 1.96%, so at a significance level of 5% or more the null hypothesis can be rejected. Therefore $\mu \neq 0$. This is a case where the test result is actually wrong; the error occurs because the sample size of 10 is too small.
Step7: The p-value is 54.98%, so the null hypothesis cannot be rejected. Therefore $\mu = 0$.
Step8: The p-value is 4.78%, so at a significance level of 5% or more the null hypothesis can be rejected. Therefore $\mu \neq 0$. This is again a case where the test result is wrong, caused by the insufficient sample size of 10.
Step9: The p-value is 55.62%, so the null hypothesis cannot be rejected. Therefore $\mu = 0$.
Step10: The p-value is 68.4%, so the null hypothesis cannot be rejected and we conclude $\mu_1 = \mu_2$. This is a case where the test result is wrong, since the two samples were actually generated with different means.
Step11: After increasing the sample sizes to 50 and 100, the p-value drops to 0.8%, so the null hypothesis that the two distributions have the same mean can now be rejected.
Step12: With only 5 paired observations, the paired t-test already confirms that the two means differ, with a p-value of 0.2%.
Step13: Equal-variance test
Step14: Normality test
|
<ASSISTANT_TASK:>
Python Code:
N = 10
theta_0 = 0.5
np.random.seed(0)
x = sp.stats.bernoulli(theta_0).rvs(N)
n = np.count_nonzero(x)
n
sp.stats.binom_test(n, N)
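# Illustrative cross-check (not part of the original): the two-sided binomial test
# p-value can be reproduced by summing the probabilities of all outcomes that are
# no more likely than the observed count n (this mirrors scipy's definition).
pmf = sp.stats.binom(N, theta_0).pmf(np.arange(N + 1))
print(pmf[pmf <= pmf[n] * (1 + 1e-7)].sum())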
N = 100
theta_0 = 0.5
np.random.seed(0)
x = sp.stats.bernoulli(theta_0).rvs(N)
n = np.count_nonzero(x)
n
sp.stats.binom_test(n, N)
N = 100
theta_0 = 0.35
np.random.seed(0)
x = sp.stats.bernoulli(theta_0).rvs(N)
n = np.count_nonzero(x)
n
sp.stats.binom_test(n, N)
N = 10
K = 4
theta_0 = np.ones(K)/K
np.random.seed(0)
x = np.random.choice(K, N, p=theta_0)
n = np.bincount(x, minlength=K)
n
sp.stats.chisquare(n)
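# Sanity check (illustrative): the chi-square statistic by hand as
# sum((observed - expected)^2 / expected), with a uniform expectation of N/K per category.
expected = N / K
print(((n - expected) ** 2 / expected).sum())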
N = 100
K = 4
theta_0 = np.array([0.35, 0.30, 0.20, 0.15])
np.random.seed(0)
x = np.random.choice(K, N, p=theta_0)
n = np.bincount(x, minlength=K)
n
sp.stats.chisquare(n)
N = 10
mu_0 = 0
np.random.seed(0)
x = sp.stats.norm(mu_0).rvs(N)
x
def ztest_1samp(x, sigma2=1, mu=0):
z = (x.mean() - mu)/ np.sqrt(sigma2/len(x))
return z, 2 * sp.stats.norm().sf(np.abs(z))
ztest_1samp(x)
N = 100
mu_0 = 0
np.random.seed(0)
x = sp.stats.norm(mu_0).rvs(N)
ztest_1samp(x)
N = 10
mu_0 = 0
np.random.seed(0)
x = sp.stats.norm(mu_0).rvs(N)
sns.distplot(x, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_1samp(x, popmean=0)
N = 100
mu_0 = 0
np.random.seed(0)
x = sp.stats.norm(mu_0).rvs(N)
sns.distplot(x, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_1samp(x, popmean=0)
N_1 = 10; mu_1 = 0; sigma_1 = 1
N_2 = 10; mu_2 = 0.5; sigma_2 = 1
np.random.seed(0)
x1 = sp.stats.norm(mu_1, sigma_1).rvs(N_1)
x2 = sp.stats.norm(mu_2, sigma_2).rvs(N_2)
sns.distplot(x1, kde=False, fit=sp.stats.norm)
sns.distplot(x2, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_ind(x1, x2, equal_var=True)
N_1 = 50; mu_1 = 0; sigma_1 = 1
N_2 = 100; mu_2 = 0.5; sigma_2 = 1
np.random.seed(0)
x1 = sp.stats.norm(mu_1, sigma_1).rvs(N_1)
x2 = sp.stats.norm(mu_2, sigma_2).rvs(N_2)
sns.distplot(x1, kde=False, fit=sp.stats.norm)
sns.distplot(x2, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_ind(x1, x2, equal_var=True)
N = 5
mu_1 = 0
mu_2 = 0.5
np.random.seed(1)
x1 = sp.stats.norm(mu_1).rvs(N)
x2 = x1 + sp.stats.norm(mu_2, 0.1).rvs(N)
sns.distplot(x1, kde=False, fit=sp.stats.norm)
sns.distplot(x2, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_rel(x1, x2)
def chi2var_test(x, sigma2=1):
v = x.var(ddof=1)
t = (len(x) - 1)*v/sigma2
return t, sp.stats.chi2(df=len(x)-1).sf(np.abs(t))
N = 10
mu_0 = 0
sigma_0 = 1.1
np.random.seed(0)
x = sp.stats.norm(mu_0, sigma_0).rvs(N)
sns.distplot(x, kde=False, fit=sp.stats.norm)
plt.show()
x.std()
chi2var_test(x)
N1 = 100
N2 = 100
sigma_1 = 1
sigma_2 = 1.2
np.random.seed(0)
x1 = sp.stats.norm(0, sigma_1).rvs(N1)
x2 = sp.stats.norm(0, sigma_2).rvs(N2)
sns.distplot(x1, kde=False, fit=sp.stats.norm)
sns.distplot(x2, kde=False, fit=sp.stats.norm)
plt.show()
x1.std(), x2.std()
sp.stats.bartlett(x1, x2)
sp.stats.fligner(x1, x2)
sp.stats.levene(x1, x2)
np.random.seed(0)
N1 = 50
N2 = 100
x1 = sp.stats.norm(0, 1).rvs(N1)
x2 = sp.stats.norm(0.5, 1.5).rvs(N2)
sns.distplot(x1)
sns.distplot(x2)
plt.show()
sp.stats.ks_2samp(x1, x2)
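# Illustrative addition: a direct normality check on each sample with the Shapiro-Wilk
# test; a small p-value would reject the hypothesis that the sample is normal.
print(sp.stats.shapiro(x1))
print(sp.stats.shapiro(x2))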
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import required modules
Step2: Target Configuration
Step3: Workload Execution and Functions Profiling Data Collection
Step4: Parse Trace and Profiling Data
Step5: Report Functions Profiling Data
Step6: Plot Functions Profiling Data
|
<ASSISTANT_TASK:>
Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%matplotlib inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
from executor import Executor
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
# Support for trace events analysis
from trace import Trace
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
"password" : 'juno',
# Folder where all the results will be collected
"results_dir" : "TraceAnalysis_FunctionsProfiling",
# Define devlib modules to load
"modules": ['cpufreq'],
"exclude_modules" : [ 'hwmon' ],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"functions" : [
"pick_next_task_fair",
"select_task_rq_fair",
"enqueue_task_fair",
"update_curr_fair",
"dequeue_task_fair",
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
# "rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
def experiment(te):
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# FTrace the execution of this workload
te.ftrace.start()
rtapp.run(out_dir=te.res_dir)
te.ftrace.stop()
# Collect and keep track of the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
# Collect and keep track of the Kernel Functions performance data
stats_file = os.path.join(te.res_dir, 'trace.stats')
te.ftrace.get_stats(stats_file)
# Dump platform descriptor
te.platform_dump(te.res_dir)
experiment(te)
# Base folder where tests folder are located
res_dir = te.res_dir
logging.info('Content of the output folder %s', res_dir)
!tree {res_dir}
with open(os.path.join(res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
print json.dumps(platform, indent=4)
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
trace = Trace(platform, res_dir, events=[])
# Get the DataFrame for the specified list of kernel functions
df = trace.data_frame.functions_stats(['enqueue_task_fair', 'dequeue_task_fair'])
df
# Get the DataFrame for the single specified kernel function
df = trace.data_frame.functions_stats('select_task_rq_fair')
df
# Plot Average and Total execution time for the specified
# list of kernel functions
trace.analysis.functions.plotProfilingStats(
functions = [
'select_task_rq_fair',
'enqueue_task_fair',
'dequeue_task_fair'
],
metrics = [
# Average completion time per CPU
'avg',
# Total execution time per CPU
'time',
]
)
# Plot Average execution time for the single specified kernel function
trace.analysis.functions.plotProfilingStats(
functions = 'update_curr_fair',
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify region
Step2: Set data
Step3: Reanalysis
Step4: Once we have the full data set, we can subdivide it to create individual files for different regions, reducing the run time when reading in data for an individual region.
Step5: Region masks
Step6: The actual mask files are made with a different script, written in NCL. The code here modifies the generic script based on the region we're currently interested in.
Step7: Drawing a map
|
<ASSISTANT_TASK:>
Python Code:
#--- Libraries
import pandas as pd # statistics packages
import numpy as np # linear algebra packages
import matplotlib.pyplot as plt # plotting routines
import seaborn as sns # more plotting routines
import shapefile # routines for using 'shapefiles'
import urllib # routines for interacting with websites
import subprocess # routines for calling external OS commands
from mpl_toolkits.basemap import Basemap # plotting routines for map making
from matplotlib import gridspec # plotting routines for multiple plots
from netCDF4 import Dataset # routines for interacting with NetCDF files
from matplotlib import cm # more plotting routines
from matplotlib.collections import LineCollection # more plotting routines
from cdo import * # routines for interacting with NetCDF files
cdo = Cdo() # via an external program
# place graphics in the notebook document
%matplotlib inline
#--- Identify country for example
# label country
country = 'India'
# define bounding box for region
mlat = '0' ; Mlat = '40' ; mlon = '65' ; Mlon = '105'
#--- Pull in data from DesInvetar records
# Read file of reported heatwaves (original spreadsheet)
heatwave_data = pd.read_csv('../data/Heatwaves_database.csv')
# repair region name with space before name
heatwave_data.loc[(heatwave_data.Region==' Tamil Nadu'),'Region'] = 'Tamil Nadu'
# list out the dates for example country (India)
india_dates = heatwave_data['Date (YMD)'][heatwave_data['Country'].isin(['India'])]
# find year of earliest entry
min_year = np.min([int(x.split('/')[0]) for x in india_dates])
# find year of latest entry
max_year = np.max([int(x.split('/')[0]) for x in india_dates])
#---Download NetCDF files
# path to data directory for max/min daily temperatures
path_maxmin = 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface_gauss'
# path to data directory for 6hr temperature records
path_hourly = 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis/surface_gauss'
# loop through years
for yr in range(1948,2016) :
# write max 2meter temperature to new file
path = path_maxmin+'/tmax.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.max.daily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# write min 2meter temperature to new file
path = path_maxmin+'/tmin.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.min.daily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# write 2meter temperature at 6-hour intervals to new file
path = path_hourly+'/air.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.subdaily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# set data as single multiyear files
_ = cdo.mergetime(input='../data/t2m.max.daily.*.nc',output='../data/t2m.max.daily.nc')
_ = cdo.mergetime(input='../data/t2m.min.daily.*.nc',output='../data/t2m.min.daily.nc')
_ = cdo.mergetime(input='../data/t2m.subdaily.*.nc',output='../data/t2m.subdaily.nc')
#--- Create data files of region
# select region from min-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.min.daily.nc',
output='../data/'+country+'.t2m.min.daily.nc')
# select region from max-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.max.daily.nc',
output='../data/'+country+'.t2m.max.daily.nc')
# select region from hourly-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.subdaily.nc',
output='../data/'+country+'.t2m.subdaily.nc')
# create a daily mean temperature file
_ = cdo.daymean(input='../data/'+country+'.t2m.subdaily.nc',
output='../data/'+country+'.t2m.daily.nc')
#--- Trim time range of file to match disaster records
# list years in time range
years_in_record = ','.join([ str(x) for x in range(min_year,max_year+1) ])
# subset regional data
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.min.daily.nc',
output='../data/'+country+'.t2m.min.daily.subset.nc')
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.max.daily.nc',
output='../data/'+country+'.t2m.max.daily.subset.nc')
# _ = cdo.selyear(years_in_record,
# input='../data/'+country+'.t2m.subdaily.nc',
# output='../data/'+country+'.t2m.subdaily.subset.nc')
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.daily.nc',
output='../data/'+country+'.t2m.daily.subset.nc')
# retain base period file (needed for one of the heat wave definitions)
years = ','.join([ str(x) for x in range(1960,1991)])
_ = cdo.selyear(years,
input='../data/'+country+'.t2m.max.daily.nc',
output='../data/'+country+'basefile.nc')
#--- Create blank file for region
# write grid information to file
ofile = open('../data/ncep_grid.asc','w')
ofile.write('\n'.join(cdo.griddes(input='../data/'+country+'.t2m.daily.nc')))
ofile.close()
# create data file with all values set to 1
_ = cdo.const('1','../data/ncep_grid.asc',
output='../data/'+country+'.blank.ncepgrid.nc',
options='-f nc')
#--- Identify regions of interest
# make list of unique region names for country
regions = list( set(heatwave_data.Region.where(heatwave_data.Country==country)) )
# remove nans (from regions that arent in the selected country)
regions = [x for x in regions if str(x) != 'nan']
regions = [x.title() for x in regions]
if ( country == 'Chile') :
regions_shapefile = [u'Antofagasta',u'Araucan\xeda',
u'Ais\xe9n del General Carlos Ib\xe1\xf1ez del Campo',
u'Regi\xf3n Metropolitana de Santiago',
u'Magallanes y Ant\xe1rtica Chilena',
u"Libertador General Bernardo O'Higgins"]
else :
regions_shapefile = regions
#--- Create masks
# loop through regions
for i in range(len(regions)) :
# find the name of the region
reg = regions[i].title()
# find the name of the region as defined by the shapefile
reg_shapefile = regions_shapefile[i] #reg_shapefile = regions_shapefile[i].decode('utf-8')
# remove spaces
reg = reg.strip()
# report what's happening
print("Creating masking script for "+reg+", aka "+reg_shapefile)
# create NCL script from defualt file with name of region
with open('maskregions_'+"".join(country.split(" "))+'.ncl', 'r') as input_file, open('crMaskFile.ncl', 'w') as output_file:
# check lines for dummy line
for line in input_file :
if line.strip() == 'region = "STATE/PROVINCE"' :
# overwrite with region name
output_file.write(' region = "'+reg_shapefile.encode('utf-8')+'"\n')
else :
output_file.write(line)
# run NCL routine
print("Running masking script")
# subprocess.call(['/bin/bash','-i','-c','ncl crMaskFile.ncl'])
subprocess.call(['/bin/bash','-c','ncl crMaskFile.ncl'])
# create a file that masks the region
print("Renaming mask and copying to data folder.")
subprocess.call(['cp','mask.nc',"../data/"+"_".join(reg.split())+'.mask.nc'])
#--- Create single mask file showing all considered regions
# combine all the individual mask files
_ = cdo.add(input='../data/Orissa.mask.nc ../data/Uttar_Pradesh.mask.nc',
output='../data/tmp.nc')
_ = cdo.add(input='../data/tmp.nc ../data/Tamil_Nadu.mask.nc',
output='../data/India.masks.nc')
#--- Map regions of India used in this example
# read which regions are included in disaster database
regions = list(set(heatwave_data.loc[(heatwave_data.Country=='India'),'Region']))
# Create a map object
chart = Basemap(projection='lcc',resolution='c',
lat_0=20,lon_0=85,
llcrnrlat=5,urcrnrlat=35,
llcrnrlon=70,urcrnrlon=100)
# add geographic features
chart.shadedrelief()
# draw parallels and meridians.
chart.drawparallels(np.arange(-90.,91.,10.),labels=[False,True,True,False])
chart.drawmeridians(np.arange(-180.,181.,10.),labels=[True,False,False,True])
# add country outline
chart.readshapefile('../data/IND_adm0', 'IND0',drawbounds=True) ;
# add region outlines, for regions in data set
chart.readshapefile('../data/IND_adm1', 'IND1',drawbounds=False) ;
for info, shape in zip(chart.IND1_info, chart.IND1):
if info['NAME_1'] in regions :
x, y = zip(*shape)
chart.plot(x, y, marker=None,color=sns.xkcd_rgb['dusty orange'])
# load file of combined regional masks
ncfile = Dataset('../data/India.masks.nc')
# read mask data
rmask = ncfile.variables['region_mask'][:]
# get coordinates of data
lons = ncfile.variables['lon'][:]
lats = ncfile.variables['lat'][:]
# shift so that lines show grid box boundaries,
# rather than grid point locations
lons = lons - (1.875/2)
lats = lats + (1.9047/2)
# if in western hemisphere, need to label as
# "all the way round", rather than +/-
# lons = lons - 360
# set coordinates list as grid of locations
lons, lats = np.meshgrid(lons,lats)
# overlay region masks
chart.pcolormesh(lons,lats,rmask,shading='flat',latlon=True, alpha=0.2) ;
# save image
plt.savefig('../figures/india.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model definition
Step2: Then, given the gradient of MSE with respect to w and b, we can define how we update the parameters via SGD
Step3: The whole model can be encapsulated in a function that takes x and target as input, returns the current loss value, and updates its parameters according to `updates`.
Step4: Training
Step5: We can also plot the loss history
Step6: Your Turn
|
<ASSISTANT_TASK:>
Python Code:
import keras.backend as K
import numpy as np
# Placeholders and variables
x = K.placeholder()
target = K.placeholder()
lr = K.variable(0.1)
w = K.variable(np.random.rand())
b = K.variable(np.random.rand())
# Define model and loss
y = w * x + b
loss = K.mean(K.square(y-target))
grads = K.gradients(loss, [w,b])
updates = [(w, w-lr*grads[0]), (b, b-lr*grads[1])]
train = K.function(inputs=[x, target], outputs=[loss], updates=updates)
# Generate data
np_x = np.random.rand(1000)
np_target = 0.96*np_x + 0.24
# Training
loss_history = []
for epoch in range(200):
current_loss = train([np_x, np_target])[0]
loss_history.append(current_loss)
if epoch % 20 == 0:
print("Loss: %.03f, w, b: [%.02f, %.02f]" % (current_loss, K.eval(w), K.eval(b)))
# Plot loss history
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(loss_history)
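# Quick sanity check (illustrative): the fitted parameters should be close to the
# values used to generate the data, w = 0.96 and b = 0.24.
print("w = %.3f, b = %.3f" % (K.eval(w), K.eval(b)))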
from kaggle_data import load_data, preprocess_data, preprocess_labels
X_train, labels = load_data('data/kaggle_ottogroup/train.csv', train=True)
X_train, scaler = preprocess_data(X_train)
Y_train, encoder = preprocess_labels(labels)
X_test, ids = load_data('data/kaggle_ottogroup/test.csv', train=False)
X_test, _ = preprocess_data(X_test, scaler)
nb_classes = Y_train.shape[1]
print(nb_classes, 'classes')
dims = X_train.shape[1]
print(dims, 'dims')
feats = dims
training_steps = 25
x = K.placeholder(dtype="float", shape=X_train.shape)
target = K.placeholder(dtype="float", shape=Y_train.shape)
# Set model weights
W = K.variable(np.random.rand(dims, nb_classes))
b = K.variable(np.random.rand(nb_classes))
# Define model and loss
y = K.dot(x, W) + b
activation = K.softmax(y)  # softmax turns the linear scores into class probabilities
# Minimize error using cross entropy (the mean per-sample loss)
cross_entropy = K.categorical_crossentropy(activation, target)
loss = K.mean(cross_entropy)
grads = K.gradients(loss, [W,b])
updates = [(W, W-lr*grads[0]), (b, b-lr*grads[1])]
train = K.function(inputs=[x, target], outputs=[loss], updates=updates)
# Training
loss_history = []
for epoch in range(training_steps):
current_loss = train([X_train, Y_train])[0]
loss_history.append(current_loss)
if epoch % 20 == 0:
print("Loss: {}".format(current_loss))
#plotting
plt.plot(range(len(loss_history)), loss_history, 'o', label='Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()
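# Quick sanity check (illustrative, not part of the original task): evaluate the softmax
# activations with a K.function and compare the arg-max predictions against the one-hot labels.
predict = K.function(inputs=[x], outputs=[activation])
preds = predict([X_train])[0]
print("Training accuracy: %.3f" % np.mean(np.argmax(preds, axis=1) == np.argmax(Y_train, axis=1)))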
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Pre-processing Functions
Step9: Tokenize Punctuation
Step11: Pre-process all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step17: Input
Step18: Test your dataloader
Step22: Build the Neural Network
Step25: Define forward and backpropagation
Step27: Neural Network Training
Step28: Hyperparameters
Step30: Train
Step32: Question
Step35: Generate TV Script
Step37: Generate a New Script
Step38: Save your favorite scripts
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
view_line_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
    # One possible implementation: enumerate the unique words to build both mappings
    vocab_to_int = {word: ii for ii, word in enumerate(sorted(set(text)))}
    int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}
    # return tuple
    return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
    # Map each punctuation symbol onto a token the tokenizer can treat as a word
    return {'.': '||period||', ',': '||comma||', '"': '||quotation_mark||',
            ';': '||semicolon||', '!': '||exclamation_mark||', '?': '||question_mark||',
            '(': '||left_parentheses||', ')': '||right_parentheses||',
            '-': '||dash||', '\n': '||return||'}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
    # Build (sequence, next word) pairs with a sliding window over the word ids
    words = np.asarray(words)
    features, targets = [], []
    for idx in range(len(words) - sequence_length):
        features.append(words[idx:idx + sequence_length])
        targets.append(words[idx + sequence_length])
    data = TensorDataset(torch.from_numpy(np.asarray(features)),
                         torch.from_numpy(np.asarray(targets)))
    # return a dataloader
    return DataLoader(data, shuffle=True, batch_size=batch_size)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
super(RNN, self).__init__()
        # set class variables
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        # define model layers: embedding -> LSTM -> fully connected output
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
        batch_size = nn_input.size(0)
        embeds = self.embedding(nn_input)
        lstm_out, hidden = self.lstm(embeds, hidden)
        # stack up the LSTM outputs and map them to vocabulary scores
        output = self.fc(lstm_out.contiguous().view(-1, self.hidden_dim))
        output = output.view(batch_size, -1, self.output_size)
        # return one batch of output word scores (last time step only) and the hidden state
        return output[:, -1], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
        # initialize hidden state with zero weights, and move to GPU if available
        weight = next(self.parameters()).data
        hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                  weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
        if train_on_gpu:
            hidden = tuple(h.cuda() for h in hidden)
        return hidden
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_rnn(RNN, train_on_gpu)
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
    # move data to GPU, if available
    if train_on_gpu:
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state so we do not backpropagate through the whole history
    hidden = tuple(each.data for each in hidden)
    # perform backpropagation and optimization
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    # clip gradients to avoid the exploding-gradient problem in RNNs
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()
    # return the loss over a batch and the hidden state produced by our model
    return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
DON'T MODIFY ANYTHING IN THIS CELL
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
# Data params
# Sequence Length (the numeric values below are illustrative starting points, not tuned results)
sequence_length = 10  # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
DON'T MODIFY ANYTHING IN THIS CELL
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The relevent code is under mx.recordio. There are two classes
Step2: Then we can read it back by opening the same file with 'r'
Step3: MXIndexedRecordIO
Step4: We can then access records with keys
Step5: You can list all keys with
Step6: Packing and Unpacking Data
Step7: Image Data
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
record = mx.recordio.MXRecordIO('tmp.rec', 'w')
for i in range(5):
record.write('record_%d'%i)
record.close()
record = mx.recordio.MXRecordIO('tmp.rec', 'r')
while True:
item = record.read()
if not item:
break
print(item)
record.close()
record = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp.rec', 'w')
for i in range(5):
record.write_idx(i, 'record_%d'%i)
record.close()
record = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp.rec', 'r')
record.read_idx(3)
record.keys
# pack
data = 'data'
label1 = 1.0
header1 = mx.recordio.IRHeader(flag=0, label=label1, id=1, id2=0)
s1 = mx.recordio.pack(header1, data)
print('float label:', repr(s1))
label2 = [1.0, 2.0, 3.0]
header2 = mx.recordio.IRHeader(flag=0, label=label2, id=2, id2=0)
s2 = mx.recordio.pack(header2, data)
print('array label:', repr(s2))
# unpack
print(*mx.recordio.unpack(s1))
print(*mx.recordio.unpack(s2))
# pack_img
data = np.ones((3,3,1), dtype=np.uint8)
label = 1.0
header = mx.recordio.IRHeader(flag=0, label=label, id=0, id2=0)
s = mx.recordio.pack_img(header, data, quality=100, img_fmt='.jpg')
print(repr(s))
# unpack_img
print(*mx.recordio.unpack_img(s))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Disposition outcome coding
Step8: Running a simulation
Step9: Predicting case outcomes with court reversal rate
Step10: Predicting case outcomes with Justice reversal rate
Step11: Predicting Justice outcomes with Justice reversal rate
Step12: Predicting Justice outcomes with trailing Justice reversal rate
Step13: Simulating case votes outcomes with trailing Justice reversal rate
|
<ASSISTANT_TASK:>
Python Code:
Setup the outcome map.
Rows correspond to vote types. Columns correspond to disposition types.
Element values correspond to:
* -1: no precedential issued opinion or uncodable, i.e., DIGs
* 0: affirm, i.e., no change in precedent
* 1: reverse, i.e., change in precent
outcome_map = pandas.DataFrame([[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
[-1, 1, 0, 0, 0, 1, 0, -1, -1, -1, -1],
[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[-1, 0, 0, 0, -1, 0, -1, -1, -1, -1, -1]])
outcome_map.columns = range(1, 12)
outcome_map.index = range(1, 9)
def get_outcome(vote, disposition):
Return the outcome code.
if pandas.isnull(vote) or pandas.isnull(disposition):
return -1
return outcome_map.loc[int(vote), int(disposition)]
# Map the case-level disposition outcome
scdb_data.loc[:, "case_outcome_disposition"] = outcome_map.loc[1, scdb_data.loc[:, "caseDisposition"]].values
# Map the justice-level disposition outcome
scdb_data.loc[:, "justice_outcome_disposition"] = scdb_data.loc[:, ("vote", "caseDisposition")] \
.apply(lambda row: get_outcome(row["vote"], row["caseDisposition"]), axis=1)
def predict_court_case_rate(historical_data, justice_list):
Prediction method based on the entire Court case-level historical reversal rate.
:param historical_data: SCDB DataFrame to use for out-of-sample calculationi; must be a subset of SCDB justice-centered
data known up to point in time
:param justice_list: list of Justices to generate predictions for
:return: dictionary containing a prediction score for reversal for each Justice
# Calculate the rate
counts = historical_data.loc[:, "case_outcome_disposition"].value_counts()
rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1])
# Create return dictionary
prediction_map = dict([(justice, rate) for justice in justice_list])
return prediction_map
def predict_court_justice_rate(historical_data, justice_list):
Prediction method based on the entire Court justice-level historical reversal rate.
:param historical_data: SCDB DataFrame to use for out-of-sample calculationi; must be a subset of SCDB justice-centered
data known up to point in time
:param justice_list: list of Justices to generate predictions for
:return: dictionary containing a prediction score for reversal for each Justice
# Calculate the rate
counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts()
rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1])
# Create return dictionary
prediction_map = dict([(justice, rate) for justice in justice_list])
return prediction_map
def predict_justice_rate(historical_data, justice_list):
Prediction method based on the per-Justice historical reversal rate.
:param historical_data: SCDB DataFrame to use for out-of-sample calculationi; must be a subset of SCDB justice-centered
data known up to point in time
:param justice_list: list of Justices to generate predictions for
:return: dictionary containing a prediction score for reversal for each Justice
# Create return dictionary
prediction_map = dict([(justice, numpy.nan) for justice in justice_list])
# Calculate the rate
for justice_id, justice_data in historical_data.groupby('justice'):
# Check justice ID
if justice_id not in justice_list:
continue
# Else, get the rate.
counts = justice_data.loc[:, "justice_outcome_disposition"].value_counts()
rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1])
prediction_map[justice_id] = rate
# In some cases, we have a new Justice without historical data. Fill their value with the overall rate.
counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts()
rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1])
for justice in justice_list:
if pandas.isnull(prediction_map[justice]):
prediction_map[justice] = rate
return prediction_map
def predict_justice_last_rate(historical_data, justice_list, last_terms=1):
Prediction method based on the per-Justice historical reversal rate.
:param historical_data: SCDB DataFrame to use for out-of-sample calculationi; must be a subset of SCDB justice-centered
data known up to point in time
:param justice_list: list of Justices to generate predictions for
:param last_terms: number of recent terms to use for rate estimate
:return: dictionary containing a prediction score for reversal for each Justice
# Create return dictionary
prediction_map = dict([(justice, numpy.nan) for justice in justice_list])
# Calculate the rate
for justice_id, justice_data in historical_data.groupby('justice'):
# Check justice ID
if justice_id not in justice_list:
continue
# Else, get the rate.
max_term = justice_data["term"].max()
counts = justice_data.loc[justice_data["term"] >= (max_term-last_terms+1), "justice_outcome_disposition"].value_counts()
rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1])
prediction_map[justice_id] = rate
# In some cases, we have a new Justice without historical data. Fill their value with the overall rate.
counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts()
rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1])
for justice in justice_list:
if pandas.isnull(prediction_map[justice]):
prediction_map[justice] = rate
return prediction_map
def run_simulation(simulation_data, term_list, prediction_method, score_method="binary"):
This method defines the simulation driver.
:param simulation_data: SCDB DataFrame to use for simulation; must be a subset of SCDB justice-centered data
:param term_list: list of terms to simulate, e.g., [2000, 2001, 2002]
:param prediction_method: method that takes historical data and indicates, by justice, predictions for term
:param score_method: "binary" or "stratified"; binary maps to score >= 0.5, stratified maps to score <= random
:return: copy of simulation_data with additional columns representing predictions
# Initialize predictions
return_data = simulation_data.copy()
return_data.loc[:, "prediction"] = numpy.nan
return_data.loc[:, "prediction_score"] = numpy.nan
# Iterate over all terms
for term in term_list:
# Get indices for dockets to predict and use for historical data
before_term_index = simulation_data.loc[:, "term"] < term
current_term_index = simulation_data.loc[:, "term"] == term
# Get the list of justices
term_justices = sorted(simulation_data.loc[current_term_index, "justice"].unique().tolist())
# Get the prediction map
prediction_map = prediction_method(simulation_data.loc[before_term_index, :], term_justices)
# Get the predictions
return_data.loc[current_term_index, "prediction_score"] = [prediction_map[j] for j in simulation_data.loc[current_term_index, "justice"].values]
# Support both most_frequent and stratified approaches
if score_method == "binary":
return_data.loc[current_term_index, "prediction"] = (return_data.loc[current_term_index, "prediction_score"] >= 0.5).apply(int)
elif score_method == "stratified":
return_data.loc[current_term_index, "prediction"] = (return_data.loc[current_term_index, "prediction_score"] >= numpy.random.random(return_data.loc[current_term_index].shape[0])).apply(int)
else:
raise NotImplementedError
# Get the return range and return
term_index = (return_data.loc[:, "term"].isin(term_list)) & (return_data.loc[:, "case_outcome_disposition"] >= 0) & (return_data.loc[:, "justice_outcome_disposition"] >= 0)
return return_data.loc[term_index, :]
# Set parameters
start_term = 1953
end_term = 2013
# Run simulation for simplest model
print("predict_court_case_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_court_case_rate)
# Analyze results
print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"],
output_data["prediction"]))
# Get accuracy over time and store results
output_data.loc[:, "correct"] = (output_data["case_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1))
output_data.to_csv("results/baseline_court_case_rate.csv", index=False)
court_case_accuracy_by_year = output_data.groupby("term")["correct"].mean()
# Run simulation for simplest model
print("predict_court_justice_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_court_justice_rate)
# Analyze results
print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"],
output_data["prediction"]))
# Get accuracy over time and store results
output_data.loc[:, "correct"] = (output_data["case_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1))
output_data.to_csv("results/baseline_court_justice_rate.csv", index=False)
court_justice_accuracy_by_year = output_data.groupby("term")["correct"].mean()
# Run simulation for simplest model
print("predict_justice_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_rate)
# Analyze results
print(sklearn.metrics.classification_report(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.confusion_matrix(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.accuracy_score(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.f1_score(output_data["justice_outcome_disposition"],
output_data["prediction"]))
# Get accuracy over time and store results
output_data.loc[:, "correct"] = (output_data["justice_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1))
output_data.to_csv("results/baseline_justice_justice_rate.csv", index=False)
justice_accuracy_by_year = output_data.groupby("term")["correct"].mean()
# Run simulation for simplest model
print("predict_justice_last_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_last_rate)
# Analyze results
print(sklearn.metrics.classification_report(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.confusion_matrix(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.accuracy_score(output_data["justice_outcome_disposition"],
output_data["prediction"]))
print(sklearn.metrics.f1_score(output_data["justice_outcome_disposition"],
output_data["prediction"]))
# Get accuracy over time and store results
output_data.loc[:, "correct"] = (output_data["justice_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1))
output_data.to_csv("results/baseline_justice_justice_last_rate.csv", index=False)
justice_last_accuracy_by_year = output_data.groupby("term")["correct"].mean()
# Run vote simulation
print("predict_justice_last_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_last_rate)
output_data.loc[:, "case_prediction"] = numpy.nan
# Iterate over all dockets
for docket_id, docket_data in output_data.groupby('docketId'):
    # Take the majority vote of the Justice-level predictions within this docket
    output_data.loc[docket_data.index, "case_prediction"] = int(docket_data.loc[:, "prediction"].value_counts().idxmax())
# Output case level predictions
output_data.to_csv("results/baseline_case_justice_last_rate.csv", index=False)
print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"].fillna(-1),
output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"].fillna(-1),
output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"].fillna(-1),
output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"].fillna(-1),
output_data["case_prediction"].fillna(-1)))
# Plot all accuracies
f = plt.figure(figsize=(10, 8))
plt.plot(court_case_accuracy_by_year.index, court_case_accuracy_by_year,
marker='o', alpha=0.75)
plt.plot(court_justice_accuracy_by_year.index, court_justice_accuracy_by_year,
marker='o', alpha=0.75)
plt.plot(justice_accuracy_by_year.index, justice_accuracy_by_year,
marker='o', alpha=0.75)
plt.plot(justice_last_accuracy_by_year.index, justice_last_accuracy_by_year,
marker='o', alpha=0.75)
plt.title("Accuracy by term and model", size=24)
plt.xlabel("Term")
plt.ylabel("% correct")
plt.legend(("Court by case disposition", "Court by Justice disposition",
"Justice by justice disposition", "Justice by trailing justice disposition"))
# Plot case disposition rate by term
rate_data = scdb_data.groupby("term")["case_outcome_disposition"].value_counts(normalize=True, sort=True).unstack()
f = plt.figure(figsize=(10, 8))
plt.plot(rate_data.index, rate_data, marker="o", alpha=0.75)
plt.title("Outcome rates by year", size=24)
plt.xlabel("Term")
plt.ylabel("Rate (% of outcomes/term)")
plt.legend(("NA", "Affirm", "Reverse"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NEURAL NETWORK ALGORITHM
Step2: BACKPROPAGATION
Step3: Implementation
|
<ASSISTANT_TASK:>
Python Code:
from learning import *
from notebook import psource, pseudocode
psource(NeuralNetLearner)
pseudocode('Back-Prop-Learning')
psource(BackPropagationLearner)
iris = DataSet(name="iris")
iris.classes_to_numbers()
nNL = NeuralNetLearner(iris)
print(nNL([5, 3, 1, 0.1]))
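# --- Illustrative extra (a hedged sketch, not part of the aima-python exercise above) ---
# A minimal NumPy-only illustration of the backpropagation idea: train a single
# sigmoid neuron on a tiny OR-gate dataset, with the chain rule written out by hand.
# All names below (X_toy, y_toy, ...) are assumptions introduced for this sketch.
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X_toy = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_toy = np.array([0, 1, 1, 1], dtype=float)
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(5000):
    p = _sigmoid(X_toy @ w + b)          # forward pass
    grad_z = (p - y_toy) * p * (1 - p)   # chain rule for a squared-error loss
    w -= lr * X_toy.T @ grad_z           # backward pass: accumulate gradients
    b -= lr * grad_z.sum()
print(np.round(_sigmoid(X_toy @ w + b), 2))  # should approach [0, 1, 1, 1]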
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constructing the experiment
Step2: Next we need to check if the last N throws have been equal to the category we want to observe. To do this we construct a set of the last N tosses. If the size of the set is 1 and the category in the set is the one we are looking for we found a sequence of N equal tosses.
Step3: Running the experiment
Step4: Finally we want to run M experiments and evaluate for the expected number of throws.
Step5: So for 3 heads in a row, what's the expected number of tosses to observe this event?
Step6: As we will see later the non-integer nature of this expectation value is a residual of the numerical procedure we employed and it could easily be cast to an integer.
Step7: Maybe surprisingly the distribution is not very well localized. In fact trying to fit it with an exponential function given the calculated mean fails. Increasing the number of required equals makes the curve flatter and more heavy tailed. Thus the variance itself is also large. In fact it is of the same order as the mean!
Step8: From the plot above we see that the number of tosses until we have $N$ equal outcomes grows exponentially! (Note the logarithmic scale.) For $N=20$ heads in a row we need on the order of 2 million successive throws of a fair coin. If we could manually toss a coin every second, it would take us about 24 days of uninterrupted coin tossing until we could expect 20 heads in a row to appear... but remember
|
<ASSISTANT_TASK:>
Python Code:
import random as rd
import numpy as np
from numpy.random import choice
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
matplotlib.rc_params_from_file("../styles/matplotlibrc" ).update()
list_of_categories = ["H", "T"]
def initializeExperiment(N, prob = [0.5, 0.5]):
tosses = []
for idx in range(N):
tosses.append(choice(list_of_categories, p = prob))
return tosses
def areLastNTossesEqualy(tosses, N, category):
subset = set(tosses[-N:])
if ((len(subset) == 1) and (category in subset)):
return True
else:
return False
def runSingleExperiment(max_num_throws, number_of_equals, prob = [0.5,0.5]):
tosses = initializeExperiment(number_of_equals, prob)
throws = 0
while throws < max_num_throws:
if areLastNTossesEqualy(tosses, number_of_equals, "H"):
return len(tosses)
else:
            tosses.append(choice(list_of_categories, p = prob))
throws += 1
def runKExperimentsAndEvaluate(m_experiments, number_of_equals, number_of_maximum_tosses=500, prob = [0.5,0.5]):
number_of_tosses = []
for idx in range(m_experiments):
number_of_tosses.append(runSingleExperiment(number_of_maximum_tosses, number_of_equals, prob))
return np.mean(number_of_tosses), np.std(number_of_tosses), number_of_tosses
print "We expect to observe 3 heads after %3.2f tosses" % runKExperimentsAndEvaluate(5000, 3)[0]
tosses_three_equals = runKExperimentsAndEvaluate(25000, 3, number_of_maximum_tosses=1000)[2]
tosses_four_equals = runKExperimentsAndEvaluate(25000, 4, number_of_maximum_tosses=1000)[2]
tosses_five_equals = runKExperimentsAndEvaluate(25000, 5, number_of_maximum_tosses=1000)[2]
bin_range = range(0,150, 2)
plt.hist(tosses_three_equals, bins=bin_range, normed=True)
plt.hist(tosses_four_equals, bins=bin_range, normed=True)
plt.hist(tosses_five_equals, bins=bin_range, normed=True)
plt.xlim([0,100])
def expectationValueForNumberOfTosses(p, number_of_equals):
return int(float(1 - np.power(p, number_of_equals))/float(np.power(p, number_of_equals) * (1-p)))
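# Quick sanity check (illustrative): for p = 0.5 and N = 3 the closed form gives
# (1 - 0.5**3) / (0.5**3 * (1 - 0.5)) = 14, matching the ~14 tosses estimated by
# the simulation above.
print(expectationValueForNumberOfTosses(0.5, 3))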
equals = np.linspace(1,20, 20)
y = []
for x in equals:
y.append(expectationValueForNumberOfTosses(0.5, x))
plt.semilogy( equals, y, 'o')
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This chapter introduces two related topics
Step2: Each update uses the same likelihood, but the changes in probability are not the same. The first update decreases the probability by about 14 percentage points, the second by 24, and the third by 26.
Step3: You might notice
Step4: That's true in this example, and we can show that it's true in general by taking the log of both sides of Bayes's Rule.
Step5: I'll read the data and do some cleaning.
Step6: Here are the first few rows
Step7: The columns are
Step9: The following figure shows the relationship between damage and temperature.
Step10: When the outside temperature was below 65 degrees, there was always damage to the O-rings. When the temperature was above 65 degrees, there was usually no damage.
Step11: And for consistency I'll create a copy of the Damage columns called y.
Step12: Before doing a Bayesian update, I'll use statsmodels to run a conventional (non-Bayesian) logistic regression.
Step13: results contains a "point estimate" for each parameter, that is, a single value rather than a posterior distribution.
Step14: We can use the logistic regression equation to compute log odds
Step15: And then convert to probabilities.
Step16: Converting log odds to probabilities is a common enough operation that it has a name, expit, and SciPy provides a function that computes it.
Step17: Here's what the logistic model looks like with these estimated parameters.
Step18: At low temperatures, the probability of damage is high; at high temperatures, it drops off to near 0.
Step19: We can use make_joint to construct the joint prior distribution.
Step20: The values of intercept run across the columns, the values of slope run down the rows.
Step21: joint_pmf is a Pmf with two levels in the index, one for each parameter. That makes it easy to loop through possible pairs of parameters, as we'll see in the next section.
Step22: The result is a DataFrame with two columns
Step23: To compute the likelihood of the data, let's assume temporarily that the parameters we just estimated, slope and inter, are correct.
Step24: ps contains the probability of damage for each launch temperature, according to the model.
Step25: Each element of likes is the probability of seeing k damage incidents in n launches if the probability of damage is p.
Step26: That's how we compute the likelihood of the data for a particular pair of parameters.
Step27: To initialize likelihood, we make a copy of joint_pmf, which is a convenient way to make sure that likelihood has the same type, index, and data type as joint_pmf.
Step28: Because we used a uniform prior, the parameter pair with the highest likelihood is also the pair with maximum posterior probability
Step29: So we can confirm that the results of the Bayesian update are consistent with the maximum likelihood estimate computed by StatsModels
Step30: They are approximately the same, within the precision of the grid we're using.
Step31: The ovals in the contour plot are aligned along a diagonal, which indicates that there is some correlation between slope and inter in the posterior distribution.
Step32: Here's the posterior distribution of inter.
Step33: And here's the posterior distribution of slope.
Step34: Here are the posterior means.
Step35: Both marginal distributions are moderately skewed, so the posterior means are somewhat different from the point estimates.
Step37: Transforming Distributions
Step38: If we call transform and pass expit as a parameter, it transforms the log odds in marginal_inter into probabilities and returns the posterior distribution of inter expressed in terms of probabilities.
Step39: Pmf provides a transform method that does the same thing.
Step40: Here's the posterior distribution for the probability of damage at 70 degrees F.
Step41: The mean of this distribution is about 22%, which is the probability of damage at 70 degrees F, according to the model.
Step42: This result shows the second reason I defined x to be zero when temperature is 70 degrees F; this way, the intercept corresponds to the probability of damage at a relevant temperature, rather than 0 degrees F.
Step43: The result is the posterior distribution of likelihood ratios; here's what it looks like.
Step44: The mean of this distribution is about 0.75, which means that each additional degree Fahrenheit provides evidence against the possibility of damage, with a likelihood ratio (Bayes factor) of 0.75.
Step45: And here's the posterior mean of marginal_slope, transformed to a likelihood ratio, compared to the mean marginal_lr.
Step46: In this example, the differences are not huge, but they can be.
Step47: The result is an array of 101 tuples, each representing a possible pair of parameters.
Step48: To generate predictions, I'll use a range of temperatures from 31 degrees F (the temperature when the Challenger launched) to 82 degrees F (the highest observed temperature).
Step49: The following loop uses xs and the sample of parameters to construct an array of predicted probabilities.
Step50: The result has one column for each value in xs and one row for each element of sample.
Step51: The overlapping lines in this figure give a sense of the most likely value at each temperature and the degree of uncertainty.
Step52: The results are arrays containing predicted probabilities for the lower bound of the 90% CI, the median, and the upper bound of the CI.
Step53: According to these results, the probability of damage to the O-rings at 80 degrees F is near 2%, but there is some uncertainty about that prediction; the upper bound of the CI is around 10%.
Step54: One conclusion we might draw is this
Step55: Exercise
Step56: First, I'm going to "roll" the data so it starts in September rather than January.
Step57: And I'll put it in a DataFrame with one row for each month and the diagnosis rate per 10,000.
Step58: Here's what the diagnosis rates look like.
Step59: For the first 9 months, from September to May, we see what we would expect if some of the excess diagnoses are due to "age-based variation in behavior". For each month of difference in age, we see an increase in the number of diagnoses.
Step60: Make a joint prior distribution and update it using the data for the first nine months.
|
<ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
def prob(o):
return o / (o+1)
import pandas as pd
index = ['prior', '1 student', '2 students', '3 students']
table = pd.DataFrame(index=index)
table['odds'] = [10, 10/3, 10/9, 10/27]
table['prob'] = prob(table['odds'])
table['prob diff'] = table['prob'].diff() * 100
table.fillna('--')
import numpy as np
table['log odds'] = np.log(table['odds'])
table['log odds diff'] = table['log odds'].diff()
table.fillna('--')
np.log(1/3)
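# Illustrative check of the log-odds view of Bayes's rule: with a constant
# likelihood ratio of 1/3, every update adds log(1/3) to the log odds, so the
# successive differences of the log odds are all equal.
np.diff(np.log([10, 10/3, 10/9, 10/27]))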
download('https://raw.githubusercontent.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter2_MorePyMC/data/challenger_data.csv')
data = pd.read_csv('challenger_data.csv', parse_dates=[0])
# avoiding column names with spaces
data.rename(columns={'Damage Incident': 'Damage'}, inplace=True)
# dropping row 3, in which Damage Incident is NaN,
# and row 24, which is the record for the Challenger
data.drop(labels=[3, 24], inplace=True)
# convert the Damage column to integer
data['Damage'] = data['Damage'].astype(int)
data
data.head()
len(data), data['Damage'].sum()
import matplotlib.pyplot as plt
from utils import decorate
def plot_data(data):
    """Plot damage as a function of temperature.
    data: DataFrame
    """
plt.plot(data['Temperature'], data['Damage'], 'o',
label='data', color='C0', alpha=0.4)
decorate(ylabel="Probability of damage",
xlabel="Outside temperature (deg F)",
title="Damage to O-Rings vs Temperature")
plot_data(data)
offset = data['Temperature'].mean().round()
data['x'] = data['Temperature'] - offset
offset
data['y'] = data['Damage']
import statsmodels.formula.api as smf
formula = 'y ~ x'
results = smf.logit(formula, data=data).fit(disp=False)
results.params
inter = results.params['Intercept']
slope = results.params['x']
xs = np.arange(53, 83) - offset
log_odds = inter + slope * xs
odds = np.exp(log_odds)
ps = odds / (odds + 1)
ps.mean()
from scipy.special import expit
ps = expit(inter + slope * xs)
ps.mean()
plt.plot(xs+offset, ps, label='model', color='C1')
plot_data(data)
from utils import make_uniform
qs = np.linspace(-5, 1, num=101)
prior_inter = make_uniform(qs, 'Intercept')
qs = np.linspace(-0.8, 0.1, num=101)
prior_slope = make_uniform(qs, 'Slope')
from utils import make_joint
joint = make_joint(prior_inter, prior_slope)
from empiricaldist import Pmf
joint_pmf = Pmf(joint.stack())
joint_pmf.head()
grouped = data.groupby('x')['y'].agg(['count', 'sum'])
grouped.head()
ns = grouped['count']
ks = grouped['sum']
xs = grouped.index
ps = expit(inter + slope * xs)
from scipy.stats import binom
likes = binom.pmf(ks, ns, ps)
likes
likes.prod()
likelihood = joint_pmf.copy()
for slope, inter in joint_pmf.index:
ps = expit(inter + slope * xs)
likes = binom.pmf(ks, ns, ps)
likelihood[slope, inter] = likes.prod()
posterior_pmf = joint_pmf * likelihood
posterior_pmf.normalize()
pd.Series(posterior_pmf.max_prob(),
index=['slope', 'inter'])
results.params
from utils import plot_contour
joint_posterior = posterior_pmf.unstack()
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution')
from utils import marginal
marginal_inter = marginal(joint_posterior, 0)
marginal_slope = marginal(joint_posterior, 1)
marginal_inter.plot(label='intercept', color='C4')
decorate(xlabel='Intercept',
ylabel='PDF',
title='Posterior marginal distribution of intercept')
marginal_slope.plot(label='slope', color='C2')
decorate(xlabel='Slope',
ylabel='PDF',
title='Posterior marginal distribution of slope')
pd.Series([marginal_inter.mean(), marginal_slope.mean()],
index=['inter', 'slope'])
results.params
def transform(pmf, func):
    """Transform the quantities in a Pmf."""
ps = pmf.ps
qs = func(pmf.qs)
return Pmf(ps, qs, copy=True)
marginal_probs = transform(marginal_inter, expit)
marginal_probs = marginal_inter.transform(expit)
marginal_probs.plot(color='C1')
decorate(xlabel='Probability of damage at 70 deg F',
ylabel='PDF',
title='Posterior marginal distribution of probabilities')
mean_prob = marginal_probs.mean()
mean_prob
marginal_lr = marginal_slope.transform(np.exp)
marginal_lr.plot(color='C3')
decorate(xlabel='Likelihood ratio of 1 deg F',
ylabel='PDF',
title='Posterior marginal distribution of likelihood ratios')
mean_lr = marginal_lr.mean()
mean_lr
expit(marginal_inter.mean()), marginal_probs.mean()
np.exp(marginal_slope.mean()), marginal_lr.mean()
np.random.seed(17)
sample = posterior_pmf.choice(101)
sample.shape
sample.dtype
type(sample[0])
temps = np.arange(31, 83)
xs = temps - offset
pred = np.empty((len(sample), len(xs)))
for i, (slope, inter) in enumerate(sample):
pred[i] = expit(inter + slope * xs)
for ps in pred:
plt.plot(temps, ps, color='C1', lw=0.5, alpha=0.4)
plot_data(data)
low, median, high = np.percentile(pred, [5, 50, 95], axis=0)
plt.fill_between(temps, low, high, color='C1', alpha=0.2)
plt.plot(temps, median, color='C1', label='logistic model')
plot_data(data)
low = pd.Series(low, temps)
median = pd.Series(median, temps)
high = pd.Series(high, temps)
t = 80
print(median[t], (low[t], high[t]))
t = 60
print(median[t], (low[t], high[t]))
t = 31
print(median[t], (low[t], high[t]))
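# Illustrative follow-up (not in the original text): the fraction of sampled
# parameter pairs for which the predicted probability of damage at 31 deg F
# (the Challenger launch temperature) exceeds 50%.
(pred[:, 0] > 0.5).mean()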
# Solution goes here
# Solution goes here
# Solution goes here
n = np.array([32690, 31238, 34405, 34565, 34977, 34415,
36577, 36319, 35353, 34405, 31285, 31617])
k = np.array([265, 280, 307, 312, 317, 287,
320, 309, 225, 240, 232, 243])
x = np.arange(12)
n = np.roll(n, -8)
k = np.roll(k, -8)
adhd = pd.DataFrame(dict(x=x, k=k, n=n))
adhd['rate'] = adhd['k'] / adhd['n'] * 10000
adhd
def plot_adhd(adhd):
plt.plot(adhd['x'], adhd['rate'], 'o',
label='data', color='C0', alpha=0.4)
plt.axvline(5.5, color='gray', alpha=0.2)
plt.text(6, 64, 'Younger than average')
plt.text(5, 64, 'Older than average', horizontalalignment='right')
decorate(xlabel='Birth date, months after cutoff',
ylabel='Diagnosis rate per 10,000')
plot_adhd(adhd)
qs = np.linspace(-5.2, -4.6, num=51)
prior_inter = make_uniform(qs, 'Intercept')
qs = np.linspace(0.0, 0.08, num=51)
prior_slope = make_uniform(qs, 'Slope')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This function has its minimum at $x = 0$. As the figure above shows, when the derivative is positive (which means the function is increasing) or when the derivative is negative (the function is decreasing), we need to take the optimization step in the direction opposite to the one indicated by the gradient in order to minimize the value of the function.
Step2: Backprop - backward propagation - a method of computing gradients using the chain rule
Step3: If more than one edge enters a node (e.g. node x), we sum the gradients.
Step4: Having the gradient, we can try to optimize the function, just as before.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
x = np.linspace(-3, 3, 100)
plt.plot(x, x**2, label='f(x)')  # function being optimized
plt.plot(x, 2 * x, label='derivative -- f\'(x)')  # derivative
plt.legend()
plt.show()
learning_rate = ...
nb_steps = 10
x_ = 1
steps = [x_]
for _ in range(nb_steps):
    x_ -= learning_rate * (2 * x_)  # learning_rate * derivative
steps += [x_]
plt.plot(x, x**2, alpha=0.7)
plt.plot(steps, np.array(steps)**2, 'r-', alpha=0.7)
plt.xlim(-3, 3)
plt.ylim(-1, 10)
plt.show()
def sigmoid(x):
pass
def forward_pass(x):
pass
def backward_pass(x):
    # copy of the code from forward_pass, because we need the intermediate
    # values in order to compute the derivatives
# >>>
...
# <<<
pass
x = np.linspace(-10, 10, 200)
plt.plot(x, forward_pass(x), label='f(x)')
plt.plot(x, backward_pass(x), label='derivative -- f\'(x)')
plt.legend()
plt.show()
learning_rate = 1
nb_steps = 100
x_ = ...
steps = [x_]
for _ in range(nb_steps):
x_ -= learning_rate * backward_pass(x_)
steps += [x_]
plt.plot(x, forward_pass(x), alpha=0.7)
plt.plot(steps, forward_pass(np.array(steps)), 'r-', alpha=0.7)
plt.show()
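# --- Hedged sketch of one possible solution (an assumption: the exercise leaves the
# target function unspecified, so here we pick f(x) = sigmoid(x**2) purely for
# illustration and apply the chain rule by hand, as described in the backprop text). ---
def sigmoid_example(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass_example(x):
    return sigmoid_example(x**2)

def backward_pass_example(x):
    # chain rule: d/dx sigmoid(x**2) = sigmoid'(x**2) * 2x, with sigmoid' = s * (1 - s)
    s = sigmoid_example(x**2)
    return s * (1 - s) * 2 * x

x_ = 2.0
for _ in range(100):
    x_ -= 0.5 * backward_pass_example(x_)  # gradient descent towards the minimum
print(x_)  # should be close to 0, where sigmoid(x**2) attains its minimum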
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: Project hyper-parameters
Step3: Set random seed for reproducibility and ignore warning messages.
Step4: Download and load the dataset
Step5: Transform Data
Step6: Feature Engineering
Step7: Modelling
Step8: Evaluation and Prediction
Step9: Make submission
|
<ASSISTANT_TASK:>
Python Code:
!pip install -r requirements.txt --user --quiet
import sys, os
from tqdm import tqdm
import subprocess
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import zipfile
import joblib
import gc
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection._split import _BaseKFold, indexable, _num_samples
from sklearn.utils.validation import _deprecate_positional_args
pd.set_option('display.max_columns', 500)
sys.path.insert(0, 'helper-files')
from local_api import local_api
# Hyper-parameters
LR = 0.379687157316759
N_EST = 100
np.random.seed(42)
# setup kaggle environment for data download
# set kaggle.json path
os.environ['KAGGLE_CONFIG_DIR'] = "/home/jovyan/examples/jpx-tokyo-stock-exchange-kaggle-competition"
# grant rwo permission to .kaggle/kaggle.json
!chmod 600 /home/jovyan/examples/jpx-tokyo-stock-exchange-kaggle-competition/kaggle.json
# download kaggle's jpx-tokyo-stock-exchange-prediction data
!kaggle competitions download -c jpx-tokyo-stock-exchange-prediction
# path to download to
data_path = os.getcwd()
# extract jpx-tokyo-stock-exchange-prediction.zip to load_data_path
with zipfile.ZipFile("jpx-tokyo-stock-exchange-prediction.zip","r") as zip_ref:
zip_ref.extractall(data_path)
# read train_files/stock_prices.csv
df_prices = pd.read_csv(f"{data_path}/train_files/stock_prices.csv", parse_dates=['Date'])
df_prices['Date'].max()
df_prices.tail(3)
# lets check data dimensions
df_prices.shape
df_prices.info()
# check total nan values per column
df_prices.isna().sum()
# sort data by 'Date' and 'SecuritiesCode'
df_prices.sort_values(by=['Date','SecuritiesCode'], inplace=True)
# count total trading stocks per day
idcount = df_prices.groupby("Date")["SecuritiesCode"].count().reset_index()
idcount
plt.figure(figsize=(10, 5))
plt.plot(idcount["Date"],idcount["SecuritiesCode"])
plt.axvline(x=['2021-01-01'], color='blue', label='2021-01-01')
plt.axvline(x=['2020-06-01'], color='red', label='2020-06-01')
plt.legend()
plt.show()
idcount[idcount['SecuritiesCode'] >= 2000]
idcount[idcount['SecuritiesCode'] >= 2000]['SecuritiesCode'].sum()
# filter out data with less than 2000 stock counts in a day
# dates before "2020-12-23" all have stock counts less than 2000
# This is done to work with consistent data
df_prices = df_prices[(df_prices["Date"]>="2020-12-23")]
df_prices = df_prices.reset_index(drop=True)
df_prices.head()
df_prices.columns
#calculate z-scores of `df`
z_scores = stats.zscore(df_prices[['Open', 'High', 'Low', 'Close','Volume']], nan_policy='omit')
abs_z_scores = np.abs(z_scores)
filtered_entries = (abs_z_scores < 3).all(axis=1)
df_zscore = df_prices[filtered_entries]
df_zscore = df_zscore.reset_index(drop=True)
def feat_eng(df, features):
for i in tqdm(range(1, 4)):
# creating lag features
tmp = df[features].shift(i)
tmp.columns = [c + f'_next_shift_{i}' for c in tmp.columns]
df = pd.concat([df, tmp], sort=False, axis=1)
for i in tqdm(range(1, 4)):
df[f'weighted_vol_price_{i}'] = np.log(df[f'Volume_next_shift_{i}'] * df[[col for col in df if col.endswith(f'next_shift_{i}')][:-1]].apply(np.mean, axis=1))
# feature engineering
df['weighted_vol_price'] = np.log(df['Volume'] * (np.mean(df[features[:-1]], axis=1)))
df['BOP'] = (df['Open']-df['Close'])/(df['High']-df['Low'])
df['HL'] = df['High'] - df['Low']
df['OC'] = df['Close'] - df['Open']
df['OHLCstd'] = df[['Open','Close','High','Low']].std(axis=1)
feats = df.select_dtypes(include=float).columns
df[feats] = df[feats].apply(np.log)
# replace inf with nan
df.replace([np.inf, -np.inf], np.nan, inplace=True)
# datetime features
df['Date'] = pd.to_datetime(df['Date'])
df['Day'] = df['Date'].dt.weekday.astype(np.int32)
df["dayofyear"] = df['Date'].dt.dayofyear
df["is_weekend"] = df['Day'].isin([5, 6])
df["weekofyear"] = df['Date'].dt.weekofyear
df["month"] = df['Date'].dt.month
df["season"] = (df["month"]%12 + 3)//3
# fill nan values
df = df.fillna(0)
return df
new_feats = feat_eng(df_zscore, ['High', 'Low', 'Open', 'Close', 'Volume'])
new_feats.shape
new_feats['Target'] = df_zscore['Target']
new_feats.head(7)
new_feats.columns
# columns to be used for modelling.
feats = ['Date','SecuritiesCode', 'Open', 'High', 'Low', 'Close', 'Volume',
'weighted_vol_price_1', 'weighted_vol_price_2', 'weighted_vol_price_3',
'weighted_vol_price', 'BOP', 'HL', 'OC', 'OHLCstd', 'Day', 'dayofyear',
'is_weekend', 'weekofyear', 'month', 'season']
# transform date to int
new_feats['Date'] = new_feats['Date'].dt.strftime("%Y%m%d").astype(int)
# split data into valid for validation and train for model training
valid = new_feats[(new_feats['Date'] >= 20211111)].copy()
train = new_feats[(new_feats['Date'] < 20211111)].copy()
train.shape, valid.shape
# model parameter
params = {
'n_estimators': N_EST,
'learning_rate': LR,
'random_state': 42,
'verbose' : 2}
# model initialization
model = LGBMRegressor(**params)
X = train[feats]
y = train["Target"]
X_test = valid[feats]
y_test = valid["Target"]
# fitting
model.fit(X, y, verbose=False, eval_set=(X_test, y_test))
# model prediction
preds = model.predict(X_test)
# model evaluation
rmse = np.round(mean_squared_error(preds, y_test)**0.5, 5)
print(rmse)
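# Optional illustration (not required for the submission): inspect which engineered
# features the trained LightGBM model relied on most.
importances = pd.Series(model.feature_importances_, index=feats).sort_values(ascending=False)
print(importances.head(10))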
myapi = local_api('data/supplemental_files')
env = myapi.make_env()
iter_test = env.iter_test()
for (prices, options, financials, trades, secondary_prices, sample_prediction) in iter_test:
prices = feat_eng(prices, ['High', 'Low', 'Open', 'Close', 'Volume'])
prices['Date'] = prices['Date'].dt.strftime("%Y%m%d").astype(int)
prices["Target"] = model.predict(prices[feats])
if prices["Volume"].min()==0:
sample_prediction["Prediction"] = 0
else:
sample_prediction["Prediction"] = prices["Target"]/prices["Volume"]
sample_prediction["Prediction"] = prices["Target"]
sample_prediction.sort_values(by="Prediction", ascending=False, inplace=True)
sample_prediction['Rank'] = np.arange(0,2000)
sample_prediction.sort_values(by = "SecuritiesCode", ascending=True, inplace=True)
submission = sample_prediction[["Date","SecuritiesCode","Rank"]]
env.predict(submission)
print(env.score())
submission.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
index = ['x', 'y']
columns = ['a','b','c']
dtype = [('a','int32'), ('b','float32'), ('c','float32')]
values = np.zeros(2, dtype=dtype)
df = pd.DataFrame(values, index=index)
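# Quick check (illustrative): the structured dtype supplies both the column names and
# the per-column dtypes, so the separate `columns` list above is not strictly needed.
print(df.dtypes)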
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Find articles
Step2: After a quick peek at a snippet of the XML, we see that the elements we are interested in are on the child page, which identifies an article. Then we want to get the contents of title and text.
Step3: Then move towards the Portuguese one.
Step4: The articles are filtered according to whether they mention Matteo Renzi; those in Italian have been stored in a .json file in which each line corresponds to a page (title, text). The same holds for the articles in Portuguese. The two corpora are automatically stored in the folder Corpus.
Step5: Take a look at the obtained DataFrame.
Step6: Thus, we extract the number of monthly page views for each article related to the languages of interest (i.e. it and pt) from the page views file - Additional data in the README. To filter the file we use the filter_pageviews_file function and get a dictionary of dictionaries with the following structure (according to our example)
Step7: Thus, a right join between the DataFrames, namely the one obtained from the pageviews and the one obtained from the corpus, is performed. It turns out that, both for Italian and Portuguese, there are articles that mention Matteo Renzi but were not viewed in November. The define_ranked_df function is stored in the pageviews library.
Step8: Having a quick glance at the two top 10, we notice
Step9: Comparing the two DataFrames, we immediately notice that the maximum numbers of mentions that Matteo Renzi receives in Italian and Portuguese articles are very different. In the Portuguese corpus there are only two articles with more than 5 mentions. Thus, it can be interesting to visualize the distribution of the mentions both for the IT and PT corpora.
Step10: In this direction, one aspect that can be considered is the following
Step11: A scatterplot is used to see how an article is positioned according to these two variables. The plot shows
Step12: Looking at these two features, another way to explore them is the following
Step13: Proceed to create a DataFrame that contains the information related to those articles.
Step14: Before going further, we check whether all the matched IT articles mention Matteo Renzi. In order to do so, we run a query on the DataFrame that stores all the IT articles that cite Renzi.
Step15: The dictionaries that match the PT and IT titles are re-defined taking into account the fact that some IT articles do not mention Renzi.
Step16: Then, we create a single DataFrame which contains the mentions in IT and PT articles for each tuple of matched articles.
Step17: Add a column containing the matches to join the two dfs.
Step18: Perform the join on the Matches and plot the results.
Step19: From the plot
Step20: Define the matching dictionaries according to what was said above.
Step21: Add a new variable to allow the join
Step22: Join the two DataFrames with a right join, so that we also see the PT articles whose IT counterparts were not viewed.
Step23: We use a bar plot to visualize the results.
|
<ASSISTANT_TASK:>
Python Code:
import plotly
from pageviews import *
from wiki_parser import *
import plotly.tools as tls
from helpers_parser import *
from across_languages import *
plotly.tools.set_credentials_file(username='crimenghini', api_key='***')
# Define the path of the corpora
path = '/Users/cristinamenghini/Downloads/'
# Xml file
xml_files = ['itwiki-20161120-pages-articles-multistream.xml',
'ptwiki-20161201-pages-articles-multistream.xml']
# Parse italian corpus
parse_articles('ita', path + xml_files[0], 'Matteo Renzi')
# Parse portuguese corpus
parse_articles('port', path + xml_files[1], 'Matteo Renzi')
# Get the df for the Italian articles
df_it_titles = article_df_from_json('Corpus/wiki_ita_Matteo_Renzi.json')
# Get the df for the Portuguese articles
df_pt_titles = article_df_from_json('Corpus/wiki_port_Matteo_Renzi.json')
df_it_titles.sample(5)
# Page views file
pageviews_file = 'pagecounts-2016-11-views-ge-5-totals'
# Filter the page view file
articles_pageviews = filter_pageviews_file(path + pageviews_file, ['pt','it'])
# Define the italian ranked article df according to the number of page views
ranked_df_ita = define_ranked_df(articles_pageviews, 'it', df_it_titles)
# Show the df head
ranked_df_ita.head(10)
# Define the italian ranked article df according to the number of page views
ranked_df_port = define_ranked_df(articles_pageviews, 'pt', df_pt_titles)
# Show the df
ranked_df_port.head(10)
# Italian df of mentions per page
df_it_mentions = article_mentions('Corpus/wiki_ita_Matteo_Renzi.json', 'Matteo Renzi')
# Sort the df by the number of mentions and see the top 5
df_it_mentions = df_it_mentions.sort_values('Number of mentions', ascending = False)
# Show results
df_it_mentions.head(5)
# Portuguese df of mentions per page
df_pt_mentions = article_mentions('Corpus/wiki_port_Matteo_Renzi.json', 'Matteo Renzi')
# Sort the df by the number of mentions and see the top 5
df_pt_mentions = df_pt_mentions.sort_values('Number of mentions', ascending = False)
# Show results
df_pt_mentions.head(5)
#boxplot_mentions(df_pt_mentions, df_it_mentions, 'PT', 'IT', 'Number of mentions')
tls.embed("https://plot.ly/~crimenghini/20")
# Merge pageviews and mentions DataFrames for IT
df_it_mension_pageview = pd.merge(df_it_mentions, ranked_df_ita, on=['Title'])
# Show it
df_it_mension_pageview.sample(5)
# Merge pageviews and mentions DataFrames for PT
df_pt_mension_pageview = pd.merge(df_pt_mentions, ranked_df_port, on=['Title'])
# Show it
df_pt_mension_pageview.sample(5)
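# Illustrative addition: a quick numeric look at how mentions and page views co-vary
# (Spearman rank correlation), complementing the scatterplot that follows.
print(df_it_mension_pageview[['Number of mentions', 'Pageviews']].corr(method='spearman'))
print(df_pt_mension_pageview[['Number of mentions', 'Pageviews']].corr(method='spearman'))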
# def scatter_plot(df_it_mension_pageview, df_pt_mension_pageview, 'Number of mentions', 'Pageviews', 'Italian', 'Portuguese')
tls.embed('https://plot.ly/~crimenghini/36')
# Built the common articles matches
dict_italian = get_matches(df_pt_titles, 'it')
# Create the inverted one
inverted_dict = {v : k for k, v in dict_italian.items()}
print ('The Portuguese articles that mention Matteo Renzi and correspond to an Italian article are: ', len(dict_italian),
'. The number of PT articles that have not been matched is: ', len(df_pt_titles)-len(dict_italian), '.')
# From the dictionary get the titles of both languages
italian_titles = list(dict_italian.values())
portugues_titles = list(dict_italian.keys())
# Run the query
match_with_mention = df_it_titles.query('Title in @italian_titles')
# Get the number
print ('There are ', len(portugues_titles)-len(match_with_mention), 'IT articles that do not mention Matteo Renzi.')
# Re-define the list of IT articles according to the aforementioned "issue"
it_titles_with_mention = list(match_with_mention.Title)
# Re-define the two dictionaries
dict_italian_mentions = {k:v for k,v in dict_italian.items() if v in it_titles_with_mention}
# Define the inverted
inverted_dict_italian_mentions = {v : k for k, v in dict_italian.items()}
# Create the list of titles for PT articles according to the IT that don't mention Renzi
pt_titles_with_mention = list(dict_italian_mentions.keys())
# Create df for IT mentions
df_match_it_mentions = df_it_mentions.query('Title in @it_titles_with_mention').sort_values('Number of mentions', ascending = False)
# Create df for PT mentions
df_match_pt_mentions = df_pt_mentions.query('Title in @pt_titles_with_mention').sort_values('Number of mentions', ascending = False)
# Create new column
new_column_it = ['/'.join([k]+[v]) for i in df_match_it_mentions.Title for k,v in dict_italian_mentions.items() if i == v]
new_column_pt = ['/'.join([k]+[v]) for i in df_match_pt_mentions.Title for k,v in dict_italian_mentions.items() if i == k]
# Add the new column to the two dataframes
df_match_it_mentions['Matches'] = new_column_it
df_match_pt_mentions['Matches'] = new_column_pt
# Join the two dfs on the correspondence tuples
matches_mention = pd.merge(df_match_it_mentions, df_match_pt_mentions, on = 'Matches', suffixes = ('_IT','_PT'))
# Show result
matches_mention.head()
# bar_plot(df, 'Matches', 'Number of mentions_IT', 'Number of mentions_P', 'IT', 'PT', 'Compare IT and PT mentions',
# 'Article','No. mentions', 'color-bar-prova')
tls.embed('https://plot.ly/~crimenghini/38')
# Run the query
match_with_pageviews_it = ranked_df_ita.query('Title in @italian_titles')
match_with_pageviews_pt = ranked_df_port.query('Title in @portugues_titles')
# Get the number
print ('There are ', len(portugues_titles)-len(match_with_pageviews_it), 'IT articles that have not been visited.')
print ('There are ', len(portugues_titles)-len(match_with_pageviews_pt), 'PT articles that have not been visited.')
# Define list of articles that have been visualized
it_titles_with_pageviews = list(match_with_pageviews_it.Title)
pt_titles_with_pageviews = list(match_with_pageviews_pt.Title)
# Re-define the two dictionaries according to this evidence
dict_italian_pageviews = {k:v for k,v in dict_italian.items() if v in it_titles_with_pageviews}
# PT
dict_pt_pageviews = {v : k for k, v in dict_italian.items() if k in pt_titles_with_pageviews}
# Create df for IT mentions
df_match_it_pageviews = ranked_df_ita.query('Title in @it_titles_with_pageviews').sort_values('Pageviews', ascending = False)
# Create df for PT mentions
df_match_pt_pageviews = ranked_df_port.query('Title in @pt_titles_with_pageviews').sort_values('Pageviews', ascending = False)
# Create new column
new_column_it = ['/'.join([k]+[v]) for i in df_match_it_pageviews.Title for k,v in dict_italian_pageviews.items() if i == v]
new_column_pt = ['/'.join([v]+[k]) for i in df_match_pt_pageviews.Title for k,v in dict_pt_pageviews.items() if i == v]
# Add the new column to the two dataframes
df_match_it_pageviews['Matches'] = new_column_it
df_match_pt_pageviews['Matches'] = new_column_pt
df_match_it_pageviews.head()
df_match_pt_pageviews.head()
# Join the two dfs on the correspondence tuples
matches_pageviews = pd.merge(df_match_it_pageviews, df_match_pt_pageviews, how = 'right',on = 'Matches', suffixes = ('_IT','_PT'))
matches_pageviews.fillna(0, inplace =True)
# Show result
matches_pageviews.head()
# bar_plot(df, 'Matches', 'Pageviews_IT', 'Pageviews_PT', 'IT', 'PT', 'Compare IT and PT pageviews', 'Article',
# 'No. pageviews', 'color-bar-pvs')
tls.embed('https://plot.ly/~crimenghini/40')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modeling Gene Networks Using Ordinary Differential Equations
Step2: Visualizing the activating Hill function
Step3: Question Set 1 (3 pts)
Step4: Question Set 2 (3 pts)
Step5: And now we can generate some plot to compare the logic approximation to the Hill function, for the activating case
Step6: Question Set 3 (3 pts)
Step7: Question Set 4 (3 pts)
Step8: Feed Forward Loops
Step9: Define Python functions for dY/dt and dZ/dt
Step10: Question Set 5 (2 pts)
Step11: Question Set 6 (2 pts)
Step12: Type 1 Coherent FFLs can act as Sign-Sensitive Delays
|
<ASSISTANT_TASK:>
Python Code:
# import statements to make numeric and plotting functions available
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
def hill_activating(X, B, K, n):
    """Hill function for an activator"""
return (B * X**n)/(K**n + X**n)
## generate a plot using the hill_activating function defined above
# setup paramters for our simulation
# CHANGE THESE VALUES TO EXPLORE THE EFFECT OF THESE PARAMTERS
# (see questions below)
n = 1
B = 5
K = 10
# generate a range of x values, representing a range of concentrations of our
# transcription factor X
x = linspace(0,30,200) # generate 200 evenly spaced points between 0 and 30
# calculating corresponding rates of production of Y
y = hill_activating(x, B, K, n)
# plot the hill fxn with the user set parameters
plot(x, y, label='B = {}, K = {}, n={}\n(user specified)'.format(B, K, n))
# plot the hill fxn with a set of reference parameters to facilitate comparison
plot(x, hill_activating(x, 5, 10, 1), label='B = 5, K = 10, n=1\n(reference)', alpha=0.75)
xlabel('Concentration of X')
ylabel('Rate of production of Y')
legend(loc='best')
pass # suppress further output
## Python implementation of repressive Hill function
def hill_repressing(X, B, K, n):
return B/(1.0 + (X/K)**n)
## generate a plot using the hill_repressing function defined above
# CHANGE THESE VALUES TO EXPLORE THE EFFECT OF THESE PARAMTERS
# (see questions below)
n = 1
B = 5
K = 10
# generate a range of x values, representing a range of concentrations of our
# transcription factor X
x = linspace(0,30,200) # generate 200 evenly spaced points between 0 and 30
# calculating corresponding rates of production of Y
y = hill_repressing(x, B, K, n)
# plot the hill fxn with the user set parameters
plot(x, y, label='B = {}, K = {}, n={}\n(user specified)'.format(B, K, n))
# plot the hill fxn with a set of reference parameters to facilitate comparison
plot(x, hill_repressing(x, 5, 10, 1), label='B = 5, K = 10, n=1\n(reference)')
xlabel('Concentration of X')
ylabel('Rate of production of Y')
legend(loc='best')
pass # suppress further output
def logic_activating(X, B, K):
if X > K:
theta = 1
else:
theta = 0
return B*theta
def logic_repressing(X, B, K):
if X < K:
theta = 1
else:
theta = 0
return B*theta
## generate plots using your hill_activating and logic_activating functions defined above
## For X values range from 0 to 30
# CHANGE THESE VALUES TO EXPLORE THE EFFECT OF THESE PARAMTERS
n = 4
B = 5
K = 10
x = linspace(0, 30, 200)
plot(x, hill_activating(x, B, K, n),
label='B = {}, K = {}, n={}'.format(B, K, n))
logicx = [logic_activating(i, B, K) for i in x]
plot(x, logicx, label='logic approximation')
xlabel('Concentration of X')
ylabel('Rate of production of Y')
legend(loc='best')
ylim(-0.5, B*1.1)
pass
## write a function to represent the simple differential equation above
def dYdt(B, K, a, X, Y):
production = logic_activating(X, B, K)
decay = a*Y
return production - decay
## generate a plot of conc of Y over time using your dY function defined above
## Evaluated over 200 time units
B = 5
K = 10
X = K + 1
Y = [0] # initial value of Y
a = 0.05
nsteps = 200
for i in range(nsteps):
yprevious = Y[-1]
deltay = dYdt(B, K, a, X, yprevious)
ynew = Y[-1] + deltay
Y.append(ynew)
plot(Y)
xlabel('Time units')
ylabel('Concentration of Y')
ylim(0, (B/a)*1.1)
pass
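# Illustrative check: with constant activation the ODE dY/dt = B - a*Y approaches the
# steady state Y_st = B / a, which is the plateau visible in the plot above.
print(Y[-1], B / a)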
B = 5
K = 10
a = 0.05
# setup pulse of X
# off (0) for first 50 steps, on for next 100 steps, off again for last 100 steps
X = [0]*50 + [3*K]*100 + [0]*100
Y = [0] # initial value of Y
nsteps = 250
for i in range(1, nsteps):
xnow = X[i]
yprevious = Y[-1]
deltay = dYdt(B, K, a, xnow, yprevious)
ynew = yprevious + deltay
Y.append(ynew)
plot(X, color='red', linestyle='dashed', label="X")
plot(Y, color='blue', label="Y")
ylim(0, max(max(X)*1.1, (B/a)*1.1))
xlabel('Time units')
ylabel('Concentration')
legend(loc="best")
pass
## We'll specify the behavior of X as a series of pulse of different length
## so we'll define a function to generate pulses
def pulse(ontime, offtime, ntimes, onval=1):
if ontime >= offtime:
raise Exception("Invalid on/off times.")
signal = np.zeros(ntimes)
signal[ontime:offtime] = onval
return signal
nsteps = 150
short_pulse = pulse(20, 23, nsteps) # 5 sec pulse
long_pulse = pulse(50, 100, nsteps) # 50 sec pulse
X = short_pulse + long_pulse # we can then add the pulses to create
# a single time trace
plot(X, color='black')
xlabel('Time units')
ylabel('Amount of Gene Product')
ylim(0, 1.5)
pass
def dYdt(B, K, a, X, Y):
if X > K:
theta = 1
else:
theta = 0
return B * theta - a * Y
def dZdt(B, Kx, Ky, a, X, Y, Z):
theta = 0
if (X > Kx) and (Y > Ky):
theta = 1
return B * theta - a * Z
## Plot X, Y, and Z on the same time scale
nsteps = 150
short_pulse = pulse(20, 23, nsteps) # 5 sec pulse
long_pulse = pulse(50, 100, nsteps) # 50 sec pulse
X = short_pulse + long_pulse
# setup parameters for Y and Z
Y = [0]
betay, alphay = 0.2, 0.1
Kxy = 0.5
Z = [0]
betaz, alphaz = 0.2, 0.1
Kxz = 0.5
Kyz = 1
for i in range(nsteps):
xnow = X[i]
ynow, znow = Y[-1], Z[-1]
ynew = ynow + dYdt(betay, Kxy, alphay, xnow, ynow)
znew = znow + dZdt(betaz, Kxz, Kyz, alphaz, xnow, ynow, znow)
Y.append(ynew)
Z.append(znew)
plot(X, 'k--', label='X', linewidth=1.5)
plot(Y, 'b', label='Y')
plot(Z, 'r', label='Z')
ylim(-0.1, 2.5)
xlabel("Time")
ylabel("Concentration")
legend()
pass
nsteps = 150
p1start = 10
p1duration = 5
p2start = 50
p2duration = 50
short_pulse = pulse(p1start, p1start + p1duration, nsteps) # short pulse
long_pulse = pulse(p2start, p2start + p2duration, nsteps) # long pulse
X = short_pulse + long_pulse
# change this `scale` argument to increase/decrease noise
noise = np.random.normal(loc=0, scale=0.2, size=nsteps) # mean=0, sd=0.2
X = X + noise
# setup parameters for Y and Z
Y = [0]
betay, alphay = 0.2, 0.1
Kxy = 0.5
Z = [0]
betaz, alphaz = 0.2, 0.1
Kxz = 0.5
Kyz = 1
for i in range(nsteps):
xnow = X[i]
ynow, znow = Y[-1], Z[-1]
ynew = ynow + dYdt(betay, Kxy, alphay, xnow, ynow)
znew = znow + dZdt(betaz, Kxz, Kyz, alphaz, xnow, ynow, znow)
Y.append(ynew)
Z.append(znew)
# draw each trace as a subfigure
# subfigures stacked in a vertical grid
subplot2grid((3,1),(0,0))
plot(X, 'k', label='X', linewidth=1)
legend()
subplot2grid((3,1),(1,0))
plot(Y, 'b', label='Y', linewidth=2)
legend()
subplot2grid((3,1),(2,0))
plot(Z, 'r', label='Z', linewidth=2)
vlines(p1start, min(Z),max(Z)*1.1,color='black',linestyle='dashed')
annotate("pulse 1 on", xy=(p1start,1),xytext=(40,20),
textcoords='offset points',
horizontalalignment="center",
verticalalignment="bottom",
arrowprops=dict(arrowstyle="->",color='black',
connectionstyle='arc3,rad=0.5',
linewidth=1))
vlines(p2start, min(Z),max(Z)*1.1,color='black',linestyle='dashed')
annotate("pulse 2 on", xy=(p2start,1),xytext=(-40,0),
textcoords='offset points',
horizontalalignment="center",
verticalalignment="bottom",
arrowprops=dict(arrowstyle="->",color='black',
connectionstyle='arc3,rad=0.5',
linewidth=1))
legend()
pass
def Ton(alpha, KYratio):
return (1.0/alpha) * log(1.0/(1.0-KYratio))
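## Illustrative example: with alpha = 0.1 and Kyz/Yst = 0.5 the delay is
## (1 / 0.1) * ln(2) ~= 6.9 time units.
print(Ton(0.1, 0.5))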
## Create a contour plot for a range of alpha and Kyz/Yst
x = alpha = linspace(0.01, 0.2, 100)
y = KYratio = linspace(0.01, 0.99, 100)
X,Y = meshgrid(x, y)
Z = Ton(X,Y)
levels = MaxNLocator(nbins=20).tick_values(Z.min(), Z.max())
im = contourf(X,Y,Z, cmap=cm.afmhot_r, levels=levels)
contour(X, Y, Z, levels,
colors=('k',),
linewidths=(0.5,))
colorbar(im)
xlabel('alpha')
ylabel("Kyz/Yst")
pass
## A Python function that represents dZ/dt for the Incoherent FFL
## our dY function previously defined stays the same
def dZ_incoh(B1,B2,Kx,Ky,a,X,Y,Z):
pass # define the function here
def dZ_incoh(B1,B2,Kx,Ky,a,X,Y,Z):
theta = 0
B = 0
if (X > Kx) and (Y < Ky):
theta = 1
B = B1
elif (X > Kx) and (Y >= Ky):
theta = 1
B = B2
return B * theta - a * Z
nsteps = 150
short_pulse = pulse(20, 25, nsteps) # 5 sec pulse
long_pulse = pulse(50, 100, nsteps) # 50 sec pulse
X = short_pulse + long_pulse
# setup parameters for Y and Z
Y = [0]
betay, alphay = 0.2, 0.1
Kxy = 0.5
Z = [0]
betaz1, betaz2 = 0.2, 0.001
alphaz = 0.1
Kxz = 0.5
Kyz = 0.5
for i in range(nsteps):
xnow = X[i]
ynow, znow = Y[-1], Z[-1]
ynew = ynow + dYdt(betay, Kxy, alphay, xnow, ynow)
znew = znow + dZ_incoh(betaz1, betaz2, Kxz, Kyz, alphaz, xnow, ynow, znow)
Y.append(ynew)
Z.append(znew)
# draw each trace as a subfigure
# subfigures stacked in a vertical grid
subplot2grid((3,1),(0,0))
plot(X, 'k', label='X', linewidth=1)
legend()
ylim(0,1.1)
subplot2grid((3,1),(1,0))
plot(Y, 'b', label='Y', linewidth=2)
legend()
ylim(0,2.1)
subplot2grid((3,1),(2,0))
plot(Z, 'r', label='Z', linewidth=2)
legend()
ylim(0,0.7)
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Oscillatory effect whose period corresponds to the days of the week.
Step2: Polarity analysis in the United States, Argentina, Mexico, and Spain
Step3: Descriptive analysis of the tweets in Spanish
Step4: Mobility of (Spanish-speaking) Twitter users
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import gzip
import json
import numpy as np
def read_data(fname):
with gzip.open(fname) as fpt:
d = json.loads(str(fpt.read(), encoding='utf-8'))
return d
%matplotlib inline
plt.figure(figsize=(20, 10))
mx_pos = read_data('spanish/polarity_by_country/MX.json.gz')
ticks = [str(x[0])[2:] for x in mx_pos]
mu = [x[1] for x in mx_pos]
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=90)
plt.grid()
def remove_median(pos):
    median = np.array(pos[: -int((len(pos) % 7))])[:, 1]
    median.shape = (int(median.shape[0] / 7), 7)
    median = np.median(median, axis=0)
    median = np.concatenate((np.concatenate([median for x in range(int(len(pos) / median.shape[0]))], axis=0),
                             median[:int(len(pos) % 7)]), axis=0)
return [(x[0], x[1] - y) for x, y in zip(pos, median)]
plt.figure(figsize=(20, 10))
nmx_pos = remove_median(mx_pos)
mu = np.array([x[1] for x in nmx_pos])
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=90)
plt.grid()
plt.figure(figsize=(20, 20))
for k, D in enumerate([mx_pos, remove_median(mx_pos)]):
ticks = [str(x[0])[2:] for x in D]
mu = [x[1] for x in D]
plt.subplot(4, 1, k+1)
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=45)
plt.grid()
pos = [read_data('spanish/polarity_by_country/%s.json.gz' % x) for x in ['US', 'AR', 'ES']]
us_pos, ar_pos, es_pos = pos
plt.figure(figsize=(20, 10))
for code, D, k in zip(['US', 'MX', 'AR', 'ES'], [us_pos, mx_pos, ar_pos, es_pos],
range(4)):
D = remove_median(D)
ticks = [str(x[0])[2:] for x in D]
mu = [x[1] for x in D]
plt.subplot(4, 1, k+1)
plt.plot(mu)
plt.title(code)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=45)
plt.grid()
plt.ylim(-0.20, 0.20)
%matplotlib inline
from glob import glob
from multiprocessing import Pool
from tqdm import tqdm
from collections import Counter
def number_users(fname):
return fname, len(read_data(fname))
fnames = [i for i in glob('spanish/users_by_country/*.json.gz') if len(i.split('.')[0].split('/')[1]) == 2]
p = Pool(8)
res = [x for x in p.imap_unordered(number_users, fnames)]
p.close()
country_code = Counter()
for name, value in res:
code = name.split('.')[0].split('/')[1]
country_code[code] = value
mc = country_code.most_common()
size = 19
first = mc[:size]
extra = ('REST', sum([x[1] for x in mc[size:]]))
first.append(extra)
plt.figure(figsize=(10, 10))
_ = plt.pie([x[1] for x in first], labels=[x[0] for x in first])
def migration(country_code='MX'):
fname = 'spanish/users_by_country/%s.json.gz' % country_code
d = read_data(fname)
other = Counter()
for x in d.values():
if len(x) == 1:
continue
c = Counter(x)
for xx in c.most_common()[1:]:
if xx[0] == country_code:
continue
other[xx[0]] += 1
return other
plt.figure(figsize=(10, 10))
for k, c in enumerate(['US', 'AR', 'MX', 'ES']):
other = migration(c)
mc = other.most_common()
first = mc[:size]
extra = ('REST', sum([x[1] for x in mc[size:]]))
first.append(extra)
plt.subplot(2, 2, k+1)
_ = plt.pie([x[1] for x in first], labels=[x[0] for x in first])
plt.title(c)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then of course we will need TensorFlow
Step2: Basic RNNs
Step3: Using static_rnn()
Step4: Packing sequences
Step5: Using dynamic_rnn()
Step6: Setting the sequence lengths
Step7: Training a sequence classifier
Step8: Warning
Step9: Multi-layer RNN
Step10: Time series
Step11: Using an OuputProjectionWrapper
Step12: At each time step we now have an output vector of size 100. But what we actually want is a single output value at each time step. The simplest solution is to wrap the cell in an OutputProjectionWrapper.
Step13: Without using an OutputProjectionWrapper
Step14: Generating a creative new sequence
Step15: Deep RNN
Step16: Distributing a Deep RNN Across Multiple GPUs
Step17: Instead, you need a DeviceCellWrapper
Step18: Alternatively, since TensorFlow 1.1, you can use the tf.contrib.rnn.DeviceWrapper class (alias tf.nn.rnn_cell.DeviceWrapper since TF 1.2).
Step19: Dropout
Step20: Note
Step21: Oops, it seems that Dropout does not help at all in this particular case.
Step23: Embeddings
Step24: Build the dictionary
Step25: Generate batches
Step26: Build the model
Step27: Train the model
Step28: Let's save the final embeddings (of course you can use a TensorFlow Saver if you prefer)
Step29: Plot the embeddings
Step30: Machine Translation
Step31: Exercise solutions
Step32: Let's generate a few strings based on the default Reber grammar
Step33: Looks good. Now let's generate a few strings based on the embedded Reber grammar
Step34: Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character
Step35: Let's look at a few corrupted strings
Step36: It's not possible to feed a string directly to an RNN
Step37: We can now generate the dataset, with 50% good strings, and 50% bad strings
Step38: Let's take a look at the first training instances
Step39: It's padded with a lot of zeros because the longest string in the dataset is that long. How long is this particular string?
Step40: What class is it?
Step41: Perfect! We are ready to create the RNN to identify good strings. We build a sequence classifier very similar to the one we built earlier to classify MNIST images, with two main differences
Step42: Now let's generate a validation set so we can track progress during training
Step43: Now let's test our RNN on two tricky strings
|
<ASSISTANT_TASK:>
Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rnn"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
import tensorflow as tf
reset_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
Wx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons],dtype=tf.float32))
Wy = tf.Variable(tf.random_normal(shape=[n_neurons,n_neurons],dtype=tf.float32))
b = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))
Y0 = tf.tanh(tf.matmul(X0, Wx) + b)
Y1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)
init = tf.global_variables_initializer()
import numpy as np
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
print(Y0_val)
print(Y1_val)
n_inputs = 3
n_neurons = 5
reset_graph()
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, [X0, X1],
dtype=tf.float32)
Y0, Y1 = output_seqs
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
Y0_val
Y1_val
from datetime import datetime
root_logdir = os.path.join(os.curdir, "tf_logs")
def make_log_subdir(run_id=None):
if run_id is None:
run_id = datetime.utcnow().strftime("%Y%m%d%H%M%S")
return "{}/run-{}/".format(root_logdir, run_id)
def save_graph(graph=None, run_id=None):
if graph is None:
graph = tf.get_default_graph()
logdir = make_log_subdir(run_id)
file_writer = tf.summary.FileWriter(logdir, graph=graph)
file_writer.close()
return logdir
save_graph()
%load_ext tensorboard
%tensorboard --logdir {root_logdir}
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, X_seqs,
dtype=tf.float32)
outputs = tf.transpose(tf.stack(output_seqs), perm=[1, 0, 2])
init = tf.global_variables_initializer()
X_batch = np.array([
# t = 0 t = 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
outputs_val = outputs.eval(feed_dict={X: X_batch})
print(outputs_val)
print(np.transpose(outputs_val, axes=[1, 0, 2])[1])
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
outputs_val = outputs.eval(feed_dict={X: X_batch})
print(outputs_val)
save_graph()
%tensorboard --logdir {root_logdir}
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
seq_length = tf.placeholder(tf.int32, [None])
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,
sequence_length=seq_length)
init = tf.global_variables_initializer()
X_batch = np.array([
# step 0 step 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run(
[outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
print(outputs_val)
print(states_val)
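# Sanity check (sketch): instance 2 has sequence length 1, so its output at
# time step 1 is zero-padded, while states_val keeps the state from the last
# *valid* step (which equals the step-0 output for a basic RNN cell).
print(outputs_val[1, 1])                              # all zeros
print(np.allclose(states_val[1], outputs_val[1, 0]))  # True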
reset_graph()
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
logits = tf.layers.dense(states, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
X_valid, X_train = X_train[:5000], X_train[5000:]
y_valid, y_train = y_train[:5000], y_train[5000:]
def shuffle_batch(X, y, batch_size):
rnd_idx = np.random.permutation(len(X))
n_batches = len(X) // batch_size
for batch_idx in np.array_split(rnd_idx, n_batches):
X_batch, y_batch = X[batch_idx], y[batch_idx]
yield X_batch, y_batch
X_test = X_test.reshape((-1, n_steps, n_inputs))
n_epochs = 100
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Last batch accuracy:", acc_batch, "Test accuracy:", acc_test)
reset_graph()
n_steps = 28
n_inputs = 28
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
n_neurons = 100
n_layers = 3
layers = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons,
activation=tf.nn.relu)
for layer in range(n_layers)]
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
states_concat = tf.concat(axis=1, values=states)
logits = tf.layers.dense(states_concat, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Last batch accuracy:", acc_batch, "Test accuracy:", acc_test)
t_min, t_max = 0, 30
resolution = 0.1
def time_series(t):
return t * np.sin(t) / 3 + 2 * np.sin(t*5)
def next_batch(batch_size, n_steps):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
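# Quick shape check (sketch): each batch has shape (batch_size, n_steps, 1)
# and the target is the same series shifted one step into the future.
Xb, yb = next_batch(batch_size=4, n_steps=5)
print(Xb.shape, yb.shape)  # (4, 5, 1) (4, 5, 1)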
t = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))
n_steps = 20
t_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"$t . \sin(t) / 3 + 2 . \sin(5t)$")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.axis([0, 30, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")
save_fig("time_series_plot")
plt.show()
X_batch, y_batch = next_batch(1, n_steps)
np.c_[X_batch[0], y_batch[0]]
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.OutputProjectionWrapper(
tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
output_size=n_outputs)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
learning_rate = 0.001
loss = tf.reduce_mean(tf.square(outputs - y)) # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
saver.save(sess, "./my_time_series_model") # not shown in the book
with tf.Session() as sess: # not shown in the book
saver.restore(sess, "./my_time_series_model") # not shown
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
save_fig("time_series_pred_plot")
plt.show()
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1  # defined before it is used in the y placeholder below
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
learning_rate = 0.001
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
saver.save(sess, "./my_time_series_model")
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
with tf.Session() as sess: # not shown in the book
saver.restore(sess, "./my_time_series_model") # not shown
sequence = [0.] * n_steps
for iteration in range(300):
X_batch = np.array(sequence[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence.append(y_pred[0, -1, 0])
plt.figure(figsize=(8,4))
plt.plot(np.arange(len(sequence)), sequence, "b-")
plt.plot(t[:n_steps], sequence[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
with tf.Session() as sess:
saver.restore(sess, "./my_time_series_model")
sequence1 = [0. for i in range(n_steps)]
for iteration in range(len(t) - n_steps):
X_batch = np.array(sequence1[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence1.append(y_pred[0, -1, 0])
    sequence2 = [time_series(i * resolution + t_min + (t_max - t_min) / 3) for i in range(n_steps)]  # note the parentheses: seed from one third of the way into the training range
for iteration in range(len(t) - n_steps):
X_batch = np.array(sequence2[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence2.append(y_pred[0, -1, 0])
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(t, sequence1, "b-")
plt.plot(t[:n_steps], sequence1[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.plot(t, sequence2, "b-")
plt.plot(t[:n_steps], sequence2[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
save_fig("creative_sequence_plot")
plt.show()
reset_graph()
n_inputs = 2
n_steps = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
n_neurons = 100
n_layers = 3
layers = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.random.rand(2, n_steps, n_inputs)
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run([outputs, states], feed_dict={X: X_batch})
outputs_val.shape
with tf.device("/gpu:0"): # BAD! This is ignored.
layer1 = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
with tf.device("/gpu:1"): # BAD! Ignored again.
layer2 = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
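# Why the device pinning above is ignored (sketch): a cell's ops and variables
# are created lazily, the first time the cell is *called* inside dynamic_rnn,
# so wrapping only the constructor in tf.device() has no effect. The wrapper
# class below pins the device at call time instead.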
import tensorflow as tf
class DeviceCellWrapper(tf.nn.rnn_cell.RNNCell):
def __init__(self, device, cell):
self._cell = cell
self._device = device
@property
def state_size(self):
return self._cell.state_size
@property
def output_size(self):
return self._cell.output_size
def __call__(self, inputs, state, scope=None):
with tf.device(self._device):
return self._cell(inputs, state, scope)
reset_graph()
n_inputs = 5
n_steps = 20
n_neurons = 100
X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])
devices = ["/cpu:0", "/cpu:0", "/cpu:0"] # replace with ["/gpu:0", "/gpu:1", "/gpu:2"] if you have 3 GPUs
cells = [DeviceCellWrapper(dev,tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons))
for dev in devices]
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(cells)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
print(sess.run(outputs, feed_dict={X: np.random.rand(2, n_steps, n_inputs)}))
reset_graph()
n_inputs = 1
n_neurons = 100
n_layers = 3
n_steps = 20
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
keep_prob = tf.placeholder_with_default(1.0, shape=())
cells = [tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
cells_drop = [tf.nn.rnn_cell.DropoutWrapper(cell, input_keep_prob=keep_prob)
for cell in cells]
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(cells_drop)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
learning_rate = 0.01
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
train_keep_prob = 0.5
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
_, mse = sess.run([training_op, loss],
feed_dict={X: X_batch, y: y_batch,
keep_prob: train_keep_prob})
if iteration % 100 == 0: # not shown in the book
print(iteration, "Training MSE:", mse) # not shown
saver.save(sess, "./my_dropout_time_series_model")
with tf.Session() as sess:
saver.restore(sess, "./my_dropout_time_series_model")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
reset_graph()
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
n_layers = 3
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
lstm_cells = [tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
top_layer_h_state = states[-1][1]
logits = tf.layers.dense(top_layer_h_state, n_outputs, name="softmax")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
states
top_layer_h_state
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Last batch accuracy:", acc_batch, "Test accuracy:", acc_test)
lstm_cell = tf.nn.rnn_cell.LSTMCell(num_units=n_neurons, use_peepholes=True)
gru_cell = tf.nn.rnn_cell.GRUCell(num_units=n_neurons)
import urllib.request
import errno
import os
import zipfile
WORDS_PATH = "datasets/words"
WORDS_URL = 'http://mattmahoney.net/dc/text8.zip'
def mkdir_p(path):
    """Create directories; OK if they already exist.

    This is for Python 2 support. In Python >= 3.2, simply use:
    >>> os.makedirs(path, exist_ok=True)
    """
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
def fetch_words_data(words_url=WORDS_URL, words_path=WORDS_PATH):
os.makedirs(words_path, exist_ok=True)
zip_path = os.path.join(words_path, "words.zip")
if not os.path.exists(zip_path):
urllib.request.urlretrieve(words_url, zip_path)
with zipfile.ZipFile(zip_path) as f:
data = f.read(f.namelist()[0])
return data.decode("ascii").split()
words = fetch_words_data()
words[:5]
from collections import Counter
vocabulary_size = 50000
vocabulary = [("UNK", None)] + Counter(words).most_common(vocabulary_size - 1)
vocabulary = np.array([word for word, _ in vocabulary])
dictionary = {word: code for code, word in enumerate(vocabulary)}
data = np.array([dictionary.get(word, 0) for word in words])
" ".join(words[:9]), data[:9]
" ".join([vocabulary[word_index] for word_index in [5241, 3081, 12, 6, 195, 2, 3134, 46, 59]])
words[24], data[24]
from collections import deque
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=[batch_size], dtype=np.int32)
labels = np.ndarray(shape=[batch_size, 1], dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = np.random.randint(0, span)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
np.random.seed(42)
data_index = 0
batch, labels = generate_batch(8, 2, 1)
batch, [vocabulary[word] for word in batch]
labels, [vocabulary[word] for word in labels[:, 0]]
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64 # Number of negative examples to sample.
learning_rate = 0.01
reset_graph()
# Input data.
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
vocabulary_size = 50000
embedding_size = 150
# Look up embeddings for inputs.
init_embeds = tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)
embeddings = tf.Variable(init_embeds)
train_inputs = tf.placeholder(tf.int32, shape=[None])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
# Construct the variables for the NCE loss
nce_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / np.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Compute the average NCE loss for the batch.
# tf.nce_loss automatically draws a new sample of the negative labels each
# time we evaluate the loss.
loss = tf.reduce_mean(
tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,
num_sampled, vocabulary_size))
# Construct the Adam optimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
# Compute the cosine similarity between minibatch examples and all embeddings.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), axis=1, keepdims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
# Add variable initializer.
init = tf.global_variables_initializer()
num_steps = 10001
with tf.Session() as session:
init.run()
average_loss = 0
for step in range(num_steps):
print("\rIteration: {}".format(step), end="\t")
batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}
# We perform one update step by evaluating the training op (including it
# in the list of returned values for session.run()
_, loss_val = session.run([training_op, loss], feed_dict=feed_dict)
average_loss += loss_val
if step % 2000 == 0:
if step > 0:
average_loss /= 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
# Note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = vocabulary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log_str = "Nearest to %s:" % valid_word
for k in range(top_k):
close_word = vocabulary[nearest[k]]
log_str = "%s %s," % (log_str, close_word)
print(log_str)
final_embeddings = normalized_embeddings.eval()
np.save("./my_final_embeddings.npy", final_embeddings)
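# The normalized embeddings can be reloaded later without retraining:
# final_embeddings = np.load("./my_final_embeddings.npy")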
def plot_with_labels(low_dim_embs, labels):
assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
plt.figure(figsize=(18, 18)) #in inches
for i, label in enumerate(labels):
x, y = low_dim_embs[i,:]
plt.scatter(x, y)
plt.annotate(label,
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
from sklearn.manifold import TSNE
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [vocabulary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, labels)
import tensorflow as tf
reset_graph()
n_steps = 50
n_neurons = 200
n_layers = 3
num_encoder_symbols = 20000
num_decoder_symbols = 20000
embedding_size = 150
learning_rate = 0.01
X = tf.placeholder(tf.int32, [None, n_steps]) # English sentences
Y = tf.placeholder(tf.int32, [None, n_steps]) # French translations
W = tf.placeholder(tf.float32, [None, n_steps - 1, 1])
Y_input = Y[:, :-1]
Y_target = Y[:, 1:]
encoder_inputs = tf.unstack(tf.transpose(X)) # list of 1D tensors
decoder_inputs = tf.unstack(tf.transpose(Y_input)) # list of 1D tensors
lstm_cells = [tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)
output_seqs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
encoder_inputs,
decoder_inputs,
cell,
num_encoder_symbols,
num_decoder_symbols,
embedding_size)
logits = tf.transpose(tf.unstack(output_seqs), perm=[1, 0, 2])
logits_flat = tf.reshape(logits, [-1, num_decoder_symbols])
Y_target_flat = tf.reshape(Y_target, [-1])
W_flat = tf.reshape(W, [-1])
xentropy = W_flat * tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y_target_flat, logits=logits_flat)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
np.random.seed(42)
default_reber_grammar = [
[("B", 1)], # (state 0) =B=>(state 1)
[("T", 2), ("P", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)
[("S", 2), ("X", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)
[("T", 3), ("V", 5)], # and so on...
[("X", 3), ("S", 6)],
[("P", 4), ("V", 6)],
[("E", None)]] # (state 6) =E=>(terminal state)
embedded_reber_grammar = [
[("B", 1)],
[("T", 2), ("P", 3)],
[(default_reber_grammar, 4)],
[(default_reber_grammar, 5)],
[("T", 6)],
[("P", 6)],
[("E", None)]]
def generate_string(grammar):
state = 0
output = []
while state is not None:
index = np.random.randint(len(grammar[state]))
production, state = grammar[state][index]
if isinstance(production, list):
production = generate_string(grammar=production)
output.append(production)
return "".join(output)
for _ in range(25):
print(generate_string(default_reber_grammar), end=" ")
for _ in range(25):
print(generate_string(embedded_reber_grammar), end=" ")
def generate_corrupted_string(grammar, chars="BEPSTVX"):
good_string = generate_string(grammar)
index = np.random.randint(len(good_string))
good_char = good_string[index]
bad_char = np.random.choice(sorted(set(chars) - set(good_char)))
return good_string[:index] + bad_char + good_string[index + 1:]
for _ in range(25):
print(generate_corrupted_string(embedded_reber_grammar), end=" ")
def string_to_one_hot_vectors(string, n_steps, chars="BEPSTVX"):
char_to_index = {char: index for index, char in enumerate(chars)}
output = np.zeros((n_steps, len(chars)), dtype=np.int32)
for index, char in enumerate(string):
output[index, char_to_index[char]] = 1.
return output
string_to_one_hot_vectors("BTBTXSETE", 12)
def generate_dataset(size):
good_strings = [generate_string(embedded_reber_grammar)
for _ in range(size // 2)]
bad_strings = [generate_corrupted_string(embedded_reber_grammar)
for _ in range(size - size // 2)]
all_strings = good_strings + bad_strings
n_steps = max([len(string) for string in all_strings])
X = np.array([string_to_one_hot_vectors(string, n_steps)
for string in all_strings])
seq_length = np.array([len(string) for string in all_strings])
y = np.array([[1] for _ in range(len(good_strings))] +
[[0] for _ in range(len(bad_strings))])
rnd_idx = np.random.permutation(size)
return X[rnd_idx], seq_length[rnd_idx], y[rnd_idx]
X_train, l_train, y_train = generate_dataset(10000)
X_train[0]
l_train[0]
y_train[0]
reset_graph()
possible_chars = "BEPSTVX"
n_inputs = len(possible_chars)
n_neurons = 30
n_outputs = 1
learning_rate = 0.02
momentum = 0.95
X = tf.placeholder(tf.float32, [None, None, n_inputs], name="X")
seq_length = tf.placeholder(tf.int32, [None], name="seq_length")
y = tf.placeholder(tf.float32, [None, 1], name="y")
gru_cell = tf.nn.rnn_cell.GRUCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(gru_cell, X, dtype=tf.float32,
sequence_length=seq_length)
logits = tf.layers.dense(states, n_outputs, name="logits")
y_pred = tf.cast(tf.greater(logits, 0.), tf.float32, name="y_pred")
y_proba = tf.nn.sigmoid(logits, name="y_proba")
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
momentum=momentum,
use_nesterov=True)
training_op = optimizer.minimize(loss)
correct = tf.equal(y_pred, y, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
init = tf.global_variables_initializer()
saver = tf.train.Saver()
X_val, l_val, y_val = generate_dataset(5000)
n_epochs = 50
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
X_batches = np.array_split(X_train, len(X_train) // batch_size)
l_batches = np.array_split(l_train, len(l_train) // batch_size)
y_batches = np.array_split(y_train, len(y_train) // batch_size)
for X_batch, l_batch, y_batch in zip(X_batches, l_batches, y_batches):
loss_val, _ = sess.run(
[loss, training_op],
feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: X_val, seq_length: l_val, y: y_val})
print("{:4d} Train loss: {:.4f}, accuracy: {:.2f}% Validation accuracy: {:.2f}%".format(
epoch, loss_val, 100 * acc_train, 100 * acc_val))
saver.save(sess, "./my_reber_classifier")
test_strings = [
"BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE",
"BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE"]
l_test = np.array([len(s) for s in test_strings])
max_length = l_test.max()
X_test = [string_to_one_hot_vectors(s, n_steps=max_length)
for s in test_strings]
with tf.Session() as sess:
saver.restore(sess, "./my_reber_classifier")
y_proba_val = y_proba.eval(feed_dict={X: X_test, seq_length: l_test})
print()
print("Estimated probability that these are Reber strings:")
for index, string in enumerate(test_strings):
print("{}: {:.2f}%".format(string, 100 * y_proba_val[index][0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the function has two roots; however, with this method we cannot converge to the negative root. An alternative is to find different ways of writing the expression $u(x)=x$. In the case of our example we can write
Step2: http
Step3: Newton-Raphson method
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return 2-x-np.exp(-x)
x = np.linspace(-10, 10, 400)
y = f(x)
plt.figure()
plt.plot(x, y)
# improve the scale to visualize the possible roots
plt.figure()
plt.plot(x, y)
plt.hlines(0,x.min(),x.max(),colors='C1',linestyles='dashed')
plt.ylim(-5,5)
# numerical implementation of the fixed-point method
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return 2-x-np.exp(-x)
xplot = np.linspace(-10, 10, 400)
yplot = f(xplot)
# initial guess for the iterative method
x = 1
for i in range(10):
x = 2 - np.exp(-x)
print(x)
# plot a graph to show the solution
plt.figure()
plt.plot(xplot, yplot)
plt.hlines(0,xplot.min(),xplot.max(),colors='C1',linestyles='dashed')
plt.vlines(x,yplot.min(),yplot.max(),colors='C1',linestyles='dashed')
plt.ylim(-5,5)
# numerical fixed-point implementation with the alternative expression
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return 2-x-np.exp(-x)
xplot = np.linspace(-10, 10, 400)
yplot = f(xplot)
# initial guess for the iterative method
x = -1
for i in range(10):
x = -np.log(2-x)
print(x)
# plot a graph to show the solution
plt.figure()
plt.plot(xplot, yplot)
plt.hlines(0,xplot.min(),xplot.max(),colors='C1',linestyles='dashed')
plt.vlines(x,yplot.min(),yplot.max(),colors='C1',linestyles='dashed')
plt.ylim(-5,5)
import numpy as np
def f(x):
return x**2 - x - 1
# define the initial interval
a = 1.0
b = 2.0
# check that a root exists within the initial interval (sign change)
if f(a)*f(b) >= 0:
    print("there is no sign change in the initial interval.")
a_n = a
b_n = b
N = 10 # number of iterations
for n in range(N):
    m_n = (a_n + b_n)/2  # midpoint of the current interval
    f_m_n = f(m_n)       # value of f(x) at the midpoint
if f(a_n)*f_m_n < 0:
a_n = a_n
b_n = m_n
elif f(b_n)*f_m_n < 0:
a_n = m_n
b_n = b_n
    else:
        # f(m_n) == 0 exactly: m_n is a root, so we can stop early
        print("exact root found.")
        break
print('The root found was: %8.6f +/- %8.6f'%(m_n,(b-a)/2**(N+1)))
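# Sanity check (sketch): the positive root of x**2 - x - 1 is the golden
# ratio (1 + sqrt(5)) / 2, so we can measure the true error of the estimate.
phi = (1 + np.sqrt(5)) / 2
print('analytic root: %8.6f, actual error: %.2e' % (phi, abs(m_n - phi)))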
# Example application of the Newton-Raphson method
import numpy as np
def f(x):
return x**2 - x - 1
def df(x):
return 2*x - 1
# define the initial point x_0
x0 = 1.
# define the tolerance with which the root will be determined
eps = 1.0e-3
x = x0
count=0
while abs(f(x)) > eps:
x = x - f(x)/df(x)
print('solution for iteration %i is %f'%(count,x))
count += 1
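# Sanity check (sketch): Newton's method converges to the same golden ratio,
# typically in far fewer iterations than bisection for a comparable tolerance.
phi = (1 + 5 ** 0.5) / 2
print('analytic root: %f, actual error: %.2e' % (phi, abs(x - phi)))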
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Deploying a model with Ray Serve
Step2: Next, we define a Ray Serve deployment with a GPT-2 model by using the @serve.deployment decorator on a model
Step3: This model can now easily be deployed using a model.deploy() call.
Step4: Defining and launching a Gradio interface
Step5: Apart from our gpt2 function, the only other thing that we need to define a Gradio interface is
Step6: For more complex models served with Ray, you might need multiple gr.inputs
Step7: This should launch an interface that you can interact with that looks like this
Step8: This time we're going to use a Scikit-Learn model that we quickly train
Step9: Equipped with our BoostingModel class, we can now define and launch a Gradio interface as follows.
|
<ASSISTANT_TASK:>
Python Code:
# Install all dependencies for this example.
! pip install ray gradio transformers requests
import gradio as gr
from ray import serve
from transformers import pipeline
import requests
serve.start()
@serve.deployment
def model(request):
language_model = pipeline("text-generation", model="gpt2")
query = request.query_params["query"]
return language_model(query, max_length=100)
model.deploy()
example = "What's the meaning of life?"
response = requests.get(f"http://localhost:8000/model?query={example}")
print(response.text)
def gpt2(query):
response = requests.get(f"http://localhost:8000/model?query={query}")
return response.json()[0]["generated_text"]
iface = gr.Interface(
fn=gpt2,
inputs=[gr.inputs.Textbox(
default=example, label="Input prompt"
)],
outputs=[gr.outputs.Textbox(label="Model output")]
)
iface.launch()
# Install all dependencies for this example.
! pip install ray gradio requests scikit-learn
import gradio as gr
import requests
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from ray import serve
# Train your model.
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])
# Start Ray Serve.
serve.start()
# Define your deployment.
@serve.deployment(route_prefix="/iris")
class BoostingModel:
def __init__(self, model):
self.model = model
self.label_list = iris_dataset["target_names"].tolist()
async def __call__(self, request):
payload = (await request.json())["vector"]
print(f"Received http request with data {payload}")
prediction = self.model.predict([payload])[0]
human_name = self.label_list[prediction]
return {"result": human_name}
# Deploy your model.
BoostingModel.deploy(model)
# Define gradio function
def iris(sl, sw, pl, pw):
request_input = {"vector": [sl, sw, pl, pw]}
response = requests.get(
"http://localhost:8000/iris", json=request_input)
    # __call__ above returns a dict, which Serve serializes to a JSON object
    return response.json()["result"]
# Define gradio interface
iface = gr.Interface(
fn=iris,
inputs=[
gr.inputs.Number(default=1.0, label="sepal length (cm)"),
gr.inputs.Number(default=1.0, label="sepal width (cm)"),
gr.inputs.Number(default=1.0, label="petal length (cm)"),
gr.inputs.Number(default=1.0, label="petal width (cm)"),
],
outputs="text")
# Launch the gradio interface
iface.launch()
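# Direct REST call to the Serve deployment (bypassing Gradio), assuming the
# endpoint defined above is still running locally:
print(requests.get("http://localhost:8000/iris",
                   json={"vector": [5.1, 3.5, 1.4, 0.2]}).json()["result"])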
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize MNIST
Step2: Exercise: analyze the data
Step3: Question 1
Step4: Question 2
Step5: Question 3
Step6: Question 4
Step7: Save the model
|
<ASSISTANT_TASK:>
Python Code:
from keras.datasets import mnist
(X_raw, y_raw), (X_raw_test, y_raw_test) = mnist.load_data()
n_train, n_test = X_raw.shape[0], X_raw_test.shape[0]
import matplotlib.pyplot as plt
import random
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
for i in range(15):
plt.subplot(3, 5, i+1)
index = random.randint(0, n_train-1)
plt.title(str(y_raw[index]))
plt.imshow(X_raw[index], cmap='gray')
plt.axis('off')
import numpy as np
n_class, n_len, width, height = 11, 5, 28, 28
def generate_dataset(X, y):
X_len = X.shape[0]
X_gen = np.zeros((X_len, height, width*n_len, 1), dtype=np.uint8)
y_gen = [np.zeros((X_len, n_class), dtype=np.uint8) for i in range(n_len)]
    # randomly pick 1-5 digits and stitch them into one wide image
for i in range(X_len):
for j in range(5):
            # with a given probability, use a blank instead of a digit
if 4 < random.randint(0,6):
y_gen[j][i][10] = 1
else:
index = random.randint(0, X_len-1)
X_gen[i,:,j*width:(j+1)*width, 0] = X[index]
y_gen[j][i][y[index]] = 1
return X_gen, y_gen
X_train, y_train = generate_dataset(X_raw, y_raw)
X_test, y_test = generate_dataset(X_raw_test, y_raw_test)
print(y_train[0].shape)
# display the generated images
for i in range(15):
plt.subplot(5, 3, i+1)
index = random.randint(0, n_test-1)
title = ''
for j in range(n_len):
title += str(np.argmax(y_test[j][index])) + ','
plt.title(title)
plt.imshow(X_test[index][:,:,0], cmap='gray')
plt.axis('off')
# import the required model layers
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers.core import Dropout
from keras.optimizers import SGD
from keras.models import Model
from keras.layers import Input, Dense
def train_model(X_train, y_train):
main_input = Input(shape=(28, 140, 1), name='main_input')
    # convolutional layers
x = Convolution2D(32, 3, 3, activation="relu")(main_input)
x = MaxPooling2D(pool_size = (2, 2))(x)
x = Convolution2D(32, 3, 3, activation="relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Convolution2D(32, 3, 3 ,activation="relu")(x)
x = Convolution2D(32, 3, 3, activation="relu")(x)
x = Flatten()(x)
x = Dropout(0.3)(x)
    # predict each of the 5 digits separately
y_out = [Dense(11, activation='softmax')(x) for i in range(n_len)]
    # use the functional Model API to output the 5-digit sequence
model = Model(input=main_input, output=y_out)
model.compile(optimizer='rmsprop', loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, batch_size=1024, nb_epoch=8, shuffle=True, verbose=1, validation_split=0.2)
return model
model = train_model(X_train, y_train)
from keras.utils.visualize_util import plot, model_to_dot
from IPython.display import Image, SVG
# visualize the network model
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
# test the model
prediction = model.predict(X_test)
# compute the accuracy on full digit sequences in the test set
def evaluate(model):
right = 0
error = 0
for i in range (1000):
right_num = 0
for j in range(5):
y_pred_num = np.argmax(prediction[j][i])
y_test_num = np.argmax(y_test[j][i])
if y_pred_num == y_test_num:
right_num += 1
if right_num == 5:
right += 1
else:
error += 1
return float(right)/(right+error)
evaluate(model)
def get_result(result):
    # decode the one-hot encodings
resultstr = ''
for i in range(n_len):
resultstr += str(np.argmax(result[i])) + ','
return resultstr
index = random.randint(0, n_test-1)
y_pred = model.predict(X_test[index].reshape(1, height, width*n_len, 1))
plt.title('real: %s\npred:%s'%(get_result([y_test[x][index] for x in range(n_len)]), get_result(y_pred)))
plt.imshow(X_test[index,:,:,0], cmap='gray')
plt.axis('off')
model.save_weights('model.h5')
with open('model.json', 'w') as f:
f.write(model.to_json())
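# Sketch: the saved model can be reloaded later (assumes the same Keras
# version used for training above).
from keras.models import model_from_json
with open('model.json') as f:
    restored = model_from_json(f.read())
restored.load_weights('model.h5')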
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Factorial
Step3: Write a function that computes the factorial of small numbers using a Python loop.
Step4: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
def np_fact(n):
    """Compute n! = n*(n-1)*...*1 using Numpy."""
    if n == 0:
        c = 1  # 0! == 1, so return 1 when the input is 0
return c
else:
        a = np.arange(1, n+1, 1)  # integers from 1 to n
        b = np.cumprod(a)         # cumulative products: 1!, 2!, ..., n!
        c = b[::-1]               # reverse so n! comes first
        d = c[0:1]                # keep only the first (largest) entry, i.e. n!
return d
print np_fact(10) #ensuring correctness
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
def loop_fact(n):
    K = 1  # running product
    for x in range(1, n+1):  # loop over 1..n
        K *= x  # accumulate the factorial
return K
print loop_fact(1) #ensures it was working
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
%timeit -n1 -r1 np_fact(50) #times the first function
%timeit -n1 -r1 loop_fact(50) #times the second function
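# Caveat (sketch): np_fact uses fixed-width integers, so it silently wraps
# around for larger n (from about n = 21 with int64), while loop_fact uses
# Python's arbitrary-precision ints and stays exact.
print np_fact(25), loop_fact(25) # the numpy value overflows, the loop is exact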
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What do these 'vectors' look like? What do the columns refer to?
Step2: Going from a vector back to the metadata reference
Step3: Initial data woes
Step4: The elbow seems quite wide and not sharply defined, based on just the line variances. Let's see what it looks like in general.
Step5: x1 vs x2 variance?
Step6: Attempting K-Means
Step7: Interesting!
Step8: How about the area density? In other words, what does it look like if the total area of the block is compared to the area taken up by just the words themselves?
Step9: More outliers skew the results, this time blocks with nearly zero variance at either end but a huge letter area attributed by the OCR while sweeping out a very small overall area. Perhaps mask out the columns which aren't actually columns but dividers mistaken for text, i.e. skip all blocks narrower than about 100px. Another way might be to ignore blocks which are under approximately 40 words (40 words * 5 characters)
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import mpld3
mpld3.enable_notebook()
# Get the dataset:
from clustering import create_cluster_dataset, NewspaperArchive
DBFILE = "1749_1750_no_drift.db"
n = NewspaperArchive()
ds = create_cluster_dataset(n, daterange = [1749, 1750], dbfile = DBFILE)
data, transform, id_list = ds
print(data.toarray())
print(transform.get_feature_names())
from clustering import ClusterDB
db = ClusterDB(DBFILE)
print(dict(db.vecidtoitem(id_list[-1])))
print(data.toarray()[-1])
from burney_data import BurneyDB
bdb = BurneyDB("burney.db")
bdb.get_title_row(titleAbbreviation="B0574REMEMBRA")
from scipy import cluster
from matplotlib import pyplot as plt
import numpy as np
# Where is the K-means 'elbow'?
# Try between 1 and 10
# use only the x1 and x2 variences
vset = [cluster.vq.kmeans(data.toarray()[:, [3,6]], i) for i in range(1,10)]
plt.plot([v for (c,v) in vset])
plt.show()
# Mask off leaving just the front and end variance columns
npdata = data.toarray()
mask = np.ones((8), dtype=bool)
mask[[0,1,2,4,5,7]] = False
marray = npdata[:,mask]
plt.scatter(marray[:,0], marray[:,1])
plt.show()
#trying a different KMeans
from sklearn.cluster import KMeans
estimators = {'k_means_3': KMeans(n_clusters=3),
'k_means_5': KMeans(n_clusters=5),
'k_means_8': KMeans(n_clusters=8),}
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
plt.cla()
est.fit(marray)
labels = est.labels_
plt.scatter(marray[:,0], marray[:,1], c=labels.astype(np.float))
fignum = fignum + 1
plt.show()
mpld3.disable_notebook() # switch off the interactive graph functionality which doesn't work well with the 3D library
from mpl_toolkits.mplot3d import Axes3D
X = npdata[:, [3,5,6]]
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=5, azim=30)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))
    ax.set_xlabel('x1 variance')
    ax.set_ylabel('x2 variance')
ax.set_zlabel('Average indent')
fignum = fignum + 1
plt.show()
X = npdata[:, [3,0,6]]
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))
    ax.set_xlabel('x1 variance')
    ax.set_ylabel('x2 variance')
ax.set_zlabel('Density')
fignum = fignum + 1
plt.show()
mask = npdata[:,1] > 40 * 5 # mask based on the ltcount value
print(mask)
print("Amount of vectors: {0}, Vectors with ltcount < 50: {1}".format(len(npdata), sum([1 for item in mask if item == False])))
m_npdata = npdata[mask, :]
X = m_npdata[:, [3,0,6]]
# Let's just plot one graph to see:
est = estimators['k_means_8']
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))
ax.set_xlabel('x1 variance')
ax.set_ylabel('x2 variance')
ax.set_zlabel('Density')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Doc2Vec is a model that represents each
Step2: Define a Function to Read and Preprocess Text
Step3: Let's take a look at the training corpus
Step4: And the testing corpus looks like this
Step5: Notice that the testing corpus is just a list of lists and does not contain
Step6: Build a vocabulary
Step7: Essentially, the vocabulary is a list (accessible via
Step8: Next, train the model on the corpus.
Step9: Now, we can use the trained model to infer a vector for any piece of text
Step10: Note that infer_vector() does not take a string, but rather a list of
Step11: Let's count how each document ranks with respect to the training corpus
Step12: Basically, greater than 95% of the inferred documents are found to be most
Step13: Notice above that the most similar document (usually the same text) has a
Step14: Testing the Model
|
<ASSISTANT_TASK:>
Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import os
import gensim
# Set file names for train and test data
test_data_dir = os.path.join(gensim.__path__[0], 'test', 'test_data')
lee_train_file = os.path.join(test_data_dir, 'lee_background.cor')
lee_test_file = os.path.join(test_data_dir, 'lee.cor')
import smart_open
def read_corpus(fname, tokens_only=False):
with smart_open.open(fname, encoding="iso-8859-1") as f:
for i, line in enumerate(f):
tokens = gensim.utils.simple_preprocess(line)
if tokens_only:
yield tokens
else:
# For training data, add tags
yield gensim.models.doc2vec.TaggedDocument(tokens, [i])
train_corpus = list(read_corpus(lee_train_file))
test_corpus = list(read_corpus(lee_test_file, tokens_only=True))
print(train_corpus[:2])
print(test_corpus[:2])
model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=2, epochs=40)
model.build_vocab(train_corpus)
print(f"Word 'penalty' appeared {model.wv.get_vecattr('penalty', 'count')} times in the training corpus.")
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)
vector = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
print(vector)
ranks = []
second_ranks = []
for doc_id in range(len(train_corpus)):
inferred_vector = model.infer_vector(train_corpus[doc_id].words)
sims = model.dv.most_similar([inferred_vector], topn=len(model.dv))
rank = [docid for docid, sim in sims].index(doc_id)
ranks.append(rank)
second_ranks.append(sims[1])
import collections
counter = collections.Counter(ranks)
print(counter)
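# Fraction of documents whose inferred vector ranks their own tag first --
# a rough self-consistency check rather than a rigorous evaluation.
print(counter[0] / len(train_corpus))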
print('Document ({}): ยซ{}ยป\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('SECOND-MOST', 1), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: ยซ%sยป\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
# Pick a random document from the corpus and infer a vector from the model
import random
doc_id = random.randint(0, len(train_corpus) - 1)
# Compare and print the second-most-similar document
print('Train Document ({}): ยซ{}ยป\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
sim_id = second_ranks[doc_id]
print('Similar Document {}: ยซ{}ยป\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(test_corpus) - 1)
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.dv.most_similar([inferred_vector], topn=len(model.dv))
# Compare and print the most/median/least similar documents from the train corpus
print('Test Document ({}): ยซ{}ยป\n'.format(doc_id, ' '.join(test_corpus[doc_id])))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: ยซ%sยป\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Section 1
Step2: <b> Part 2 </b>
Step3: <b> Question 2.1.2. </b> Calculate the number of parameters of this model
Step4: Your answer goes here in detail
Step5: <b> Question 2.1.4. </b> Implement the evaluation function for accuracy computation
Step6: <b> Question 2.1.5. </b> Implement training pipeline and run the training data through it to train the model.
Step7: <b> Question 2.1.6 </b>
Step8: Comment
Step9: The relu is always equal or better than the sigmoid
Step10: Test accuracy does not change when we change batch size
Step11: An higher learning rate is better, in this specifi case. Later we will see that this is not always the case.
Step12: Adam is always better than Stocasthic gradient descent
Step13: Sometimes adam is vaster than SGD
Step14: Bigger batches mean less training time, this is actually a good news if we consider that the batch dimension does not has a big influence on the final accuracy
Step15: Comment
Step16: Comment
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
print("Image Shape: {}".format(X_train[0].shape))
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
from __future__ import print_function
import tensorflow as tf
#STEP 1
# Parameters
learning_rate = 0.01
training_epochs = 100
batch_size = 128
display_step = 1
logs_path = 'log_files/' # useful for tensorboard
# tf Graph Input: mnist data image of shape 28*28=784
x = tf.placeholder(tf.float32, [None, 784], name='InputData')
# 0-9 digits recognition, 10 classes
y = tf.placeholder(tf.float32, [None, 10], name='LabelData')
# Set model weights
W = tf.Variable(tf.zeros([784, 10]), name='Weights')
b = tf.Variable(tf.zeros([10]), name='Bias')
# Construct model and encapsulating all ops into scopes, making Tensorboard's Graph visualization more convenient
with tf.name_scope('Model'):
# Model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
with tf.name_scope('Loss'):
# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
with tf.name_scope('SGD'):
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
with tf.name_scope('Accuracy'):
# Accuracy
acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
acc = tf.reduce_mean(tf.cast(acc, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Create a summary to monitor cost tensor
tf.summary.scalar("Loss", cost)
# Create a summary to monitor accuracy tensor
tf.summary.scalar("Accuracy", acc)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
#STEP 2
# Launch the graph for training
with tf.Session() as sess:
sess.run(init)
# op to write logs to Tensorboard
summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# Run optimization op (backprop), cost op (to get loss value)
# and summary nodes
_, c, summary = sess.run([optimizer, cost, merged_summary_op],
feed_dict={x: batch_xs, y: batch_ys})
# Write logs at every iteration
summary_writer.add_summary(summary, epoch * total_batch + i)
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if (epoch+1) % display_step == 0:
print("Epoch: ", '%02d' % (epoch+1), " =====> Loss=", "{:.9f}".format(avg_cost))
print("Optimization Finished!")
# Test model
# Calculate accuracy
print("Accuracy:", acc.eval({x: mnist.test.images, y: mnist.test.labels}))
#Helper functions for weigths and bias initilization
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
# https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/
# https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks-Part-2/
def LeNet5_Model(data,activation_function=tf.nn.sigmoid):
# layer 1 param
conv1_weights = weight_variable([5,5,1,6])
conv1_bias = bias_variable([6])
# layer 2 param
conv2_weights = weight_variable([5,5,6,16])
conv2_bias = bias_variable([16])
# layer 3 param
layer3_weights = weight_variable([400, 120])
layer3_bias = bias_variable([120])
# layer 4 param
layer4_weights = weight_variable([120, 84])
layer4_bias = bias_variable([84])
# layer 5 param
layer5_weights = weight_variable([84, 10])
layer5_bias = bias_variable([10])
with tf.name_scope('Model'):
with tf.name_scope('Layer1'):
conv1 = tf.nn.conv2d(input=data,filter=conv1_weights,strides=[1,1,1,1],padding='SAME')
print(conv1.shape)
            act1 = activation_function(conv1 + conv1_bias)  # works for sigmoid or relu
            pool1 = tf.nn.max_pool(act1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
print(pool1.shape)
with tf.name_scope('Layer2'):
conv2 = tf.nn.conv2d(input=pool1,filter=conv2_weights,strides=[1,1,1,1],padding='VALID')
print(conv2.shape)
            act2 = activation_function(conv2 + conv2_bias)
            pool2 = tf.nn.max_pool(act2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
print(pool2.shape)
with tf.name_scope('Flatten'):
flat_inputs = tf.contrib.layers.flatten(pool2)
print(flat_inputs.shape)
with tf.name_scope('Layer3'):
out3 = activation_function(tf.matmul(flat_inputs, layer3_weights) + layer3_bias)
with tf.name_scope('Layer4'):
out4 = activation_function(tf.matmul(out3, layer4_weights) + layer4_bias)
with tf.name_scope('Layer5'):
pred = tf.nn.softmax(tf.matmul(out4, layer5_weights) + layer5_bias) # Softmax
return pred
total_parameters = 0
for variable in tf.trainable_variables():
# shape is an array of tf.Dimension
shape = variable.get_shape()
print(shape)
variable_parametes = 1
for dim in shape:
variable_parametes *= dim.value
print(variable_parametes)
total_parameters += variable_parametes
print(total_parameters)
layer1 = 5*5*1*6 + 6
layer2 = 5*5*6*16 + 16
layer3 = 400*120 + 120
layer4 = 120*84 + 84
layer5 = 84*10 + 10
tot = layer1 + layer2 + layer3 + layer4 + layer5
print('total number of parameters: %d' % tot)
from __future__ import print_function
import tensorflow as tf
from numpy import array
import numpy as np
#STEP 1
tf.reset_default_graph()
# Parameters
learning_rate = 0.1
training_epochs = 100
batch_size = 128
display_step = 1
logs_path = 'log_files/' # useful for tensorboard
# tf Graph Input: mnist data image of shape 28*28=784
x = tf.placeholder(tf.float32, [batch_size,28, 28,1], name='InputData')
# 0-9 digits recognition, 10 classes
y = tf.placeholder(tf.float32, [batch_size, 10], name='LabelData')
# Construct model and encapsulating all ops into scopes, making Tensorboard's Graph visualization more convenient
with tf.name_scope('Model'):
# Model
pred = LeNet5_Model(data=x)
with tf.name_scope('Loss'):
# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
with tf.name_scope('SGD'):
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
with tf.name_scope('Accuracy'):
# Accuracy
acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
acc = tf.reduce_mean(tf.cast(acc, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Create a summary to monitor cost tensor
tf.summary.scalar("Loss", cost)
# Create a summary to monitor accuracy tensor
tf.summary.scalar("Accuracy", acc)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
#STEP 2
# Launch the graph for training
with tf.Session() as sess:
sess.run(init)
# op to write logs to Tensorboard
summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
batch_xs = array(batch_xs).reshape(batch_size, 28,28,1)
#print(batch_xs.shape)
#print(batch_xs.dtype)
# Run optimization op (backprop), cost op (to get loss value)
# and summary nodes
_, c, summary = sess.run([optimizer, cost, merged_summary_op],
feed_dict={x: batch_xs, y: batch_ys})
# Write logs at every iteration
summary_writer.add_summary(summary, epoch * total_batch + i)
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if (epoch+1) % display_step == 0:
print("Epoch: ", '%02d' % (epoch+1), " =====> Loss=", "{:.9f}".format(avg_cost))
print("Optimization Finished!")
def evaluate(model, y):
#your implementation goes here
correct_prediction = tf.equal(tf.argmax(model,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
return accuracy
import numpy as np
# Initializing the variables
def train(learning_rate, training_epochs, batch_size, display_step, optimizer_method=tf.train.GradientDescentOptimizer,activation_function=tf.nn.sigmoid):
tf.reset_default_graph()
# Initializing the session
logs_path = 'log_files/' # useful for tensorboard
# tf Graph Input: mnist data image of shape 28*28=784
x = tf.placeholder(tf.float32, [None,28, 28,1], name='InputData')
# 0-9 digits recognition, 10 classes
y = tf.placeholder(tf.float32, [None, 10], name='LabelData')
# Construct model and encapsulating all ops into scopes, making Tensorboard's Graph visualization more convenient
with tf.name_scope('Model'):
# Model
pred = LeNet5_Model(data=x,activation_function=activation_function)
with tf.name_scope('Loss'):
# Minimize error using cross entropy
if activation_function == tf.nn.sigmoid:
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
else:
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(tf.clip_by_value(pred,1e-10,1.0)), reduction_indices=1))
#cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
with tf.name_scope('SGD'):
# Gradient Descent
optimizer = optimizer_method(learning_rate).minimize(cost)
with tf.name_scope('Accuracy'):
# Accuracy
acc = evaluate(pred, y)
# Initializing the variables
init = tf.global_variables_initializer()
# Create a summary to monitor cost tensor
tf.summary.scalar("Loss", cost)
# Create a summary to monitor accuracy tensor
tf.summary.scalar("Accuracy", acc)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
saver = tf.train.Saver()
print ("Start Training!")
t0 = time()
X_train,Y_train = mnist.train.images.reshape((-1,28,28,1)), mnist.train.labels
X_val,Y_val = mnist.validation.images.reshape((-1,28,28,1)), mnist.validation.labels
# Launch the graph for training
with tf.Session() as sess:
sess.run(init)
# op to write logs to Tensorboard
summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
# train_next_batch shuffle the images by default
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
batch_xs = batch_xs.reshape((-1,28,28,1))
# Run optimization op (backprop), cost op (to get loss value)
# and summary nodes
_, c, summary = sess.run([optimizer, cost, merged_summary_op],
feed_dict={x: batch_xs,
y: batch_ys})
# Write logs at every iteration
summary_writer.add_summary(summary, epoch * total_batch + i)
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if (epoch+1) % display_step == 0:
print("Epoch: ", '%02d' % (epoch+1), "=====> Loss=", "{:.9f}".format(avg_cost))
acc_train = acc.eval({x: X_train, y: Y_train})
print("Epoch: ", '%02d' % (epoch+1), "=====> Accuracy Train=", "{:.9f}".format(acc_train))
acc_val = acc.eval({x: X_val, y: Y_val})
print("Epoch: ", '%02d' % (epoch+1), "=====> Accuracy Validation=", "{:.9f}".format(acc_val))
print ("Training Finished!")
t1 = time()
# Save the variables to disk.
save_path = saver.save(sess, "model.ckpt")
print("Model saved in file: %s" % save_path)
#Your implementation for testing accuracy after training goes here
X_test,Y_test = mnist.test.images.reshape((-1,28,28,1)),mnist.test.labels
acc_test = acc.eval({x: X_test, y: Y_test})
print("Accuracy Test=", "{:.9f}".format(acc_test))
return acc_train,acc_val,acc_test,t1-t0
from time import time
%time train (0.1,100,128,10,optimizer_method=tf.train.GradientDescentOptimizer)
%time train (0.1,100,128,10,optimizer_method=tf.train.GradientDescentOptimizer,activation_function=tf.nn.relu)
%time train (0.1,100,128,10,optimizer_method=tf.train.AdamOptimizer,activation_function=tf.nn.relu)
# your answer goes here
columns = ['optimizer','learning_rate','activation_function','batch_size','training_accuracy','validation_accuracy','test_accuracy','elapsed_time']
optimizer_options = {'gradient_descent':tf.train.GradientDescentOptimizer,'adam':tf.train.AdamOptimizer}
learning_options = [0.001,0.0001]
activation_options = {'sigmoid':tf.nn.sigmoid,'relu':tf.nn.relu}
batch_options = [50,128]
final_results = []
for optimizer_label in optimizer_options:
optimizer = optimizer_options[optimizer_label]
for learning_rate in learning_options:
for activation_label in activation_options:
activation_function = activation_options[activation_label]
for batch_size in batch_options:
# run train() for this hyper-parameter combination
training_accuracy,validation_accuracy,test_accuracy,elapsed_time = train(
learning_rate = learning_rate,
training_epochs=100,
batch_size = batch_size,
display_step = 10,
optimizer_method = optimizer,
activation_function = activation_function
)
obj_test = {'optimizer':optimizer_label,
'learning_rate':learning_rate,
'activation_function':activation_label,
'batch_size':batch_size,
'training_accuracy':training_accuracy,
'validation_accuracy':validation_accuracy,
'test_accuracy':test_accuracy,
'elapsed_time': elapsed_time
}
final_results.append(obj_test)
final_results
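# A minimal sketch (assumption: the filename 'json.json' matches the cell below,
# which reloads these results): persist the grid-search output to disk.
import json
with open('json.json','w') as output_fp:
json.dump(final_results, output_fp)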
import json
with open('json.json','r') as input_fp:
results = json.load(input_fp)
import matplotlib.pyplot as plt
sigmoid = [x for x in results if x['activation_function']=='sigmoid']
relu =[x for x in results if x['activation_function'] !='sigmoid']
plt.figure(figsize=(15,5))
plt.plot(range(len(sigmoid)),[x['test_accuracy'] for x in sigmoid])
plt.plot(range(len(sigmoid)),[x['test_accuracy'] for x in relu])
plt.legend(['sigmoid','relu'])
plt.ylabel('test accuracy')
plt.xlabel('index run')
plt.show()
import matplotlib.pyplot as plt
a = [x for x in results if x['batch_size']== 128]
b =[x for x in results if x['batch_size'] !=128]
plt.figure(figsize=(15,5))
plt.plot(range(len(a)),[x['test_accuracy'] for x in a])
plt.plot(range(len(a)),[x['test_accuracy'] for x in b])
plt.legend(['128','50'])
plt.ylabel('test accuracy')
plt.xlabel('index run')
plt.show()
import matplotlib.pyplot as plt
a = [x for x in results if x['learning_rate']== 0.0001]
b =[x for x in results if x['learning_rate'] !=0.0001]
plt.figure(figsize=(15,5))
plt.plot(range(len(a)),[x['test_accuracy'] for x in a])
plt.plot(range(len(a)),[x['test_accuracy'] for x in b])
plt.legend(['learning_rate = 0.0001','learning_rate = 0.001'])
plt.ylabel('test accuracy')
plt.xlabel('index run')
plt.show()
import matplotlib.pyplot as plt
a = [x for x in results if x['optimizer']== 'adam']
b =[x for x in results if x['optimizer'] != 'adam']
plt.figure(figsize=(15,5))
plt.plot(range(len(a)),[x['test_accuracy'] for x in a])
plt.plot(range(len(a)),[x['test_accuracy'] for x in b])
plt.legend(['adam','sgd'])
plt.ylabel('test accuracy')
plt.xlabel('index run')
plt.show()
import matplotlib.pyplot as plt
a = [x for x in results if x['optimizer']== 'adam']
b =[x for x in results if x['optimizer'] != 'adam']
plt.figure(figsize=(15,5))
plt.plot(range(len(a)),[x['elapsed_time'] for x in a])
plt.plot(range(len(a)),[x['elapsed_time'] for x in b])
plt.legend(['adam','sgd'])
plt.ylabel('elapsed_time')
plt.xlabel('index run')
plt.show()
import matplotlib.pyplot as plt
a = [x for x in results if x['batch_size']== 128]
b =[x for x in results if x['batch_size'] !=128]
plt.figure(figsize=(15,5))
plt.plot(range(len(a)),[x['elapsed_time'] for x in a])
plt.plot(range(len(a)),[x['elapsed_time'] for x in b])
plt.legend(['128','50'])
plt.ylabel('elapsed_time')
plt.xlabel('index run')
plt.show()
# https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/
# https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks-Part-2/
def LeNet5_Model(data,keep_prob,activation_function=tf.nn.sigmoid):
# layer 1 param
conv1_weights = weight_variable([5,5,1,6])
conv1_bias = bias_variable([6])
# layer 2 param
conv2_weights = weight_variable([5,5,6,16])
conv2_bias = bias_variable([16])
# layer 3 param
layer3_weights = weight_variable([400, 120])
layer3_bias = bias_variable([120])
# layer 4 param
layer4_weights = weight_variable([120, 84])
layer4_bias = bias_variable([84])
# layer 5 param
layer5_weights = weight_variable([84, 10])
layer5_bias = bias_variable([10])
with tf.name_scope('Model'):
with tf.name_scope('Layer1'):
conv1 = tf.nn.conv2d(input=data,filter=conv1_weights,strides=[1,1,1,1],padding='SAME')
print(conv1.shape)
sigmoid1 = activation_function(conv1 + conv1_bias)
pool1 = tf.nn.max_pool(sigmoid1,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='VALID')
print(pool1.shape)
with tf.name_scope('Layer2'):
conv2 = tf.nn.conv2d(input=pool1,filter=conv2_weights,strides=[1,1,1,1],padding='VALID')
print(conv2.shape)
sigmoid2 = activation_function(conv2 + conv2_bias)
pool2 = tf.nn.max_pool(sigmoid2,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='VALID')
print(pool2.shape)
with tf.name_scope('Flatten'):
flat_inputs = tf.contrib.layers.flatten(pool2)
print(flat_inputs.shape)
with tf.name_scope('Layer3'):
out3 = activation_function(tf.matmul(flat_inputs, layer3_weights) + layer3_bias)
with tf.name_scope('Layer4'):
out4 = activation_function(tf.matmul(out3, layer4_weights) + layer4_bias)
with tf.name_scope('Layer5'):
out_drop = tf.nn.dropout(out4, keep_prob)
pred = tf.nn.softmax(tf.matmul(out_drop, layer5_weights) + layer5_bias) # Softmax
return pred
import numpy as np
# Initializing the variables
def train(learning_rate, training_epochs, batch_size, display_step, optimizer_method=tf.train.GradientDescentOptimizer,activation_function=tf.nn.sigmoid):
tf.reset_default_graph()
# Initializing the session
logs_path = 'log_files/' # useful for tensorboard
# tf Graph Input: mnist data image of shape 28*28=784
x = tf.placeholder(tf.float32, [None,28, 28,1], name='InputData')
# 0-9 digits recognition, 10 classes
y = tf.placeholder(tf.float32, [None, 10], name='LabelData')
keep_prob = tf.placeholder(tf.float32)
# Construct model and encapsulating all ops into scopes, making Tensorboard's Graph visualization more convenient
with tf.name_scope('Model'):
# Model
pred = LeNet5_Model(x,keep_prob,activation_function=activation_function)
with tf.name_scope('Loss'):
# Minimize error using cross entropy
if activation_function == tf.nn.sigmoid:
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
else:
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(tf.clip_by_value(pred,1e-10,1.0)), reduction_indices=1))
#cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
with tf.name_scope('SGD'):
# Gradient Descent
optimizer = optimizer_method(learning_rate).minimize(cost)
with tf.name_scope('Accuracy'):
# Accuracy
acc = evaluate(pred, y)
# Initializing the variables
init = tf.global_variables_initializer()
# Create a summary to monitor cost tensor
tf.summary.scalar("Loss", cost)
# Create a summary to monitor accuracy tensor
tf.summary.scalar("Accuracy", acc)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
saver = tf.train.Saver()
print ("Start Training!")
t0 = time()
X_train,Y_train = mnist.train.images.reshape((-1,28,28,1)), mnist.train.labels
X_val,Y_val = mnist.validation.images.reshape((-1,28,28,1)), mnist.validation.labels
# Launch the graph for training
with tf.Session() as sess:
sess.run(init)
# op to write logs to Tensorboard
summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
# train_next_batch shuffle the images by default
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
batch_xs = batch_xs.reshape((-1,28,28,1))
# Run optimization op (backprop), cost op (to get loss value)
# and summary nodes
_, c, summary = sess.run([optimizer, cost, merged_summary_op],
feed_dict={x: batch_xs,
y: batch_ys,keep_prob:0.75})
# Write logs at every iteration
summary_writer.add_summary(summary, epoch * total_batch + i)
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if (epoch+1) % display_step == 0:
print("Epoch: ", '%02d' % (epoch+1), "=====> Loss=", "{:.9f}".format(avg_cost))
acc_train = acc.eval({x: X_train, y: Y_train,keep_prob:1.0})
print("Epoch: ", '%02d' % (epoch+1), "=====> Accuracy Train=", "{:.9f}".format(acc_train))
acc_val = acc.eval({x: X_val, y: Y_val,keep_prob:1.0})
print("Epoch: ", '%02d' % (epoch+1), "=====> Accuracy Validation=", "{:.9f}".format(acc_val))
print ("Training Finished!")
t1 = time()
# Save the variables to disk.
save_path = saver.save(sess, "model.ckpt")
print("Model saved in file: %s" % save_path)
#Your implementation for testing accuracy after training goes here
X_test,Y_test = mnist.test.images.reshape((-1,28,28,1)),mnist.test.labels
acc_test = acc.eval({x: X_test, y: Y_test,keep_prob:1.0})
print("Accuracy Test=", "{:.9f}".format(acc_test))
return acc_train,acc_val,acc_test,t1-t0
train (0.0001,50,128,10,optimizer_method=tf.train.AdamOptimizer,activation_function=tf.nn.relu)
train (0.0001,50,50,10,optimizer_method=tf.train.AdamOptimizer,activation_function=tf.nn.relu)
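# Note (a descriptive comment, not new functionality): dropout is active only while
# training (keep_prob=0.75 in the feed_dict above); every evaluation feeds
# keep_prob=1.0, so the reported accuracies are measured with dropout disabled.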
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def countTrailingZero(x):
# assumes x != 0, otherwise the loop below never terminates
count = 0
while (x & 1) == 0:
x = x >> 1
count += 1
return count
if __name__ == '__main__':
print(countTrailingZero(11))
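# A minimal alternative sketch (not part of the task above): for positive x,
# x & -x isolates the lowest set bit, so bit_length() - 1 counts the trailing
# zeros without an explicit loop.
def count_trailing_zero_bitwise(x):
return (x & -x).bit_length() - 1
print(count_trailing_zero_bitwise(11)) # 0
print(count_trailing_zero_bitwise(96)) # 5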
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Dataframe attributes
Step3: Dataframe rows
Step4: The <code>head()</code> method
Step5: The <code>tail()</code> method
Step6: Selecting and displaying columns in a dataframe
Step7: Applying methods to a dataframe column
Step8: Tasks
Step9: In the code cell below write the code to display the first 10 rows of the dataframe <code>df</code>.
Step10: In the code cell below, select and display the first eight rows from the <code>'Country'</code> and <code>'TB deaths'</code> columns.
Step11: Now go back to the 'Dataframes and CSV files' step in FutureLearn to discuss and mark it complete.
Step12: Task
Step13: Now go back to the 'Comparison operators' step in FutureLearn to discuss and mark it complete.
Step14: If the same columns will be used repeatedly in the program, the code becomes more readable if written as follows
Step15: Task
Step16: Now go back to the 'Bitwise operators' step in FutureLearn to discuss and mark it complete.
Step17: Removing initial spaces
Step18: This shows that <code>' Max Wind SpeedKm/h'</code> is prefixed by a space, as are other columm names such as <code>' Mean Humidity'</code> and <code>' Max Sea Level PressurehPa'</code>.
Step19: Removing extra characters
Step20: <code>'WindDirDegrees<br />'</code> can be changed to <code>'WindDirDegrees'</code> with the <code>rename()</code> method as follows
Step21: To remove the <code>'<br />'</code> html line breaks from the values in the <code>'WindDirDegrees'</code> column you need to use the string method <code>rstrip()</code> which is used to remove characters from the end or 'rear' of a string
Step22: Display the first few rows of the <code>'WindDirDegrees'</code> to confirm the change
Step23: Missing values
Step24: The isnull() method returns True for each row in a column that has a null value. The method can be used to select and display those rows. Scroll the table below to the right to check that the events column is only showing missing values.
Step25: One way to deal with missing values is to replace them by some value. The column method fillna() fills all not available value cells with the value given as argument. In the example below, each missing event is replaced by the empty string.
Step26: All NaN values of Events column are replaced by the empty string.
Step27: The empty dataframe (no rows) confirms there are no more missing event values.
Step28: Note that the table above has fewer than 251 of the original 365 rows, so there must be further null values besides the 114 missing events.
Step29: The type of all the values in a column can be changed using the <code>astype()</code> method. The following code will change the values in the <code>'WindDirDegrees'</code> column from strings (object) to integers (<code>int64</code>).
Step30: The function to_datetime() is needed to change the values in the 'GMT' column from strings (object) to dates (datetime64)
Step31: Values of type datetime64 can be created using the datetime() function where the first integer argument is the year, the second the month and the third the day.
Step32: Queries such as 'Return all the rows where the date is between 8 December and 12 December' can be made
Step33: Tasks
Step34: In the code cell below, write code to get and display all the rows in the dataframe that are between 1 April 2014 and
Step35: In the cell below, write two lines of code to display the first five rows that have a missing value in the 'Max Gust SpeedKm/h' column. Hint
Step36: Now go back to the 'Changing the value types of columns' step in FutureLearn to discuss and mark it complete.
Step37: The plot() method can make a graph of the values in a column. Gridlines are turned on by the grid argument.
Step38: The graph can be made bigger by giving the method a figsize=(x,y) argument where x and y are integers that determine the length of the x-axis and y-axis.
Step39: Multiple lines can be plotted by selecting multiple columns.
Step40: Task
Step41: Now go back to the 'Every picture tells a story' step in FutureLearn to discuss and mark it complete.
Step42: The irow() method can still be used to get and display rows, but now you can now also use the datetime64 index to get a row using the dataframe's ix (index) property, like this
Step43: A query such as 'Return all the rows where the date is between December 8th and December 12th' can now be done succinctly like this
Step44: Now we have a datetime64 index, let's plot 'Max Wind SpeedKm/h' again
Step45: Now it is much clearer that the worst winds were in mid February.
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.version
import warnings
warnings.simplefilter('ignore', FutureWarning)
from pandas import *
show_versions()
df = read_csv('WHO POP TB all.csv')
df.head()
df.head(10)
df.tail(5)
df.columns
df.iloc[0] # first row, index 0
df.iloc[2] # third row, index 2
df.head() # first five rows
df.head(7) # first seven rows
df.tail() # last five rows
df[['Country', 'Population (1000s)']].head()
df['TB deaths'].iloc[2] # third value of deaths column
df['Population (1000s)'].tail() # last five values of population column
df.iloc[54] # 55th row of dataframe
df.head(10)
df[["Country", "TB deaths"]].head(8)
df[df['Population (1000s)'] > 80000].head()
df[df['TB deaths'] > 10000].head()
df[(df['Population (1000s)'] > 80000) & (df['TB deaths'] > 10000)].head()
population = df['Population (1000s)']
deaths = df['TB deaths']
df[(population > 80000) & (deaths > 10000)].head()
population = df['Population (1000s)']
deaths = df['TB deaths']
df[(population <= 500000) | (deaths >= 20000)].head()
london = read_csv('London_2014.csv')
london.head()
london.columns
london = read_csv('London_2014.csv', skipinitialspace=True)
london['WindDirDegrees<br />'].head()
london = london.rename(columns={'WindDirDegrees<br />' : 'WindDirDegrees'})
london['WindDirDegrees'] = london['WindDirDegrees'].str.rstrip('<br />')
london['WindDirDegrees'].head()
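# A minimal sketch on a toy Series (hypothetical values, not from the London file):
# rstrip() strips a *set* of trailing characters, which works above only because the
# wind values end in digits; str.replace() removes the literal substring instead.
demo = Series(['281<br />', '32<br />'])
demo.str.replace('<br />', '')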
london['Events'].tail()
london[london['Events'].isnull()].head()
london['Events'] = london['Events'].fillna('')
london[london['Events'].isnull()]
london.head()
london.dropna().head()
london.dtypes
london['WindDirDegrees'] = london['WindDirDegrees'].astype('int64')
london['GMT'] = to_datetime(london['GMT'])
london.dtypes
london[london['GMT'] == datetime(2014, 6, 4)]
dates = london['GMT']
start = datetime(2014, 12, 8)
end = datetime(2014, 12, 12)
london[(dates >= start) & (dates <= end)]
windir = london['WindDirDegrees']
london[(windir >= 350) | (windir <= 10)]
dates = london['GMT']
start = datetime(2014, 4, 1)
end = datetime(2014, 4, 11)
london[(dates >= start) & (dates <= end)]
london[london['Max Gust SpeedKm/h'].isnull()].head(5)
%matplotlib inline
london[['Max Wind SpeedKm/h']].plot(grid=True);
# Add a semicolon at the end of the matplotlib statement to suppress its textual output.
london[['Max Wind SpeedKm/h']].plot(grid=True, figsize=(10,5));
london[['Max Wind SpeedKm/h', 'Mean Wind SpeedKm/h']].plot(grid=False, figsize=(20,8));
london[['Max TemperatureC', 'Mean TemperatureC', 'Min TemperatureC']].plot(grid=False, figsize=(20,8));
london.index = london['GMT']
london.head(2)
london.loc[datetime(2014, 1, 1)]
london.loc[datetime(2014,12,8) : datetime(2014,12,12)]
#The meaning of the above code is: get the rows between and including
#the indices datetime(2014,12,8) and datetime(2014,12,12)
london[['Max Wind SpeedKm/h']].plot(grid=True, figsize=(20,8));
mean_humidity = london[['Mean Humidity']]
start_date = datetime(2014, 3, 1)
end_date = datetime(2014, 5, 31)
spring_df = mean_humidity.loc[start_date : end_date]
spring_df.plot(grid=True, figsize=(20,8));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
Step2: Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
Step3: Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's '.play() function.
Step4: Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
Step5: Now we are ready to start the game. First we create an empy list to store the collection of players in the game.
Step6: Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
Step7: Once the players are created, we will create a loop to run the game a certain amount of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
Step8: Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players.
|
<ASSISTANT_TASK:>
Python Code:
import random
gameStake = 50
cards = range(10)
class Player:
# two local variables store a unique ID for each player and the player's current 'pot' of money
ID = None
pot = 0
# in the __init__() function, use the two input variables to initialize the ID and starting pot of each player
def __init__(self, inputID, startingPot):
self.ID = inputID
self.pot = startingPot
# create a function for playing the game. This function starts by taking an input for the dealer's card
# and picking a random number from the 'cards' list for the player's card
def play(self, dealerCard):
# we use the random.choice() function to select a random item from a list
playerCard = random.choice(cards)
# here we test the player's card value against the dealer card and return a statement
# saying whether the player won or lost the hand, adding or subtracting the stake
# so that the 'pot' variable tracks the player's money (ties count as a win here)
if playerCard < dealerCard:
self.pot -= gameStake
return 'player ' + str(self.ID) + ' loses a hand'
else:
self.pot += gameStake
return 'player ' + str(self.ID) + ' wins a hand'
# accessor function to return the current value of the player's pot
def returnPot(self):
return self.pot
# accessor function to return the player's ID
def returnID(self):
return self.ID
def playHand(players):
for player in players:
dealerCard = random.choice(cards)
# execute the play() function for each player using the dealer card, and print out the results
print player.play(dealerCard)
def checkBalances(players):
for player in players:
# print out each player's balance by using the player's accessor functions
print 'player ' + str(player.returnID()) + ' has a balance of ' + str(player.returnPot())
players = []
for i in range(5):
players.append(Player(i, 500))
for i in range(10):
print ''
print 'start game ' + str(i)
playHand(players)
print ''
print 'game results:'
checkBalances(players)
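# Quick sanity check (a minimal sketch, not part of the exercise): every hand moves
# exactly one stake, so each balance must differ from the starting 500 by a
# multiple of gameStake.
for player in players:
assert (player.returnPot() - 500) % gameStake == 0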
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Section class, which contains all calculations
Step2: Initialization of sympy symbolic tool and pint for dimension analysis (not really implemented rn as not directly compatible with sympy)
Step3: Define sympy parameters used for geometric description of sections
Step4: We also define numerical values for each symbol in order to plot scaled section and perform calculations
Step5: First example
Step6: Define section and perform first calculations
Step7: Plot of S1 section in original reference frame
Step8: Plot of S1 section in inertial reference Frame
Step9: Standard Solution
Step10: Loads
Step11: Autosolutions
Step12: Compute H matrix
Step13: Compute $\tilde{K}$ and $\tilde{M}$ as
Step14: Compute eigenvalues and eigenvectors as
Step15: Eigenvalues correspond to $\beta^2$
Step16: Eigenvectors are orthogonal as expected
Step17: From $\beta_i^2$ we compute
|
<ASSISTANT_TASK:>
Python Code:
from pint import UnitRegistry
import sympy
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import sys
%matplotlib inline
from IPython.display import display
from Section import Section
ureg = UnitRegistry()
sympy.init_printing()
A, A0, t, t0, a, b, h, L, E, G = sympy.symbols('A A_0 t t_0 a b h L E G', positive=True)
values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 400 * ureg.millimeter), \
(b, 20 * ureg.millimeter),(h, 150 * ureg.millimeter),(L, 2000 * ureg.millimeter), \
(t, 0.8 *ureg.millimeter),(E, 72e3 * ureg.MPa), (G, 28e3 * ureg.MPa)]
datav = [(v[0],v[1].magnitude) for v in values]
stringers = {1:[(a,h),A],
2:[(sympy.Rational(1,2)*a,h),A],
3:[(sympy.Integer(0),h),A],
4:[(sympy.Integer(0),sympy.Integer(0)),A],
5:[(sympy.Rational(1,2)*a,sympy.Integer(0)),A],
6:[(a,sympy.Integer(0)),A]}
panels = {(1,2):t,
(3,4):t,
(4,5):t,
(5,6):t,
(6,1):t,
(5,2):t}
S1 = Section(stringers, panels)
S1.cycles
start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos)
plt.arrow(0,0,20,0)
plt.arrow(0,0,0,20)
#plt.text(0,0, 'CG', fontsize=24)
plt.axis('equal')
plt.title("Section in starting reference Frame",fontsize=16);
positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
x_ct, y_ct = S1.ct.subs(datav)
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, pos=positions)
plt.plot([0],[0],'o',ms=12,label='CG')
plt.plot([x_ct],[y_ct],'^',ms=12, label='SC')
#plt.text(0,0, 'CG', fontsize=24)
#plt.text(x_ct,y_ct, 'SC', fontsize=24)
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in pricipal reference Frame",fontsize=16);
Tx, Ty, Nz, Mx, My, Mz, F, ry, ry, mz = sympy.symbols('T_x T_y N_z M_x M_y M_z F r_y r_x m_z')
S1.set_loads(_Tx=0, _Ty=0, _Nz=0, _Mx=0, _My=0, _Mz=Mz)
S1.compute_stringer_actions()
S1.compute_panel_fluxes();
S1.N
S1.q
S1.set_loads(_Tx=0, _Ty=Ty, _Nz=0, _Mx=Mx, _My=0, _Mz=0)
S1.compute_stringer_actions()
S1.compute_panel_fluxes();
S1.N
S1.q
S1.compute_L()
S1.L
S1.compute_H()
S1.H
S1.compute_KM(A,a/2,t,datav)
S1.Ktilde
S1.Mtilde
sol_data = (sympy.N(S1.Ktilde.subs(datav).inv())*(sympy.N(S1.Mtilde.subs(datav)))).eigenvects()
β2 = [sol[0] for sol in sol_data]
β2
X = [sol[2][0] for sol in sol_data]
X
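# A minimal numerical check (sketch; assumes distinct eigenvalues and symmetric K, M):
# generalized eigenvectors should be Mtilde-orthogonal, e.g. for the first pair:
# sympy.N((X[0].T * S1.Mtilde.subs(datav) * X[1])[0])  # should be ~0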
λ = [sympy.N(sympy.sqrt(E*A*b/(G*t)*βi).subs(datav)) for βi in β2]
λ
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Handy function for manufacturing a pseudo-genome with guaranteed hits from simulated data
Step2: Insert SE and PE reads into simulated genome
Step3: Just do SE
Step4: Insert PE reads into the simulated genome
|
<ASSISTANT_TASK:>
Python Code:
## Utility functions.
def revcomp(sequence):
"returns reverse complement of a string"
sequence = sequence[::-1].strip()\
.replace("A", "t")\
.replace("T", "a")\
.replace("C", "g")\
.replace("G", "c").upper()
return sequence
def comp(seq):
"""returns a seq with small complement"""
return seq.replace("A", 't')\
.replace('T', 'a')\
.replace('C', 'g')\
.replace('G', 'c')\
.replace('n', 'Z')\
.upper()\
.replace("Z", "n")\
.replace("S", "s")
import itertools
import gzip
import random
from Bio import SeqIO
RAD_DATA = "./sim_rad_test_R1_.fastq.gz"
RAD_DATA_R1 = "./sim_pairddradmerge_R1_.fastq.gz"
RAD_DATA_R2 = "./sim_pairddradmerge_R2_.fastq.gz"
INPUT_CHR = "/Volumes/WorkDrive/ipyrad/refhacking/MusMT.fa"
OUTPUT_CHR = "./sim_mt_genome.fa"
N_INSERTS = 25
INSERT_SIZE = 50
## Get the SE reads
with gzip.open( RAD_DATA, 'rb' ) as infile:
seqs = []
for i, quatro in enumerate( itertools.izip_longest(*[iter(infile)]*4)):
seqs.append(quatro[1])
SE_samp = random.sample(seqs, N_INSERTS)
## Get the PE reads
with gzip.open( RAD_DATA_R1, 'rb' ) as in_R1, gzip.open( RAD_DATA_R2, 'rb' ) as in_R2:
## create iterators to sample 4 lines at a time
quart1 = itertools.izip(*[iter(in_R1)]*4)
## pair second read files, quarts samples both
quart2 = itertools.izip(*[iter(in_R2)]*4)
quarts = itertools.izip(quart1, quart2)
seqs = []
while True:
try:
quart = quarts.next()
except StopIteration:
break
seqs.append( [quart[0][1].strip(), quart[1][1].strip()] )
PE_samp = random.sample(seqs, N_INSERTS)
## Get the input fasta file (Mouse mtdna)
record = SeqIO.read(INPUT_CHR, "fasta")
record.seq = record.seq + record.seq
lenchr = len(record)
print(lenchr)
## Get the positions to insert the hits at, they will be non-overlapping
## divide the total length of the chromosome by the length of the total of
## all the reads to insert # of SE reads + PE reads 2*Readlength + innermate dist
# "working" locs = [x*(lenchr/(N_INSERTS+(N_INSERTS*2+INSERT_SIZE))) for x in range(N_INSERTS*2)]
locs = [x*((lenchr-(INSERT_SIZE*N_INSERTS))/(N_INSERTS+(N_INSERTS*2))) for x in range(N_INSERTS*2)]
## Insert SE
for i in range(N_INSERTS):
loc = locs[i]
record.seq = record.seq[:loc]+SE_samp[i].strip("\n")+record.seq[loc:]
print(locs)
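## Sanity check (a minimal sketch): the spacing above should yield strictly
## increasing, unique anchor points.
assert locs == sorted(locs) and len(locs) == len(set(locs))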
print(len(record.seq))
samp = PE_samp
# Insert PE
for i in range(N_INSERTS):
loc = locs[i+N_INSERTS]
# print("do loc - {}".format(loc))
# For each insertion point get all sequence before it, then insert the first read,
# then insert INSERT_SIZE more bases from the original sequence, then paste the second
# read, then the rest of the sequence.
# For Read 2 [::-1] gives the reverse
record.seq = record.seq[:loc]+PE_samp[i][0]+record.seq[loc:loc+INSERT_SIZE]+\
revcomp(PE_samp[i][1])+record.seq[loc+INSERT_SIZE:] # revcomp
## Methods for generating other types of reads besides illumina pe
#PE_samp[i][1][::1]+record.seq[loc+INSERT_SIZE:] # rev
#comp(PE_samp[i][1])+record.seq[loc+INSERT_SIZE:] # comp
print(len(record.seq))
output_handle = open(OUTPUT_CHR, "w")
SeqIO.write(record, output_handle, "fasta")
output_handle.close()
## Make SE pseudo-genome
import itertools
import gzip
import random
from Bio import SeqIO
RAD_DATA = "./sim_rad_test_R1_.fastq.gz"
INPUT_CHR = "/Volumes/WorkDrive/ipyrad/refhacking/MusMT.fa"
OUTPUT_CHR = "./sim_mt_genome.fa"
N_INSERTS = 25
with gzip.open( RAD_DATA, 'rb' ) as infile:
seqs = []
for i, quatro in enumerate( itertools.izip_longest(*[iter(infile)]*4)):
seqs.append(quatro[1])
samp = random.sample(seqs, N_INSERTS)
record = SeqIO.read(INPUT_CHR, "fasta")
lenchr = len(record)
print(lenchr)
## Evenly space the hits so they don't overlap
locs = [x*(lenchr/N_INSERTS) for x in range(N_INSERTS)]
print(locs)
## The old way where they could sometimes overlap
#locs = random.sample(range(lenchr), N_INSERTS)
for i, loc in enumerate(locs):
record.seq = record.seq[:loc]+samp[i].strip("\n")+record.seq[loc:]
print(len(record.seq))
output_handle = open(OUTPUT_CHR, "w")
SeqIO.write(record, output_handle, "fasta")
output_handle.close()
import itertools
import gzip
import random
from Bio import SeqIO
RAD_DATA_R1 = "./sim_pairddradmerge_R1_.fastq.gz"
RAD_DATA_R2 = "./sim_pairddradmerge_R2_.fastq.gz"
INPUT_CHR = "./sim_mt_genome.fa"
OUTPUT_CHR = "./sim_mt_genome_SE+PE.fa"
N_INSERTS = 25
INSERT_SIZE = 50
with gzip.open( RAD_DATA_R1, 'rb' ) as in_R1, gzip.open( RAD_DATA_R2, 'rb' ) as in_R2:
## create iterators to sample 4 lines at a time
quart1 = itertools.izip(*[iter(in_R1)]*4)
## pair second read files, quarts samples both
quart2 = itertools.izip(*[iter(in_R2)]*4)
quarts = itertools.izip(quart1, quart2)
seqs = []
while True:
try:
quart = quarts.next()
except StopIteration:
break
seqs.append( [quart[0][1].strip(), quart[1][1].strip()] )
samp = random.sample(seqs, N_INSERTS)
record = SeqIO.read(INPUT_CHR, "fasta")
lenchr = len(record)
print(lenchr)
locs = random.sample(range(lenchr), N_INSERTS)
locs.sort()
# Get a random sample of base positions to insert at from across the sim genome
for i, loc in enumerate(locs):
print("do loc - {}".format(loc))
print(samp[i][0], revcomp(samp[i][1]))
# For each insertion point get all sequence before it, then insert the first read,
# then insert INSERT_SIZE more bases from the original sequence, then paste the second
# read, then the rest of the sequence.
# For Read 2 [::-1] gives the reverse
record.seq = record.seq[:loc]+samp[i][0]+record.seq[loc:loc+INSERT_SIZE]+\
revcomp(samp[i][1])+record.seq[loc+INSERT_SIZE:]
#samp[i][1].strip("\n")[::-1]+record.seq[loc+INSERT_SIZE:]
print(len(record.seq))
output_handle = open(OUTPUT_CHR, "w")
SeqIO.write(record, output_handle, "fasta")
output_handle.close()
import gzip
import shutil
outfiles = ["/private/tmp/ipyrad-test-pair/test-refseq-pair_edits/3L0_R1_.fastq.gz", "/private/tmp/ipyrad-test-pair/test-refseq-pair_edits/3L0_R2_.fastq.gz"]
for f in outfiles:
print(f)
# NOTE: reading and gzip-writing the same path would truncate the input; this
# assumes the edits were written uncompressed at the path without the .gz suffix
# (an assumption made here to fix the original same-file bug)
with open(f[:-3], 'rb') as f_in, gzip.open(f, 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
sq = "AAATGCGCCGGGCGCGAA"
print("AAATGCGCCGGGCGCGAA"[::-1])
print(revcomp(sq))
sq[::-1].strip().replace("T","a")
print((PE_samp[1][1])[::-1])
print(PE_samp[1][1])
record = SeqIO.read(INPUT_CHR, "fasta")
record.seq = record.seq + record.seq
print record.seq
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1-hr', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
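# Hypothetical example (illustration only; replace with the real author):
# DOC.set_author("Jane Doe", "jane.doe@example.org")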
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
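# Hypothetical example (illustration only): choose one of the valid choices above, e.g.
# DOC.set_value("TEOS-10")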
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create individual-based model
Step2: Generate some fake data
Step3: Check the shape of the output. $T=100$, so the number of pairs should be $T-1$ and the second dimension should be $2$
Step4: Set-up model
Step5: We'll begin by misspecifying the priors: a uniform prior for $R_0$ and one for the recovery time, both a long way from the true values. An error should be raised indicating that the prior is misspecified
Step6: Let's try that again with properly specified priors that include the true value.
Step7: Print out the summary of the parameters (median and 95% lower and upper percentiles)
Step8: Plot the distribution of the particles
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import ABCPRC as prc
import seaborn as sns
import scipy.stats as stats
def createData(beta,gamma,n=100,T=100,I0=1):
S,I = np.zeros(T),np.zeros(T)
S[0] = n-I0
I[0] = I0
eps=0.1
for i in np.arange(1,T):
if beta*S[i-1]*I[i-1]/n<0: print(beta*S[i-1]*I[i-1]/n)
infections = np.random.poisson(lam=eps*beta*S[i-1]*I[i-1]/n)
recoveries = np.random.poisson(lam=eps*gamma*I[i-1])
S[i] = np.max((0,S[i-1] - infections + recoveries))
I[i] = np.max((0,I[i-1] + infections - recoveries))
return S,I
def ibm(*ps):
n=100
gamma = 1./ps[1]
beta = ps[0]*gamma
S,I = createData(beta,gamma,n=n,T=100,I0=20)
if I[-1]==0:
return np.vstack((0*I[:-1],0*I[1:])).T
else:
return np.vstack((I[:-1],I[1:])).T
r0 = 1.5
rec_time = 1.
gamma = 1./rec_time
beta = r0*gamma
xs = ibm(r0, rec_time)  # ibm expects (R0, recovery time); it derives gamma internally
print(xs.shape)
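# Added sanity check (illustrative): with T=100 the model should return
# T-1 = 99 consecutive (I_t, I_{t+1}) pairs, each row of dimension 2.
assert xs.shape == (99, 2)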
plt.plot(xs[:,0]);
plt.hist(stats.norm(loc=5.,scale=1.).rvs(2000));
m = prc.ABC()
wrong_priors = [stats.uniform(loc=0.,scale=0.1).rvs,stats.uniform(loc=5.,scale=0.1).rvs]
m.setup(modelFunc=ibm,xs=xs,priors=wrong_priors,method='Adaptive',toln=12)
m.run(10)
m = prc.ABC()
priors = [stats.uniform(loc=0.,scale=2.).rvs,stats.uniform(loc=0.5,scale=2.).rvs]
m.setup(modelFunc=ibm,xs=xs,priors=priors,method='Adaptive',toln=10)
m.run(100)
summary = m.fitSummary()
m.trace(plot=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step3: Data simulation
Step4: Let's simulate two timeseries and plot some basic information about them.
Step5: Now we put the signals at two locations on the cortex. We construct a SourceEstimate object that describes the signal at the cortical level.
Step6: Before we simulate the sensor-level data, let's define a signal-to-noise ratio.
Step7: Now we run the signal through the forward model to obtain simulated sensor data.
Step8: We create an Epochs object containing two trials: one with the signal and one with just noise.
Step9: Power mapping
Step10: We will now compute the cortical power map at 10 Hz using a DICS beamformer.
|
<ASSISTANT_TASK:>
Python Code:
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from scipy.signal import welch, coherence
from mayavi import mlab
from matplotlib import pyplot as plt
import mne
from mne.simulation import simulate_raw
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
# We use the MEG and MRI setup from the MNE-sample dataset
data_path = sample.data_path(download=False)
subjects_dir = op.join(data_path, 'subjects')
mri_path = op.join(subjects_dir, 'sample')
# Filenames for various files we'll be using
meg_path = op.join(data_path, 'MEG', 'sample')
raw_fname = op.join(meg_path, 'sample_audvis_raw.fif')
trans_fname = op.join(meg_path, 'sample_audvis_raw-trans.fif')
src_fname = op.join(mri_path, 'bem/sample-oct-6-src.fif')
bem_fname = op.join(mri_path, 'bem/sample-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
cov_fname = op.join(meg_path, 'sample_audvis-cov.fif')
# Seed for the random number generator
rand = np.random.RandomState(42)
sfreq = 50. # Sampling frequency of the generated signal
times = np.arange(10. * sfreq) / sfreq # 10 seconds of signal
n_times = len(times)
def coh_signal_gen():
    """Generate an oscillating signal.

    Returns
    -------
    signal : ndarray
        The generated signal.
    """
t_rand = 0.001 # Variation in the instantaneous frequency of the signal
std = 0.1 # Std-dev of the random fluctuations added to the signal
base_freq = 10. # Base frequency of the oscillators in Hertz
n_times = len(times)
# Generate an oscillator with varying frequency and phase lag.
signal = np.sin(2.0 * np.pi *
(base_freq * np.arange(n_times) / sfreq +
np.cumsum(t_rand * rand.randn(n_times))))
# Add some random fluctuations to the signal.
signal += std * rand.randn(n_times)
# Scale the signal to be in the right order of magnitude (~100 nAm)
# for MEG data.
signal *= 100e-9
return signal
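# Quick illustrative check (added): the simulated amplitudes were scaled to
# ~100 nAm, so values should be of order 1e-7 in SI units.
assert np.abs(coh_signal_gen()).max() < 1e-6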
signal1 = coh_signal_gen()
signal2 = coh_signal_gen()
fig, axes = plt.subplots(2, 2, figsize=(8, 4))
# Plot the timeseries
ax = axes[0][0]
ax.plot(times, 1e9 * signal1, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (Am)',
title='Signal 1')
ax = axes[0][1]
ax.plot(times, 1e9 * signal2, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], title='Signal 2')
# Power spectrum of the first timeseries
f, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256)
ax = axes[1][0]
# Only plot the first 100 frequencies
ax.plot(f[:100], 20 * np.log10(p[:100]), lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 99]],
ylabel='Power (dB)', title='Power spectrum of signal 1')
# Compute the coherence between the two timeseries
f, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64)
ax = axes[1][1]
ax.plot(f[:50], coh[:50], lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 49]], ylabel='Coherence',
title='Coherence between the timeseries')
fig.tight_layout()
# The locations on the cortex where the signal will originate from. These
# locations are indicated as vertex numbers.
source_vert1 = 146374
source_vert2 = 33830
# The timeseries at each vertex: one part signal, one part silence
timeseries1 = np.hstack([signal1, np.zeros_like(signal1)])
timeseries2 = np.hstack([signal2, np.zeros_like(signal2)])
# Construct a SourceEstimate object that describes the signal at the cortical
# level.
stc = mne.SourceEstimate(
np.vstack((timeseries1, timeseries2)), # The two timeseries
vertices=[[source_vert1], [source_vert2]], # Their locations
tmin=0,
tstep=1. / sfreq,
subject='sample', # We use the brain model of the MNE-Sample dataset
)
snr = 1. # Signal-to-noise ratio. Decrease to add more noise.
# Read the info from the sample dataset. This defines the location of the
# sensors and such.
info = mne.io.read_info(raw_fname)
info.update(sfreq=sfreq, bads=[])
# Only use gradiometers
picks = mne.pick_types(info, meg='grad', stim=True, exclude=())
mne.pick_info(info, picks, copy=False)
# This is the raw object that will be used as a template for the simulation.
raw = mne.io.RawArray(np.zeros((info['nchan'], len(stc.times))), info)
# Define a covariance matrix for the simulated noise. In this tutorial, we use
# a simple diagonal matrix.
cov = mne.cov.make_ad_hoc_cov(info)
cov['data'] *= (20. / snr) ** 2 # Scale the noise to achieve the desired SNR
# Simulate the raw data, with a lowpass filter on the noise
raw = simulate_raw(raw, stc, trans_fname, src_fname, bem_fname, cov=cov,
random_state=rand, iir_filter=[4, -4, 0.8])
t0 = raw.first_samp # First sample in the data
t1 = t0 + n_times - 1 # Sample just before the second trial
epochs = mne.Epochs(
raw,
events=np.array([[t0, 0, 1], [t1, 0, 2]]),
event_id=dict(signal=1, noise=2),
tmin=0, tmax=10,
preload=True,
)
# Plot some of the channels of the simulated data that are situated above one
# of our simulated sources.
picks = mne.pick_channels(epochs.ch_names, mne.read_selection('Left-frontal'))
epochs.plot(picks=picks)
# Compute the inverse operator
fwd = mne.read_forward_solution(fwd_fname)
inv = make_inverse_operator(epochs.info, fwd, cov)
# Apply the inverse model to the trial that also contains the signal.
s = apply_inverse(epochs['signal'].average(), inv)
# Take the root-mean square along the time dimension and plot the result.
s_rms = np.sqrt((s ** 2).mean())
brain = s_rms.plot('sample', subjects_dir=subjects_dir, hemi='both', figure=1,
size=600)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(source_vert1, coords_as_verts=True, hemi='lh')
brain.add_foci(source_vert2, coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
mlab.view(0, 0, 550, [0, 0, 0])
mlab.title('MNE-dSPM inverse (RMS)', height=0.9)
# Estimate the cross-spectral density (CSD) matrix on the trial containing the
# signal.
csd_signal = csd_morlet(epochs['signal'], frequencies=[10])
# Compute the spatial filters for each vertex, using two approaches.
filters_approach1 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', normalize_fwd=True,
inversion='single', weight_norm=None)
print(filters_approach1)
filters_approach2 = make_dics(
info, fwd, csd_signal, reg=0.1, pick_ori='max-power', normalize_fwd=False,
inversion='matrix', weight_norm='unit-noise-gain')
print(filters_approach2)
# You can save these to disk with:
# filters_approach1.save('filters_1-dics.h5')
# Compute the DICS power map by applying the spatial filters to the CSD matrix.
power_approach1, f = apply_dics_csd(csd_signal, filters_approach1)
power_approach2, f = apply_dics_csd(csd_signal, filters_approach2)
# Plot the DICS power maps for both approaches.
for approach, power in enumerate([power_approach1, power_approach2], 1):
brain = power.plot('sample', subjects_dir=subjects_dir, hemi='both',
figure=approach + 1, size=600)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(source_vert1, coords_as_verts=True, hemi='lh')
brain.add_foci(source_vert2, coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
mlab.view(0, 0, 550, [0, 0, 0])
mlab.title('DICS power map, approach %d' % approach, height=0.9)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: All that remains is to pass a string with the LaTeX expression so that the function returns the expression as code that Sympy understands. NOTE
Step2: Examples
Step3: Creating a numpy array element by element
Step4: Creating the array directly
Step5: With a dataframe
Step6: More than one system in the dataframe
Step7: Dataframe in LaTeX
|
<ASSISTANT_TASK:>
Python Code:
from sympy.parsing.latex import parse_latex
parse_latex(r'\frac{x^2}{\sqrt{y}}')
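# Illustrative follow-up (added; assumes nothing beyond sympy itself): the
# parsed result is an ordinary SymPy expression, so it can be substituted
# into and simplified like any other.
from sympy import Symbol, simplify
expr = parse_latex(r'\frac{x^2}{\sqrt{y}}')
simplify(expr.subs({Symbol('x'): 2, Symbol('y'): 4}))  # evaluates to 2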
import numpy as np
import pandas as pd
x = np.empty([3], dtype=object)
x
x[0]=parse_latex(r'\frac{x^2}{\sqrt{y}}')
x
y = list(np.array([parse_latex(r'x+y=6'),parse_latex(r'x-y=0')]))
y
from sympy import solve, latex
latex(solve(list(y)))
z = pd.DataFrame(np.array([parse_latex(r'x+y=8'),parse_latex(r'x-y=2')]))
z
z.columns
for i in z.index:
print(z.loc[i][0])
latex(solve(z.loc[i][0] for i in z.index))
solve(z.loc[i][0] for i in z.index)
for i in z.index: print(z.loc[i][0])
v = pd.DataFrame(np.array([[parse_latex(r'x+y=6'),parse_latex(r'x-y=0')]]))
v
for i in v.index:
print(list(v.iloc[i][:]))
print(solve(list(v.loc[i][:])))
z = pd.DataFrame(np.array([['x+y=6','x-y=0']]))
z
z.applymap(parse_latex)  # applymap parses each cell; plain apply would pass whole columns to parse_latex
# Code to export the notebook to markdown (optional)
!jupyter nbconvert --to=markdown Probando_parse_latex.ipynb
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now you can set up your root folder for Caffe below if you put it somewhere else. You should only be changing the path that's being set for CAFFE_ROOT.
Step2: Here's where you pick your model. There are several listed below such as AlexNet, GoogleNet, and Flickr Style. Uncomment the model you want to download.
Step3: As a reminder, in Caffe, the deploy model is saved in two parts
Step4: Now that we have both the deploy.prototxt and .caffemodel files, we can translate the model to the Caffe2 saved model format, which consists of two serialized protobuf files
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
print("Required modules imported.")
# You should have checked out original Caffe
# git clone https://github.com/BVLC/caffe.git
# change the CAFFE_ROOT directory below accordingly
CAFFE_ROOT = os.path.expanduser('~/caffe')
# Make sure Caffe exists where you specified
if not os.path.exists(CAFFE_ROOT):
print("Houston, you may have a problem.")
print("Did you change CAFFE_ROOT to point to your local Caffe repo?")
print("Try running: git clone https://github.com/BVLC/caffe.git")
# Pick a model, and if you don't have it, it will be downloaded
# format below is the model's folder, model's dataset inside that folder
#MODEL = 'bvlc_alexnet', 'bvlc_alexnet.caffemodel'
#MODEL = 'bvlc_googlenet', 'bvlc_googlenet.caffemodel'
#MODEL = 'finetune_flickr_style', 'finetune_flickr_style.caffemodel'
#MODEL = 'bvlc_reference_caffenet', 'bvlc_reference_caffenet.caffemodel'
MODEL = 'bvlc_reference_rcnn_ilsvrc13', 'bvlc_reference_rcnn_ilsvrc13.caffemodel'
# Scripts to download the models reside here (~/caffe/models)
# After downloading the data will exist with the script
CAFFE_MODELS = os.path.join(CAFFE_ROOT, 'models')
# this is like: ~/caffe/models/bvlc_alexnet/deploy.prototxt
CAFFE_MODEL_FILE = os.path.join(CAFFE_MODELS, MODEL[0], 'deploy.prototxt')
# this is like: ~/caffe/models/bvlc_alexnet/bvlc_alexnet.caffemodel
CAFFE_PRETRAINED = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[1])
# If the model folder doesn't have the goods, then download it
# This is usually a pretty big file with the .caffemodel extension
if not os.path.exists(CAFFE_PRETRAINED):
print(CAFFE_PRETRAINED + " not found. Attempting download. Be patient...\n")
os.system(
os.path.join(CAFFE_ROOT, 'scripts/download_model_binary.py') +
' ' +
os.path.join(CAFFE_ROOT, 'models', MODEL[0]))
else:
print("You already have " + CAFFE_PRETRAINED + ", skipping download...\n")
# If the .prototxt file was missing then you're in trouble; cannot continue
if not os.path.exists(CAFFE_MODEL_FILE):
print("Caffe model file, " + CAFFE_MODEL_FILE + " was not found!")
else:
print("Both the deploy.prototxt and .caffemodel files were found, ready to continue!")
# Now we have init net and predict net .pb files to use
# Set the CAFFE2_ROOT
CAFFE2_ROOT = os.path.expanduser('~/caffe2')
init_net_out = os.path.join(CAFFE_MODELS, MODEL[0], 'init_net.pb')
predict_net_out = os.path.join(CAFFE_MODELS, MODEL[0], 'predict_net.pb')
# Run the caffe_translator.py script to translate to Caffe2 if files do not already exist
if (not os.path.exists(init_net_out)) or (not os.path.exists(predict_net_out)):
print("Protobuf files not found. Running translation. Be patient...\n")
os.system(
'python' + ' ' + os.path.join(CAFFE2_ROOT, 'caffe2/python/caffe_translator.py') +
' ' + CAFFE_MODEL_FILE + ' ' + CAFFE_PRETRAINED + ' ' +
'--init_net' + ' ' + init_net_out + ' ' +
'--predict_net' + ' ' + predict_net_out
)
else:
print("You already have both .pb files, skipping translation...\n")
# Print if files are where they are expected to be
if (not os.path.exists(init_net_out)) or (not os.path.exists(predict_net_out)):
print(init_net_out + " and/or " + predict_net_out + " was NOT FOUND!")
else:
print("Protobuf files can be found at: \n",
os.path.join(CAFFE_MODELS, MODEL[0])), "!"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using pandas
Step2: We can now make plots to compare the parameters at the two XGM positions.
Step3: We can also export the dataframe to a CSV file - or any other format pandas supports - for further analysis with other tools.
Step4: Using xarray
Step5: The PES consists of 16 spectrometers arranged in a circle around the beamline. We'll retrieve the data for two of these, separated by 90°. N and E refer to their positions in the circle, although these are not literally North and East.
Step6: We'll get a few other values from slow data to annotate the plot.
Step7: Now we can average the spectra across the trains in this run, and plot them.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from karabo_data import RunDirectory
import matplotlib.pyplot as plt
import numpy as np
import re
import xarray as xr
run = RunDirectory('/gpfs/exfel/exp/SA1/201830/p900025/raw/r0150/')
df = run.get_dataframe(fields=[("*_XGM/*", "*.i[xy]Pos"), ("*_XGM/*", "*.photonFlux")])
df.head()
df.plot.scatter(x='SA1_XTD2_XGM/XGM/DOOCS/pulseEnergy.photonFlux', y='SPB_XTD9_XGM/XGM/DOOCS/pulseEnergy.photonFlux')
ax = df.plot.scatter(x='SA1_XTD2_XGM/XGM/DOOCS/beamPosition.ixPos', y='SPB_XTD9_XGM/XGM/DOOCS/beamPosition.ixPos')
ay = df.plot.scatter(x='SA1_XTD2_XGM/XGM/DOOCS/beamPosition.iyPos', y='SPB_XTD9_XGM/XGM/DOOCS/beamPosition.iyPos')
df.to_csv('xtd2_xtd9_xgm_r150.csv')
run = RunDirectory('/gpfs/exfel/exp/SA3/201830/p900027/raw/r0067/')
run.get_array('SA3_XTD10_PES/ADC/1:network', 'digitizers.channel_4_A.raw.samples')
data_n = run.get_array('SA3_XTD10_PES/ADC/1:network', 'digitizers.channel_4_A.raw.samples')
data_e = run.get_array('SA3_XTD10_PES/ADC/1:network', 'digitizers.channel_3_A.raw.samples')
data_n, data_e = xr.align(data_n, data_e, join='inner')
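# Added note: join='inner' keeps only the train IDs present in BOTH arrays,
# so the two spectrometer arrays now share an identical trainId axis.
assert (data_n.trainId == data_e.trainId).all()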
nsamples = data_n.shape[1]
data_n.shape
# Get the first values from four channels measuring voltage
electr = run.get_dataframe([('SA3_XTD10_PES/MCPS/MPOD', 'channels.U20[0123].measurementSenseVoltage')])
electr_voltages = electr.iloc[0].sort_index()
electr_voltages
gas_interlocks = run.get_dataframe([('SA3_XTD10_PES/DCTRL/*', 'interlock.AActionState')])
# Take the first row of the gas interlock data and find which gas was unlocked
row = gas_interlocks.iloc[0]
print(row)
if (row == 0).any():
key = row[row == 0].index[0]
target_gas = re.search(r'(XENON|KRYPTON|NITROGEN|NEON)', key).group(1).title()
else:
target_gas = 'No gas'
x = np.linspace(0, 0.0005*nsamples, nsamples, endpoint=False)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, dataset, start_time in zip(axes, [data_n, data_e], [15.76439411, 15.76289411]):
ax.plot(x, dataset.sum(axis=0))
ax.yaxis.major.formatter.set_powerlimits((0, 0))
ax.set_xlim(15.75, 15.85)
ax.set_xlabel('time ($\mu$s)')
ax.axvline(start_time, color='red', linestyle='dotted', label='Start time')
ax.axvline(start_time + 0.0079, color='magenta', linestyle='dotted', label='Neon K 1s')
ax.axvline(start_time + 0.041, color='black', label='Auger peak')
ax.legend()
axes[0].set_title('Spectrometer 00 N')
axes[1].set_title('Spectrometer 04 E')
fig.suptitle('{gas}; 900 eV; [A B C D] = [{voltages[0]:.1f} {voltages[1]:.1f} {voltages[2]:.1f} {voltages[3]:.1f}] V; run 67'
.format(gas=target_gas, voltages=electr_voltages.values), y=1.05);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ...Depending on which environment you're running this from, you may find yourself needing to upgrade one of these libraries, which you can do by opening an ipython terminal, launching python, and typing pip install [name of library] --upgrade
Step 2
Step2: Before we go any further, we need to map each word in our corpus to a number, so that we have a consistent way of referring to them. To do this, we'll fit a tokenizer to the corpus
Step3: Now the tokenizer knows what tokens (words) are in our corpus and has mapped them to numbers. The keras tokenizer also indexes them in order of frequency (most common first, i.e. index 1 is usually a word like "the"), which will come in handy later.
Step 3
Step4: For the "skip-gram" task, build two separate models (one for the target word (also called the "pivot"), and one for the context words), and then merge them into one
Step5: Step 4
Step6: Wait a few minutes for training...
Step7: Step 5
Step8: We also want to set aside the tokenizer's word_index and reverse_word_index (so we can look up indices for words and words from indices)
Step9: That's it for learning the embeddings. Now we can try using them
Step10: Then we can define a function to get a most similar word for an input word
Step11: Now let's give it a try (you may get different results)
|
<ASSISTANT_TASK:>
Python Code:
import sklearn
import matplotlib.pyplot as plt
import scipy
import numpy as np
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from keras.layers import Flatten, Activation, Merge
from keras.preprocessing.text import Tokenizer, base_filter
from keras.preprocessing.sequence import skipgrams, make_sampling_table
from glob import glob
text_files = glob('sotu/*.txt')
# locates all .txt files in the folder 'sotu'
# (in the same directory as this notebook)
# define a `text_generator` here,
# so that data can be loaded on-demand,
# avoiding having all data in memory unless it's needed:
def text_generator():
for path in text_files:
with open(path, 'r') as f:
yield f.read()
files = (len(text_files))
if files == 0:
print"something's not right - check the file path?"
else:
print"sucessfully located",len(text_files),"text files to use as training data!"
max_vocab_size = 50000
# setting an upper limit just in case
# `filters` specify what characters to get rid of
# `base_filter()` includes basic punctuation
tokenizer = Tokenizer(nb_words=max_vocab_size, filters=base_filter()+'')
# fit the tokenizer
tokenizer.fit_on_texts(text_generator())
# we also want to keep track of the actual vocab size:
vocab_size = len(tokenizer.word_index) + 1
# note: we add one because `0` is a reserved index in keras' tokenizer
print "found",vocab_size,"unique words"
embedding_dim = 256
n_epochs = 60
# Higher numbers will add time to the training,
# lower numbers may give wildly inaccurate (er, abstract?) results.
loss_function = 'binary_crossentropy'
activation_function = 'sigmoid'
optimizer_type = 'adam'
# the task as we are framing it is to answer the question:
# "do the context words match the target word or not?"
# because this is a binary classification,
# we want the output to be normalized to [0,1] (sigmoid will work),
# and we can use 'binary crossentropy' as our loss
pivot_model = Sequential()
pivot_model.add(Embedding(vocab_size, embedding_dim, input_length=1))
context_model = Sequential()
context_model.add(Embedding(vocab_size, embedding_dim, input_length=1))
# merge the pivot and context models
model = Sequential()
model.add(Merge([pivot_model, context_model], mode='dot', dot_axes=2))
model.add(Flatten())
model.add(Activation(activation_function))
model.compile(optimizer=optimizer_type, loss=loss_function)
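# Descriptive note (added): the merge with mode='dot' scores a (pivot, context)
# pair by the dot product of their two embeddings; the sigmoid turns that score
# into P(context is real) -- effectively logistic regression on embedding
# similarity.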
# skip this if you want to train the model from scratch!
# run this to load pre-trained weights!
# change the filepath 'weights-60epochs.hdf5' to another path if necessary
model.load_weights('weights-60epochs.hdf5')
print"successfully loaded pre-trained weights"
# then skip to "Step 5: Extract the Embeddings"
# skip this if you want to use pre-trained weights!
# run this to train the model from scratch!
# used to sample words (indices)
sampling_table = make_sampling_table(vocab_size)
for i in range(n_epochs):
loss = 0
for seq in tokenizer.texts_to_sequences_generator(text_generator()):
# generate skip-gram training examples
# - `couples` consists of the pivots (i.e. target words) and surrounding contexts
# - `labels` represent if the context is true or not
# - `window_size` determines how far to look between words
# - `negative_samples` specifies the ratio of negative couples
# (i.e. couples where the context is false)
# to generate with respect to the positive couples;
# i.e. `negative_samples=4` means "generate 4 times as many negative samples"
couples, labels = skipgrams(seq, vocab_size, window_size=5, negative_samples=4, sampling_table=sampling_table)
if couples:
pivot, context = zip(*couples)
pivot = np.array(pivot, dtype='int32')
context = np.array(context, dtype='int32')
labels = np.array(labels, dtype='int32')
loss += model.train_on_batch([pivot, context], labels)
print('epoch %d, %0.02f'%(i, loss))
model.save_weights('weights2.hdf5')
print"successfully saved trained weights"
embeddings = model.get_weights()[0]
word_index = tokenizer.word_index
reverse_word_index = {v: k for k, v in word_index.items()}
def get_embedding(word):
idx = word_index[word]
# make it 2d
return embeddings[idx][:,np.newaxis].T
from scipy.spatial.distance import cdist
ignore_n_most_common = 50
def get_closest(word):
embedding = get_embedding(word)
# get the distance from the embedding
# to every other embedding
distances = cdist(embedding, embeddings)[0]
# pair each embedding index and its distance
distances = list(enumerate(distances))
# sort from closest to furthest
distances = sorted(distances, key=lambda d: d[1])
# skip the first one; it's the target word
for idx, dist in distances[1:]:
# ignore the n most common words;
# they can get in the way.
# because the tokenizer organized indices
# from most common to least, we can just do this
if idx > ignore_n_most_common:
return reverse_word_index[idx]
print'freedom ~',(get_closest('freedom'))
print'justice ~',(get_closest('justice'))
print'america ~',(get_closest('america'))
print'history ~',(get_closest('history'))
print'citizen ~',(get_closest('citizen'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Enter your Google Maps Directions API key below
Step2: Enter your origin and destination
Step3: Try to grab realtime traffic from Google
Step4: Process response data
|
<ASSISTANT_TASK:>
Python Code:
import urllib, json, time
apiKey=""
if not apiKey:
print "Enter your API key for traffic data!"
exit(1)
origin="Empire State Building, NY"
destination="One World Trade Center, NY"
params = urllib.urlencode(
{'origin': origin,
'destination': destination,
'mode': 'driving',
'key': apiKey,
'departure_time':'now'})
url = "https://maps.googleapis.com/maps/api/directions/json?%s" % params
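# For reference (added; illustrative values): the assembled request looks like
# https://maps.googleapis.com/maps/api/directions/json?origin=...&destination=...&mode=driving&key=...&departure_time=now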
timestamp = int(time.time()) # capture when the request is made
try:
response = urllib.urlopen(url)
except IOError, e:
print e
if response.getcode() == 200:
data = json.loads(response.read())
else:
print "Error %s" % response.getcode()
exit(1)
if 'error_message' in data:
print data['error_message']
exit(1)
if not 'routes' in data or len(data['routes']) < 1:
print "Route data not returned. Check locations?"
exit(1)
elif not 'duration_in_traffic' in data['routes'][0]['legs'][0]:
print "Traffic data not returned"
exit(1)
else:
print "%s,%s" % (timestamp, data['routes'][0]['legs'][0]['duration_in_traffic']['value'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Euler's method
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
h = x[1]-x[0]
ytemp = np.zeros_like(x)
    ytemp[0] = y0  # start from the supplied initial condition
for i in range(len(x)-1):
ytemp[i+1] = ytemp[i] + h*derivs(ytemp[i],x[i])
return ytemp
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
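# Worked example (added): for dy/dx = x + 2y, y(0) = 0 and h = 0.1, one Euler
# step gives y(0.1) ~ 0 + 0.1*(0 + 0) = 0, and the next step gives
# y(0.2) ~ 0 + 0.1*(0.1 + 0) = 0.01.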
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
h = x[1]-x[0]
ytemp = np.zeros_like(x)
    ytemp[0] = y0  # start from the supplied initial condition
for i in range(len(x)-1):
ytemp[i+1] = ytemp[i] + h*derivs(ytemp[i] + (h/2)*derivs(ytemp[i],x[i]),x[i]+h/2)
return ytemp
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
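# Descriptive note (added): each midpoint step first evaluates k = f(y_n, x_n),
# then re-evaluates the derivative at the half step (y_n + h/2 * k, x_n + h/2),
# which is what makes the method second-order accurate.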
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y with y(0) = 0.

    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.

    Returns
    -------
    y : np.ndarray
        Array of solutions at y[i] = y(x[i]).
    """
y = 0.25*np.exp(2*x)-0.5*x-0.25
return y
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
x = np.linspace(0,1,11)
def derivs(y,x):
dy = x + 2*y
return np.array(dy)
euler = solve_euler(derivs,0,x)
midpoint = solve_midpoint(derivs,0,x)
exact = solve_exact(x)
odeint_sol = odeint(derivs, 0, x).flatten()  # odeint returns shape (n, 1); flatten it and avoid shadowing the function
odeint_sol
plt.figure(figsize = (15,6))
plt.subplot(1,2,1)
plt.title("Derivative Solutions")
plt.ylabel("y(x)")
plt.xlabel("x")
plt.plot(x,euler,'r',label = 'euler')
plt.plot(x,midpoint,'y',label = 'midpoint')
plt.plot(x,exact,'b',label = 'exact')
plt.plot(x,odeint_sol,'g',label = 'odeint')
plt.legend(bbox_to_anchor = (0.6,1.0))
plt.subplot(1,2,2)
plt.title("Errors")
plt.ylabel("Difference from exact")
plt.xlabel("x")
plt.plot(x,abs(euler - exact),'r',label = "Euler")
plt.plot(x,abs(midpoint - exact),'y',label = "Midpoint")
plt.plot(x,abs(odeint_sol - exact),'g',label = "Odeint")
plt.legend(bbox_to_anchor=(0.7,1.0))
plt.tight_layout();
assert True
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h3> Extract sample data from BigQuery </h3>
Step2: Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.
Step3: <h3> Exploring data </h3>
Step4: Hmm ... do you see something wrong with the data that needs addressing?
Step5: What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
Step6: Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
Step7: Hmm ... The min, max of longitude look strange.
Step8: As you'd expect, rides that involve a toll are longer than the typical ride.
Step9: The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable.
Step10: Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data.
Step11: <h3> Verify that datasets exist </h3>
Step12: We have 3 .csv files corresponding to train, valid, test. The ratio of file sizes corresponds to our split of the data.
Step13: Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
Step15: <h2>Benchmark on same dataset</h2>
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
from google.cloud import bigquery
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%%bigquery
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude, dropoff_longitude,
dropoff_latitude, passenger_count, trip_distance, tolls_amount,
fare_amount, total_amount
FROM
`nyc-tlc.yellow.trips` # TODO 1
LIMIT 10
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
print(len(trips))
# We can slice Pandas dataframes as if they were arrays
trips[:10]
# TODO 2
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
# TODO 3
AND trip_distance > 0
AND fare_amount >= 2.5
print(len(trips))
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
tollrides = trips[trips["tolls_amount"] > 0]
tollrides[tollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
notollrides = trips[trips["tolls_amount"] == 0]
notollrides[notollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
trips.describe()
def showrides(df, numlines):
lats = []
lons = []
for iter, row in df[:numlines].iterrows():
lons.append(row["pickup_longitude"])
lons.append(row["dropoff_longitude"])
lons.append(None)
lats.append(row["pickup_latitude"])
lats.append(row["dropoff_latitude"])
lats.append(None)
sns.set_style("darkgrid")
plt.figure(figsize=(10, 8))
plt.plot(lons, lats)
showrides(notollrides, 10)
showrides(tollrides, 10)
def preprocess(trips_in):
trips = trips_in.copy(deep=True)
trips.fare_amount = trips.fare_amount + trips.tolls_amount
del trips["tolls_amount"]
del trips["total_amount"]
del trips["trip_distance"] # we won't know this in advance!
qc = np.all([
trips["pickup_longitude"] > -78,
trips["pickup_longitude"] < -70,
trips["dropoff_longitude"] > -78,
trips["dropoff_longitude"] < -70,
trips["pickup_latitude"] > 37,
trips["pickup_latitude"] < 45,
trips["dropoff_latitude"] > 37,
trips["dropoff_latitude"] < 45,
trips["passenger_count"] > 0
], axis=0)
return trips[qc]
tripsqc = preprocess(trips)
tripsqc.describe()
shuffled = tripsqc.sample(frac=1)
trainsize = int(len(shuffled["fare_amount"]) * 0.70)
validsize = int(len(shuffled["fare_amount"]) * 0.15)
df_train = shuffled.iloc[:trainsize, :]
df_valid = shuffled.iloc[trainsize:(trainsize + validsize), :]
df_test = shuffled.iloc[(trainsize + validsize):, :]
df_train.head(n=1)
df_train.describe()
df_valid.describe()
df_test.describe()
def to_csv(df, filename):
outdf = df.copy(deep=False)
outdf.loc[:, "key"] = np.arange(0, len(outdf)) # rownumber as key
# Reorder columns so that target is first column
cols = outdf.columns.tolist()
cols.remove("fare_amount")
cols.insert(0, "fare_amount")
print (cols) # new order of columns
outdf = outdf[cols]
outdf.to_csv(filename, header=False, index_label=False, index=False)
to_csv(df_train, "taxi-train.csv")
to_csv(df_valid, "taxi-valid.csv")
to_csv(df_test, "taxi-test.csv")
!head -10 taxi-valid.csv
!ls -l *.csv
%%bash
head taxi-train.csv
def distance_between(lat1, lon1, lat2, lon2):
# Haversine formula to compute distance "as the crow flies".
lat1_r = np.radians(lat1)
lat2_r = np.radians(lat2)
lon_diff_r = np.radians(lon2 - lon1)
sin_prod = np.sin(lat1_r) * np.sin(lat2_r)
cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_diff_r)
minimum = np.minimum(1, sin_prod + cos_prod)
dist = np.degrees(np.arccos(minimum)) * 60 * 1.515 * 1.609344
return dist
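# Illustrative check (added; coordinates are approximate): straight-line
# distance from JFK airport to Times Square.
print(distance_between(40.6413, -73.7781, 40.7580, -73.9855))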
def estimate_distance(df):
return distance_between(
df["pickuplat"], df["pickuplon"], df["dropofflat"], df["dropofflon"])
def compute_rmse(actual, predicted):
return np.sqrt(np.mean((actual - predicted) ** 2))
def print_rmse(df, rate, name):
print ("{1} RMSE = {0}".format(
compute_rmse(df["fare_amount"], rate * estimate_distance(df)), name))
# TODO 4
FEATURES = ["pickuplon", "pickuplat", "dropofflon", "dropofflat", "passengers"]
TARGET = "fare_amount"
columns = list([TARGET])
columns.append("pickup_datetime")
columns.extend(FEATURES) # in CSV, target is first column, after the features
columns.append("key")
df_train = pd.read_csv("taxi-train.csv", header=None, names=columns)
df_valid = pd.read_csv("taxi-valid.csv", header=None, names=columns)
df_test = pd.read_csv("taxi-test.csv", header=None, names=columns)
rate = df_train["fare_amount"].mean() / estimate_distance(df_train).mean()
print ("Rate = ${0}/km".format(rate))
print_rmse(df_train, rate, "Train")
print_rmse(df_valid, rate, "Valid")
print_rmse(df_test, rate, "Test")
validation_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
"unused" AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
  AND passenger_count > 0
"""
client = bigquery.Client()
df_valid = client.query(validation_query).to_dataframe()
print_rmse(df_valid, 2.59988, "Final Validation Set")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Displaying widgets
Step2: Widgets have many input signals to modify their appearance and behavior
Step3: Layout
Step4: React
Step5: Compound widgets
Step6: Reactions in JS
|
<ASSISTANT_TASK:>
Python Code:
from flexx import app, ui, react
app.init_notebook()
b = ui.Button(text='foo')
b
b.text('Push me!')
with ui.HBox() as hbox:
slider = ui.Slider(flex=0)
label = ui.Label(flex=1, text='xx')
hbox
@react.connect('slider.value')
def show_slider_value(v):
label.text(str(v))
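# Descriptive note (added): react.connect subscribes this handler to the
# slider's `value` signal, so the label re-renders on every slider move.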
class MyWidget(ui.Widget):
def init(self):
with ui.HBox():
self._slider = ui.Slider(flex=0)
self._label = ui.Label(flex=1, text='xx')
@react.connect('_slider.value')
def show_slider_value(self, v):
self._label.text(str(v))
w = MyWidget()
w
class MyWidget2(ui.Widget):
def init(self):
with ui.HBox():
self.slider = ui.Slider(flex=0)
self.label = ui.Label(flex=1, text='xx')
class JS:
@react.connect('slider.value')
def show_slider_value(self, v):
self.label.text(str(v))
w = MyWidget2()
w
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Extract lon, lat variables from vgrid2 and u, v variables from vbaro.
Step2: Using iris to create the CF object.
Step3: Now the phenomena.
Step4: Join the individual CF phenomena into one dataset.
Step5: Save the CF-compliant file!
|
<ASSISTANT_TASK:>
Python Code:
from netCDF4 import Dataset
url = ('http://geoport.whoi.edu/thredds/dodsC/usgs/data2/rsignell/gdrive/'
'nsf-alpha/Data/MIT_MSEAS/MSEAS_Tides_20160317/mseas_tides_2015071612_2015081612_01h.nc')
nc = Dataset(url)
vtime = nc['time']
coords = nc['vgrid2']
vbaro = nc['vbaro']
itime = -1
import iris
iris.FUTURE.netcdf_no_unlimited = True
time = iris.coords.DimCoord(vtime[itime],
                            var_name='time',
                            long_name=vtime.long_name,
                            standard_name='time',
                            units=vtime.units)
longitude = iris.coords.AuxCoord(coords[:, :, 0],
                                 var_name='vlon',
                                 standard_name='longitude',
                                 units='degrees')
latitude = iris.coords.AuxCoord(coords[:, :, 1],
                                var_name='vlat',
                                standard_name='latitude',
                                units='degrees')
import numpy as np
u = vbaro[itime, :, :, 0]
u_cube = iris.cube.Cube(np.broadcast_to(u, (1,) + u.shape),
units=vbaro.units,
long_name=vbaro.long_name,
var_name='u',
standard_name='barotropic_eastward_sea_water_velocity',
dim_coords_and_dims=[(time, 0)],
aux_coords_and_dims=[(latitude, (1, 2)),
(longitude, (1, 2))])
v = vbaro[itime, :, :, 1]
v_cube = iris.cube.Cube(np.broadcast_to(v, (1,) + v.shape),
units=vbaro.units,
long_name=vbaro.long_name,
var_name='v',
standard_name='barotropic_northward_sea_water_velocity',
dim_coords_and_dims=[(time, 0)],
aux_coords_and_dims=[(latitude, (1, 2)),
(longitude, (1, 2))])
cubes = iris.cube.CubeList([u_cube, v_cube])
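# Descriptive note (added): saving the CubeList writes both velocity phenomena
# into a single NetCDF file; iris emits the CF coordinate and attribute
# metadata automatically.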
iris.save(cubes, 'hops.nc')
!ncdump -h hops.nc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table 2 - Photometry and Spectral Types for the Objects of Spectral Type M, or Slightly Earlier, Identified Toward Lupus 3
Step2: Table 4-
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import seaborn as sns
sns.set_context("notebook", font_scale=1.5)
#import warnings
#warnings.filterwarnings("ignore")
import pandas as pd
tbl2 = pd.read_clipboard(#"http://iopscience.iop.org/0004-637X/785/2/159/suppdata/apj492858t2_ascii.txt",
sep='\t', skiprows=[0,1,2,4], skipfooter=7, engine='python', na_values=" sdotsdotsdot", usecols=range(15))
tbl2.rename(columns={"Unnamed: 6": "Av_phot", "Unnamed: 8":"Av_spec"}, inplace=True)
tbl2
! mkdir ../data/Muzic2014
tbl2.to_csv("../data/Muzic2014/tbl2.csv", index=False)
tbl4 = pd.read_clipboard(#"http://iopscience.iop.org/0004-637X/785/2/159/suppdata/apj492858t4_ascii.txt",
sep='\t', skiprows=[0,1,2,4], skipfooter=2, engine='python', na_values=" sdotsdotsdot", usecols=range(8))
tbl4
tbl4.to_csv("../data/Muzic2014/tbl4.csv", index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing the data and the column names from the codebook
Step2: The time in the data is measured in MATLAB's absolute time (datenum), so we convert it to a meaningful timestamp
Step3: Now we need to remove any times where no thermal comfort measurement was taken, and select our uncontrollable variables
Step4: Check if there are still any null values left and check where they are
Step5: Comments
Step6: With the data cleaned we move on to the analysis.
Step 2
Step7: Comments
Step8: Step 3
Step9: The first regression tree will test our environmental and time variables.
Step10: The second regression tree just checks the predictive power of occupant number alone.
Step11: Comments
Step 4
|
<ASSISTANT_TASK:>
Python Code:
import sklearn
sklearn.__version__
import pandas as pd
import numpy as np
import time
import datetime as dt
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.model_selection import train_test_split
%matplotlib inline
column_names = pd.read_excel('langevincodebook.xlsx',sheetname = 'Sheet2')
data_names = column_names['Description'].values
data = pd.read_csv('LANGEVIN_DATA.txt',sep=' ',names = data_names,index_col =False)
def toTimestamp(datenum):
python_datetime = dt.datetime.fromordinal(int(datenum) - 366) + dt.timedelta(days=datenum%1)
return python_datetime
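# Worked example (added): MATLAB datenum 736330.5 -> 2016-01-01 12:00. The -366
# offset is needed because MATLAB counts days from year 0 while Python
# ordinals start at 0001-01-01.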
data['Timestamp'] = [toTimestamp(data['Time'][date]) for date in range(len(data['Time']))]
data['Occupant Number']=data['Occupant Number'].astype(int)
data['Hour'] = data['Timestamp'].dt.hour
data['Month'] = data['Timestamp'].dt.month
data = data[pd.notnull(data['General Thermal Comfort (right now)'])]
uncont_data = data[['Occupant Number','Timestamp','Gender','Age','General Thermal Comfort (right now)','INDOOR Ambient Temp.','INDOOR Relative Humidity','OUTDOOR Ambient Temp.','OUTDOOR Relative Humidity','INDOOR Air Velocity','OUTDOOR Air Velocity','Hour','Month']]
print(uncont_data.isnull().sum())
o = uncont_data.isnull().values
plt.imshow(o,aspect = 'auto',interpolation = 'nearest')
uncont_data = uncont_data.dropna()
uncont_data.head(10)
temp_plot = uncont_data.plot('Timestamp',['INDOOR Ambient Temp.','OUTDOOR Ambient Temp.'],figsize = (8,5))
print('The average absolute difference in temperature is: ' + str(np.average(abs(uncont_data['INDOOR Ambient Temp.']-uncont_data['OUTDOOR Ambient Temp.'])))+' Celsius')
hum_plot = uncont_data.plot('Timestamp',['INDOOR Relative Humidity','OUTDOOR Relative Humidity'],figsize = (10,8))
print('The average absolute difference in relative humidity is: ' + str(np.average(abs(uncont_data['INDOOR Relative Humidity']-uncont_data['OUTDOOR Relative Humidity']))))
velo_plot = uncont_data.plot('Timestamp',['INDOOR Air Velocity','OUTDOOR Air Velocity'],figsize = (8,5))
print('The average absolute difference in air velocity is: ' + str(np.average(abs(uncont_data['INDOOR Air Velocity']-uncont_data['OUTDOOR Air Velocity']))))
y = uncont_data['General Thermal Comfort (right now)']
x1 = uncont_data['INDOOR Ambient Temp.']
x2 = uncont_data['INDOOR Relative Humidity']
x3 = uncont_data['INDOOR Air Velocity']
fig = plt.figure(figsize=(15,10))
ax1 = plt.subplot(311)
plt.scatter(x1,y)
ax1.set_xlabel("Indoor Ambient Temperature")
ax1.set_ylabel('General Thermal Comfort')
ax1.set_ylim(1,6)
ax2 = plt.subplot(312,sharey = ax1)
plt.scatter(x2,y)
ax2.set_xlabel('Indoor Relative Humidity')
ax3 = plt.subplot(313,sharey = ax1)
plt.scatter(x3,y)
ax3.set_xlabel("Indoor Air Velocity")
plt.tight_layout()
plt.savefig('comfort_subplots.png')
# Correlation Values for each plot
z = uncont_data['General Thermal Comfort (right now)']
print('Correlation coefficient for Thermal Comfort and Temp: ',z.corr(uncont_data['INDOOR Ambient Temp.']))
print('Correlation coefficient for Thermal Comfort and Relative Humidity: ',z.corr(uncont_data['INDOOR Relative Humidity']))
print('Correlation coefficient for Thermal Comfort and Air Velocity',z.corr(uncont_data['INDOOR Air Velocity']))
X1 = uncont_data[['INDOOR Ambient Temp.','INDOOR Relative Humidity','INDOOR Air Velocity']]
Y = uncont_data['General Thermal Comfort (right now)']
X1_train, X1_test, Y_train, Y_test = train_test_split(X1, Y, test_size=0.3)
reg = tree.DecisionTreeRegressor()
reg = reg.fit(X1_train,Y_train)
r2_score_avg1 = np.average([reg.score(X1_test,Y_test) for i in range(5000)])
print('R^2 value: ', r2_score_avg1)
X2 = uncont_data['Occupant Number']
X2 = X2[:,None]
X2_train, X2_test, Y_train, Y_test = train_test_split(X2, Y, test_size=0.3)
reg = tree.DecisionTreeRegressor()
reg = reg.fit(X2_train,Y_train)
r2_score_avg2 = np.average([reg.score(X2_test,Y_test) for i in range(5000)])
print('R^2 Value: ',r2_score_avg2)
fig1 = temp_plot.get_figure()
fig1.savefig('temp_plot.png')
fig2 = hum_plot.get_figure()
fig2.savefig('humidity_plot.png')
fig3 = velo_plot.get_figure()
fig3.savefig('velocity_plot.png')
uncont_data.describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summer 2015
Step2: Why doesn't a join work?
Step3: The join-based approach does not work in this form: by the second join at the latest, the company Trappo is matched with 2 rows from the first join, so the number of trips is doubled as well. The same thing happens again with the third join.
Step4: Winter 2015
Step5: MySQL
Step6: It also works with a subselect
Step7: Versicherung (insurance)
Step8: Solution
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql mysql://steinam:steinam@localhost/sommer_2015
%%sql
select k.kd_id, k.kd_plz,
(select count(a.Au_ID) from auftrag a where a.au_kd_id = k.kd_id ) as AnzahlAuftr,
(select count(f.`f_id`) from fahrten f, auftrag a
where f.f_au_id = a.au_id and a.`au_kd_id` = k.`kd_id`) as AnzahlFahrt,
(select sum(ts.ts_strecke) from teilstrecke ts, fahrten f, auftrag a
where ts.ts_f_id = f.f_id and a.au_id = f.`f_au_id` and a.`au_kd_id` = k.`kd_id`) as SumStrecke
from kunde k order by k.kd_plz;
%sql select count(*) as AnzahlFahrten from fahrten
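# Added illustration: joining kunde -> auftrag -> fahrten multiplies rows. A
# customer with 2 orders and 2 trips per order already yields 4 joined rows,
# so a plain COUNT() over-counts unless COUNT(DISTINCT ...) is used, as in the
# query below.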
%%sql
select k.kd_id, k.`kd_firma`, k.`kd_plz`,
count(distinct a.Au_ID) as AnzAuftrag,
count(distinct f.f_id) as AnzFahrt,
sum(ts.ts_strecke) as SumStrecke
from kunde k left join auftrag a
on k.`kd_id` = a.`au_kd_id`
left join fahrten f
on a.`au_id` = f.`f_au_id`
left join teilstrecke ts
on ts.`ts_f_id` = f.`f_id`
group by k.kd_id
order by k.`kd_plz`
%%sql
SELECT kunde.Kd_ID, kunde.Kd_Firma, kunde.Kd_Strasse, kunde.Kd_PLZ,
kunde.Kd_Ort, COUNT(distinct auftrag.Au_ID) AS AnzahlAuftr, COUNT(distinct fahrten.F_ID) AS AnzahlFahrt, SUM(teilstrecke.Ts_Strecke) AS SumStrecke
FROM kunde
LEFT JOIN auftrag ON auftrag.Au_Kd_ID = kunde.Kd_ID
LEFT JOIN fahrten ON fahrten.F_Au_ID = auftrag.Au_ID
LEFT JOIN Teilstrecke ON teilstrecke.Ts_F_ID = fahrten.F_ID
GROUP BY kunde.Kd_ID
ORDER BY kunde.Kd_PLZ desc;
%sql mysql://steinam:steinam@localhost/winter_2015
%%sql
select count(rechnung.`Rg_ID`), kunde.`Kd_Name` from rechnung
inner join kunde on `rechnung`.`Rg_KD_ID` = kunde.`Kd_ID`
inner join `zahlungsbedingung` on kunde.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
where `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
and year(`rechnung`.`Rg_Datum`) = 2015 group by Kunde.`Kd_Name`
%%sql
select kd.`Kd_Name`,
(select COUNT(*) from Rechnung as R
where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015) as Anzahl
from Kunde kd inner join `zahlungsbedingung`
on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
%%sql
-- wortmann und prinz
select
(select count(rechnung.rg_id) from rechnung
where
rechnung.rg_kd_id = kunde.kd_id
and (select zb_skontoprozent from zahlungsbedingung where zahlungsbedingung.zb_id = kunde.kd_zb_id) > 3
and YEAR(rechnung.rg_datum) = 2015
) as AnzRechnungen,
kunde.*
from kunde;
%%sql
SELECT COUNT(r.rg_id) AS AnzRechnung, k.*
FROM kunde AS k
LEFT JOIN rechnung AS r ON k.kd_id = r.Rg_KD_ID
WHERE k.kd_zb_id IN
(SELECT zb_id FROM zahlungsbedingung WHERE zb_skontoprozent > 3) AND YEAR(r.Rg_Datum) = 2015
GROUP BY k.Kd_ID
%sql -- your code goes here
%sql mysql://steinam:steinam@localhost/versicherung_complete
%%sql
select min(`vv`.`Abschlussdatum`) as 'Erster Abschluss', `vv`.`Mitarbeiter_ID`
from `versicherungsvertrag` vv inner join mitarbeiter m
on vv.`Mitarbeiter_ID` = m.`ID`
where vv.`Mitarbeiter_ID` in ( select m.`ID` from mitarbeiter m
inner join Abteilung a
on m.`Abteilung_ID` = a.`ID`)
group by vv.`Mitarbeiter_ID`
%%sql
-- rm
SELECT m.ID, m.Name, m.Vorname, v.*
FROM versicherungsvertrag AS v
JOIN mitarbeiter AS m ON m.ID = v.Mitarbeiter_ID
WHERE v.Abschlussdatum = (SELECT min(v.Abschlussdatum)
FROM versicherungsvertrag AS v WHERE v.Mitarbeiter_ID = m.ID
)
GROUP BY v.Mitarbeiter_ID
%%sql
-- original
SELECT vv.ID as VV, vv.Vertragsnummer, vv.Abschlussdatum, vv.Art,
mi.ID as MI, mi.Name, mi.Vorname
from Versicherungsvertrag vv
right join ( select MIN(vv2.ID) as ID, vv2.Mitarbeiter_ID
from Versicherungsvertrag vv2
group by vv2.Mitarbeiter_id ) Temp
on Temp.ID = vv.ID
right join Mitarbeiter mi on mi.ID = vv.Mitarbeiter_ID
where mi.Abteilung_ID = ( select ID from Abteilung
where Bezeichnung = 'Vertrieb' );
%%sql
-- rm
SELECT m.ID, m.Name, m.Vorname, v.*
FROM versicherungsvertrag AS v
JOIN mitarbeiter AS m ON m.ID = v.Mitarbeiter_ID
GROUP BY v.Mitarbeiter_ID
ORDER BY v.Abschlussdatum ASC
%%sql
-- ruppert_hartmann
Select mitarbeiter.ID, mitarbeiter.Name, mitarbeiter.Vorname,
mitarbeiter.Personalnummer,
abteilung.Bezeichnung,
min(versicherungsvertrag.abschlussdatum),
versicherungsvertrag.vertragsnummer
FROM mitarbeiter
LEFT JOIN abteilung ON Abteilung_ID = Abteilung.ID
LEFT JOIN versicherungsvertrag ON versicherungsvertrag.Mitarbeiter_ID = mitarbeiter.ID
WHERE abteilung.Bezeichnung = 'Vertrieb'
GROUP BY mitarbeiter.ID
result = _
result
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: HW 10.0
Step2: HW 10.1
Step3: HW 10.1.1
Step4: HW 10.2
Step5: HW 10.3
Step6: The WSSSE decreases as the number of iterations increases from 1 to 20. After 20 iterations, the centroids converge and the WSSSE is stable.
|
<ASSISTANT_TASK:>
Python Code:
%cd ~/Documents/W261/hw10/
import os
import sys
spark_home = os.environ['SPARK_HOME'] = \
'/Users/davidadams/packages/spark-1.5.1-bin-hadoop2.6/'
if not spark_home:
raise ValueError('SPARK_HOME enviroment variable is not set')
sys.path.insert(0,os.path.join(spark_home,'python'))
sys.path.insert(0,os.path.join(spark_home,'python/lib/py4j-0.8.2.1-src.zip'))
execfile(os.path.join(spark_home,'python/pyspark/shell.py'))
''' Example of creating an RDD and bringing the first element back to the driver'''
import numpy as np
dataRDD = sc.parallelize(np.random.random_sample(1000))
data2X= dataRDD.map(lambda x: x*2)
dataGreaterThan1 = data2X.filter(lambda x: x > 1.0)
print dataGreaterThan1.take(1)
def hw10_1():
# create RDD from text file and split at spaces to get words
rdd = sc.textFile("HW10-Public/MIDS-MLS-HW-10.txt")
words = rdd.flatMap(lambda x: x.strip().split(" "))
# count words and sort
sortedcounts = words.map(lambda x: (x, 1)) \
.reduceByKey(lambda x, y: x + y) \
.map(lambda (x,y): (y, x)) \
.sortByKey(False) \
.map(lambda (x,y): (y, x))
    for line in sortedcounts.collect():
print line
return None
hw10_1()
def hw10_1_1():
def isloweraz(word):
'''
check if the word starts with a lower case letter
'''
lowercase = 'abcdefghijklmnopqrstuvwxyz'
try:
return word[0] in lowercase
except IndexError:
return False
# create RDD from text file
rdd = sc.textFile("HW10-Public/MIDS-MLS-HW-10.txt")
# get words and filter for those that start with a lowercase letter
words = rdd.flatMap(lambda x: x.strip().split(" ")) \
.filter(isloweraz)
# count words and sort
sortedcounts = words.map(lambda x: (x, 1)) \
.reduceByKey(lambda x, y: x + y) \
.map(lambda (x,y): (y, x)) \
.sortByKey(False) \
.map(lambda (x,y): (y, x))
for line in sortedcounts.collect():
print line
return None
hw10_1_1()
from pyspark.mllib.clustering import KMeans, KMeansModel
from numpy import array
from math import sqrt
# Load and parse the data
# NOTE kmeans_data.txt is available here https://www.dropbox.com/s/q85t0ytb9apggnh/kmeans_data.txt?dl=0
data = sc.textFile("HW10-Public/kmeans_data.txt")
parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))
# Build the model (cluster the data)
clusters = KMeans.train(parsedData, 2, maxIterations=10,
runs=10, initializationMode="random")
# Evaluate clustering by computing Within Set Sum of Squared Errors
def error(point):
center = clusters.centers[clusters.predict(point)]
return sqrt(sum([x**2 for x in (point - center)]))
WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
# Save and load model
clusters.save(sc, "myModelPath")
sameModel = KMeansModel.load(sc, "myModelPath")
for i,ctr in enumerate(clusters.centers):
print("Cluster %i: %.1f, %.1f, %.1f" % (i, ctr[0],ctr[1],ctr[2]))
WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
%matplotlib inline
import numpy as np
import pylab
import json
size1 = size2 = size3 = 100
samples1 = np.random.multivariate_normal([4, 0], [[1, 0],[0, 1]], size1)
data = samples1
samples2 = np.random.multivariate_normal([6, 6], [[1, 0],[0, 1]], size2)
data = np.append(data,samples2, axis=0)
samples3 = np.random.multivariate_normal([0, 4], [[1, 0],[0, 1]], size3)
data = np.append(data,samples3, axis=0)
# Randomlize data
data = data[np.random.permutation(size1+size2+size3),]
np.savetxt('data.csv',data,delimiter = ',')
pylab.plot(samples1[:, 0], samples1[:, 1],'*', color = 'red')
pylab.plot(samples2[:, 0], samples2[:, 1],'o',color = 'blue')
pylab.plot(samples3[:, 0], samples3[:, 1],'+',color = 'green')
pylab.show()
'''
Then run MLlib's Kmean implementation on this data
and report your results as follows:
-- plot the resulting clusters after 1, 10, 20, and 100 iterations
-- in each plot please report the Within Set Sum of Squared Errors
for the found clusters. Comment on the progress of this measure as
the KMeans algorithms runs for more iterations
'''
from pyspark.mllib.clustering import KMeans, KMeansModel
from numpy import array
from math import sqrt
# Load and parse the data
data = sc.textFile("data.csv")
parsedData = data.map(lambda line: array([float(x) for x in line.split(',')]))
import numpy as np
#Calculate which class each data point belongs to
def nearest_centroid(line):
x = np.array([float(f) for f in line.split(',')])
closest_centroid_idx = np.sum((x - centroids)**2, axis=1).argmin()
return (closest_centroid_idx,(x,1))
#plot centroids and data points for each iteration
def plot_iteration(means):
pylab.plot(samples1[:, 0], samples1[:, 1], '.', color = 'blue')
pylab.plot(samples2[:, 0], samples2[:, 1], '.', color = 'blue')
pylab.plot(samples3[:, 0], samples3[:, 1],'.', color = 'blue')
pylab.plot(means[0][0], means[0][1],'*',markersize =10,color = 'red')
pylab.plot(means[1][0], means[1][1],'*',markersize =10,color = 'red')
pylab.plot(means[2][0], means[2][1],'*',markersize =10,color = 'red')
pylab.show()
from time import time
numIters = [1, 10, 20, 100]
for i in numIters:
clusters = KMeans.train(parsedData, k=3, maxIterations=i,
initializationMode = "random")
if i==1:
print("Centroids after %d iteration:" % i)
else:
print("Centroids after %d iterations:" % i)
for centroid in clusters.centers:
print centroid
WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
plot_iteration(clusters.centers)
from numpy.random import rand
#Calculate which class each data point belongs to
def nearest_centroid(line):
x = np.array([float(f) for f in line.split(',')])
closest_centroid_idx = np.sum((x - centroids)**2, axis=1).argmin()
return (closest_centroid_idx,(x,1))
def error_p4(line, centroids):
point = np.array([float(f) for f in line.split(',')])
closest_centroid_idx = np.sum((point - centroids)**2, axis=1).argmin()
center = centroids[closest_centroid_idx]
return sqrt(sum([x**2 for x in (point - center)]))
K = 3
D = sc.textFile("./data.csv").cache()
numIters = [1, 10, 20, 100]
for n in numIters:
# randomly initialize centroids
centroids = rand(3,2)*5
iter_num = 0
for i in range(n):
res = D.map(nearest_centroid).reduceByKey(lambda x,y : (x[0]+y[0],x[1]+y[1])).collect()
res = sorted(res,key = lambda x : x[0]) #sort based on cluster ID
centroids_new = np.array([x[1][0]/x[1][1] for x in res]) #divide by cluster size
if np.sum(np.absolute(centroids_new-centroids))<0.01:
break
iter_num = iter_num + 1
centroids = centroids_new
if n==1:
print("Centroids after %d iteration:" % n)
else:
print("Centroids after %d iterations:" % n)
print centroids
WSSSE = D.map(lambda line: error_p4(line, centroids)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
plot_iteration(centroids)
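# Hedged illustration of the convergence claim above (our addition; the
# variable `wsse_history` is ours). Re-run the hand-rolled K-means and
# record the WSSE after every centroid update, then plot the curve.
centroids = rand(3, 2) * 5
wsse_history = []
for i in range(30):
    res = D.map(nearest_centroid).reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1])).collect()
    res = sorted(res, key=lambda x: x[0])
    centroids = np.array([x[1][0] / x[1][1] for x in res])
    wsse_history.append(D.map(lambda line: error_p4(line, centroids)).reduce(lambda x, y: x + y))
pylab.plot(range(1, len(wsse_history) + 1), wsse_history)
pylab.xlabel('iteration')
pylab.ylabel('WSSE')
pylab.show()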
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parking orbit
Step2: Hyperbolic exit
Step3: Quoting "New Horizons Mission Design"
Step4: So it stays within the same order of magnitude, which is reasonable because real-life burns are not instantaneous.
Step5: Option b)
Step6: The hyperbolic excess velocity is measured with respect to the Earth
Step7: Which, again, stays within the same order of magnitude as the figure given in the Guo & Farquhar report.
|
<ASSISTANT_TASK:>
Python Code:
from astropy import time
from astropy import units as u
from poliastro.bodies import Sun, Earth, Jupiter
from poliastro.ephem import Ephem
from poliastro.frames import Planes
from poliastro.twobody import Orbit
from poliastro.plotting import StaticOrbitPlotter
from poliastro import iod
from poliastro.util import norm
r_p = Earth.R + 165 * u.km
r_a = Earth.R + 215 * u.km
a_parking = (r_p + r_a) / 2
ecc_parking = 1 - r_p / a_parking
parking = Orbit.from_classical(
Earth,
a_parking,
ecc_parking,
0 * u.deg,
0 * u.deg,
0 * u.deg,
0 * u.deg, # We don't mind
time.Time("2006-01-19", scale="utc"),
)
print(parking.v)
parking.plot();
C_3_A = 157.6561 * u.km ** 2 / u.s ** 2 # Designed
a_exit = -(Earth.k / C_3_A).to(u.km)
ecc_exit = 1 - r_p / a_exit
exit = Orbit.from_classical(
Earth,
a_exit,
ecc_exit,
0 * u.deg,
0 * u.deg,
0 * u.deg,
0 * u.deg, # We don't mind
time.Time("2006-01-19", scale="utc"),
)
norm(exit.v).to(u.km / u.s)
v_estimated = 16.2 * u.km / u.s
print(
"Relative error of {:.2f} %".format(
(norm(exit.v) - v_estimated) / v_estimated * 100
)
)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(8, 8))
op = StaticOrbitPlotter(ax=ax)
op.plot(parking)
op.plot(exit)
ax.set_xlim(-8000, 8000)
ax.set_ylim(-20000, 20000);
nh_date = time.Time("2006-01-19 19:00", scale="utc").tdb
nh_flyby_date = time.Time("2007-02-28 05:43:40", scale="utc").tdb
nh_tof = nh_flyby_date - nh_date
nh_r_0, v_earth = Ephem.from_body(Earth, nh_date).rv(nh_date)
nh_r_f, v_jup = Ephem.from_body(Jupiter, nh_flyby_date).rv(nh_flyby_date)
(nh_v_0, nh_v_f), = iod.lambert(Sun.k, nh_r_0, nh_r_f, nh_tof)
C_3_lambert = (norm(nh_v_0 - v_earth)).to(u.km / u.s) ** 2
C_3_lambert
print("Relative error of {:.2f} %".format((C_3_lambert - C_3_A) / C_3_A * 100))
nh = Orbit.from_vectors(Sun, nh_r_0, nh_v_0, nh_date)
op = StaticOrbitPlotter(plane=Planes.EARTH_ECLIPTIC)
op.plot_body_orbit(Jupiter, nh_flyby_date)
op.plot_body_orbit(Earth, nh_date)
op.plot(nh, label="New Horizons", color="k");
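# A quick extra check (our addition, hedged): the Lambert arrival velocity
# relative to Jupiter approximates the hyperbolic excess speed of the flyby.
v_inf_jupiter = norm(nh_v_f - v_jup).to(u.km / u.s)
v_inf_jupiter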
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hello JSON
Step2: Using requests to post data
Step3: Formatting the data correctly
Step4: Send it off to Slack
|
<ASSISTANT_TASK:>
Python Code:
from os import environ
slack_hook = environ.get('IRE_CFJ_2017_SLACK_HOOK', None)
import json
import requests
# build a dictionary of payload data
payload = {
'channel': '#general',
'username': 'IRE Python Bot',
'icon_emoji': ':ire:',
'text': 'helllllllo!'
}
# turn it into a string of JSON
payload_as_json = json.dumps(payload)
# check to see if you have the webhook URL
if slack_hook:
# send it to slack!
requests.post(slack_hook, data=payload_as_json)
else:
# if you don't have the webhook env var, print a message to the terminal
print("You don't have the IRE_CFJ_2017_SLACK_HOOK"
" environmental variable")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
Step7: Problem
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
#with is just a safe way of dealing with resources. handles correct closing if exceptions, etc
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
#-1 in reshape means 'use whatever makes sense', either flatten the whole thing or keep previous dimensions.
# here it keeps the previous dimension
print(labels.shape)
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
#the first part of this, np.arange(num_labels), creates an array that is [0,1,2...9]
#we then check, for all rows of labels, if that arange is equal to the content of the row
#so if labels[3] = 1, we get something like [false,true,false...], and this is converted to [0.0,1.0,0.0...]
    #labels[:,None] adds a trailing axis, turning the (n,) label vector into an
    #(n,1) column so the comparison broadcasts against np.arange(num_labels) into an (n,10) array
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
###################################### SKIP ##########################################
# With gradient descent training, even this much data is prohibitive. Subset the training data for faster turnaround.
train_subset = 10000
#set everything up for training
graph = tf.Graph()
with graph.as_default():
# Input data. Load the training, validation and test data into constants that are attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables. These are the parameters that we are going to be training. The weight matrix will be initialized
# using random values following a (truncated) normal distribution. The biases get initialized to zero.
weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation. We multiply the inputs with the weight matrix, and add biases.
logits = tf.matmul(tf_train_dataset, weights) + biases
#We compute the softmax and cross-entropy (it's one operation in TensorFlow, because it's very common, and it
#can be optimized). We take the average of this cross-entropy across all training examples: that's our loss.
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer. We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
###################################### SKIP ##########################################
num_steps = 801
def accuracy(predictions, labels):
#argmax returns the indices of the maximum values across dimension 1 ie colums
#predictions has shape (10000,10). this == tests if the max from predictions matches the max (ie the only non-null) label
sum_all_correct = np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
sum_all = predictions.shape[0]
return 100.0 * sum_all_correct / sum_all
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph dependencies.
print('Validation accuracy: %.1f%%' % accuracy(valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
###################################### SKIP ##########################################
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
###################################### SKIP ##########################################
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels [offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key is the placeholder node of the graph to be fed and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
import time
def accuracy(predictions, labels):
#argmax returns the indices of the maximum values across dimension 1 ie colums
#predictions has shape (10000,10). this == tests if the max from predictions matches the max (ie the only non-null) label
sum_all_correct = np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
sum_all = predictions.shape[0]
return 100.0 * sum_all_correct / sum_all
# import math
batch_size = 128 #this random number of training patterns will be used
graph = tf.Graph()
hidden1_units = 1024
#initialize everything
with graph.as_default():
# Input data. The training data is currently empty, but a random minibatch will be fed in the placeholder during training
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Input layer
weights = tf.Variable(tf.truncated_normal([image_size * image_size, hidden1_units]))
biases = tf.Variable(tf.zeros([hidden1_units]))
InputLayerOutput = tf.matmul(tf_train_dataset, weights) + biases
# 1st hidden layer
weights1= tf.Variable(tf.truncated_normal([hidden1_units, num_labels]))
biases1 = tf.Variable(tf.zeros([num_labels]))
hidden1 = tf.nn.relu(InputLayerOutput)
# Training computation.
# logits = tf.matmul(tf_train_dataset, weights) + biases
logits = tf.matmul(hidden1, weights1) + biases1
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights) + biases), weights1) + biases1)
test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights) + biases), weights1) + biases1)
#another person's code from the forum, also works. basically is the same.
# num_nodes= 1024
# batch_size = 128
# graph = tf.Graph()
# with graph.as_default():
# # Input data. For the training data, we use a placeholder that will be fed
# # at run time with a training minibatch.
# tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
# tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
# tf_valid_dataset = tf.constant(valid_dataset)
# tf_test_dataset = tf.constant(test_dataset)
# # Variables.
# weights_1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_nodes]))
# biases_1 = tf.Variable(tf.zeros([num_nodes]))
# weights_2 = tf.Variable(tf.truncated_normal([num_nodes, num_labels]))
# biases_2 = tf.Variable(tf.zeros([num_labels]))
# # Training computation.
# relu_layer=tf.nn.relu(tf.matmul(tf_train_dataset, weights_1) + biases_1)
# logits = tf.matmul(relu_layer, weights_2) + biases_2
# loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# # Optimizer.
# optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# # Predictions for the training, validation, and test data.
# train_prediction = tf.nn.softmax(logits)
# valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights_1) + biases_1), weights_2) + biases_2)
# test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights_1) + biases_1), weights_2) + biases_2)
#train the thing
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
start_time = time.clock()
print("Initialized")
for step in range(num_steps):
# Generate a minibatch by pick an offset within the (randomized) training data. Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels [offset:(offset + batch_size), :]
# Dictionary telling the session where to feed the minibatch. Keys are the placeholder nodes and the value are the numpy arrays.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
# Run the thing
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
end_time = time.clock()
print("Whole thing took: ", end_time - start_time)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preliminary analysis
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import bokeh.plotting as bkp
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
# read in readmissions data provided
hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')
# deal with missing and inconvenient portions of data
clean_hospital_read_df = hospital_read_df[(hospital_read_df['Number of Discharges'] != 'Not Available')]
clean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)
clean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')
# generate a scatterplot for number of discharges vs. excess rate of readmissions
# lists work better with matplotlib scatterplot function
x = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]
y = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x, y,alpha=0.2)
ax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)
ax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)
ax.set_xlim([0, max(x)])
ax.set_xlabel('Number of discharges', fontsize=12)
ax.set_ylabel('Excess rate of readmissions', fontsize=12)
ax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14)
ax.grid(True)
fig.tight_layout()
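# Hedged follow-up to the scatterplot (our addition): quantify the apparent
# trend with a Pearson correlation, dropping NaN readmission ratios first.
xy = np.array([(a, b) for a, b in zip(x, y) if not np.isnan(b)])
r = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]
print('Pearson r = %.3f' % r)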
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Display some image samples using matplotlib.pyplot
Step2: 3) (If necessary) Reduce the number of training images (for quick training and small-GPU computer)
Step3: 3. Fully-connected NNs
Step4: Convert the data to float and scale the values between 0 to 1
Step5: 2) Define a 2-layer fully connected NN with 256 neurons for the 1st layer and 512 neurons for the 2nd layer. Use the 'sigmoid' activation function for the 2 network layers and 'softmax' for the final output layer. Show the model summary.
Step6: 3) Compile and train the model. Report the training time
Step7: 4) test the model
Step8: 4. CNNs
Step9: 4) Now, add one more CONV layer followed by on Max pooling. The number of filters and filter size to be chosen by yourself.
Step10: 5. CNNs (advanced)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import keras
from keras.datasets import mnist
# fix random seed for reproducibility
np.random.seed(7)
# load data
(x_train,y_train), (x_test,y_test) = mnist.load_data()
# show the training size, test size, number of class
print("train size : ",x_train.shape)
print("test size : ",x_test.shape)
print("train label : ",y_train.shape)
print("test label : ",y_test.shape)
nclass = len(np.unique(y_train))
print("number of classes:",nclass)
%matplotlib inline
import matplotlib.pyplot as plt
# to complete
for i in range(0,9):
plt.subplot(3, 3, i+1)
plt.imshow(x_train[i], cmap=plt.get_cmap('gray')); plt.axis('off')
#x_train = x_train[0:30000,:]
#y_train = y_train[0:30000]
print("train size : ",x_train.shape)
print("train label : ",y_train.shape)
x_train = x_train.reshape(x_train.shape[0], 28*28)
x_test = x_test.reshape(x_test.shape[0], 28*28)
print("train size : ",x_train.shape)
print("test size : ",x_test.shape)
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
from keras.utils import np_utils
y_train_cat = np_utils.to_categorical(y_train, nclass)
y_test_cat = np_utils.to_categorical(y_test, nclass)
y_train_cat.shape
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
# define the model
model_nn = Sequential()
model_nn.add(Dense(256, input_shape=(784,),activation='sigmoid'))
model_nn.add(Dense(512, activation='sigmoid'))
model_nn.add(Dense(10,activation='softmax')) #Last layer has nclass nodes
model_nn.summary()
import time
# compile the model
model_nn.compile(loss='categorical_crossentropy', optimizer =RMSprop(lr=0.001), metrics=["accuracy"])
# train the model
start_t_mod= time.time()
model_nn.fit(x_train, y_train_cat, batch_size=500, epochs = 20)
finish_t_mod = time.time()
train_time = finish_t_mod - start_t_mod  # don't shadow the `time` module
print("training time :", train_time)
# test the model
#
score = model_nn.evaluate(x_test, y_test_cat)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Dense, Activation, Flatten
import time
(x_train,y_train), (x_test,y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0],28,28,1)
x_test = x_test.reshape(x_test.shape[0],28,28,1)
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
y_train_cat = np_utils.to_categorical(y_train, nclass)
y_test_cat = np_utils.to_categorical(y_test, nclass)
# design the model
model_cnn = Sequential()
model_cnn.add(Conv2D(32,(5,5),input_shape=(28,28,1), activation='relu'))
model_cnn.add(MaxPooling2D(pool_size=(2,2)))
model_cnn.add(Flatten())
model_cnn.add(Dense(128,activation='relu'))
model_cnn.add(Dense(10,activation='softmax'))
model_cnn.summary()
# compile the model
model_cnn.compile(loss='categorical_crossentropy',metrics=['accuracy'],optimizer=RMSprop(lr=0.001))
# train the model
start_t= time.time()
model_cnn.fit(x_train, y_train_cat, batch_size=128, epochs=5)
finish_t = time.time()
time_cnn = finish_t - start_t
print("training time :", time_cnn)
# test the model
score = model_cnn.evaluate(x_test, y_test_cat)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# design new model
# design the model
model_cnn = Sequential()
model_cnn.add(Conv2D(32,(5,5),input_shape=(28,28,1), activation='relu'))
model_cnn.add(MaxPooling2D(pool_size=(2,2)))
model_cnn.add(Conv2D(32,(5,5), activation='relu'))
model_cnn.add(MaxPooling2D(pool_size=(2,2)))
model_cnn.add(Flatten())
model_cnn.add(Dense(128,activation='relu'))
model_cnn.add(Dense(10,activation='softmax'))
model_cnn.summary()
# compile the model
# train the model
# test the model
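# One possible completion of the stubs above (a hedged sketch mirroring the
# earlier single-conv cell; optimizer and batch size are our assumptions):
model_cnn.compile(loss='categorical_crossentropy', metrics=['accuracy'],
                  optimizer=RMSprop(lr=0.001))
start_t = time.time()
model_cnn.fit(x_train, y_train_cat, batch_size=128, epochs=5)
print("training time :", time.time() - start_t)
score = model_cnn.evaluate(x_test, y_test_cat)
print('Test loss:', score[0])
print('Test accuracy:', score[1])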
# Batch normalization
from keras.layers import BatchNormalization
# Drop out
from keras.layers import Dropout
# design new model
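# A hedged sketch of one possible "advanced" CNN with BatchNormalization and
# Dropout (layer sizes and the dropout rate are our assumptions):
model_adv = Sequential()
model_adv.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
model_adv.add(BatchNormalization())
model_adv.add(MaxPooling2D(pool_size=(2, 2)))
model_adv.add(Conv2D(64, (3, 3), activation='relu'))
model_adv.add(BatchNormalization())
model_adv.add(MaxPooling2D(pool_size=(2, 2)))
model_adv.add(Flatten())
model_adv.add(Dense(128, activation='relu'))
model_adv.add(Dropout(0.25))
model_adv.add(Dense(10, activation='softmax'))
model_adv.summary()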
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Are we underfitting?
Step2: ...and load our fine-tuned weights.
Step3: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer
Step4: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
Step5: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
Step6: And fit the model in the usual way
Step7: Reducing overfitting
Step8: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
Step9: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
Step10: Adding data augmentation
Step11: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Step12: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
Step13: Batch normalization
|
<ASSISTANT_TASK:>
Python Code:
from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
batch_size=64
model = vgg_ft(2)
model.load_weights(model_path+'finetune3.h5')
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:]
batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to half the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc_model = get_fc_model()
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
    height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('cat.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(1000, activation='softmax')
]
p=0.6
bn_model = Sequential(get_bn_layers(0.6))
bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5')
def proc_wgts(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6))
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
final_model.optimizer.lr=0.001
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final3.h5')
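# Optional (our addition): evaluate the fine-tuned model on the validation set.
final_model.evaluate_generator(val_batches, val_batches.nb_sample)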
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def calculate_minimum_split(a, k):
    # Greedy idea: splitting a sorted array into k contiguous pieces removes
    # k-1 consecutive gaps from the total span a[n-1] - a[0]; to minimize the
    # result, drop the k-1 largest gaps. (Assumes `a` is sorted ascending.)
    p = []
    n = len(a)
    for i in range(1, n):
        p.append(a[i] - a[i - 1])
    p.sort(reverse=True)
    min_sum = sum(p[:k - 1])
    res = a[n - 1] - a[0] - min_sum
    return res

# Driver code
if __name__ == "__main__":
    arr = [4, 8, 15, 16, 23, 42]
    K = 3
    print(calculate_minimum_split(arr, K))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unzipping files with house sales data
Step2: Loading Sales data, Sales Training data, and Sales Test data
Step3: Learning a multiple regression model
Step4: Now, creating a list of the features we are interested in, the feature matrix, and the output vector.
Step5: Creating a Linear Regression Object for Sklearn library and using the feature matrix and output vector to perform linear regression.
Step6: Now that we have fitted the model we can extract the regression weights (coefficients)
Step7: Making Predictions
Step8: Compute RSS
Step9: Create some new features
Step10: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
Step11: Learning Multiple Models
Step12: Now that you have the features, learn the weights for the three different models for predicting target = 'price' and look at the value of the weights/coefficients
Step13: Now, Examine/extract each model's coefficients
Step14: Quiz Question
Step15: Quiz Question
Step16: Think about what this means
Step17: Quiz Question
|
<ASSISTANT_TASK:>
Python Code:
import os
import zipfile
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
# Put files in current direction into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filenames of unzipped files
unzip_files = ['kc_house_train_data.csv','kc_house_test_data.csv', 'kc_house_data.csv']
# If upzipped file not in files_list, unzip the file
for filename in unzip_files:
if filename not in files_list:
zip_file = filename + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
unzipping.close
# Dictionary with the correct dtypes for the DataFrame columns
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int,
'sqft_living15':float, 'grade':int, 'yr_renovated':int,
'price':float, 'bedrooms':float, 'zipcode':str,
'long':float, 'sqft_lot15':float, 'sqft_living':float,
'floors':str, 'condition':int, 'lat':float, 'date':str,
'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
# Loading sales data, sales training data, and test_data into DataFrames
sales = pd.read_csv('kc_house_data.csv', dtype = dtype_dict)
train_data = pd.read_csv('kc_house_train_data.csv', dtype = dtype_dict)
test_data = pd.read_csv('kc_house_test_data.csv', dtype = dtype_dict)
# Looking at head of training data DataFrame
train_data.head()
plt.figure(figsize=(8,6))
plt.plot(train_data['sqft_living'], train_data['price'],'.')
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.show()
plt.figure(figsize=(12,8))
plt.subplot(1, 2, 1)
plt.plot(train_data['bedrooms'], train_data['price'],'.')
plt.xlabel('# Bedrooms', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.subplot(1, 2, 2)
plt.plot(train_data['bathrooms'], train_data['price'],'.')
plt.xlabel('# Bathrooms', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.show()
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
X_multi_lin_reg = train_data[example_features]
y_multi_lin_reg = train_data['price']
example_model = LinearRegression()
example_model.fit(X_multi_lin_reg, y_multi_lin_reg)
# printing the intercept and coefficients
print example_model.intercept_
print example_model.coef_
# Putting the intercept and weights from the multiple linear regression into a Series
example_weight_summary = pd.Series( [example_model.intercept_] + list(example_model.coef_),
index = ['intercept'] + example_features )
print example_weight_summary
example_predictions = example_model.predict(X_multi_lin_reg)
print example_predictions[0] # should be close to 271789.505878
def get_residual_sum_of_squares(model, data, outcome):
# - data holds the data points with the features (columns) we are interested in performing a linear regression fit
# - model holds the linear regression model obtained from fitting to the data
# - outcome is the y, the observed house price for each data point
# By using the model and applying predict on the data, we return a numpy array which holds
# the predicted outcome (house price) from the linear regression model
model_predictions = model.predict(data)
# Computing the residuals between the predicted house price and the actual house price for each data point
residuals = outcome - model_predictions
# To get RSS, square the residuals and add them up
RSS = sum(residuals*residuals)
return(RSS)
# Creating new 'bedrooms_squared' feature
train_data['bedrooms_squared'] = train_data['bedrooms']*train_data['bedrooms']
test_data['bedrooms_squared'] = test_data['bedrooms']*test_data['bedrooms']
# Creating new 'bed_bath_rooms' feature
train_data['bed_bath_rooms'] = train_data['bedrooms']*train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms']*test_data['bathrooms']
# Creating new 'log_sqft_living' feature
train_data['log_sqft_living'] = np.log(train_data['sqft_living'])
test_data['log_sqft_living'] = np.log(test_data['sqft_living'])
# Creating new 'lat_plus_long' feature
train_data['lat_plus_long'] = train_data['lat'] + train_data['long']
test_data['lat_plus_long'] = test_data['lat'] + test_data['long']
# Displaying head of train_data DataFrame and test_data DataFrame to verify that new features are present
train_data.head()
test_data.head()
print "Mean of Test data 'bedrooms_squared' feature: %.2f " % np.mean(test_data['bedrooms_squared'].values)
print "Mean of Test data 'bed_bath_rooms' feature: %.2f " % np.mean(test_data['bed_bath_rooms'].values)
print "Mean of Test data 'log_sqft_living' feature: %.2f " % np.mean(test_data['log_sqft_living'].values)
print "Mean of Test data 'lat_plus_long' feature: %.2f " % np.mean(test_data['lat_plus_long'].values)
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
# Creating a LinearRegression Object for Model 1 and learning the multiple linear regression model
model_1 = LinearRegression()
model_1.fit(train_data[model_1_features], train_data['price'])
# Creating a LinearRegression Object for Model 2 and learning the multiple linear regression model
model_2 = LinearRegression()
model_2.fit(train_data[model_2_features], train_data['price'])
# Creating a LinearRegression Object for Model 3 and learning the multiple linear regression model
model_3 = LinearRegression()
model_3.fit(train_data[model_3_features], train_data['price'])
# Putting the Model 1 intercept and weights from the multiple linear regression for the 3 models into a Series
model_1_summary = pd.Series( [model_1.intercept_] + list(model_1.coef_),
index = ['intercept'] + model_1_features , name='Model 1 Coefficients' )
print model_1_summary
# Putting the Model 2 intercept and weights from the multiple linear regression for the 3 models into a Series
model_2_summary = pd.Series( [model_2.intercept_] + list(model_2.coef_),
index = ['intercept'] + model_2_features , name='Model 2 Coefficients' )
print model_2_summary
# Putting the Model 3 intercept and weights from the multiple linear regression for the 3 models into a Series
model_3_summary = pd.Series( [model_3.intercept_] + list(model_3.coef_),
index = ['intercept'] + model_3_features , name='Model 3 Coefficients' )
print model_3_summary
print "Positive: ", model_1_summary['bathrooms']
print "Negative: ", model_2_summary['bathrooms']
# Compute the RSS on TRAINING data for each of the three models and record the values:
rss_model_1_train = get_residual_sum_of_squares(model_1, train_data[model_1_features], train_data['price'])
rss_model_2_train = get_residual_sum_of_squares(model_2, train_data[model_2_features], train_data['price'])
rss_model_3_train = get_residual_sum_of_squares(model_3, train_data[model_3_features], train_data['price'])
print "RSS for Model 1 Training Data: ", rss_model_1_train
print "RSS for Model 2 Training Data: ", rss_model_2_train
print "RSS for Model 3 Training Data: ", rss_model_3_train
# Compute the RSS on TESTING data for each of the three models and record the values:
rss_model_1_test = get_residual_sum_of_squares(model_1, test_data[model_1_features], test_data['price'])
rss_model_2_test = get_residual_sum_of_squares(model_2, test_data[model_2_features], test_data['price'])
rss_model_3_test = get_residual_sum_of_squares(model_3, test_data[model_3_features], test_data['price'])
print "RSS for Model 1 Test Data: ", rss_model_1_test
print "RSS for Model 2 Test Data: ", rss_model_2_test
print "RSS for Model 3 T Data: ", rss_model_3_test
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It seems that having s as a big string and repeatedly converting digits to ints is wasteful, so I convert s to be a list of ints and repeat the above adjusted for dealing with a list of ints.
Step2: Having s be a list of ints made the code faster and easier to read.
Step3: The while stuff is very un-Pythonic, but fast.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import string
import operator
from functools import reduce
s = '''
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
'''
s = ''.join(c for c in s if c in string.digits)
len(str(s)), s
def foo(s, n):
biggest_product = 0
for i in range(len(s) - n + 1):
t = s[i:i+n]
product = reduce(operator.mul, map(int, t))
if product > biggest_product:
biggest_product = product
return biggest_product
foo(s, 4)
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
adjacent_digits = (s[i:i+n] for i in range(len(s) - n + 1))
products = (
reduce(operator.mul, map(int, t))
for t in adjacent_digits)
return max(products)
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
return max(
reduce(operator.mul, map(int, s[i:i+n]))
for i in range(len(s) - n + 1))
n = 13
%timeit foo(s, n)
foo(s, n)
s = list(map(int, s))
def foo(s, n):
biggest_product = 0
for i in range(len(s) - n + 1):
t = s[i:i+n]
product = reduce(operator.mul, t)
if product > biggest_product:
biggest_product = product
return biggest_product
foo(s, 4)
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
adjacent_digits = (s[i:i+n] for i in range(len(s) - n + 1))
products = (
reduce(operator.mul, t)
for t in adjacent_digits)
return max(products)
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
return max(
reduce(operator.mul, s[i:i+n])
for i in range(len(s) - n + 1))
n = 13
%timeit foo(s, n)
foo(s, n)
# Keep a running product,
# so that one only needs to
# 1. divide out the digit that is "leaving"
# 2. multiply by the new digit.
#
# Handling zeroes makes the code complicated.
def foo(s, n):
biggest_product = 0
need_to_recalculate = True
for i in range(len(s) - n + 1):
t = s[i:i+n]
if need_to_recalculate:
product = reduce(operator.mul, t)
else:
product *= t[-1]
if product > biggest_product:
biggest_product = product
if product == 0:
need_to_recalculate = True
else:
product //=t[0]
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
# When a zero digit is encountered,
# skip over the subsequences that would include it.
def foo(s, n):
biggest_product = 0
for i in range(len(s) - n + 1):
if all(s[i:i+n]):
break
while i < len(s) - n + 1:
if s[i+n-1] == 0:
i += n
continue
product = reduce(operator.mul, s[i:i+n])
if product > biggest_product:
biggest_product = product
i += 1
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
# When a zero digit is encountered,
# skip over the subsequences that would include it.
#
# Avoid the special case code that frets over
# a zero in the initial subsequence.
def foo(s, n):
biggest_product = 0
i = 0
while i < len(s) - n + 1:
if s[i+n-1] == 0:
i += n
continue
product = reduce(operator.mul, s[i:i+n])
if product > biggest_product:
biggest_product = product
i += 1
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
biggest_product = 0
iter_i = iter(range(len(s) - n + 1))
for i in iter_i:
if s[i+n-1] == 0:
[next(iter_i) for _ in range(n-1)]
continue
product = reduce(operator.mul, s[i:i+n])
if product > biggest_product:
biggest_product = product
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
# Put the repeated next(iter_i) in a for loop
# instead of a comprehension.
#
# I am surprised that it is faster.
# It it very ugly also, although probably a little bit clearer.
def foo(s, n):
biggest_product = 0
iter_i = iter(range(len(s) - n + 1))
for i in iter_i:
if s[i+n-1] == 0:
# Skip over the subsequences that include this zero digit.
# Want to do i += n.
for _ in range(n-1):
next(iter_i)
continue
product = reduce(operator.mul, s[i:i+n])
if product > biggest_product:
biggest_product = product
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interactive programming
Step2: Answer 2
Step3: Answer 3
Step4: Answer 4
Step5: Answer 5
|
<ASSISTANT_TASK:>
Python Code:
import pyspark
sc = pyspark.SparkContext(appName="my_spark_app")
sc
## just check that the sc variable is not empty
print("is SparkContext loaded?", sc != '')
rdd = sc.parallelize([x for x in range(1000)],20)
rdd.getNumPartitions()
rdd.take(5)
rdd.setName("my_rdd").persist(pyspark.StorageLevel.MEMORY_AND_DISK_SER)
rdd.map(lambda r: (round(r/100)*100, 1))\
.reduceByKey(lambda x,y: x+y)\
.collect()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel
Step2: Before you begin
Step3: Get your project number
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Import libraries and define constants
Step11: Initialize Vertex AI SDK for Python
Step12: Set machine type
Step13: Introduction to Two-Tower algorithm
Step14: Train on Vertex AI Training with CPU
Step15: Prepare your disk specification
Step16: Define the worker pool specification
Step17: Create a custom job
Step18: Execute the custom job
Step19: View output
Step20: Train on Vertex AI Training with GPU
Step21: Create and execute the custom job
Step22: View output
Step23: Train on Vertex AI Training with TFRecords
Step24: Create and execute the custom job
Step25: View output
Step26: Tensorboard
Step27: Hyperparameter tuning
Step28: Run the hyperparameter tuning job
Step29: Display the hyperparameter tuning job trial results
Step30: Best trial
Step31: Delete the hyperparameter tuning job
Step32: View output
Step33: Upload the model to Vertex AI Model resource
Step34: Deploy the model to Vertex AI Endpoint
Step35: Deploying Model resources to an Endpoint resource.
Step36: Creating embeddings
Step37: Make an online prediction with gcloud
Step38: Make a batch prediction
Step39: Send the prediction request
Step40: Get the predicted embeddings
Step41: Save the embeddings in JSONL format
Step42: Store the JSONL formatted embeddings in Cloud Storage
Step43: Create Matching Engine Index
Step44: Setup VPC peering network
Step45: Create the VPC connection
Step46: Check the status of your peering connections.
Step47: Construct the full network name
Step48: Create an IndexEndpoint with VPC Network
Step49: Deploy the Matching Engine Index to the Index Endpoint resource
Step50: Create and execute an online query
Step51: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade tensorflow -q
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform tensorboard-plugin-profile -q
! gcloud components update --quiet
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
shell_output = ! gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'
PROJECT_NUMBER = shell_output[0]
print("Project Number:", PROJECT_NUMBER)
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = "gs://" + BUCKET_NAME
! gsutil mb -l $REGION $BUCKET_URI
! gsutil ls -al $BUCKET_URI
import os
from google.cloud import aiplatform
%load_ext tensorboard
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
DATASET_NAME = "movielens_100k" # Change to your dataset name.
# Change to your data and schema paths. These are paths to the movielens_100k
# sample data.
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/training_data/*"
INPUT_SCHEMA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/input_schema.json"
# URI of the two-tower training Docker image.
LEARNER_IMAGE_URI = "us-docker.pkg.dev/vertex-ai-restricted/builtin-algorithm/two-tower"
# Change to your output location.
OUTPUT_DIR = f"{BUCKET_URI}/experiment/output"
TRAIN_BATCH_SIZE = 100 # Batch size for training.
NUM_EPOCHS = 3 # Number of epochs for training.
print(f"Dataset name: {DATASET_NAME}")
print(f"Training data path: {TRAINING_DATA_PATH}")
print(f"Input schema path: {INPUT_SCHEMA_PATH}")
print(f"Output directory: {OUTPUT_DIR}")
print(f"Train batch size: {TRAIN_BATCH_SIZE}")
print(f"Number of epochs: {NUM_EPOCHS}")
TRAIN_COMPUTE = "n1-standard-8"
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
JOB_NAME = "twotowers_cpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
f"--train_batch_size={TRAIN_BATCH_SIZE}",
f"--num_epochs={NUM_EPOCHS}",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
job = aiplatform.CustomJob(
display_name="twotower_cpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
job.run()
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}/*
JOB_NAME = "twotowers_gpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
TRAIN_COMPUTE = "n1-highmem-4"
TRAIN_GPU = "NVIDIA_TESLA_K80"
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": 1,
}
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
"--training_steps_per_epoch=1500",
"--eval_steps_per_epoch=1500",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
job = aiplatform.CustomJob(
display_name="twotower_cpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
job.run()
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}/*
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/tfrecord/*"
JOB_NAME = "twotowers_tfrec_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
TRAIN_COMPUTE = "n1-standard-8"
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
f"--train_batch_size={TRAIN_BATCH_SIZE}",
f"--num_epochs={NUM_EPOCHS}",
"--input_file_format=tfrecord",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
job = aiplatform.CustomJob(
display_name="twotower_cpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
job.run()
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}
try:
TENSORBOARD_DIR = os.path.join(OUTPUT_DIR, "tensorboard")
%tensorboard --logdir {TENSORBOARD_DIR}
except Exception as e:
print(e)
from google.cloud.aiplatform import hyperparameter_tuning as hpt
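# The hyperparameter tuning job wraps the custom training job defined above
# and searches the parameter space below, maximizing the reported 'val_auc'
# metric over at most eight trials.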
hpt_job = aiplatform.HyperparameterTuningJob(
display_name="twotowers_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_auc": "maximize",
},
parameter_spec={
"learning_rate": hpt.DoubleParameterSpec(min=0.0001, max=0.1, scale="log"),
"num_hidden_layers": hpt.IntegerParameterSpec(min=0, max=2, scale="linear"),
"num_nodes_hidden_layer1": hpt.IntegerParameterSpec(
min=1, max=128, scale="log"
),
"num_nodes_hidden_layer2": hpt.IntegerParameterSpec(
min=1, max=128, scale="log"
),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
hpt_job.run()
print(hpt_job.trials)
best = (None, None, None, 0.0)
for trial in hpt_job.trials:
# Keep track of the best outcome
if float(trial.final_measurement.metrics[0].value) > best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
print(best)
hpt_job.delete()
BEST_MODEL = OUTPUT_DIR + "/trial_" + best[0]
! gsutil ls {BEST_MODEL}
# The following imports the query (user) encoder model.
MODEL_TYPE = "query"
# Use the following instead to import the candidate (movie) encoder model.
# MODEL_TYPE = 'candidate'
DISPLAY_NAME = f"{DATASET_NAME}_{MODEL_TYPE}" # The display name of the model.
MODEL_NAME = f"{MODEL_TYPE}_model" # Used by the deployment container.
model = aiplatform.Model.upload(
display_name=DISPLAY_NAME,
artifact_uri=BEST_MODEL,
serving_container_image_uri="us-central1-docker.pkg.dev/cloud-ml-algos/two-tower/deploy",
serving_container_health_route=f"/v1/models/{MODEL_NAME}",
serving_container_predict_route=f"/v1/models/{MODEL_NAME}:predict",
serving_container_environment_variables={
"MODEL_BASE_PATH": "$(AIP_STORAGE_URI)",
"MODEL_NAME": MODEL_NAME,
},
)
endpoint = aiplatform.Endpoint.create(display_name=DATASET_NAME)
response = endpoint.deploy(
model=model,
deployed_model_display_name=DISPLAY_NAME,
machine_type=DEPLOY_COMPUTE,
traffic_split={"0": 100},
)
print(endpoint)
# Input items for the query model:
input_items = [
{"data": '{"user_id": ["1"]}', "key": "key1"},
{"data": '{"user_id": ["2"]}', "key": "key2"},
]
# Input items for the candidate model:
# input_items = [{
# 'data' : '{"movie_id": ["1"], "movie_title": ["fake title"]}',
# 'key': 'key1'
# }]
encodings = endpoint.predict(input_items)
print(f"Number of encodings: {len(encodings.predictions)}")
print(encodings.predictions[0]["encoding"])
import json
request = json.dumps({"instances": input_items})
with open("request.json", "w") as writer:
writer.write(f"{request}\n")
ENDPOINT_ID = endpoint.resource_name
! gcloud ai endpoints predict {ENDPOINT_ID} \
--region={REGION} \
--json-request=request.json
QUERY_EMBEDDING_PATH = f"{BUCKET_URI}/embeddings/train.jsonl"
import tensorflow as tf
with tf.io.gfile.GFile(QUERY_EMBEDDING_PATH, "w") as f:
for i in range(0, 1000):
query = {"data": '{"user_id": ["' + str(i) + '"]}', "key": f"key{i}"}
f.write(json.dumps(query) + "\n")
print("\nNumber of embeddings: ")
! gsutil cat {QUERY_EMBEDDING_PATH} | wc -l
MIN_NODES = 1
MAX_NODES = 4
batch_predict_job = model.batch_predict(
job_display_name=f"batch_predict_{DISPLAY_NAME}",
gcs_source=[QUERY_EMBEDDING_PATH],
gcs_destination_prefix=f"{BUCKET_URI}/embeddings/output",
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
result_files = []
for prediction_result in prediction_results:
result_file = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
result_files.append(result_file)
print(result_files)
embeddings = []
for result_file in result_files:
with tf.io.gfile.GFile(result_file, "r") as f:
instances = list(f)
for instance in instances:
instance = instance.replace('\\"', "'")
result = json.loads(instance)
prediction = result["prediction"]
key = prediction["key"][3:]
encoding = prediction["encoding"]
embedding = {"id": key, "embedding": encoding}
embeddings.append(embedding)
print("Number of embeddings", len(embeddings))
print("Encoding Dimensions", len(embeddings[0]["embedding"]))
print("Example embedding", embeddings[0])
with open("embeddings.json", "w") as f:
for i in range(len(embeddings)):
        # Matching Engine expects one valid JSON object per line, so keep the
        # double quotes produced by json.dumps() rather than swapping them out.
        f.write(json.dumps(embeddings[i]))
f.write("\n")
! head -n 2 embeddings.json
EMBEDDINGS_URI = f"{BUCKET_URI}/embeddings/twotower/"
! gsutil cp embeddings.json {EMBEDDINGS_URI}
DIMENSIONS = len(embeddings[0]["embedding"])
DISPLAY_NAME = "movies"
tree_ah_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
display_name=DISPLAY_NAME,
contents_delta_uri=EMBEDDINGS_URI,
dimensions=DIMENSIONS,
approximate_neighbors_count=50,
distance_measure_type="DOT_PRODUCT_DISTANCE",
description="Two tower generated embeddings",
labels={"label_name": "label_value"},
# TreeAH specific parameters
leaf_node_embedding_count=100,
leaf_nodes_to_search_percent=7,
)
INDEX_RESOURCE_NAME = tree_ah_index.resource_name
print(INDEX_RESOURCE_NAME)
# This is for display only; you can name the range anything.
PEERING_RANGE_NAME = "vertex-ai-prediction-peering-range"
NETWORK = "default"
# NOTE: `prefix-length=16` means a CIDR block with mask /16 will be
# reserved for use by Google services, such as Vertex AI.
! gcloud compute addresses create $PEERING_RANGE_NAME \
--global \
--prefix-length=16 \
--description="peering range for Google service" \
--network=$NETWORK \
--purpose=VPC_PEERING
! gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--network=$NETWORK \
--ranges=$PEERING_RANGE_NAME \
--project=$PROJECT_ID
! gcloud compute networks peerings list --network $NETWORK
full_network_name = f"projects/{PROJECT_NUMBER}/global/networks/{NETWORK}"
index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create(
display_name="index_endpoint_for_demo",
description="index endpoint description",
network=full_network_name,
)
INDEX_ENDPOINT_NAME = index_endpoint.resource_name
print(INDEX_ENDPOINT_NAME)
DEPLOYED_INDEX_ID = "tree_ah_twotower_deployed_" + TIMESTAMP
MIN_NODES = 1
MAX_NODES = 2
DEPLOY_COMPUTE = "n1-standard-16"
index_endpoint.deploy_index(
display_name="deployed_index_for_demo",
index=tree_ah_index,
deployed_index_id=DEPLOYED_INDEX_ID,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
print(index_endpoint.deployed_indexes)
# The number of nearest neighbors to be retrieved from database for each query.
NUM_NEIGHBOURS = 10
# Test query
queries = [embeddings[0]["embedding"], embeddings[1]["embedding"]]
matches = index_endpoint.match(
deployed_index_id=DEPLOYED_INDEX_ID, queries=queries, num_neighbors=NUM_NEIGHBOURS
)
for instance in matches:
print("INSTANCE")
for match in instance:
print(match)
# Delete endpoint resource
endpoint.delete(force=True)
# Delete model resource
model.delete()
# Force undeployment of indexes and delete endpoint
try:
index_endpoint.delete(force=True)
except Exception as e:
print(e)
# Delete indexes
try:
tree_ah_index.delete()
except Exception as e:
print(e)
# Delete Cloud Storage objects that were created
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil -m rm -r $OUTPUT_DIR
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
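# Sum of the k-th group when the odd numbers are split into groups of sizes
# 1, 2, 3, ...: the group starts at k*(k-1) + 1 and its sum is always k**3.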
def kthgroupsum(k):
    cur = k * (k - 1) + 1
    total = 0
    while k:
        total += cur
        cur += 2
        k -= 1
    return total

k = 3
print(kthgroupsum(k))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scheduling building blocks
Step2: This code creates a CP model container that allows the use of constraints that are specific to constraint programming or to
Step3: Adding the constraints
Step4: Here, the special constraint end_before_start() ensures that one interval variable ends before the other starts.
Step5: Displaying the solution
Step6: To understand the solution found by CP Optimizer to this satisfiability scheduling problem, consider the line
Step7: Note on interval variables
Step8: The declaration of necessary interval variables is done as follows
Step9: Step 3
Step10: To model the cost for starting a task earlier than the preferred starting date, the expression start_of() can be used.
Step 4
Step11: Solving a problem consists of finding a value for each decision variable so that all constraints are satisfied.
Step 5
Step12: Graphical display of the same result is available with
Step13: The overall cost is 5000 and moving will be completed by day 110.
Step14: One part of the objective is based on the time it takes to build a house.
Step 3
Step15: Step 4
Step16: Step 5
Step17: To model the cost associated with the length of time it takes to build a single house, the interval variable associated with the house is constrained to start at the start of the first task of the house and end at the end of the last task.
Step 6
Step18: Step 7
Step19: Each of the tasks requires a particular worker.
Step 8
Step20: Step 9
Step21: The cost for building a house is the sum of the tardiness cost and the number of days it takes from start to finish building the house.
Step 10
Step22: Step 11
Step23: Chapter 4. Adding calendars to the house building problem
Step24: Step 3
Step25: This intensity function is used in creating the task variables for the workers.
Step 4
Step26: The tasks of the house building project have precedence constraints that are added to the model.
Step 5
Step27: Step 6
Step28: Step 7
Step29: Step 8
Step30: Step 9
Step31: Graphical output of the solution is given by
Step32: Chapter 5. Using cumulative functions in the house building problem
Step33: Step 3
Step34: As the workers are equivalent in this problem, it is better to represent them as one pool of workers, rather than as individual workers with separate no-overlap constraints, as was done in the earlier examples.
Step 4
Step35: Step 5
Step36: Step 6
Step37: Step 7
Step38: Step 8
Step39: Step 9
Step40: Step 10
Step41: Chapter 6. Using alternative resources in the house building problem
Step42: Step 3
Step43: Step 4
Step44: Step 5
Step45: The expression presence_of() is used to represent whether a task is performed by a worker.
Step46: Step 7
Step47: Step 8
Step48: Step 9
Step49: Chapter 7. Using state functions
Step50: Step 3
Step51: Step 4
Step52: Step 5
Step53: Step 6
Step54: Step 7
Step55: Step 8
Step56: Step 9
|
<ASSISTANT_TASK:>
Python Code:
url = None
key = None
import sys
from docplex.cp.model import *
mdl0 = CpoModel()
masonry = mdl0.interval_var(size=35)
carpentry = mdl0.interval_var(size=15)
plumbing = mdl0.interval_var(size=40)
ceiling = mdl0.interval_var(size=15)
roofing = mdl0.interval_var(size=5)
painting = mdl0.interval_var(size=10)
windows = mdl0.interval_var(size=5)
facade = mdl0.interval_var(size=10)
garden = mdl0.interval_var(size=5)
moving = mdl0.interval_var(size=5)
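# Precedence constraints: each end_before_start(a, b) below forces task a to
# finish before task b can start.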
mdl0.add( mdl0.end_before_start(masonry, carpentry) )
mdl0.add( mdl0.end_before_start(masonry, plumbing) )
mdl0.add( mdl0.end_before_start(masonry, ceiling) )
mdl0.add( mdl0.end_before_start(carpentry, roofing) )
mdl0.add( mdl0.end_before_start(ceiling, painting) )
mdl0.add( mdl0.end_before_start(roofing, windows) )
mdl0.add( mdl0.end_before_start(roofing, facade) )
mdl0.add( mdl0.end_before_start(plumbing, facade) )
mdl0.add( mdl0.end_before_start(roofing, garden) )
mdl0.add( mdl0.end_before_start(plumbing, garden) )
mdl0.add( mdl0.end_before_start(windows, moving) )
mdl0.add( mdl0.end_before_start(facade, moving) )
mdl0.add( mdl0.end_before_start(garden, moving) )
mdl0.add( mdl0.end_before_start(painting, moving) )
# Solve the model
print("\nSolving model....")
msol0 = mdl0.solve(url=url, key=key, TimeLimit=10)
print("done")
var_sol = msol0.get_var_solution(masonry)
print("Masonry : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(carpentry)
print("Carpentry : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(plumbing)
print("Plumbing : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(ceiling)
print("Ceiling : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(roofing)
print("Roofing : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(painting)
print("Painting : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(windows)
print("Windows : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(facade)
print("Facade : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(moving)
print("Moving : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
wt = msol0.get_var_solution(masonry)
visu.interval(wt, 'lightblue', 'masonry')
wt = msol0.get_var_solution(carpentry)
visu.interval(wt, 'lightblue', 'carpentry')
wt = msol0.get_var_solution(plumbing)
visu.interval(wt, 'lightblue', 'plumbing')
wt = msol0.get_var_solution(ceiling)
visu.interval(wt, 'lightblue', 'ceiling')
wt = msol0.get_var_solution(roofing)
visu.interval(wt, 'lightblue', 'roofing')
wt = msol0.get_var_solution(painting)
visu.interval(wt, 'lightblue', 'painting')
wt = msol0.get_var_solution(windows)
visu.interval(wt, 'lightblue', 'windows')
wt = msol0.get_var_solution(facade)
visu.interval(wt, 'lightblue', 'facade')
wt = msol0.get_var_solution(moving)
visu.interval(wt, 'lightblue', 'moving')
visu.show()
import sys
from docplex.cp.model import *
mdl1 = CpoModel()
masonry = mdl1.interval_var(size=35)
carpentry = mdl1.interval_var(size=15)
plumbing = mdl1.interval_var(size=40)
ceiling = mdl1.interval_var(size=15)
roofing = mdl1.interval_var(size=5)
painting = mdl1.interval_var(size=10)
windows = mdl1.interval_var(size=5)
facade = mdl1.interval_var(size=10)
garden = mdl1.interval_var(size=5)
moving = mdl1.interval_var(size=5)
mdl1.add( mdl1.end_before_start(masonry, carpentry) )
mdl1.add( mdl1.end_before_start(masonry, plumbing) )
mdl1.add( mdl1.end_before_start(masonry, ceiling) )
mdl1.add( mdl1.end_before_start(carpentry, roofing) )
mdl1.add( mdl1.end_before_start(ceiling, painting) )
mdl1.add( mdl1.end_before_start(roofing, windows) )
mdl1.add( mdl1.end_before_start(roofing, facade) )
mdl1.add( mdl1.end_before_start(plumbing, facade) )
mdl1.add( mdl1.end_before_start(roofing, garden) )
mdl1.add( mdl1.end_before_start(plumbing, garden) )
mdl1.add( mdl1.end_before_start(windows, moving) )
mdl1.add( mdl1.end_before_start(facade, moving) )
mdl1.add( mdl1.end_before_start(garden, moving) )
mdl1.add( mdl1.end_before_start(painting, moving) )
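# Objective: a tardiness cost of 400 per day if moving ends after day 100,
# plus earliness costs (200, 300 and 100 per day) if masonry, carpentry or
# ceiling start before their preferred days 25, 75 and 75 respectively.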
obj = mdl1.minimize( 400 * mdl1.max([mdl1.end_of(moving) - 100, 0])
+ 200 * mdl1.max([25 - mdl1.start_of(masonry), 0])
+ 300 * mdl1.max([75 - mdl1.start_of(carpentry), 0])
+ 100 * mdl1.max([75 - mdl1.start_of(ceiling), 0]) )
mdl1.add(obj)
# Solve the model
print("\nSolving model....")
msol1 = mdl1.solve(url=url, key=key, TimeLimit=20)
print("done")
print("Cost will be " + str(msol1.get_objective_values()[0]))
var_sol = msol1.get_var_solution(masonry)
print("Masonry : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(carpentry)
print("Carpentry : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(plumbing)
print("Plumbing : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(ceiling)
print("Ceiling : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(roofing)
print("Roofing : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(painting)
print("Painting : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(windows)
print("Windows : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(facade)
print("Facade : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(moving)
print("Moving : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
wt = msol1.get_var_solution(masonry)
visu.interval(wt, 'lightblue', 'masonry')
wt = msol1.get_var_solution(carpentry)
visu.interval(wt, 'lightblue', 'carpentry')
wt = msol1.get_var_solution(plumbing)
visu.interval(wt, 'lightblue', 'plumbing')
wt = msol1.get_var_solution(ceiling)
visu.interval(wt, 'lightblue', 'ceiling')
wt = msol1.get_var_solution(roofing)
visu.interval(wt, 'lightblue', 'roofing')
wt = msol1.get_var_solution(painting)
visu.interval(wt, 'lightblue', 'painting')
wt = msol1.get_var_solution(windows)
visu.interval(wt, 'lightblue', 'windows')
wt = msol1.get_var_solution(facade)
visu.interval(wt, 'lightblue', 'facade')
wt = msol1.get_var_solution(moving)
visu.interval(wt, 'lightblue', 'moving')
visu.show()
NbHouses = 5
WorkerNames = ["Joe", "Jim"]
TaskNames = ["masonry", "carpentry", "plumbing",
"ceiling", "roofing", "painting",
"windows", "facade", "garden", "moving"]
Duration = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]
Worker = {"masonry" : "Joe" ,
"carpentry": "Joe" ,
"plumbing" : "Jim" ,
"ceiling" : "Jim" ,
"roofing" : "Joe" ,
"painting" : "Jim" ,
"windows" : "Jim" ,
"facade" : "Joe" ,
"garden" : "Joe" ,
"moving" : "Jim"}
ReleaseDate = [ 0, 0, 151, 59, 243]
DueDate = [120, 212, 304, 181, 425]
Weight = [100.0, 100.0, 100.0, 200.0, 100.0]
Precedences = [("masonry", "carpentry"),("masonry", "plumbing"),
("masonry", "ceiling"), ("carpentry", "roofing"),
("ceiling", "painting"), ("roofing", "windows"),
("roofing", "facade"), ("plumbing", "facade"),
("roofing", "garden"), ("plumbing", "garden"),
("windows", "moving"), ("facade", "moving"),
("garden", "moving"), ("painting", "moving")]
Houses = range(NbHouses)
import sys
from docplex.cp.model import *
mdl2 = CpoModel()
houses = [mdl2.interval_var(start=(ReleaseDate[i], INTERVAL_MAX), name="house"+str(i)) for i in Houses]
TaskNames_ids = {}
itvs = {}
for h in Houses:
for i,t in enumerate(TaskNames):
_name = str(h)+"_"+str(t)
itvs[(h,t)] = mdl2.interval_var(size=Duration[i], name=_name)
TaskNames_ids[_name] = i
for h in Houses:
for p in Precedences:
mdl2.add(mdl2.end_before_start(itvs[(h,p[0])], itvs[(h,p[1])]) )
for h in Houses:
mdl2.add( mdl2.span(houses[h], [itvs[(h,t)] for t in TaskNames] ) )
transitionTimes = transition_matrix(NbHouses)
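# The transition time between tasks of houses i and j is |i - j|, modeling
# the time a worker needs to travel from one house to another.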
for i in Houses:
for j in Houses:
transitionTimes.set_value(i, j, int(abs(i - j)))
workers = {w : mdl2.sequence_var([ itvs[(h,t)] for h in Houses for t in TaskNames if Worker[t]==w ],
types=[h for h in Houses for t in TaskNames if Worker[t]==w ], name="workers_"+w)
for w in WorkerNames}
for w in WorkerNames:
mdl2.add( mdl2.no_overlap(workers[w], transitionTimes) )
# create the obj and add it.
mdl2.add(
mdl2.minimize(
mdl2.sum(Weight[h] * mdl2.max([0, mdl2.end_of(houses[h])-DueDate[h]]) + mdl2.length_of(houses[h]) for h in Houses)
)
)
# Solve the model
print("\nSolving model....")
msol2 = mdl2.solve(url=url, key=key, FailLimit=30000)
print("done")
print("Cost will be " + str(msol2.get_objective_values()[0]))
# Viewing the results of sequencing problems in a Gantt chart
# (double click on the gantt to see details)
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
def showsequence(msol, s, setup, tp):
seq = msol.get_var_solution(s)
visu.sequence(name=s.get_name())
vs = seq.get_value()
for v in vs:
nm = v.get_name()
visu.interval(v, tp[TaskNames_ids[nm]], nm)
for i in range(len(vs) - 1):
end = vs[i].get_end()
tp1 = tp[TaskNames_ids[vs[i].get_name()]]
tp2 = tp[TaskNames_ids[vs[i + 1].get_name()]]
visu.transition(end, end + setup.get_value(tp1, tp2))
visu.timeline("Solution for SchedSetup")
for w in WorkerNames:
types=[h for h in Houses for t in TaskNames if Worker[t]==w]
showsequence(msol2, workers[w], transitionTimes, types)
visu.show()
import sys
from docplex.cp.model import *
mdl3 = CpoModel()
NbHouses = 5;
WorkerNames = ["Joe", "Jim" ]
TaskNames = ["masonry","carpentry","plumbing","ceiling","roofing","painting","windows","facade","garden","moving"]
Duration = [35,15,40,15,5,10,5,10,5,5]
Worker = {"masonry":"Joe","carpentry":"Joe","plumbing":"Jim","ceiling":"Jim",
"roofing":"Joe","painting":"Jim","windows":"Jim","facade":"Joe",
"garden":"Joe","moving":"Jim"}
Precedences = { ("masonry","carpentry"),("masonry","plumbing"),
("masonry","ceiling"),("carpentry","roofing"),
("ceiling","painting"),("roofing","windows"),
("roofing","facade"),("plumbing","facade"),
("roofing","garden"),("plumbing","garden"),
("windows","moving"),("facade","moving"),
("garden","moving"),("painting","moving") }
Breaks ={
"Joe" : [
(5,14),(19,21),(26,28),(33,35),(40,42),(47,49),(54,56),(61,63),
(68,70),(75,77),(82,84),(89,91),(96,98),(103,105),(110,112),(117,119),
(124,133),(138,140),(145,147),(152,154),(159,161),(166,168),(173,175),
(180,182),(187,189),(194,196),(201,203),(208,210),(215,238),(243,245),(250,252),
(257,259),(264,266),(271,273),(278,280),(285,287),(292,294),(299,301),
(306,308),(313,315),(320,322),(327,329),(334,336),(341,343),(348,350),
(355,357),(362,364),(369,378),(383,385),(390,392),(397,399),(404,406),(411,413),
(418,420),(425,427),(432,434),(439,441),(446,448),(453,455),(460,462),(467,469),
(474,476),(481,483),(488,490),(495,504),(509,511),(516,518),(523,525),(530,532),
(537,539),(544,546),(551,553),(558,560),(565,567),(572,574),(579,602),(607,609),
(614,616),(621,623),(628,630),(635,637),(642,644),(649,651),(656,658),(663,665),
(670,672),(677,679),(684,686),(691,693),(698,700),(705,707),(712,714),
(719,721),(726,728)
],
"Jim" : [
(5,7),(12,14),(19,21),(26,42),(47,49),(54,56),(61,63),(68,70),(75,77),
(82,84),(89,91),(96,98),(103,105),(110,112),(117,119),(124,126),(131,133),
(138,140),(145,147),(152,154),(159,161),(166,168),(173,175),(180,182),(187,189),
(194,196),(201,225),(229,231),(236,238),(243,245),(250,252),(257,259),
(264,266),(271,273),(278,280),(285,287),(292,294),(299,301),(306,315),
(320,322),(327,329),(334,336),(341,343),(348,350),(355,357),(362,364),(369,371),
(376,378),(383,385),(390,392),(397,413),(418,420),(425,427),(432,434),(439,441),
(446,448),(453,455),(460,462),(467,469),(474,476),(481,483),(488,490),(495,497),
(502,504),(509,511),(516,518),(523,525),(530,532),(537,539),(544,546),
(551,553),(558,560),(565,581),(586,588),(593,595),(600,602),(607,609),
(614,616),(621,623),(628,630),(635,637),(642,644),(649,651),(656,658),
(663,665),(670,672),(677,679),(684,686),(691,693),(698,700),(705,707),
(712,714),(719,721),(726,728)]
}
from collections import namedtuple
Break = namedtuple('Break', ['start', 'end'])
Calendar = {}
mymax = max(max(v for k,v in Breaks[w]) for w in WorkerNames)
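# Build each worker's calendar as a step function: intensity 100 (fully
# available) by default, dropped to 0 during that worker's breaks.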
for w in WorkerNames:
step = CpoStepFunction()
step.set_value(0, mymax, 100)
for b in Breaks[w]:
t = Break(*b)
step.set_value(t.start, t.end, 0)
Calendar[w] = step
#TaskNames_ids = {}
itvs = {}
for h in Houses:
for i,t in enumerate(TaskNames):
_name = str(h) + "_" + str(t)
itvs[(h,t)] = mdl3.interval_var(size=Duration[i], intensity=Calendar[Worker[t]], name=_name)
for h in Houses:
for p in Precedences:
mdl3.add( mdl3.end_before_start(itvs[h,p[0]], itvs[h,p[1]]) )
for w in WorkerNames:
mdl3.add( mdl3.no_overlap( [itvs[h,t] for h in Houses for t in TaskNames if Worker[t]==w] ) )
for h in Houses:
for t in TaskNames:
mdl3.add(mdl3.forbid_start(itvs[h,t], Calendar[Worker[t]]))
mdl3.add(mdl3.forbid_end (itvs[h,t], Calendar[Worker[t]]))
mdl3.add( mdl3.minimize(mdl3.max(mdl3.end_of(itvs[h,"moving"]) for h in Houses)))
# Solve the model
print("\nSolving model....")
msol3 = mdl3.solve(url=url, key=key, FailLimit=30000)
print("done")
print("Cost will be "+str( msol3.get_objective_values()[0] ))
# Allocate tasks to workers
tasks = {w : [] for w in WorkerNames}
for k,v in Worker.items():
tasks[v].append(k)
types = {t : i for i,t in enumerate(TaskNames)}
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
visu.timeline('Solution SchedCalendar')
for w in WorkerNames:
visu.panel()
visu.pause(Calendar[w])
visu.sequence(name=w,
intervals=[(msol3.get_var_solution(itvs[h,t]), types[t], t) for t in tasks[w] for h in Houses])
visu.show()
NbWorkers = 3
NbHouses = 5
TaskNames = {"masonry","carpentry","plumbing",
"ceiling","roofing","painting",
"windows","facade","garden","moving"}
Duration = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]
ReleaseDate = [31, 0, 90, 120, 90]
Precedences = [("masonry", "carpentry"), ("masonry", "plumbing"), ("masonry", "ceiling"),
("carpentry", "roofing"), ("ceiling", "painting"), ("roofing", "windows"),
("roofing", "facade"), ("plumbing", "facade"), ("roofing", "garden"),
("plumbing", "garden"), ("windows", "moving"), ("facade", "moving"),
("garden", "moving"), ("painting", "moving")]
import sys
from docplex.cp.model import *
mdl4 = CpoModel()
itvs = {}
for h in Houses:
for i,t in enumerate(TaskNames):
itvs[h,t] = mdl4.interval_var(start = [ReleaseDate[h], INTERVAL_MAX], size=Duration[i])
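# Each task contributes a pulse of height 1 for its duration; the sum of all
# pulses gives the number of workers in use at any point in time.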
workers_usage = step_at(0, 0)
for h in Houses:
for t in TaskNames:
workers_usage += mdl4.pulse(itvs[h,t],1)
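# Cash flow: a 30000 payment arrives every 60 days (one per house), while
# each task costs 200 per day of its duration, charged in full at its start.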
cash = step_at(0, 0)
for p in Houses:
cash += mdl4.step_at(60*p, 30000)
for h in Houses:
for i,t in enumerate(TaskNames):
cash -= mdl4.step_at_start(itvs[h,t], 200*Duration[i])
for h in Houses:
for p in Precedences:
mdl4.add( mdl4.end_before_start(itvs[h,p[0]], itvs[h,p[1]]) )
mdl4.add( workers_usage <= NbWorkers )
mdl4.add( cash >= 0 )
mdl4.add(
mdl4.minimize(
mdl4.max( mdl4.end_of(itvs[h,"moving"]) for h in Houses)
)
)
# Solve the model
print("\nSolving model....")
msol4 = mdl4.solve(url=url, key=key, FailLimit=30000)
print("done")
print("Cost will be " + str( msol4.get_objective_values()[0] ))
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
workersF = CpoStepFunction()
cashF = CpoStepFunction()
for p in range(5):
cashF.add_value(60 * p, INT_MAX, 30000)
for h in Houses:
for i,t in enumerate(TaskNames):
itv = msol4.get_var_solution(itvs[h,t])
workersF.add_value(itv.get_start(), itv.get_end(), 1)
cashF.add_value(itv.start, INT_MAX, -200 * Duration[i])
visu.timeline('Solution SchedCumul')
visu.panel(name="Schedule")
for h in Houses:
for i,t in enumerate(TaskNames):
visu.interval(msol4.get_var_solution(itvs[h,t]), h, t)
visu.panel(name="Workers")
visu.function(segments=workersF, style='area')
visu.panel(name="Cash")
visu.function(segments=cashF, style='area', color='gold')
visu.show()
NbHouses = 5
Deadline = 318
Workers = ["Joe", "Jack", "Jim"]
Tasks = ["masonry", "carpentry", "plumbing", "ceiling","roofing", "painting", "windows", "facade","garden", "moving"]
Durations = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]
Skills = [("Joe","masonry",9),("Joe","carpentry",7),("Joe","ceiling",5),("Joe","roofing",6),
("Joe","windows",8),("Joe","facade",5),("Joe","garden",5),("Joe","moving",6),
("Jack","masonry",5),("Jack","plumbing",7),("Jack","ceiling",8),("Jack","roofing",7),
("Jack","painting",9),("Jack","facade",5),("Jack","garden",5),("Jim","carpentry",5),
("Jim","painting",6),("Jim","windows",5),("Jim","garden",9),("Jim","moving",8)]
Precedences = [("masonry","carpentry"),("masonry","plumbing"),("masonry","ceiling"),
("carpentry","roofing"),("ceiling","painting"),("roofing","windows"),
("roofing","facade"),("plumbing","facade"),("roofing","garden"),
("plumbing","garden"),("windows","moving"),("facade","moving"),
("garden","moving"),("painting","moving")
]
Continuities = [("Joe","masonry","carpentry"),("Jack","roofing","facade"),
("Joe","carpentry", "roofing"),("Jim","garden","moving")]
nbWorkers = len(Workers)
Houses = range(NbHouses)
import sys
from docplex.cp.model import *
mdl5 = CpoModel()
tasks = {}
wtasks = {}
for h in Houses:
for i,t in enumerate(Tasks):
tasks[(h,t)] = mdl5.interval_var(start=[0,Deadline], size=Durations[i])
for s in Skills:
wtasks[(h,s)] = mdl5.interval_var(optional=True)
for h in Houses:
for p in Precedences:
mdl5.add( mdl5.end_before_start(tasks[h,p[0]], tasks[h,p[1]]) )
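# Each task has one optional interval per qualified worker; alternative()
# forces exactly one of them to be present and synchronized with the task.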
for h in Houses:
for t in Tasks:
mdl5.add( mdl5.alternative(tasks[h,t], [wtasks[h,s] for s in Skills if s[1]==t]) )
for h in Houses:
for c in Continuities:
for (worker1, task1, l1) in Skills:
if worker1 == c[0] and task1 == c[1]:
for (worker2, task2, l2) in Skills:
if worker2 == c[0] and task2 == c[2]:
mdl5.add(
mdl5.presence_of(wtasks[h,(c[0], task1, l1)])
==
mdl5.presence_of(wtasks[h,(c[0], task2, l2)])
)
for w in Workers:
mdl5.add( mdl5.no_overlap([wtasks[h,s] for h in Houses for s in Skills if s[0]==w]) )
mdl5.add(
mdl5.maximize(
mdl5.sum( s[2] * mdl5.presence_of(wtasks[h,s]) for h in Houses for s in Skills)
)
)
# Solve the model
print("\nSolving model....")
msol5 = mdl5.solve(url=url, key=key, FailLimit=30000)
print("done")
print("Cost will be "+str( msol5.get_objective_values()[0] ))
worker_idx = {w : i for i,w in enumerate(Workers)}
worker_tasks = [[] for w in range(nbWorkers)] # Tasks assigned to a given worker
for h in Houses:
for s in Skills:
worker = s[0]
wt = wtasks[(h,s)]
worker_tasks[worker_idx[worker]].append(wt)
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
visu.timeline('Solution SchedOptional', 0, Deadline)
for i,w in enumerate(Workers):
visu.sequence(name=w)
for t in worker_tasks[worker_idx[w]]:
wt = msol5.get_var_solution(t)
if wt.is_present():
#if desc[t].skills[w] == max(desc[t].skills):
# Green-like color when task is using the most skilled worker
# color = 'lightgreen'
#else:
# Red-like color when task does not use the most skilled worker
# color = 'salmon'
color = 'salmon'
visu.interval(wt, color, wt.get_name())
visu.show()
NbHouses = 5
NbWorkers = 2
AllStates = ["clean", "dirty"]
TaskNames = ["masonry","carpentry", "plumbing", "ceiling","roofing","painting","windows","facade","garden","moving"]
Duration = [35,15,40,15,5,10,5,10,5,5]
States = [("masonry","dirty"),("carpentry","dirty"),("plumbing","clean"),
("ceiling","clean"),("roofing","dirty"),("painting","clean"),
("windows","dirty")]
Precedences = [("masonry","carpentry"),("masonry","plumbing"),("masonry","ceiling"),
("carpentry","roofing"),("ceiling","painting"),("roofing","windows"),
("roofing","facade"),("plumbing","facade"),("roofing","garden"),
("plumbing","garden"),("windows","moving"),("facade","moving"),
("garden","moving"),("painting","moving")]
Houses = range(NbHouses)
import sys
from docplex.cp.model import *
mdl6 = CpoModel()
task = {}
for h in Houses:
for i,t in enumerate(TaskNames):
task[(h,t)] = mdl6.interval_var(size = Duration[i])
workers = step_at(0, 0)
for h in Houses:
for t in TaskNames:
workers += mdl6.pulse(task[h,t], 1)
Index = {s : i for i,s in enumerate(AllStates)}
ttime = CpoTransitionMatrix(name='TTime', size=2)
ttime.set_value(Index["dirty"], Index["clean"], 1)
ttime.set_value(Index["clean"], Index["dirty"], 0)
state = { h : state_function(ttime, name="house"+str(h)) for h in Houses}
for h in Houses:
for p in Precedences:
mdl6.add( mdl6.end_before_start(task[h,p[0]], task[h,p[1]]) )
for s in States:
mdl6.add( mdl6.always_equal(state[h], task[h,s[0]], Index[s[1]]) )
mdl6.add( workers <= NbWorkers )
mdl6.add(
mdl6.minimize( mdl6.max( mdl6.end_of(task[h,"moving"]) for h in Houses )
)
)
# Solve the model
print("\nSolving model....")
msol6 = mdl6.solve(url=url, key=key, FailLimit=30000)
print("done")
print("Cost will be "+str( msol6.get_objective_values()[0] ))
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
workers_function = CpoStepFunction()
for h in Houses:
for t in TaskNames:
itv = msol6.get_var_solution(task[h,t])
workers_function.add_value(itv.get_start(), itv.get_end(), 1)
visu.timeline('Solution SchedState')
visu.panel(name="Schedule")
for h in Houses:
for t in TaskNames:
visu.interval(msol6.get_var_solution(task[h,t]), h, t)
visu.panel(name="Houses state")
for h in Houses:
f = state[h]
visu.sequence(name=f.get_name(), segments=msol6.get_var_solution(f))
visu.panel(name="Nb of workers")
visu.function(segments=workers_function, style='line')
visu.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fully-Connected Neural Nets
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
Step9: Loss layers
Step10: Two-layer network
Step11: Solver
Step12: Multilayer network
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Step20: Test your model
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print 'Testing affine_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
# The [0] after svm_loss picks out just the loss (the first return value)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print 'Testing initialization ... '
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print 'Testing test-time forward pass ... '
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print 'Testing training loss (no regularization)'
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print 'Running numeric gradient check with reg = ', reg
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
pass
# Keep the test split around too, so the final evaluation cells still work.
data = {
  'X_train': data['X_train'],
  'y_train': data['y_train'],
  'X_val': data['X_val'],
  'y_val': data['y_val'],
  'X_test': data['X_test'],
  'y_test': data['y_test']
}
model = TwoLayerNet(hidden_dim=100, reg=1e-1) # reg is important!
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
# lr_decay is important
lr_decay=0.8,
num_epochs=10, batch_size=100,
verbose=True,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
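# With only 50 examples, a fairly aggressive learning rate is enough to drive
# training accuracy to 100% within the 20-epoch budget.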
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 3e-4
weight_scale = 1e-1
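# Deeper nets are more sensitive to the initialization scale: a larger
# weight_scale helps the signal propagate through the extra layers.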
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=100, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print 'next_w error: ', rel_error(next_w, expected_next_w)
print 'velocity error: ', rel_error(expected_velocity, config['velocity'])
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'cache error: ', rel_error(expected_cache, config['cache'])
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'v error: ', rel_error(expected_v, config['v'])
print 'm error: ', rel_error(expected_m, config['m'])
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the #
# best_model variable. #
################################################################################
pass
hidden_dims = [600, 500, 400, 300, 200, 100]
# Keep the test split so the evaluation below can read data['X_test'].
data = {
    'X_train': data['X_train'],
    'y_train': data['y_train'],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
    'X_test': data['X_test'],
    'y_test': data['y_test']
}
weight_scale = 2.5e-2
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, dtype=np.float64, use_batchnorm=True, dropout=0.25, reg=1e-2)
solver = Solver(model, data,
num_epochs=30, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 3.1e-4,
},
lr_decay=0.9,
verbose=True, print_every=500)
solver.train()
scores = model.loss(data['X_test'])
y_pred = np.argmax(scores, axis = 1)
acc = np.mean(y_pred == data['y_test'])
print 'test acc: %f' %(acc)
best_model = model
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history)
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, label='train')
plt.plot(solver.val_acc_history, label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
################################################################################
# END OF YOUR CODE #
################################################################################
y_test_pred = np.argmax(best_model.loss(X_test), axis=1)
y_val_pred = np.argmax(best_model.loss(X_val), axis=1)
print('Validation set accuracy: ', (y_val_pred == y_val).mean())
print('Test set accuracy: ', (y_test_pred == y_test).mean())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Single image to MPI example Colab
Step2: Set up the model
Step3: Generate an MPI from an input image, show layers and disparity
Step4: Generate new views from nearby camera positions
Step5: A simple MPI-viewer using HTML + CSS transforms
Step6: View the MPI in a live 3D web viewer
|
<ASSISTANT_TASK:>
Python Code:
#@title License
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an \"AS IS\" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!echo Fetching code from github...
!apt install subversion
!svn export --force https://github.com/google-research/google-research/trunk/single_view_mpi
!echo
!echo Fetching trained model weights...
!rm single_view_mpi_full_keras.tar.gz
!rm -rf single_view_mpi_full
!wget https://storage.googleapis.com/stereo-magnification-public-files/models/single_view_mpi_full_keras.tar.gz
!tar -xzvf single_view_mpi_full_keras.tar.gz
!echo
!echo Fetching example image...
!rm -f input.png
!wget https://single-view-mpi.github.io/mpi/7/input.png
pip install -r single_view_mpi/requirements.txt
import tensorflow as tf
from single_view_mpi.libs import mpi
from single_view_mpi.libs import nets
input = tf.keras.Input(shape=(None, None, 3))
output = nets.mpi_from_image(input)
model = tf.keras.Model(inputs=input, outputs=output)
print('Model created.')
# Our full model, trained on RealEstate10K.
model.load_weights('single_view_mpi_full_keras/single_view_mpi_keras_weights')
print('Weights loaded.')
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (20, 10)
# Input image
inputfile = 'input.png'
input_rgb = tf.image.decode_image(tf.io.read_file(inputfile), dtype=tf.float32)
# Generate MPI
layers = model(input_rgb[tf.newaxis])[0]
depths = mpi.make_depths(1.0, 100.0, 32).numpy()
# Layers is now a tensor of shape [L, H, W, 4].
# This represents an MPI with L layers, each of height H and width W, and
# each with an RGB+Alpha 4-channel image.
# Depths is a tensor of shape [L] which gives the depths of the L layers.
# Display layer images
for i in range(32):
plt.subplot(4, 8, i+1)
plt.imshow(layers[i])
plt.axis('off')
plt.title('Layer %d' % i, loc='left')
plt.show()
# Display computed disparity
disparity = mpi.disparity_from_layers(layers, depths)
plt.imshow(disparity[..., 0])
plt.axis('off')
plt.title('Synthesized disparity')
plt.show()
# The reference camera position can just be the identity
reference_pose = tf.constant(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]])
# Accurate intrinsics are only important if we are trying to match a ground
# truth output. Here we just give intrinsics for a 16:9 image with the
# principal point in the center.
intrinsics = tf.constant([1.0, 1.0 * 16/9, 0.5, 0.5])
def render(xoffset, yoffset, zoffset):
# The translation is the final column of the pose matrix
target_pose = tf.constant(
[[1.0, 0.0, 0.0, -xoffset],
[0.0, 1.0, 0.0, -yoffset],
[0.0, 0.0, 1.0, -zoffset]])
image = mpi.render(layers, depths,
reference_pose, intrinsics, # Reference view
target_pose, intrinsics, # Target view
height=512, width=910)
return image
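# (The offsets are negated above because the pose matrices map world to camera
# coordinates: moving the camera by +x shifts world content by -x.)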
# First move the camera along the X axis (left to right):
for i in range(5):
xoffset = (i - 2) * 0.05
plt.subplot(1, 5, i + 1)
plt.imshow(render(xoffset, 0.0, 0.0))
plt.title('xoff = %f' % xoffset)
plt.axis('off')
plt.show()
# And next along the Z-axis (moving forwards):
for i in range(5):
zoffset = (i - 2) * 0.15
plt.subplot(1, 5, i + 1)
plt.imshow(render(0.0, 0.0, zoffset))
plt.title('zoff = %f' % zoffset)
plt.axis('off')
plt.show()
import base64
def imgurl(image):
# We resize layers to 512x288 so the whole stack can be serialized in a
# Colab for the HTML viewer without hitting the memory restriction. Outside
# Colab there is no such restriction and 512x512 layers could be used.
image = tf.image.resize(image, [288, 512], method='area')
data = tf.image.encode_png(
tf.image.convert_image_dtype(image, tf.uint8)).numpy()
dataurl = 'data:image/png;base64,{}'.format(base64.b64encode(data).decode())
return dataurl
def html_viewer(layers, depths):
html = []
html.append('''
<style>
#view {
position: relative;
overflow: hidden;
border: 20px solid black;
width: 512px;
height: 288px;
perspective: 500px;
background: #888;
}
#mpi {
transform-style: preserve-3d; -webkit-transform-style: preserve-3d;
height: 100%;
width: 100%;
pointer-events: none;
}
.layer {
position: absolute;
background-size: 100% 100%;
background-repeat: no-repeat;
background-position: center;
width: 100%;
height: 100%;
}
</style>''')
html.append('<h1>MPI Viewer</h1>Hover over the image to control the view.')
html.append('<div id=view><div id=mpi>')
for i in range(len(depths)):
depth = depths[i]
url = imgurl(layers[i])
html.append('''
<div class=layer
style="transform: scale(%.3f) translateZ(-%.3fpx);
background-image: url(%s)"></div>''' % (depth, depth, url))
html.append('</div></div>')
html.append('''
<script>
function setView(mpi, x, y) {
x = 2*x - 1;
y = 2*y - 1;
rx = (-1.5 * y).toFixed(2);
ry = (2.0 * x).toFixed(2);
// Put whatever CSS transform you want in here.
mpi.style.transform =
`rotateX(${rx}deg) rotateY(${ry}deg) translateZ(500px) scaleZ(500)`;
}
view = document.querySelector('#view');
mpi = document.querySelector('#mpi');
setView(mpi, 0.5, 0.5);
// View animates by itself, or you can hover over the image to control it.
let t = 0;
let animate = true;
function tick() {
if (!animate) {
return;
}
t = (t + 1) % 300;
r = Math.PI * 2 * t / 300;
setView(mpi, 0.5 + 0.3 * Math.cos(r), 0.5 + 0.3 * Math.sin(r));
requestAnimationFrame(tick);
}
tick();
view.addEventListener('mousemove',
(e) => {animate=false; setView(mpi, e.offsetX/view.offsetWidth, e.offsetY/view.offsetHeight);});
view.addEventListener('mouseleave',
(e) => {animate=true; tick();});
</script>
''')
return ''.join(html)
from IPython.display import display, HTML
display(HTML(html_viewer(layers, depths)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to train the model using the training set, then evaluate it on the test set. As our evaluation metric we will use the ROC AUC, averaged over the 12 tasks included in the dataset. First let's see how to do this with the DeepChem API.
Step2: Simple enough. Now let's see how to do the same thing with the TensorFlow APIs. Fair warning: this is going to take a lot more code!
Step3: Next, you have to use the functions in the tf.feature_column module to create an object representing each feature and weight column (but curiously, not the label column; don't ask me why!). These objects describe the data type and shape of each column, and give each one a name. The names must match the keys in the dict returned by the input function.
Step4: Unlike DeepChem models, which allow arbitrary metrics to be passed to evaluate(), estimators require all metrics to be defined up front when you create the estimator. Unfortunately, Tensorflow doesn't have very good support for multitask models. It provides an AUC metric, but no easy way to average this metric over tasks. We therefore must create a separate metric for every task, then define our own metric function to compute the average of them.
Step5: Now we create our Estimator by calling make_estimator() on the DeepChem model. We provide as arguments the objects created above to represent the feature and weight columns, as well as our metric function.
Step6: We are finally ready to train and evaluate it! Notice how the input function passed to each method is actually a lambda. This allows us to write a single function, then use it with different datasets and numbers of epochs.
|
<ASSISTANT_TASK:>
Python Code:
import deepchem as dc
import tensorflow as tf
import numpy as np
tasks, datasets, transformers = dc.molnet.load_tox21()
train_dataset, valid_dataset, test_dataset = datasets
n_tasks = len(tasks)
n_features = train_dataset.X.shape[1]
model = dc.models.MultitaskClassifier(n_tasks, n_features, layer_sizes=[1000], dropouts=0.25)
model.fit(train_dataset, nb_epoch=100)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print(model.evaluate(test_dataset, [metric]))
def input_fn(dataset, epochs):
x, y, weights = dataset.make_iterator(batch_size=100, epochs=epochs).get_next()
return {'x': x, 'weights': weights}, y
x_col = tf.feature_column.numeric_column('x', shape=(n_features,))
weight_col = tf.feature_column.numeric_column('weights', shape=(n_tasks,))
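# Note: the per-task weights travel through the features dict returned by
# input_fn (under the key 'weights') because tf.estimator has no separate
# input slot for them; the weight column above ties that feature to the loss.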
def mean_auc(labels, predictions, weights):
metric_ops = []
update_ops = []
for i in range(n_tasks):
metric, update = tf.metrics.auc(labels[:,i], predictions[:,i], weights[:,i])
metric_ops.append(metric)
update_ops.append(update)
mean_metric = tf.reduce_mean(tf.stack(metric_ops))
update_all = tf.group(*update_ops)
return mean_metric, update_all
estimator = model.make_estimator(feature_columns=[x_col],
weight_column=weight_col,
metrics={'mean_auc': mean_auc},
model_dir='estimator')
estimator.train(input_fn=lambda: input_fn(train_dataset, 100))
print(estimator.evaluate(input_fn=lambda: input_fn(test_dataset, 1)))
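# Note: model_dir='estimator' persists checkpoints on disk, so re-running this
# cell resumes training from the saved state; delete that directory to retrain
# from scratch.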
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inspect params
Step2: weights for input to hidden layer
Step3: Quick visual of weights
Step5: Weights of some regions are more variable than others
Step6: Feature importance looks the same across the different output classes
Step7: Indeed looks the same; seems like classification of feature importance does not depend on output class
|
<ASSISTANT_TASK:>
Python Code:
npz_file = "../model-mlp_n100-e100.txt.npz"
# Load network params
with np.load(npz_file) as f:
param_values = [f['arr_%d' % i] for i in range(len(f.files))]
print_statement = "Number of params: %s" % len(param_values)
print(print_statement)
print("="*len(print_statement))
for i in param_values:
print(i.shape)
# save to variable
w_l1 = pd.DataFrame(param_values[0])
w_l2 = pd.DataFrame(param_values[2])
w_l1.head()
w_l2.head()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(w_l1, yticklabels="")
plt.ylabel("Input feature")
plt.xlabel("Hidden layer node")
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(w_l2, yticklabels="")
plt.xlabel("Output class")
plt.ylabel("Hidden layer node")
n = 150
f, ax = plt.subplots(figsize=(25, 6))
sns.boxplot(w_l1[:n].transpose())
plt.xlabel("Input feature")
plt.ylabel("weight of input-hidden node")
plt.title("Input-hidden node weights of the first {} features".format(n))
def garson(A, B):
    """Computes Garson's algorithm
A = matrix of weights of input-hidden layer (rows=input & cols=hidden)
    B = vector of weights of hidden-output layer
    """
B = np.diag(B)
# connection weight through the different hidden node
cw = np.dot(A, B)
# weight through node (axis=0 is column; sum per input feature)
cw_h = abs(cw).sum(axis=0)
# relative contribution of input neuron to outgoing signal of each hidden neuron
# sum to find relative contribution of input neuron
rc = np.divide(abs(cw), abs(cw_h))
rc = rc.sum(axis=1)
# normalize to 100% for relative importance
ri = rc / rc.sum()
    return ri
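# Lightweight sanity check on random shapes (rows=input, cols=hidden for A;
# one hidden-to-output weight per hidden node for B): the relative
# importances are normalized above, so they should sum to 1.
assert np.isclose(garson(np.random.rand(4, 3), np.random.rand(3)).sum(), 1.0)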
# Run Garson's algorithm
df = {}
for i in range(w_l2.shape[1]):
df[i] = garson(w_l1, w_l2[i])
# Reformat
df = pd.DataFrame(df)
df.head()
f, ax = plt.subplots(figsize=(25, 6))
df.plot(kind="line", ax=ax)
plt.xlabel("Input feature")
plt.ylabel("Relative importance")
## Index of the top 10 features and their relative importance
df_ri = pd.DataFrame(df[0], columns=["relative_importance"]).sort_values(by="relative_importance", ascending=False)
df_ri.head(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Changing Meshing Options
Step3: Adding Datasets
Step4: Running Compute
Step5: Plotting
Step6: Now let's zoom in so we can see the layout of the triangles. Note that Wilson-Devinney uses trapezoids, but since PHOEBE uses triangles, we take each of the trapezoids and split it into two triangles.
Step7: And now looking down from above. Here you can see the gaps between the surface elements (and you can also see some of the subdivision that's taking place along the limb).
Step8: And see which elements are visible at the current time. This defaults to use the 'RdYlGn' colormap which will make visible elements green, partially hidden elements yellow, and hidden elements red. Note that the observer is in the positive z-direction.
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_compute()
b.set_value_all('mesh_method', 'wd')
# TODO: for now only the 'graham' eclipse algorithm supports wd-style meshes
b.set_value_all('eclipse_method', 'graham')
b.add_dataset('mesh', times=np.linspace(0,10,6), dataset='mesh01')
b.run_compute()
axs, artists = b['mesh01@model'].plot(time=0.5)
axs, artists = b['primary@mesh01@model'].plot(time=0.0, edgecolor='b', facecolor='0.9', xlim=(-0.2,0.2), ylim=(-0.2,0.2))
axs, artists = b['primary@mesh01@model'].plot(time=0.0, x='xs', y='zs', edgecolor='b', facecolor='0.9', xlim=(-0.1,0.1), ylim=(-4.1,-3.9))
axs, artists = b['secondary@mesh01@model'].plot(time=0.0, x='xs', y='zs', edgecolor='None', facecolor='visibilities')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: The classifier can be instantiated as follows
Step6: You can then fit the model to some arbitrary data
Step7: And then you can proceed to predicting the target responses
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
class MyClassifier(BaseEstimator, ClassifierMixin):
    """An example classifier"""
def __init__(self, param1=1, param2=2):
        """Called when initializing the classifier
The constructor is used to define some optional
parameters of the classifier. Store them as class
attributes for future access.
Parameters
----------
param1 : int, optional, default: 1
The first parameter
param2 : int, optional, default: 2
        The second parameter
        """
self.param1 = param1
self.param2 = param2
def fit(self, X, y=None):
        """Fits the classifier to data
This should fit the classifier to the training data.
All the "work" should be done here.
Parameters
----------
X : array-like
The training data, where the first dimension is
the number of training samples, and the second
dimension is the number of features.
y : array-like, optional, default: None
Vector of class labels
Returns
-------
The fit method returns the classifier object it
        belongs to.
        """
return self
def predict(self, X):
        """Predicts target labels
This should predict the target labels of some data `X`.
Parameters
----------
X : array-like
Data samples for which to predict the target labels.
Returns
-------
y_pred : array-like
        Target labels for every data sample in `X`
        """
return np.zeros(X.shape[0])
myclass = MyClassifier()
X = np.random.rand(10, 3)
myclass.fit(X)
myclass.predict(X)
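# Because MyClassifier follows the scikit-learn estimator API, it plugs into
# standard sklearn utilities. A sketch (the all-zero predictions make the
# scores meaningless here, but the plumbing works):
from sklearn.model_selection import cross_val_score
y = np.zeros(X.shape[0])
print(cross_val_score(MyClassifier(), X, y, cv=2))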
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: gen_overlay_pngs
Step4: plot_overlays_skullstrip
Step5: Inputs
Step6: Run AFNI 3dSkullStrip to do skull strip
Step7: Run qa_skullstrip.py
|
<ASSISTANT_TASK:>
Python Code:
#import packages
import warnings
warnings.simplefilter("ignore")
import sys
import nibabel as nib
import numpy as np
import os
from PIL import Image, ImageDraw,ImageFont
import matplotlib.pyplot as plt
from m2g.stats.qa_skullstrip import gen_overlay_pngs
def gen_overlay_pngs(
brain, original, outdir, loc=0, mean=False, minthr=2, maxthr=95, edge=False):
    """Generate a QA image for skullstrip.

    This calls the function plot_overlays_skullstrip.
Parameters
----------
brain: nifti file
Path to the skull-stripped nifti brain
original: nifti file
Path to the original t1w brain, with the skull included
outdir: str
Path to the directory where QA will be saved
loc: int
which dimension of the 4d brain data to use
mean: bool
whether to calculate the mean of the 4d brain data
If False, the loc=0 dimension of the data (mri_data[:, :, :, loc]) is used
minthr: int
lower percentile threshold
maxthr: int
upper percentile threshold
edge: bool
whether to use normalized luminance data
        If None, the respective min and max of the color array is used.
    """
original_name = get_filename(original)
    brain_data = nib.load(brain).get_data()
if brain_data.ndim == 4: # 4d data, so we need to reduce a dimension
if mean:
brain_data = brain_data.mean(axis=3)
else:
brain_data = brain_data[:, :, :, loc]
fig = plot_overlays_skullstrip(brain_data, original)
# name and save the file
fig.savefig(f"{outdir}/qa_skullstrip__{original_name}.png", format="png")
def plot_overlays_skullstrip(brain, original, cmaps=None, minthr=2, maxthr=95, edge=False):
    """Shows the skull-stripped brain (green) overlaid on the original t1w (magenta)
    Parameters
    ----------
brain: str, nifti image, numpy.ndarray
an object to open the data for a skull-stripped brain. Can be a string (path to a brain file),
        nibabel.nifti1.Nifti1Image, or a numpy.ndarray.
original: str, nifti image, numpy.ndarray
an object to open the data for t1w brain, with the skull included. Can be a string (path to a brain file),
        nibabel.nifti1.Nifti1Image, or a numpy.ndarray.
cmaps: matplotlib colormap objects
colormap objects based on lookup tables using linear segments.
minthr: int
lower percentile threshold
maxthr: int
upper percentile threshold
edge: bool
whether to use normalized luminance data
If None, the respective min and max of the color array is used.
Returns
---------
    foverlay: matplotlib.figure.Figure
    """
plt.rcParams.update({"axes.labelsize": "x-large", "axes.titlesize": "x-large"})
foverlay = plt.figure()
original = get_braindata(original)
brain_shape = get_braindata(brain).shape
brain = get_braindata(brain)
if original.shape != brain.shape:
raise ValueError("Two files are not the same shape.")
brain = pad_im(brain, max(brain_shape[0:3]), pad_val=0, rgb=False)
original = pad_im(original,max(brain_shape[0:3]), pad_val=0, rgb=False)
if cmaps is None:
cmap1 = LinearSegmentedColormap.from_list("mycmap1", ["white", "magenta"])
cmap2 = LinearSegmentedColormap.from_list("mycmap2", ["white", "green"])
cmaps = [cmap1, cmap2]
x, y, z = get_true_volume(brain)
coords = (x, y, z)
labs = [
"Sagittal Slice",
"Coronal Slice",
"Axial Slice",
]
var = ["X", "Y", "Z"]
# create subplot for first slice
# and customize all labels
idx = 0
if edge:
min_val = 0
max_val = 1
else:
min_val, max_val = get_min_max(brain, minthr, maxthr)
for i, coord in enumerate(coords):
for pos in coord:
idx += 1
ax = foverlay.add_subplot(3, 3, idx)
ax.set_title(var[i] + " = " + str(pos))
if i == 0:
image = ndimage.rotate(brain[pos, :, :], 90)
atl = ndimage.rotate(original[pos, :, :], 90)
elif i == 1:
image = ndimage.rotate(brain[:, pos, :], 90)
atl = ndimage.rotate(original[:, pos, :], 90)
else:
image = ndimage.rotate(brain[:, :, pos], 0)
atl = ndimage.rotate(original[:, :, pos], 0)
if idx % 3 == 1:
ax.set_ylabel(labs[i])
ax.yaxis.set_ticks([0, image.shape[0] / 2, image.shape[0] - 1])
ax.xaxis.set_ticks([0, image.shape[1] / 2, image.shape[1] - 1])
if edge:
image = edge_map(image).data
image[image > 0] = max_val
image[image == 0] = min_val
# Set the axis invisible
plt.xticks([])
plt.yticks([])
# Set the frame invisible
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.imshow(atl, interpolation="none", cmap=cmaps[0], alpha=0.9)
ax.imshow(
opaque_colorscale(
cmaps[1], image, alpha=0.9, vmin=min_val, vmax=max_val
)
)
if idx ==3:
plt.plot(0, 0, "-", c="magenta", label='skull')
plt.plot(0, 0, "-", c="green", label='brain')
# box = ax.get_position()
# ax.set_position([box.x0, box.y0, box.width, box.height*0.8])
plt.legend(loc='best', fontsize=15, frameon=False, bbox_to_anchor=(1.5, 1.5))
# Set title for the whole picture
a, b, c = brain_shape
title = 'Skullstrip QA. Scan Volume : ' + str(a) + '*' + str(b) + '*' + str(c)
foverlay.suptitle(title, fontsize=24)
foverlay.set_size_inches(12.5, 10.5, forward=True)
return foverlay
original = r'/mnt/f/JHU/ndd/dataset/part_of_SWU4/sub-0025629_ses-1_T1w.nii.gz'
outdir = r'/mnt/f/JHU/ndd/dataset/output1/sub-0025864/ses-1/'
brainfile = f"{outdir}only_brain.nii.gz"
cmd = f"3dSkullStrip -prefix {brainfile} -input {original} -ld 30"
os.system(cmd)
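# Note: os.system returns the shell's exit status; a non-zero value means
# 3dSkullStrip failed (e.g., AFNI not on the PATH), in which case the QA call
# below would fail on a missing brain file.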
%matplotlib inline
gen_overlay_pngs(brainfile, original, outdir, loc=0, mean=False, minthr=2, maxthr=95, edge=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NC items
Step2: R type
Step3: D type
Step4: S type
Step5: I type
Step6: U-U type
Step7: concat
|
<ASSISTANT_TASK:>
Python Code:
from dist_time import *
import pandas as pd
from collections.abc import Iterator, Iterable, Generator
from collections import defaultdict
import re
from sqlalchemy import create_engine
from sqlalchemy.types import Integer, Text, DateTime, Float, VARCHAR
from sqlalchemy import Table, MetaData, Column
import itertools as it
MACHINE_MAP = {'PreSort': 'm', 'Unload': 'r', 'Secondary': 'c', 'Primary': 'm', 'Export':'e', 'Reload': 're'}
MACHINE_MAP_ID = {'PreSort': 'm', 'Unload': 'a', 'Secondary': 'c', 'Primary': 'm', 'Export':'e', 'Reload': 're'}
UNLOAD_MAP = {'D': 'a', 'I': 'a', 'R': 'r'}
UNLOAD_AREA_MAP = {
'R_Unload1':'R','R_Unload2':'R','R_Unload3':'R','Unload1':'D','Unload2':'D','Unload3':'D',
'Unload4':'D','Unload5':'D','Unload6':'D','Unload7':'D','Unload8':'D','Unload9':'D','Unload10':'D',
'Unload11':'D','Unload12':'D','Unload13':'D','Unload14':'D','Unload22':'D','Unload23':'D',
'Unload15':'I','Unload16':'I','Unload17':'I','Unload18':'I','Unload19':'I','Unload20':'I','Unload21':'I'}
RELOAD_AREA_MAP = {
'Reload30': 'R', 'Reload31': 'R', 'Reload32': 'R', 'Reload33': 'R', 'Reload34': 'R', 'Reload1': 'D', 'Reload2': 'D',
'Reload3': 'D', 'Reload4': 'D', 'Reload5': 'D', 'Reload6': 'D', 'Reload7': 'D', 'Reload8': 'D', 'Reload9': 'D',
'Reload10': 'D', 'Reload11': 'D', 'Reload12': 'D', 'Reload13': 'D', 'Reload14': 'D', 'Reload15': 'D', 'Reload29': 'D',
'Reload16': 'I', 'Reload17': 'I', 'Reload18': 'I', 'Reload19': 'I', 'Reload20': 'I', 'Reload21': 'I', 'Reload22': 'I',
'Reload23': 'I', 'Reload24': 'I', 'Reload25': 'I', 'Reload26': 'I', 'Reload27': 'I', 'Reload28': 'I'}
class MySQLConfig:
HOST = '10.0.149.62'
USER = "root"
PASS = "root123"
DB = "cm_erzhou_v1_1"
CHARSET = 'utf8'
engine = create_engine(
f'mysql+pymysql://{USER}:{PASS}@{HOST}/{DB}?charset={CHARSET}',
isolation_level="READ UNCOMMITTED")
def creat_table():
db_eng = MetaData(bind=MySQLConfig.engine)
machine_table_sche = \
Table(
"i_machine_distance_table",
db_eng,
Column("equipment_port_last", VARCHAR(length=32, )),
Column("equipment_port_next", VARCHAR(length=32, )),
Column("equipment_area_last", VARCHAR(length=32, )),
Column("equipment_area_next", VARCHAR(length=32, )),
Column("source_type", VARCHAR(length=32, )),
Column("process_time", Float(precision=2)),
Column("parameter_name", VARCHAR(length=32, )),
)
machine_table_sche.create(checkfirst=True)
def to_mysql(table_name: str, data: pd.DataFrame):
try:
data.to_sql(name=table_name,
con=MySQLConfig.engine,
if_exists='append',
index=0)
print(f"mysql write table {table_name} succeed!")
except Exception as exc:
print(f"mysql write table {table_name} failed, error: {exc}.")
        raise  # re-raise the original error
def dict_to_dataframe(pd: pd, *, parameter_list: list=None):
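    """Flatten a list of {parameter_name: process_time} dicts into a DataFrame.

    `pd` is the pandas module handed in by the caller; each dict in
    `parameter_list` contributes its values to the `process_time` column and
    its keys to the `parameter_name` column.
    """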
data = defaultdict(list)
for paramer in parameter_list:
data['process_time'].extend(list(paramer.values()))
data ['parameter_name'].extend(list(paramer.keys()))
return pd.DataFrame(data)
def df_to_excel(sheet, df):
file_name = "d:/Git/git.jupyter/jupyter_app/ipython/py36_erzhou_input/dist_time/out/outdata.xlsx"
df.to_excel(file_name, sheet_name=sheet, index=False)
v = R_Dock_dist.values()
k = R_Dock_dist.keys()
dict_list = {
'UU': [Unload_dist],
'NC': [NC_dist],
'R': [R_Dock_R_PreSort_dist,R_PreSort_R_Secondary_dist,R_PreSort_D_Primary_dist,R_PreSort_S_Primary_dist ,R_PreSort_I_Export_dist],
'D': [D_Dock_D_Primary_dist, D_Dock_S_Primary_dist, D_Primary_D_Secondary_dist, D_Primary_I_Export_dist, D_Primary_R_Secondary_dist],
'S': [S_Primary_S_Secondary_dist, S_Secondary_D_Primary_dist, S_Secondary_I_PreSort_dist],
'I': [I_Dock_I_PreSort_dist, I_PreSort_I_Secondary_dist, I_Export_I_Secondary_dist, I_PreSort_S_Secondary_dist,
I_PreSort_S_Primary_dist, I_PreSort_D_Primary_dist, I_PreSort_R_Secondary_dist]
}
nc_df = dict_to_dataframe(pd, parameter_list=dict_list['NC'])
nc_df['process_time'] = nc_df.process_time.apply(lambda x : '{:.1f}'.format(x/1000))
nc_df['source_type'] ='NC'
nc_df['equipment_port_last'], nc_df['equipment_port_next'] = nc_df.parameter_name.str.split('-', 1).str
nc_df['equipment_area_last'] = nc_df.equipment_port_last.apply(lambda x : UNLOAD_AREA_MAP[x])
nc_df['equipment_area_next'] = nc_df.equipment_port_next.apply(lambda x : RELOAD_AREA_MAP[x])
nc_df.equipment_port_last = nc_df.equipment_port_last.apply(lambda x : x.replace(re.sub('\d+','',x), UNLOAD_MAP[UNLOAD_AREA_MAP[x]]))
nc_df.equipment_port_next = nc_df.equipment_port_next.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP[re.sub('\d+','',x)]))
nc_df = nc_df.loc[:,['equipment_port_last', 'equipment_port_next', 'equipment_area_last', 'equipment_area_next', 'source_type',
'process_time','parameter_name'
]]
nc_df = nc_df.sort_values(['equipment_port_last'])
nc_df = nc_df.reset_index(drop=True)
nc_df.sample(10)
r_df = dict_to_dataframe(pd, parameter_list=dict_list['R'])
r_df['process_time'] = r_df.process_time.apply(lambda x : '{:.1f}'.format(x/1000))
r_df['equipment_port_last'], r_df['equipment_port_next'] = r_df.parameter_name.str.split('-', 1).str
r_df['equipment_area_last'], r_df['equipment_port_last'] = r_df.equipment_port_last.str.split('_', 1).str
r_df['equipment_area_next'], r_df['equipment_port_next'] = r_df.equipment_port_next.str.split('_', 1).str
r_df.equipment_port_last = r_df.equipment_port_last.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP[re.sub('\d+','',x)]))
r_df.equipment_port_next = r_df.equipment_port_next.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP[re.sub('\d+','',x)]))
r_df['source_type'] = 'R'
r_df = r_df.loc[:,['equipment_port_last', 'equipment_port_next', 'equipment_area_last', 'equipment_area_next', 'source_type',
'process_time', 'parameter_name'
]]
r_df.sample(10)
d_df = dict_to_dataframe(pd, parameter_list=dict_list['D'])
d_df['process_time'] = d_df.process_time.apply(lambda x : '{:.1f}'.format(x/1000))
d_df['equipment_port_last'], d_df['equipment_port_next'] = d_df.parameter_name.str.split('-', 1).str
# add D_ flag to unload area
d_df.equipment_port_last = d_df.equipment_port_last.where(d_df.equipment_port_last.str.startswith('D_'),
'D_'+d_df.equipment_port_last)
d_df['equipment_area_last'], d_df['equipment_port_last'] = d_df.equipment_port_last.str.split('_', 1).str
d_df['equipment_area_next'], d_df['equipment_port_next'] = d_df.equipment_port_next.str.split('_', 1).str
d_df.equipment_port_last = d_df.equipment_port_last.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP_ID[re.sub('\d+','',x)]))
d_df.equipment_port_next = d_df.equipment_port_next.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP_ID[re.sub('\d+','',x)]))
d_df['source_type'] = 'D'
d_df = d_df.loc[:,['equipment_port_last', 'equipment_port_next', 'equipment_area_last', 'equipment_area_next', 'source_type',
'process_time', 'parameter_name'
]]
d_df.sample(10)
s_df = dict_to_dataframe(pd, parameter_list=dict_list['S'])
s_df['process_time'] = s_df.process_time.apply(lambda x : '{:.1f}'.format(x/1000))
s_df['equipment_port_last'], s_df['equipment_port_next'] = s_df.parameter_name.str.split('-', 1).str
# check if the S_ prefix is present; add it if missing
s_df.equipment_port_last = s_df.equipment_port_last.where(s_df.equipment_port_last.str.startswith('S_'),
'S_'+s_df.equipment_port_last)
s_df['equipment_area_last'], s_df['equipment_port_last'] = s_df.equipment_port_last.str.split('_', 1).str
s_df['equipment_area_next'], s_df['equipment_port_next'] = s_df.equipment_port_next.str.split('_', 1).str
s_df.equipment_port_last = s_df.equipment_port_last.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP_ID[re.sub('\d+','',x)]))
s_df.equipment_port_next = s_df.equipment_port_next.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP_ID[re.sub('\d+','',x)]))
s_df['source_type'] = 'S'
s_df = s_df.loc[:,['equipment_port_last', 'equipment_port_next', 'equipment_area_last', 'equipment_area_next', 'source_type',
'process_time', 'parameter_name'
]]
s_df.head(10)
i_df = dict_to_dataframe(pd, parameter_list=dict_list['I'])
i_df['source_type'] = 'I'
i_df['process_time'] = i_df.process_time.apply(lambda x : '{:.1f}'.format(x/1000))
i_df['equipment_port_last'], i_df['equipment_port_next'] = i_df.parameter_name.str.split('-', 1).str
# check if the I_ prefix is present; add it if missing
i_df.equipment_port_last = i_df.equipment_port_last.where(i_df.equipment_port_last.str.startswith('I_'),
'I_'+i_df.equipment_port_last)
i_df['equipment_area_last'], i_df['equipment_port_last'] = i_df.equipment_port_last.str.split('_', 1).str
i_df['equipment_area_next'], i_df['equipment_port_next'] = i_df.equipment_port_next.str.split('_', 1).str
i_df.equipment_port_last = i_df.equipment_port_last.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP_ID[re.sub('\d+','',x)]))
i_df.equipment_port_next = i_df.equipment_port_next.apply(lambda x : x.replace(re.sub('\d+','',x), MACHINE_MAP_ID[re.sub('\d+','',x)]))
i_df = i_df.loc[:,['equipment_port_last', 'equipment_port_next', 'equipment_area_last', 'equipment_area_next', 'source_type',
'process_time', 'parameter_name'
]]
i_df.head(10)
uu_df = dict_to_dataframe(pd, parameter_list=dict_list['UU'])
uu_df['source_type'] = 'AA'
uu_df['equipment_port_last'], uu_df['equipment_port_next'] = uu_df.parameter_name.str.split('-', 1).str
uu_df['equipment_area_last'] = uu_df.equipment_port_last.apply(lambda x : UNLOAD_AREA_MAP[x])
uu_df['equipment_area_next'] = uu_df.equipment_port_next.apply(lambda x : UNLOAD_AREA_MAP[x])
uu_df.equipment_port_last = uu_df.equipment_port_last.apply(lambda x : x.replace(re.sub('\d+','',x), UNLOAD_MAP[UNLOAD_AREA_MAP[x]]))
uu_df.equipment_port_next = uu_df.equipment_port_next.apply(lambda x : x.replace(re.sub('\d+','',x), UNLOAD_MAP[UNLOAD_AREA_MAP[x]]))
uu_df = uu_df.loc[:,['equipment_port_last', 'equipment_port_next', 'equipment_area_last', 'equipment_area_next', 'source_type',
'process_time','parameter_name'
]]
uu_df = uu_df.sort_values(['equipment_port_last', 'equipment_port_next']).reset_index(drop=True)
uu_df.head(10)
dist_concat = pd.concat([nc_df, r_df, d_df, s_df, i_df, uu_df], axis=0)
dist_concat = dist_concat.loc[:, ['equipment_port_last', 'equipment_port_next', 'equipment_area_last', 'equipment_area_next', 'source_type',
'process_time','parameter_name']]
# dist_concat[dist_concat.equipment_port_next.str.startswith('RE')]
dist_concat.head(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-hr', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Frequently asked questions
Step3: Loading a YAML configuration
Step4: Exporting YAML or JSON
Step5: How to use transitions with django models?
Step6: transitions memory footprint is too large for my Django app and adding models takes too long.
Step7: Is there a 'during' callback which is called when no transition has been successful?
Step8: How to have a dynamic transition destination based on a function's return value
Step9: Note that this solution has some drawbacks. For instance, the generated graph might not include all possible outcomes.
Step10: Transitions does not add convenience methods to my model
Step12: I have several inter-dependent machines/models and experience deadlocks
Step13: If you consider this too much boilerplate and you don't mind some dependencies and less generalization, the asyncio-based machines shown in the last code cell below are an alternative
|
<ASSISTANT_TASK:>
Python Code:
from transitions import Machine
import json
class Model:
def say_hello(self, name):
print(f"Hello {name}!")
# import json
json_config = """
{
    "name": "MyMachine",
    "states": [
        "A",
        "B",
        { "name": "C", "on_enter": "say_hello" }
    ],
    "transitions": [
        ["go", "A", "B"],
        {"trigger": "hello", "source": "*", "dest": "C"}
    ],
    "initial": "A"
}
"""
model = Model()
config = json.loads(json_config)
config['model'] = model # adding a model to the configuration
m = Machine(**config) # **config unpacks arguments as kwargs
assert model.is_A()
model.go()
assert model.is_B()
model.hello("world") # >>> Hello world!
assert model.state == 'C'
from transitions import Machine
import yaml
class Model:
def say_hello(self, name):
print(f"Hello {name}!")
yaml_config = """
---
name: "MyMachine"
states:
  - "A"
  - "B"
  - name: "C"
    on_enter: "say_hello"
transitions:
  - ["go", "A", "B"]
  - {trigger: "hello", source: "*", dest: "C"}
initial: "A"
"""
model = Model()
config = yaml.safe_load(yaml_config)
config['model'] = model # adding a model to the configuration
m = Machine(**config) # **config unpacks arguments as kwargs
assert model.is_A()
model.go()
assert model.is_B()
model.hello("world") # >>> Hello world!
assert model.state == 'C'
#export
from transitions.extensions.markup import MarkupMachine
import json
import yaml
class Model:
def say_hello(self, name):
print(f"Hello {name}!")
model = Model()
m = MarkupMachine(model=None, name="ExportedMachine")
m.add_state('A')
m.add_state('B')
m.add_state('C', on_enter='say_hello')
m.add_transition('go', 'A', 'B')
m.add_transition(trigger='hello', source='*', dest='C')
m.initial = 'A'
m.add_model(model)
model.go()
print("JSON:")
print(json.dumps(m.markup, indent=2))
print('\nYAML:')
print(yaml.dump(m.markup))
config2 = json.loads(json.dumps(m.markup)) # simulate saving and loading
m2 = MarkupMachine(markup=config2)
model2 = m2.models[0] # get the initialized model
assert model2.is_B() # the model state was preserved
model2.hello('again') # >>> Hello again!
assert model2.state == 'C'
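# Hedged sketch (not part of the original example): the markup can also be written to
# disk and restored later. The file name 'my_machine.yaml' is just an example, and the
# json round trip merely guarantees plain dicts/lists before yaml.safe_dump.
plain_markup = json.loads(json.dumps(m.markup))
with open('my_machine.yaml', 'w') as f:
    yaml.safe_dump(plain_markup, f)
with open('my_machine.yaml') as f:
    m3 = MarkupMachine(markup=yaml.safe_load(f))
assert m3.models[0].is_B()  # the restored model keeps the persisted state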
from django.db import models
from django.db.models.signals import post_init
from django.dispatch import receiver
from django.utils.translation import ugettext_lazy as _
from transitions import Machine
class ModelWithState(models.Model):
ASLEEP = 'asleep'
HANGING_OUT = 'hanging out'
HUNGRY = 'hungry'
SWEATY = 'sweaty'
SAVING_THE_WORLD = 'saving the world'
STATE_TYPES = [
(ASLEEP, _('asleep')),
(HANGING_OUT, _('hanging out')),
(HUNGRY, _('hungry')),
(SWEATY, _('sweaty')),
(SAVING_THE_WORLD, _('saving the world')),
]
state = models.CharField(
_('state'),
max_length=100,
choices=STATE_TYPES,
default=ASLEEP,
help_text=_('actual state'),
)
@receiver(post_init, sender=ModelWithState)
def init_state_machine(instance, **kwargs):
states = [state for state, _ in instance.STATE_TYPES]
machine = instance.machine = Machine(model=instance, states=states, initial=instance.state)
machine.add_transition('work_out', instance.HANGING_OUT, instance.HUNGRY)
machine.add_transition('eat', instance.HUNGRY, instance.HANGING_OUT)
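# Hedged sketch (not part of the original recipe): a common companion pattern is
# persisting the state after every transition via Machine's `after_state_change`
# callback. `PlainModel` and its `save` method are stand-ins for a Django model and
# Model.save(); with a real Django model you would attach the machine in the
# post_init receiver exactly as shown above.
from transitions import Machine
class PlainModel:
    def save(self):
        # a real Django model would hit the database here
        print("persisting state '%s'" % self.state)
plain = PlainModel()
plain_machine = Machine(model=plain, states=['asleep', 'hanging out', 'hungry'],
                        initial='asleep', after_state_change='save')
plain_machine.add_transition('wake_up', 'asleep', 'hanging out')
plain.wake_up()  # >>> persisting state 'hanging out'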
from transitions import Machine
from functools import partial
from mock import MagicMock
class Model(object):
machine = Machine(model=None, states=['A', 'B', 'C'], initial=None,
transitions=[
{'trigger': 'go', 'source': 'A', 'dest': 'B', 'before': 'before'},
{'trigger': 'check', 'source': 'B', 'dest': 'C', 'conditions': 'is_large'},
], finalize_event='finalize')
def __init__(self):
self.state = 'A'
self.before = MagicMock()
self.after = MagicMock()
self.finalize = MagicMock()
@staticmethod
def is_large(value=0):
return value > 9000
def __getattribute__(self, item):
try:
return super(Model, self).__getattribute__(item)
except AttributeError:
if item in self.machine.events:
return partial(self.machine.events[item].trigger, self)
raise
model = Model()
model.go()
assert model.state == 'B'
assert model.before.called
assert model.finalize.called
model.check()
assert model.state == 'B'
model.check(value=500)
assert model.state == 'B'
model.check(value=9001)
assert model.state == 'C'
assert model.finalize.call_count == 4
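# Hedged follow-up sketch: because the Machine lives on the class, every additional
# instance shares it and only carries its own lightweight 'state' string.
# `another` is a new name introduced only for this illustration.
another = Model()
another.go()
assert another.state == 'B' and model.state == 'C'  # instances keep independent states
assert another.machine is model.machine  # a single shared Machine object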
from transitions.core import Machine, State, Event, EventData, listify
class DuringState(State):
# add `on_during` to the dynamic callback methods
# this way on_during_<state> can be recognized by `Machine`
dynamic_methods = State.dynamic_methods + ['on_during']
# parse 'during' and remove the keyword before passing the rest along to state
def __init__(self, *args, **kwargs):
during = kwargs.pop('during', [])
self.on_during = listify(during)
super(DuringState, self).__init__(*args, **kwargs)
def during(self, event_data):
for handle in self.on_during:
event_data.machine.callback(handle, event_data)
class DuringEvent(Event):
def _trigger(self, model, *args, **kwargs):
# a successful transition returns `res=True` if res is False, we know that
# no transition has been executed
res = super(DuringEvent, self)._trigger(model, *args, **kwargs)
if res is False:
state = self.machine.get_state(model.state)
event_data = EventData(state, self, self.machine, model, args=args, kwargs=kwargs)
event_data.result = res
state.during(event_data)
return res
class DuringMachine(Machine):
# we need to override the state and event classes used by `Machine`
state_cls = DuringState
event_cls = DuringEvent
class Model:
def on_during_A(self):
print("Dynamically assigned callback")
def another_callback(self):
print("Explicitly assigned callback")
model = Model()
machine = DuringMachine(model=model, states=[{'name': 'A', 'during': 'another_callback'}, 'B'],
transitions=[['go', 'B', 'A']], initial='A', ignore_invalid_triggers=True)
machine.add_transition('test', source='A', dest='A', conditions=lambda: False)
assert not model.go()
assert not model.test()
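# Hedged follow-up sketch: a successful transition does not fire the 'during'
# callbacks; only triggers that fail (invalid or blocked by conditions) do.
machine.add_transition('sleep', 'A', 'B')
assert model.sleep()  # succeeds: neither on_during_A nor another_callback runs
assert model.go()     # back to 'A', again without any 'during' output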
from transitions import Machine, Transition
from six import string_types
class DependingTransition(Transition):
def __init__(self, source, dest, conditions=None, unless=None, before=None,
after=None, prepare=None, **kwargs):
self._result = self._dest = None
super(DependingTransition, self).__init__(source, dest, conditions, unless, before, after, prepare)
if isinstance(dest, dict):
try:
self._func = kwargs.pop('depends_on')
except KeyError:
raise AttributeError("A multi-destination transition requires a 'depends_on'")
else:
# use base version in case transition does not need special handling
self.execute = super(DependingTransition, self).execute
def execute(self, event_data):
func = getattr(event_data.model, self._func) if isinstance(self._func, string_types) \
else self._func
self._result = func(*event_data.args, **event_data.kwargs)
super(DependingTransition, self).execute(event_data)
@property
def dest(self):
return self._dest[self._result] if self._result is not None else self._dest
@dest.setter
def dest(self, value):
self._dest = value
# subclass Machine to use DependingTransition instead of standard Transition
class DependingMachine(Machine):
transition_cls = DependingTransition
def func(value):
return value
m = DependingMachine(states=['A', 'B', 'C', 'D'], initial='A')
# define a dynamic transition with a 'depends_on' function which will return the required value
m.add_transition(trigger='shuffle', source='A', dest=({1: 'B', 2: 'C', 3: 'D'}), depends_on=func)
m.shuffle(value=2) # func returns 2 which makes the transition dest to be 'C'
assert m.is_C()
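# Hedged follow-up sketch: a fresh machine (m2, introduced only for illustration) with
# the same mapping ends up in a different destination when func returns another value.
m2 = DependingMachine(states=['A', 'B', 'C', 'D'], initial='A')
m2.add_transition(trigger='shuffle', source='A', dest={1: 'B', 2: 'C', 3: 'D'}, depends_on=func)
m2.shuffle(value=3)  # func returns 3, so the transition resolves to 'D'
assert m2.is_D()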
from transitions import Machine, EventData
from transitions.core import listify
from functools import partial
class Model(object):
def fails(self, condition=False):
return False
def success(self, condition=False):
return True
# condition is passed by EventData
def depends_on(self, condition=False):
return condition
def is_state_B(self, condition=False):
return self.state == 'B'
class PeekMachine(Machine):
def _can_trigger(self, model, *args, **kwargs):
# We can omit the first two arguments state and event since they are only needed for
# actual state transitions. We do have to pass the machine (self) and the model as well as
# args and kwargs meant for the callbacks.
e = EventData(None, None, self, model, args, kwargs)
return [trigger_name for trigger_name in self.get_triggers(model.state)
if any(all(c.check(e) for c in t.conditions)
for t in self.events[trigger_name].transitions[model.state])]
# override Machine.add_model to assign 'can_trigger' to the model
def add_model(self, model, initial=None):
for mod in listify(model):
mod = self if mod == 'self' else mod
if mod not in self.models:
setattr(mod, 'can_trigger', partial(self._can_trigger, mod))
super(PeekMachine, self).add_model(mod, initial)
states = ['A', 'B', 'C', 'D']
transitions = [
dict(trigger='go_A', source='*', dest='A', conditions=['depends_on']), # only available when condition=True is passed
dict(trigger='go_B', source='*', dest='B', conditions=['success']), # always available
dict(trigger='go_C', source='*', dest='C', conditions=['fails']), # never available
dict(trigger='go_D', source='*', dest='D', conditions=['is_state_B']), # only available in state B
dict(trigger='reset', source='D', dest='A', conditions=['success', 'depends_on']), # only available in state D when condition=True is passed
dict(trigger='forwards', source='A', dest='D', conditions=['success', 'fails']), # never available
dict(trigger='forwards', source='D', dest='D', unless=['depends_on'])
]
model = Model()
machine = PeekMachine(model, states=states, transitions=transitions, initial='A', auto_transitions=False)
assert model.can_trigger() == ['go_B']
assert set(model.can_trigger(condition=True)) == set(['go_A', 'go_B'])
model.go_B(condition=True)
assert set(model.can_trigger()) == set(['go_B', 'go_D'])
model.go_D()
assert model.can_trigger() == ['go_B', 'forwards']
assert set(model.can_trigger(condition=True)) == set(['go_A', 'go_B', 'reset'])
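# Hedged follow-up sketch: after resetting back to 'A', the 'forwards' trigger stays
# unavailable because its conditions ('success' AND 'fails') can never both hold.
model.reset(condition=True)
assert model.state == 'A'
assert 'forwards' not in model.can_trigger(condition=True)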
from transitions import State, Machine
class StateMachineModel:
state = None
def __init__(self):
pass
def transition_one(self):
print('transitioning states...')
def transition_two(self):
print('transitioning states...')
class OverrideMachine(Machine):
def _checked_assignment(self, model, name, func):
setattr(model, name, func)
class CallingMachine(Machine):
def _checked_assignment(self, model, name, func):
if hasattr(model, name):
predefined_func = getattr(model, name)
def nested_func(*args, **kwargs):
predefined_func()
func(*args, **kwargs)
setattr(model, name, nested_func)
else:
setattr(model, name, func)
states = [State(name='A'), State(name='B'), State(name='C'), State(name='D')]
transitions = [
{'trigger': 'transition_one', 'source': 'A', 'dest': 'B'},
{'trigger': 'transition_two', 'source': 'B', 'dest': 'C'},
{'trigger': 'transition_three', 'source': 'C', 'dest': 'D'}
]
state_machine_model = StateMachineModel()
print('OverrideMachine ...')
state_machine = OverrideMachine(model=state_machine_model, states=states, transitions=transitions, initial=states[0])
print('state_machine_model (current state): %s' % state_machine_model.state)
state_machine_model.transition_one()
print('state_machine_model (current state): %s' % state_machine_model.state)
state_machine_model.transition_two()
print('state_machine_model (current state): %s' % state_machine_model.state)
print('\nCallingMachine ...')
state_machine_model = StateMachineModel()
state_machine = CallingMachine(model=state_machine_model, states=states, transitions=transitions, initial=states[0])
print('state_machine_model (current state): %s' % state_machine_model.state)
state_machine_model.transition_one()
print('state_machine_model (current state): %s' % state_machine_model.state)
state_machine_model.transition_two()
print('state_machine_model (current state): %s' % state_machine_model.state)
from transitions import Machine
import logging
class EventBus:
def __init__(self):
self.members = []
def add_member(self, member):
"""Member can be a model or a machine acting as a model."""
# We decorate each member with an 'emit' function to fire events.
# EventBus will then broadcast that event to ALL members, including the one that triggered the event.
# Furthermore, we can pass a payload in case there is data that needs to be sent with an event.
setattr(member, 'emit', self.broadcast)
self.members.append(member)
def broadcast(self, event, payload=None):
for member in self.members:
member.trigger(event, payload)
# Our machines can either be off or started
states = ['off', 'started']
class Machine1(Machine):
# this machine can only boot once.
transitions = [['boot', 'off', 'started']]
def __init__(self):
# we pass 'ignore_invalid_triggers' since a machine on an event bus might get events it cannot process
# right now and we do not want to throw an exception every time that happens.
# Furthermore, we will set 'queued=True' to process events sequentially instead of nested.
super(Machine1, self).__init__(states=states, transitions=self.transitions,
ignore_invalid_triggers=True, initial='off', queued=True)
def on_enter_started(self, payload=None):
print("Starting successful")
# We emit out start event and attach ourselves as payload just in case
self.emit("Machine1Started", self)
class Machine2(Machine):
# This machine can also reboot (boot from every state) but only when the 'ready' flag has been set.
# 'ready' is set once the event 'Machine1Started' has been processed (before the transition is from 'off' to 'on'
# is actually executed). Furthermore, we will also boot the machine when we catch that event.
transitions = [{'trigger': 'boot', 'source': '*', 'dest': 'started', 'conditions': 'ready'},
{'trigger': 'Machine1Started', 'source': 'off', 'dest': 'started', 'before': 'on_machine1_started'}]
def __init__(self):
super(Machine2, self).__init__(states=states, transitions=self.transitions,
ignore_invalid_triggers=True, initial='off', queued=True)
self._ready = False
# Callbacks also work with properties. Passing the string 'ready' will evaluate this property
@property
def ready(self):
return self._ready
@ready.setter
def ready(self, value):
self._ready = value
def on_machine1_started(self, payload=None):
self.ready = True
print("I am ready now!")
def on_enter_started(self, payload=None):
print("Booting successful")
logging.basicConfig(level=logging.DEBUG)
bus = EventBus()
machine1 = Machine1()
machine2 = Machine2()
bus.add_member(machine2)
bus.add_member(machine1)
bus.broadcast('boot')
# what will happen:
# - bus will broadcast 'boot' event to machine2
# - machine2 will attempt to boot but fail and return since ready is set to false
# - bus will broadcast 'boot' event to machine1
# - machine1 will boot and emit the 'Machine1Started'
# - bus will broadcast 'Machine1Started' to machine2
# - machine2 will handle the event, boot and return
# - bus will broadcast 'Machine1Started' to machine1
# - machine1 will add that event to its event queue
# - bus broadcast of 'Machine1Started' returns
# - machine1 is done with handling 'boot' and process the next event in the event queue
# - machine1 cannot handle 'Machine1Started' and will ignore it
# - bus broadcast of 'boot' returns
assert machine1.state == machine2.state
bus.broadcast('boot')
# broadcast 'boot' event to all members:
# - machine2 will reboot
# - machine1 won't do anything
from transitions.extensions.asyncio import AsyncMachine
import asyncio
states = ['off', 'started']
class Machine1(AsyncMachine):
transitions = [{'trigger': 'boot', 'source': 'off', 'dest': 'started', 'before': 'heavy_processing'}]
def __init__(self):
super(Machine1, self).__init__(states=states, transitions=self.transitions, initial='off')
async def heavy_processing(self):
# we need to do some heavy lifting before we can proceed with booting
await asyncio.sleep(0.5)
print("Processing done!")
class Machine2(AsyncMachine):
transitions = [['boot', 'off', 'started']]
def __init__(self, dependency):
super(Machine2, self).__init__(states=states, transitions=self.transitions, initial='off')
self.dependency = dependency
async def on_enter_started(self):
while not self.dependency.is_started():
print("Waiting for dependency to be ready...")
await asyncio.sleep(0.1)
print("Machine2 up and running")
machine1 = Machine1()
machine2 = Machine2(machine1)
asyncio.get_event_loop().run_until_complete(asyncio.gather(machine1.boot(), machine2.boot()))
assert machine1.state == machine2.state
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading raw data into a Table
Step2: Accessing data in a Table
Step3: Some prefer the selectors - column and row
Step4: Rows in the table can be indexed and sliced. A row is a little like a record or dict. It is a tuple, ordered according to the table it comes from and keeping the column names.
Step5: Converting data in a Table to make it workable
Step6: Clean derivatives of raw tables
Step7: Tables allow columns to have customized formatters
Step8: Now we get values we can compute on - and they still display as currency.
Step9: Descriptive Statistics Summary
Step10: OK, so it looks like the average salary is about 86k, and it ranges up to 300k with some hefty overtime pay. Let's see if we can understand what is going on.
Step11: Interesting. Base pay is bimodal. Most employees get no overtime, but there is a looong tail. Let's look at the individual level. Who's at the top?
Step12: So where does the $alary go? First, how many employees?
Step13: Grouping and Sorting Table data
Step14: Wow, 305 Job Titles for 1437 employees!
Step15: Plotting data
Step16: How about that, a few job categories have most of the employees and most of the spend, but it is far from uniform. Let's look a little deeper. Which categories consume most of the budget?
Step17: As is often the case in the real world, the categorization used for operations is not directly useful for analysis. We often need to build categories in order to get a handle on what's going on.
Step18: Applying a function to create a new column
Step19: As is often the case working with real data, we often need to iterate a bit to get what we want out of it. With all those titles, a lot of stuff is likely to end up as other.
Step20: So no job title left has more than 10 employees in it, but some have quite a bit of cost. We could go back and add more entries to our category table and iterate a bit. The important thing is that we create new tables, we don't clobber old ones.
Step21: So let's try to understand the police category a bit more.
Step22: How do the pay labels spread across the force?
Step23: Base pay seems to chunk into categories, perhaps by job title.
Step24: We can't just pivot by Job Title because we don't have a uniform number of rows, but what we can do is pivot and bin (or histogram) so we can see the distribution of a column by job title.
Step25: Let's compare the 2011 data with more recent 2013 data.
Step26: Well, the database changed. It picked up a few columns over the years. And we need to convert the salary strings to numbers so we can do analysis on them. All in one go...
Step27: Isn't that interesting. They seem to have gotten their overtime under control. Was that management, end of the occupy movement, something else? Let's do a bit of comparison.
Step28: Snap! They decided that case was a good idea for proper nouns. Let's go back to the old way.
Step29: Now we can put the two tables together to see what happened with employees who were around in both years. Here we get to use another powerful operation on tables.
Step30: Let's add a column with increase in total pay.
Step31: On the tails we have people who joined part way through 2011 or left part way through 2013.
Step32: Well that's interesting. Total compensation seems to have dropped. Did the budget actually go down?
Step33: Look at that.
Step34: Let's see who got promoted or demoted
Step35: Perhaps we might want to look at the relationship of these two variables. That leads to another useful operator
|
<ASSISTANT_TASK:>
Python Code:
# This useful nonsense just goes at the top
from datascience import *
import numpy as np
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
%matplotlib inline
# datascience version number of last run of this notebook
version.__version__
raw_berkeley_sal_2011 = Table.read_table("./data/City_of_Berkeley_Employee_Salaries_-_2011.csv")
# Tables are rendered to work in a notebook, even if they are large. Only part shows.
# You can adjust how much of it you see, but here we'd have ~1500 rows!
raw_berkeley_sal_2011
raw_berkeley_sal_2011['Base Pay']
raw_berkeley_sal_2011.column('Overtime Pay')
raw_berkeley_sal_2011.rows[0]
raw_berkeley_sal_2011.row(0).item('Base Pay')
raw_berkeley_sal_2011.row(0)[2]
raw_berkeley_sal_2011.row(0).asdict()
raw_berkeley_sal_2011.rows[0:10]
paylabels = raw_berkeley_sal_2011.labels[2:]
paylabels
berkeley_sal_2011 = raw_berkeley_sal_2011.copy()
berkeley_sal_2011.set_format(paylabels, CurrencyFormatter)
berkeley_sal_2011
berkeley_sal_2011['Base Pay']
max(berkeley_sal_2011['Total Pay Benefits'])
def firstQtile(x) : return np.percentile(x,25)
def thirdQtile(x) : return np.percentile(x,75)
summary_ops = (min, firstQtile, np.median, np.mean, thirdQtile, max)
berkeley_sal_2011.select(paylabels).stats(ops=summary_ops)
berkeley_sal_2011.select(["Base Pay",
"Overtime Pay",
"Total Pay Benefits"]).hist(overlay=False,normed=False,
bins=40)
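# Hedged follow-up sketch (not in the original notebook): a quick check of how heavy
# the overtime tail is -- what fraction of employees logged any overtime at all?
overtime = berkeley_sal_2011['Overtime Pay']
print("{:.1%} of employees received overtime".format(np.count_nonzero(overtime) / len(overtime)))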
berkeley_sal_2011.sort('Total Pay Benefits', descending=True)
berkeley_sal_2011.num_rows
# lose the individual names
job_titles = berkeley_sal_2011.drop("Employee Name")
# Build a handy column full of 1s
job_titles["Title"] = 1
# Group by title, summing the 1s in "Title" to count rows per title
by_title = job_titles.group("Job Title", sum)
# Sort by the number of employees per title
ordered_by_title = by_title.sort('Title sum', descending = True)
# let's see what we get
ordered_by_title
ordered_by_title.num_rows
"{0:.3} employees per Job Code".format(berkeley_sal_2011.num_rows/ordered_by_title.num_rows)
ordered_by_title.select(['Title sum','Total Pay Benefits sum']).sort('Title sum').plot(overlay=False)
by_title.sort('Total Pay Benefits sum', descending = True)
by_title.sort('Total Pay Benefits sum', descending = True).row(0)
by_title.select(('Job Title', 'Total Pay Benefits sum')).sort('Total Pay Benefits sum', descending=True)
ordered_by_title['Job Title']
categories = {
'Police': ["POLICE"],
'Fire': ["FIRE"],
'Animal Control':["ANIMAL"],
'Health': ["HEALTH", "PSYCH", "HLTH"],
'Library': ['LIBRARY','LIBRARIAN'],
'Offical' : ['MAYOR','COUNCIL', 'COMMISSIONER', 'CITY MANAGER'],
'Trades' :["ELECTRICIAN","MECHANIC", "ENGINEER"],
'Parking' : ["PARKING"],
'Recreation' : ["RECREATION", "AQUATICS"],
'Gardener' : ["GARDEN"],
"Labor" : ["LABOR", "JANITOR"],
'Community': ["COMMUNITY"],
'Admin' : ["ADMIN"],
'Traffic' : ["TRAFFIC"],
'Accounting' : ["ACCOUNT"],
'Dispatch' : ["DISPATCH"],
'Waste' : ["WASTE", "SEWER"],
'Analyst' : ["ANALYS"],
'Office' : ["OFFICE "],
'Legal' : ['LEGISLAT', 'ATTORN', 'ATTY'],
'IT' : ["PROG", "INFORMATION SYSTEMS"],
'School' : ["SCHOOL"],
'Architect' : ["ARCHITECT"],
'Planner' : ["PLANNER", "PERMIT"]
}
categories
def categorize (title) :
for category, keywords in categories.items():
for word in keywords :
if title.find(word) >= 0 : return category
return 'Other'
berkeley_sal_2011['Category'] = berkeley_sal_2011.apply(categorize, 'Job Title')
berkeley_sal_2011
# lose the individual names
job_categories = berkeley_sal_2011.drop("Employee Name")
job_categories["Cat"] = 1
by_categories = job_categories.group("Category", sum)
by_categories.sort("Total Pay Benefits sum", descending=True).show()
job_categories.where('Category', 'Other')
job_categories.where('Category', 'Other').group('Job Title',sum).sort('Cat sum', descending=True)
job_categories.where('Category', 'Other').group('Job Title',sum).sort('Total Pay Benefits sum', descending=True)
by_categories.sort('Total Pay Benefits sum', descending=True).barh('Category', select=['Total Pay Benefits sum', 'Cat sum'], overlay=False)
police = job_categories.where('Category', 'Police')
police
police.select(paylabels).hist(bins=30,normed=False)
police.select(paylabels).hist(bins=30,normed=False, overlay=False)
police.group('Job Title')
# We can actually get all the data by title
police.select(['Job Title','Base Pay', 'Overtime Pay']).group('Job Title', collect=lambda x:x)
police.pivot_bin('Job Title', 'Base Pay', bins=np.arange(0,200000,10000)).show()
police.pivot_bin('Job Title', 'Base Pay', bins=np.arange(0,200000,10000)).bar('bin')
police.pivot_bin('Job Title', 'Overtime Pay', bins=np.arange(0,200000,10000)).bar('bin')
police.pivot_bin('Job Title', 'Total Pay Benefits', bins=np.arange(0,420000,10000)).bar('bin')
fire = job_categories.where('Category','Fire')
fire.select(paylabels).hist(bins=30)
fire.group('Job Title')
fire.pivot_bin('Job Title', 'Total Pay Benefits', bins=np.arange(0,420000,10000)).bar('bin')
raw_berkeley_sal_2013 = Table.read_table("./data/City_of_Berkeley_Employee_Salaries_-_2013.csv")
raw_berkeley_sal_2013
berkeley_sal_2013 = raw_berkeley_sal_2013.drop(['Year','Notes','Agency'])
berkeley_sal_2013
berkeley_sal_2013.set_format(berkeley_sal_2013.labels[2:], CurrencyFormatter)
berkeley_sal_2013["Total Pay & Benefits"]
berkeley_sal_2013.sort('Total Pay & Benefits',descending=True)
b2011 = berkeley_sal_2011.select(["Employee Name", "Job Title", "Total Pay Benefits"])
b2011.relabel('Total Pay Benefits', "Total 2011")
b2011.sort('Total 2011', descending=True)
b2013 = berkeley_sal_2013.select(["Employee Name", "Job Title", "Total Pay & Benefits"])
b2013.relabel('Job Title','Title 2013')
b2013.relabel("Total Pay & Benefits", "Total 2013")
b2013.sort('Total 2013', descending=True)
b2013['Employee Name'] = b2013.apply(str.upper, 'Employee Name')
b2013
b11_13 = b2011.join('Employee Name', b2013)
b11_13
b11_13["Increase"] =b11_13['Total 2013'] - b11_13['Total 2011']
b11_13.sort('Increase', "decr").select('Increase').plot()
b11_13.stats(summary_ops)
sum2011 = np.sum(berkeley_sal_2011['Total Pay Benefits'])
"${:,}".format(int(sum2011))
sum2013 = np.sum(berkeley_sal_2013['Total Pay & Benefits'])
"${:,}".format(int(sum2013))
"${:,}".format(int(sum2013-sum2011))
"{:.1%}".format((sum2013-sum2011)/sum2011)
np.sum(berkeley_sal_2011['Overtime Pay'])
np.sum(berkeley_sal_2013['Overtime Pay'])
b11_13.where(b11_13['Job Title'] != b11_13['Title 2013']).sort('Total 2013', descending=True).show()
b11_13.where(b11_13['Job Title'] == b11_13['Title 2013']).sort('Increase', descending=True).show()
b11_13.scatter('Total 2011', 'Total 2013')
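# Hedged follow-up sketch: Table.scatter can also overlay a least-squares fit line,
# which makes the 2011-vs-2013 relationship easier to judge at a glance.
b11_13.scatter('Total 2011', 'Total 2013', fit_line=True)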
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Basic interactions with the Census dataset API
Step2: Get surname data
Step3: Laying out the API response like a table helps illustrate what we're doing here. For easier reading the "surname_data" variable has been replaced with "d" in the image below.
Step4: Create a dataframe (table) with variable labels as column names and append data
Step5: Exercises
Step6: Referring to the variables API, decide which variables are of interest and edit accordingly.
Step7: Build the query URL and send the request. Pass the response data into a Pandas dataframe for viewing.
Step8: 4. Create a name demographic pie-chart
Step9: 5. Fix the data type error
|
<ASSISTANT_TASK:>
Python Code:
# http://api.census.gov/data/2010/surname
import requests
import json
import pandas as pd
import matplotlib.pyplot as plt
# First, get the basic info about the dataset.
# References: Dataset API (https://api.census.gov/data/2010/surname.html)
# Requests API (http://docs.python-requests.org/en/master/)
# Python 3 JSON API (https://docs.python.org/3/library/json.html)
api_base_url = "http://api.census.gov/data/2010/surname"
api_info = requests.get(api_base_url)
api_json = api_info.json()
# Uncomment the next line(s) to see the response content.
# NOTE: JSON and TEXT don't look much different to us. They can look very different to a machine!
#print(api_info.text)
print(json.dumps(api_json, indent=4))
# The output is a dictionary - data are stored as key:value pairs and can be nested.
# Request and store a local copy of the dataset variables.
# Note that the URL could be hard coded just from referencing the API, but
# we are navigating the JSON data.
var_link = api_json['dataset'][0]['c_variablesLink']
print(var_link)
# Use the variable info link to make a new request
variables = requests.get(var_link)
jsonData = variables.json()
variable_data = jsonData['variables']
# Note that this is a dictionary of dictionaries.
print(json.dumps(variable_data, indent=4))
print(variable_data.keys())
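# Hedged sketch: individual variables can be looked up by name. PCTWHITE is one of the
# variables requested later in this notebook; 'concept' is fetched defensively with
# .get() in case a given entry does not carry that field.
print(variable_data['PCTWHITE']['label'])
print(variable_data['PCTWHITE'].get('concept'))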
# References: Pandas (http://pandas.pydata.org/)
# Default vars: 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC'
desired_vars = 'NAME,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC&RANK=1:10' # Top ten names
base_url = 'http://api.census.gov/data/2010/surname?get='
query_url = base_url + desired_vars
name_stats = requests.get(query_url)
surname_data = name_stats.json()
# The response data are not very human readable.
# Note that this is a list of lists. Data within lists are typically accessed by position number. (There are no keys.)
print('Raw response data:\n')
print(json.dumps(surname_data, indent=4))
# Pass the data to a Pandas dataframe.
# In addition to being easier to read, dataframes simplify further analysis.
# The simplest dataframe would use the variable names returned with the data. Example: PCTWHITE
# It's easier to read the descriptive labels provide via the variables API.
# The code block below replaces variable names with labels as it builds the dataframe.
column_list = []
for each in surname_data[0]: # For each variable in the response data (stored as surname_data[0])
label = variable_data[each]['label'] # look up that variable's label in the variable dictionary
column_list.append(label) # add the variable's label to the list of column headers
print(each, ":", label)
print('\n', column_list)
df = pd.DataFrame([surname_data[1]], columns=column_list) # Create a dataframe using the column names created above,
# seeded with the first data row (surname_data[1]).
# Append the remaining surname rows (surname_data[2:] onward) to the dataframe:
for surname in surname_data[2:]:
tdf = pd.DataFrame([surname], columns=column_list)
df = df.append(tdf)
print('\n\nPandas dataframe:')
df.sort_values(by=["National Rank"])
# Try 'STEUBEN' in order to break the first pie chart example.
# Update 2020-02-26: Surnames should be all caps!
name = 'WHEELER'
name_query = '&NAME=' + name
# Default vars: 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC'
desired_vars = 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC'
# References: Pandas (http://pandas.pydata.org/)
base_url = 'http://api.census.gov/data/2010/surname?get='
query_url = base_url + desired_vars + name_query
name_stats = requests.get(query_url)
d = name_stats.json()
# The response data are not very human readable.
print('Raw response data:\n')
print(d)
# Pass the data to a Pandas dataframe.
# In addition to being easier to read, dataframes simplify further analysis.
# The simplest dataframe would use the variable names returned with the data. Example: PCTWHITE
# It's easier to read the descriptive labels provide via the variables API.
# The code block below replaces variable names with labels as it builds the dataframe.
column_list = []
for each in d[0]: # For each variable in the response data (stored as d[0])
label = variable_data[each]['label'] # look up that variable's label in the variable dictionary
column_list.append(label) # add the variable's label to the list of column headers
df = pd.DataFrame([d[1]], columns=column_list) # Create a dataframe using the column names created above. Data
# for the dataframe comes from d[1]
print('\n\nPandas dataframe:')
df
# Using index positions is good for doing something quick, but in this case makes code easy to break.
# Selecting different surname dataset variables or re-ordering variables will result in errors.
print(d)
pcts = d[1][2:8]
print('\n\n',pcts)
# Create the labels and get the data for the pie chart.
# Note that we are using the downloaded source data, not the dataframe
# used for the table above.
labels = ['White', 'Asian', '2+ Races', 'Native American', 'Black', 'Hispanic']
pcts = d[1][2:8]
#print(pcts)
# Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html)
plt.pie(
# using data percentages
pcts,
# Use labels defined above
labels=labels,
# with no shadows
shadow=False,
# with the start angle at 90%
startangle=90,
# with the percent listed as a fraction
autopct='%1.1f%%',
)
# View the plot drop above
plt.axis('equal')
# View the plot
plt.tight_layout()
plt.show()
# First try - just replace string with a zero.
# Here, the for loop iterates through items in a list.
pcts2 = []
for p in pcts:
if p != '(S)':
pcts2.append(p)
else:
pcts2.append(0)
# Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html)
plt.pie(
# using data percentages
pcts2,
# Use labels defined above
labels=labels,
# with no shadows
shadow=False,
# with the start angle at 90%
startangle=90,
# with the percent listed as a fraction
autopct='%1.1f%%',
)
# View the plot drop above
plt.axis('equal')
# View the plot
plt.tight_layout()
plt.show()
# Second try - exclude and corresponding label if source data for a given demographic == (S)
# This requires the list index of the data and the label.
# The for loop in this case iterates across a range of integers equal to the length of the list.
pcts3 = []
edit_labels = []
for i in range(len(pcts)):
print(pcts[i])
if pcts[i] != '(S)':
pcts3.append(pcts[i])
edit_labels.append(labels[i])
else:
pass
# Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html)
plt.pie(
# using data percentages
pcts3,
# Use labels defined above
labels=edit_labels,
# with no shadows
shadow=False,
# with the start angle at 90%
startangle=90,
# with the percent listed as a fraction
autopct='%1.1f%%',
)
# View the plot drop above
plt.axis('equal')
# View the plot
plt.tight_layout()
plt.show()
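# Hedged alternative sketch (not in the original notebook): the same '(S)' filtering
# can be written with a list comprehension over paired (label, value) tuples, which
# keeps labels and values aligned without index bookkeeping. 'kept' is a new name
# introduced only for this illustration.
kept = [(lbl, float(p)) for lbl, p in zip(labels, pcts) if p != '(S)']
kept_labels = [lbl for lbl, _ in kept]
kept_pcts = [p for _, p in kept]
plt.pie(kept_pcts, labels=kept_labels, shadow=False, startangle=90, autopct='%1.1f%%')
plt.axis('equal')
plt.tight_layout()
plt.show()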
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lmec', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <hr>
Step2: You can specify a script to run on startup with the <code>--user-data</code> option. This enables you to do things like automatically install <code>anaconda</code> and <code>nltk</code> when the instance is created, or to configure a server before you launch it.
Step3: Snapshots
Step4: EMR
|
<ASSISTANT_TASK:>
Python Code:
import json
import os
import boto3
s3 = boto3.client("s3")
# create instance
response = !aws ec2 run-instances --image-id ami-a9d276c9 \
--count 1 \
--instance-type t2.micro \
--key-name ec2_rob \
--security-groups ssh_only \
--block-device-mappings file://examples/blockDeviceMappings.json \
--user-data file://examples/bootstrap-ec2.sh
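# For reference, the user-data file passed above is just a shell script run as
# root on first boot. A minimal sketch of what examples/bootstrap-ec2.sh might
# contain (hypothetical contents; the actual file is not shown in this notebook):
#
#   #!/bin/bash
#   yum update -y        # patch the base AMI
#   pip install nltk     # install whatever the instance needs (anaconda, etc.)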
# view block device mappings
with open('examples/blockDeviceMappings.json') as f:
block_device_mappings = json.load(f)
print json.dumps(block_device_mappings[0])+"\n"+json.dumps(block_device_mappings[1])
# convert response to json dict
data = json.loads(''.join(list(response)))
# get instance id
instance_id = data["Instances"][0]["InstanceId"]
print instance_id
# add name tag
!aws ec2 create-tags --resources {instance_id} --tags Key=Name,Value=awsTestInstance
# best practices guide:
# http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-best-practices.html
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next import the dvr_1d module. We import dvr_1d using a series of ipython notebook magic commands so that we can make changes to the module file and test those changes in this notebook without having to restart the notebook kernel.
Step2: First, we'll test the 1-D sinc function DVR on a simple harmonic oscillator potential.
Step3: Let's try the same potential but with a Hermite basis. We see that since the Hermite polynomials are the exact solutions to the SHO problem, we can use npts == num_eigs. The eigenvectors will look horrible but the eigenvalues will be very accurate.
Step4: Next we'll test the 1-D Sinc DVR on a finite and an infinite square well. "Infinite" here just means really really huge.
Step5: And we might as well try it out with the Hermite basis set too.
Step6: Let's repeat all these tests with the 1-D Fourier Sine DVR.
Step7: Let's test the Bessel DVR
Step8: Testing the 2-D DVR
Step9: Now we'll construct a 2-D Sinc DVR from a product basis of 1-D Sinc DVRs.
Step10: Testing the 3-D DVR
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.sparse as sp
import scipy.sparse.linalg as sla
# autoreload the lattice module so that we can make changes to it
# without restarting the ipython notebook server
%load_ext autoreload
%autoreload 1
%aimport dvr_1d
%aimport dvr_2d
%aimport dvr_3d
d = dvr_1d.SincDVR(npts=200, L=14)
d.sho_test(precision=12)
d = dvr_1d.HermiteDVR(npts=5)
d.sho_test(k=1., precision=11)
d = dvr_1d.SincDVR(npts=500, L=20)
d.square_well_test(precision=6)
d.inf_square_well_test(precision=6)
d = dvr_1d.HermiteDVR(npts=268)
d.square_well_test(precision=6)
d.inf_square_well_test(precision=6)
d = dvr_1d.SineDVR(npts=1000, xmin=-15, xmax=15)
d.sho_test(precision=12)
d.square_well_test(precision=6)
d.inf_square_well_test(precision=6)
d = dvr_1d.BesselDVR(npts=100, R=20., dim=3, lam=1)
d.sho_test(xmin=0., xmax=8., ymin=0., ymax=12.)
%matplotlib osx
d1d = dvr_1d.SincDVR(npts = 30, L=10)
d2d = dvr_2d.DVR(dvr1d=d1d)
E, U = d2d.sho_test(num_eigs=5, precision=14)
d1d = dvr_1d.HermiteDVR(npts=5)
d2d = dvr_2d.DVR(dvr1d=d1d)
E, U = d2d.sho_test(num_eigs=5, precision=14)
d1d = dvr_1d.SincDVR(npts = 30, L=10)
d2d = dvr_2d.DVR(dvr1d=d1d)
E, U = d2d.sho_test(num_eigs=5, precision=10, uscale=3.5, doshow=True)
d1d = dvr_1d.SineDVR(npts = 30, xmin=-5., xmax=5.)
d2d = dvr_2d.DVR(dvr1d=d1d)
E, U = d2d.sho_test(num_eigs=5, uscale=3., doshow=False)
#plt.spy(d2d.t().toarray(), markersize=.1);
#plt.savefig('/Users/Adam/Dropbox/amath585_final/K_2D_sparsity.png', bbox_inches='tight', dpi=400)
d1d = dvr_1d.SincDVR(npts = 16, L=10)
d3d = dvr_3d.DVR(dvr1d=d1d, spf='csr')
E, U = d3d.sho_test(num_eigs=5)
#plt.spy(d3d.t().toarray(), markersize=.1);
#plt.savefig('/Users/Adam/Dropbox/amath585_final/K_3D_sparsity.png', bbox_inches='tight', dpi=400)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run a query
Step2: Run a parameterized query
Step3: Into a dataframe
|
<ASSISTANT_TASK:>
Python Code:
#!pip install google-cloud-bigquery
%load_ext google.cloud.bigquery
PROJECT='cloud-training-demos' # CHANGE THIS
%%bigquery --project $PROJECT
SELECT
start_station_name
, AVG(duration) as duration
, COUNT(duration) as num_trips
FROM `bigquery-public-data`.london_bicycles.cycle_hire
GROUP BY start_station_name
ORDER BY num_trips DESC
LIMIT 5
PARAMS = {"num_stations": 3}
%%bigquery --project $PROJECT --params $PARAMS
SELECT
start_station_name
, AVG(duration) as duration
, COUNT(duration) as num_trips
FROM `bigquery-public-data`.london_bicycles.cycle_hire
GROUP BY start_station_name
ORDER BY num_trips DESC
LIMIT @num_stations
%%bigquery df --project $PROJECT
SELECT
start_station_name
, AVG(duration) as duration
, COUNT(duration) as num_trips
FROM `bigquery-public-data`.london_bicycles.cycle_hire
GROUP BY start_station_name
ORDER BY num_trips DESC
df.describe()
df.plot.scatter('duration', 'num_trips');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: End of Warning
Step2: Problem 1) Light Curve Data
Step3: As we have many light curve files (in principle as many as 37 billion...), we will define a Source() class to ease our handling of the data.
Step4: Problem 1c
Step5: One thing that we brushed over previously is that the brightness measurements have units of flux, rather than the traditional use of magnitudes. The reason for this is that LSST will measure flux variations via image differencing, which will for some sources in some filters result in a measurement of negative flux. (You may have already noticed this in 1a.) Statistically there is nothing wrong with such a measurement, but it is impossible to convert a negative flux into a magnitude. Thus we will use flux measurements throughout this exercise.
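(Recall that magnitudes are defined via m = -2.5 log10(F/F0), with F0 a zero-point flux; the logarithm is undefined for any measured flux F <= 0, which is why we stay in flux units.)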
Step6: Problem 2) Data Preparation
Step7: Problem 2b
Step8: Problem 3) Store the sources in a database
Step9: Following the sqlite3 import, we must first connect to the database. If we attempt a connection to a database that does not exist, then a new database is automatically created. Here we will create a new database file, called MiniBroker.db.
Step10: We now have a database connection object, conn. To interact with the database (create tables, load data, write queries) we need a cursor object.
Step13: Now that we have a cursor object, we can populate the database. As an example we will start by creating a table to hold all the raw photometry (though ultimately we will not use this table for analysis).
Step17: Let's unpack everything that happened in these two commands. First - if the table rawPhot already exists, we drop it to start over from scratch. (this is useful here, but should not be adopted as general practice)
Step21: There are two things to highlight above
Step22: Load all of the raw photometric observations into the rawPhot table in the database.
Step23: Problem 3c
Step25: While we cannot load the light curves to the database in parallel, using the concept of MapReduce we can use the iPython "cluster" that we created to map the I/O process (typically lots of overhead) to multiple machines, and then we can reduce their output to a single object to be loaded into the database.
Step26: You should have confirmed that loading a full light curve is faster than loading individual observations (not by a ton, but faster is faster and microseconds count when you are dealing with 37 billion sources).
Step27: Now, we use the previously defined lview variable to achieve DirectView access to the cluster to map the load_data function onto all the filenames, and get the results.
Step31: And now that you have results, drop the rawPhot table if it already exists, and load the observations into the database as before.
Step33: Problem 3e Confirm that reading the data in parallel is faster than the serial operations in 3d using the %%timeit function as before.
Step34: That isn't quite a factor of 4x speed up, but it represents a significant improvement, and recall that writing to the database must happen as a serial process because we are using SQLite.
Step37: Problem 3) Part 2 -- Calculate and store the features in the same database
Step38: This procedure should look familiar from above, with one exception
Step39: Problem 3h
Step40: Finally, we close by committing the changes we made to the database.
Step44: Problem 4) Build a Classification Model
Step45: For now - don't worry about what the labels mean (though if you inspect the light curves you may be able to figure this out...)
Step46: Problem 4b
Step47: The SVM model does a decent job of classifying the data. However - we are going to have 10 million alerts every night. Therefore, we need something that runs quickly. For most ML models the training step is slow, while predictions are (relatively) fast.
Step48: Problem 4d
Step49: Problem 4e
Step50: Problem 5b
Step52: Problem 5c
Step53: It turns out that the classification performance in this case isn't particularly good.
Step55: As we discussed in the lecture, one challenge for this problem is that individual observations are widely spaced out, and have very large uncertainties. The features that we have currently adopted (which are really just moments of the flux distribution) aren't robust to outliers, and are subject to the precise cadence of the observations.
Step56: Let's look at one fit to see what's happening
Step57: Not only does GP regression provide an interpolation between observations, but it returns an uncertainty about that interpolation as well. Thus, it's possible to measure features on interpolated light curves that are observed with precisely the same cadence for all sources.
Step58: Problem 6b
|
<ASSISTANT_TASK:>
Python Code:
%%capture --no-display
# remove display of install details
!pip install ipyparallel
# authorize Google to access Google drive files
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
import subprocess
subprocess.Popen(['ipcluster', 'start', '-n', '4'])
gdrive_train_link = 'https://drive.google.com/file/d/1B8fZkx-be5vvCsmCJ_kACUu51LXxbZWT/view?usp=sharing'
id = '1B8fZkx-be5vvCsmCJ_kACUu51LXxbZWT'
downloaded = drive.CreateFile({'id':id})
downloaded.GetContentFile('training_set_for_LSST_DSFP.tar.gz')
subprocess.call(['tar', '-zxf', 'training_set_for_LSST_DSFP.tar.gz'])
gdrive_test_link = 'https://drive.google.com/file/d/181zzrkU02FciaeuV714MB5QZ39N32R3k/view?usp=sharing'
id = '181zzrkU02FciaeuV714MB5QZ39N32R3k'
downloaded = drive.CreateFile({'id':id})
downloaded.GetContentFile('test_set_for_LSST_DSFP.tar.gz')
subprocess.call(['tar', '-zxf', 'test_set_for_LSST_DSFP.tar.gz'])
# first we need to construct a client that will interface with our cluster
from ipyparallel import Client, require
worker = Client()
# once we create a client, we can decide how to allocate tasks across the cluster
# we've got however many 'engines' you started in the cluster
# lets just use all of them
lview = worker[:]
# now if you want to import packages, you can import them across all the 'engines'
with lview.sync_imports():
import numpy as np
import scipy.stats as spstat
import pandas as pd
import os
# there's not much point in creating plots on the engines - we want to actually see them presumably
%matplotlib inline
import matplotlib.pyplot as plt
# If everything has worked so far, you should see a list of worker IDs, and a corresponding list of process IDs.
# You can verify it with a `ps`
ar = lview.apply_async(os.getpid)
print("Engine ID:PID mapping: ", ar.get_dict())
lc = pd.read_csv( # wranglin time
g_obs = np.where(lc['pb'] == 'g')
fig, ax = plt.subplots(figsize=(7,7))
ax.errorbar(#times,
#flux,
#flux_uncertainty,
fmt = 'o', color = 'green')
ax.set_xlabel('MJD')
ax.set_ylabel('flux')
fig.tight_layout()
class Source():
'''Astronomical source object for NOAO formatted light curve'''
def __init__(self, filename):
'''Read in light curve data'''
lc_df = pd.read_csv(filename, delim_whitespace=True, comment = '#')
u_obs = np.where(lc_df['pb'] == 'u')
if len(u_obs[0]) > 0:
lc_df.drop(lc_df.index[u_obs], inplace=True)
self._lc_df = lc_df
self._filename = filename
self._filters = np.unique(self._lc_df['pb'])
def plot_multicolor_lc(self):
'''Plot the 4 band light curve'''
fig, ax = plt.subplots(figsize=(7,7))
color_dict = {'g': '#78A5A3',
'r': '#CE5A57',
'i': '#E1B16A',
'z': '#444C5C', }
for filt in self._filters:
# complete
# complete
# complete
# complete
ax.legend(fancybox = True)
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
fig.tight_layout()
# complete
lc60 = # complete
# complete
lc95 = # complete
# complete
class Variable( # complete
'''Variable subclass of fake sources'''
def __init__( # complete
# complete
def filter_flux(self):
'''Store individual passband fluxes as object attributes'''
for filt in self._filters:
filt_obs = np.where(self._lc_df['pb'] == filt)
setattr(self, '_{0}_time'.format(filt),
self._lc_df['t'].iloc[filt_obs])
setattr(self, '_{0}_flux'.format(filt),
self._lc_df['flux'].iloc[filt_obs])
setattr(self, '_{0}_flux_unc'.format(filt),
self._lc_df['dflux'].iloc[filt_obs])
def weighted_mean_flux(self):
'''Measure (SNR weighted) mean flux in griz'''
if not hasattr(self, '_g_flux'):
self.filter_flux()
weighted_mean = lambda flux, dflux: np.sum(flux*(flux/dflux)**2)/np.sum((flux/dflux)**2)
for filt in self._filters:
filt_flux = getattr(self, '_{0}_flux'.format(filt))
filt_dflux = getattr(self, '_{0}_flux_unc'.format(filt))
setattr(self, '_{0}_mean'.format(filt),
weighted_mean(filt_flux, filt_dflux))
def normalized_flux_std(self):
# complete
# complete
# complete
# complete
def normalized_amplitude(self):
# complete
# complete
# complete
# complete
def normalized_MAD(self):
# complete
# complete
# complete
# complete
def beyond_1std(self):
# complete
# complete
# complete
# complete
def skew(self):
# complete
# complete
# complete
# complete
def mean_colors(self):
# complete
# complete
# complete
# complete
var = # complete
var.mean_colors()
print("The g'-r', r'-i', and 'i-z' colors are:\
{:.3f}, {:.3f}, and {:.3f}, respectively.". format(var._g_minus_r, var._r_minus_i, var._i_minus_z))
import sqlite3
conn = sqlite3.connect("MiniBroker.db")
cur = conn.cursor()
cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
filename = "training_set_for_LSST_DSFP/FAKE001.dat"
var = Variable(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(var._lc_df.iloc[0])))
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(var._lc_df.iloc[1])))
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(var._lc_df.iloc[2])))
cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
# next 3 lines are already in name space; repeated for clarity
filename = "training_set_for_LSST_DSFP/FAKE001.dat"
var = Variable(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
data = [(objId,) + tuple(x) for x in var._lc_df.values] # array of tuples
cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
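# Aside (illustrative, not from the original notebook): the "?" wildcards above
# are sqlite3 parameter substitution. The driver escapes each value itself,
# which avoids SQL injection and is safer than string formatting. A single
# parameterized insert would look like:
# cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data[0])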
# build your list of filenames here
import glob
filenames = glob.glob("training_set_for_LSST_DSFP/FAKE*.dat")
for # complete in complete:
# complete
# complete
# complete
# complete
# complete
cur.execute( # complete
data = cur.fetchall()
data = np.array(data)
fig, ax = plt.subplots()
ax.errorbar(#time,
#flux,
#flux_uncertainties,
fmt = 'o', color = '#78A5A3')
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
fig.tight_layout()
tmp_conn1 = sqlite3.connect("tmp1.db")
tmp_cur1 = # complete
tmp_cur1.execute("""drop table if exists rawPhot""") # drop the table if it already exists
tmp_cur1.execute( # create the table
%%timeit
# time how long it takes to load every observation individually
for filename in filenames:
# complete
# complete
# complete
# complete
# complete
tmp_conn2 = # complete
# complete
# complete
# complete
%%timeit
# time how long it takes to load all observations of a single source at once
# complete
# complete
# complete
# complete
# complete
# complete
# execute this
@require(Source, Variable, 'pandas as pd', 'numpy as np')
def load_data(filename):
var = Variable(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
data = [(objId,) + tuple(x) for x in var._lc_df.values] # array of tuples
return data
result = lview.map(load_data, filenames)
all_data = result.get()
cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
for data in all_data:
    cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
tmp_conn3 = # complete
# complete
# complete
# complete
# complete
# complete
%%timeit
# time how long it takes to read files in parallel
result = lview.map(load_data, filenames)
all_data = result.get()
for data in all_data:
    tmp_cur3.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
cur.execute( # complete
data = cur.fetchall()
data = np.array(data)
fig, ax = plt.subplots()
ax.errorbar(#time,
#flux,
#flux_uncertainties,
fmt = 'o', color = '#78A5A3')
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
fig.tight_layout()
cur.execute("""drop table if exists lcFeats""") # drop the table if it already exists
cur.execute("""create table lcFeats(
id integer primary key,
objId int,
gStd float,
rStd float,
iStd float,
zStd float,
gAmp float,
rAmp float,
iAmp float,
zAmp float,
gMAD float,
rMAD float,
iMAD float,
zMAD float,
gBeyond float,
rBeyond float,
iBeyond float,
zBeyond float,
gSkew float,
rSkew float,
iSkew float,
zSkew float,
gMinusR float,
rMinusI float,
iMinusZ float,
FOREIGN KEY(objId) REFERENCES rawPhot(objId)
)
""")
# as before, define a function to calculate the features for one lightcurve
@require(Source, Variable, 'pandas as pd', 'numpy as np', 'scipy.stats as spstat')
def calculate_features(filename):
var = Variable(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
var.weighted_mean_flux()
var.normalized_flux_std()
var.normalized_amplitude()
var.normalized_MAD()
var.beyond_1std()
var.skew()
var.mean_colors()
feats = (objId, var._g_std, var._r_std, var._i_std, var._z_std,
var._g_amp, var._r_amp, var._i_amp, var._z_amp,
var._g_mad, var._r_mad, var._i_mad, var._z_mad,
var._g_beyond, var._r_beyond, var._i_beyond, var._z_beyond,
var._g_skew, var._r_skew, var._i_skew, var._z_skew,
var._g_minus_r, var._r_minus_i, var._i_minus_z)
return feats
# and then lets map the function onto all the data
result = lview.map(calculate_features, filenames)
# and get the result
all_feats = result.get()
# and now load it all into our table
for feats in all_feats:
cur.execute( # complete
cur.execute( # complete
nAmp2 = cur.fetchone()[0]
cur.execute( # complete
nNoColor = cur.fetchone()[0]
print("There are {:d} sources with gAmp > 2".format(nAmp2))
print("There are {:d} sources with no measured i' - z' color".format(nNoColor))
conn.commit()
cur.execute("""drop table if exists lcLabels""") # drop the table if it already exists
cur.execute("""create table lcLabels(
objId int,
label int,
foreign key(objId) references rawPhot(objId)
)""")
labels = np.zeros(100)
labels[20:60] = 1
labels[60:] = 2
data = np.append(np.arange(1,101)[np.newaxis].T, labels[np.newaxis].T, axis = 1)
tup_data = [tuple(x) for x in data]
cur.executemany("""insert into lcLabels(objId, label) values (?,?)""", tup_data)
cur.execute( # complete
y = np.array(cur.fetchall()).ravel()
cur.execute(# complete - note objID is NOT a feature
X = np.array(cur.fetchall())
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score( # complete
print("The SVM model produces a CV accuracy of {:.4f}".format(np.mean(cv_scores)))
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=25)
svm_clf = SVC(C = 1.0, gamma = 0.1, kernel = 'rbf')
%%timeit
# timing solution for the model you choose
# complete
%%timeit
# timing solution for SVM model
# complete
cv_scores = cross_val_score( # complete
print("My model produces a CV accuracy of {:.4f}".format( # complete
# create new tables testPhot and testFeats
new_obs_filenames = glob.glob("test_set_for_LSST_DSFP/FAKE*.dat")
result = lview.map( # complete
for data in all_data:
# complete
result = lview.map(calculate_features, new_obs_filenames)
all_feats = result.get()
for feats in all_feats:
# complete
# complete
# complete
# complete
clf = # complete
cur.execute( # complete
X_new = np.array(cur.fetchall())
y_preds = # complete
print("""There are {:d}, {:d}, and {:d} sources
in classes 1, 2, 3, respectively.""".format(*list(np.bincount(y_preds)))) # be careful using bincount
! grep label test_set_for_LSST_DSFP/*dat | sed -e 's/label:/ /' | awk '{print $2+1}' | sort | uniq -c
import sklearn.gaussian_process as gp
# we'll create a function that just fits the g band lightcurve
def calculate_gpfit(filename):
    '''Do a quick gaussian process regression of just the g-band lightcurve to get a smooth representation.
    This should be more robust to outliers.'''
var = Variable(filename)
var.filter_flux()
t = var._g_time.values
y = var._g_flux.values
dy = var._g_flux_unc.values
t = t.reshape(-1, 1)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
# a Gaussian Process is defined by a correlation function that relates each point to each other
# I've chosen a simple common kernel - the Matern kernel.
# It has a nice property in that it is 'stationary' - the covariance between points
# only depends on their separation
base_kernel = gp.kernels.Matern(length_scale=10.,nu=2.5, length_scale_bounds=(1.,20.))
gkernel = 0.5*(np.median(dy)**2.)*base_kernel
# the second thing you need to define a Gaussian Process is a mean function, which in our case is
# implictly zero (this is fine for transients - they spend most of eternity at the background!)
gband = gp.GaussianProcessRegressor(kernel=gkernel, alpha=dy**2.).fit(t, y)
# now lets use the Gaussian process to predict the lightcurve on an evenly sampled grid
# this is appropriate for Wavelets, or generally robust feature extraction
newtime = np.linspace(t.min(), t.max(), 200, endpoint=True)
newtime = newtime.reshape(-1, 1)
gnew = gband.predict(newtime, return_std=True)
return (t, y , dy, newtime, gnew)
ftest = filenames[0:20]
gtime, gflux, gdflux, gnewtime, gnewflux = calculate_gpfit(ftest[6])
fig, ax = plt.subplots(figsize=(7,7))
ax.errorbar(gtime, gflux, gdflux, fmt = 'o', color = '#78A5A3')
ax.plot(gnewtime, gnewflux[0], 'k-')
ax.fill(np.concatenate([gnewtime, gnewtime[::-1]]),
np.concatenate([gnewflux[0] - 1.9600 * gnewflux[1],
(gnewflux[0] + 1.9600 * gnewflux[1])[::-1]]),
alpha=.5, fc='0.8', ec='None', label='95% confidence interval')
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
fig.tight_layout()
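# Illustrative sketch (an addition, not part of the original problem set): with
# every source interpolated onto the same evenly sampled grid, moment-style
# features can be measured on the smooth GP mean curve instead of the noisy
# raw fluxes.
gp_mean = gnewflux[0]  # GP mean prediction on the 200-point grid
gp_norm_std = np.std(gp_mean)/np.mean(gp_mean)  # normalized std of the GP curve
gp_norm_amp = (np.max(gp_mean) - np.min(gp_mean))/np.mean(gp_mean)
print("GP-based normalized std = {:.3f}, amplitude = {:.3f}".format(gp_norm_std, gp_norm_amp))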
from sklearn.ensemble import IsolationForest
isoF_clf = # complete
# complete
# complete
print("The 10 most anomalous sources are: {}".format( # complete
lc491 = Source( # complete
lc491.plot_multicolor_lc()
lc2047 = Source( # complete
lc2047.plot_multicolor_lc()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets
Step3: Running Compute
Step4: Plotting
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
times = np.linspace(0,1,21)
b.add_dataset('lc', times=times, dataset='lc01')
b.add_dataset('rv', times=times, dataset='rv01')
b.add_dataset('mesh', times=times, columns=['visibilities', 'intensities@lc01', 'rvs@rv01'], dataset='mesh01')
b.run_compute(irrad_method='none')
b['lc01@model'].plot(axpos=221)
b['rv01@model'].plot(c={'primary': 'blue', 'secondary': 'red'}, linestyle='solid', axpos=222)
b['mesh@model'].plot(fc='intensities@lc01', ec='None', axpos=425)
b['mesh@model'].plot(fc='rvs@rv01', ec='None', axpos=427)
b['mesh@model'].plot(fc='visibilities', ec='None', y='ws', axpos=224)
fig = plt.figure(figsize=(11,4))
afig, mplanim = b.savefig('animation_binary_complete.gif', fig=fig, tight_layout=True, draw_sidebars=False, animate=True, save_kwargs={'writer': 'imagemagick'})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 4
sample_id = 10
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
# TODO: Implement Function
return x/255 # Simple division by scalar 255: From [0-255] to [0-1]
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer() # create encoder
lb.fit(range(10)) # assigns one-hot vector to 0-9
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
# TODO: Implement Function
return lb.transform(x) # Transform the labels into one-hot encoded vectors
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
tensor_shape = [None] + list(image_shape)
return tf.placeholder(tf.float32, tensor_shape, name = 'x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
tensor_shape = [None, n_classes]
return tf.placeholder(tf.float32, tensor_shape, name = 'y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
height = conv_ksize[0]
width = conv_ksize[1]
input_depth = x_tensor.get_shape().as_list()[3]
output_depth = conv_num_outputs
W = tf.Variable(tf.truncated_normal((height, width, input_depth, output_depth), stddev=0.1)) # conv layer weight
b = tf.Variable(tf.truncated_normal([output_depth], stddev=0.1)) # conv layer bias
x_conv = tf.nn.conv2d(x_tensor, W, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
x_conv = tf.nn.bias_add(x_conv, b)
x_conv = tf.nn.relu(x_conv) # nonlinear activation ReLU
x_conv_pool = tf.nn.max_pool(x_conv, ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return x_conv_pool
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
import numpy as np # (imported again because of the above check point)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
x_dim = x_tensor.get_shape().as_list() # list with dimensions of the tensor: [batch_size, ...]
n_input = np.prod(x_dim[1:]) # size of the image (features)
x_flat = tf.reshape(x_tensor, [-1, n_input])
return x_flat
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
x_dim = x_tensor.get_shape().as_list()
n_input = np.prod(x_dim[1:])
W = tf.Variable(tf.truncated_normal([n_input, num_outputs], stddev=0.1))
b = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
fcl = tf.add(tf.matmul(x_tensor, W), b)
fcl = tf.nn.relu(fcl) # nonlinear activation ReLU
return fcl
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
x_dim = x_tensor.get_shape().as_list()
n_input = np.prod(x_dim[1:])
W = tf.Variable(tf.truncated_normal([n_input, num_outputs], stddev=0.1))
b = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
out = tf.add(tf.matmul(x_tensor, W), b)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 32 # >=64 results in memory issues (Frankfurt AWS instance)
conv_ksize = [2,2] # better and more stable results were obtained with [2,2] than using larger masks
conv_strides = [1,1]
pool_ksize = [2,2]
pool_strides = [2,2] # (width and height will be reduced by maxpool)
# 3 convolutional+maxpool layers with the same parameters except for the output depth:
# conv1: from 32x32x3 to 16x16x32 (maxpool reduces the size)
conv1 = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# conv2: from 16x16x32 to 8x8x128
conv_num_outputs = 128
conv2 = conv2d_maxpool(conv1, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# conv3: 8x8x128 to 4x4x512 (improves the accuracy by ~2%)
conv_num_outputs = 512
conv3 = conv2d_maxpool(conv2, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_flat = flatten(conv3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc1 = fully_conn(x_flat, 8096) # 2 hidden layers lead to overfitting
fc1 = tf.nn.dropout(fc1, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fc1, 10)
# TODO: return output
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch,
keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_acc = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
loss,
valid_acc))
# TODO: Tune Parameters
epochs = 10 # tested from 5 to 50
batch_size = 256 # tested from 64 to 2048
keep_probability = 0.8 # tested from 0.5 to 0.9
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check dynamic offset file for consistency
Step2: Run the ACA model and get new offsets
Step3: Compare values to dynamic offset table from Matlab
Step4: The maximum differences between the table offsets and the values from an independent run of the model are within an arcsec.
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
from glob import glob
import json
import numpy as np
from astropy.table import Table
from Chandra.Time import DateTime
from Ska.Matplotlib import plot_cxctime
from chandra_aca import drift
import parse_cm
TEST_DIR = '/proj/sot/ska/ops/SFE/JUL0415O/oflso'
dynam_table = Table.read(glob("{}/*dynamical_offsets.txt".format(TEST_DIR))[0], format='ascii')
# first, check table for self-consistent offsets
ys = []
zs = []
for row in dynam_table:
y, z = drift.get_aca_offsets(row['detector'], row['chip_id'], row['chipx'], row['chipy'],
time=row['mean_date'], t_ccd=row['mean_t_ccd'])
ys.append(y)
zs.append(z)
print "Y offsets consistent: {}".format(np.allclose(dynam_table['aca_offset_y'], ys, atol=0.02))
print "Z offsets consistent: {}".format(np.allclose(dynam_table['aca_offset_z'], zs, atol=0.02))
from starcheck.calc_ccd_temps import get_ccd_temps
obsid_info = json.loads(get_ccd_temps(TEST_DIR,
json_obsids=open("{}/starcheck/obsids.json".format(TEST_DIR)),
model_spec="{}/starcheck/aca_spec.json".format(TEST_DIR),
char_file="/proj/sot/ska/data/starcheck/characteristics.yaml",
orlist="{}/mps/or/JUL0415_A.or".format(TEST_DIR)));
y_diff = []
z_diff = []
for obsid in dynam_table['obsid']:
dyn_rec = dynam_table[dynam_table['obsid'] == obsid][0]
if str(obsid) in obsid_info:
print "{} offset y {: .2f} vs {: .2f} offset z {: .2f} vs {: .2f} t_ccd {: .2f} vs {: .2f}".format(
obsid,
dyn_rec['aca_offset_y'], obsid_info[str(obsid)]['aca_offset_y'],
dyn_rec['aca_offset_z'], obsid_info[str(obsid)]['aca_offset_z'],
dyn_rec['mean_t_ccd'], obsid_info[str(obsid)]['ccd_temp'])
y_diff.append(dyn_rec['aca_offset_y'] - obsid_info[str(obsid)]['aca_offset_y'])
z_diff.append(dyn_rec['aca_offset_z'] - obsid_info[str(obsid)]['aca_offset_z'])
y_diff = np.array(y_diff)
z_diff = np.array(z_diff)
print "Y offset max difference {:.2f} arcsec".format(np.max(np.abs(y_diff)))
print "Z offset max difference {:.2f} arcsec".format(np.max(np.abs(z_diff)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Save the source code above to a .py file or enter it in a Jupyter notebook and run it with Python; this produces the file "nb_arithmetic_functions.ipynb". You can run that file with Jupyter notebook, or open a console window (cmd), move to the folder containing the file, and enter the command shown to execute it.
Step2: Problem #2 - the minus function
Submitting the results
|
<ASSISTANT_TASK:>
Python Code:
import gachon_autograder_client as g_autograder
EMAIL = "#YOUR_EMAIL"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_NAME = "nb_test"
g_autograder.get_assignment(EMAIL, PASSWORD, ASSIGNMENT_NAME)
def addition(a, b):
result = None
return result
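# A minimal hint (an assumption -- the exercise expects you to fill in the body above):
# result = a + b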
# Execution result
print (addition(5, 3))
print (addition(10, 5))
def minus(a, b):
result = None
return result
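# A minimal hint (an assumption -- the exercise expects you to fill in the body above):
# result = a - b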
# Execution result
print (minus(5, 3))
print (minus(10, 5))
import gachon_autograder_client as g_autograder
EMAIL = "#YOUR_EMAIL"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_FILE_NAME = "nb_arithmetic_functions.ipynb"
g_autograder.submit_assignment(EMAIL, PASSWORD, ASSIGNMENT_FILE_NAME)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
from sklearn.metrics import mean_squared_error
rmse = mean_squared_error(y_test, y_pred, squared = False)
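# Hedged usage sketch -- y_test and y_pred are assumed placeholders, not part of the snippet:
# y_test = [3.0, -0.5, 2.0, 7.0]
# y_pred = [2.5, 0.0, 2.0, 8.0]
# With squared=False the function returns sqrt(MSE), i.e. the root mean squared error.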
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Part 2
Step3: Unrolling the parameters into one vector
Step4: Part 3
Step5: The cost at the given parameters should be about 0.287629.
Step6: The cost at the given parameters and a regularization factor of 1 should be about 0.38377.
Step7: Part 5
Step8: Part 6
Step9: Part 7
Step10: If your backpropagation implementation is correct, then the relative difference will be small (less than 1e-9).
Step11: Part 8
Step12: The cost at lambda = 3 should be about 0.57.
Step13: Part 8
Step14: Obtain Theta1 and Theta2 back from nn_params
Step15: Part 9
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.io
import scipy.optimize
import matplotlib.pyplot as plt
%matplotlib inline
# uncomment for console - useful for debugging
# %qtconsole
ex3data1 = scipy.io.loadmat("./ex4data1.mat")
X = ex3data1['X']
y = ex3data1['y'][:,0]
m, n = X.shape
m, n
input_layer_size = n # 20x20 Input Images of Digits
hidden_layer_size = 25 # 25 hidden units
num_labels = 10 # 10 labels, from 1 to 10
# (note that we have mapped "0" to label 10)
lambda_ = 1
def display(X, display_rows=5, display_cols=5, figsize=(4,4), random_x=False):
m = X.shape[0]
fig, axes = plt.subplots(display_rows, display_cols, figsize=figsize)
fig.subplots_adjust(wspace=0.1, hspace=0.1)
import random
for i, ax in enumerate(axes.flat):
ax.set_axis_off()
x = None
if random_x:
x = random.randint(0, m-1)
else:
x = i
image = X[x].reshape(20, 20).T
image = image / np.max(image)
ax.imshow(image, cmap=plt.cm.Greys_r)
display(X, random_x=True)
def add_ones_column(array):
return np.insert(array, 0, 1, axis=1)
ex4weights = scipy.io.loadmat('./ex4weights.mat')
Theta1 = ex4weights['Theta1']
Theta2 = ex4weights['Theta2']
print(Theta1.shape, Theta2.shape)
nn_params = np.concatenate((Theta1.flat, Theta2.flat))
nn_params.shape
def sigmoid(z):
return 1 / (1+np.exp(-z))
def nn_cost_function(nn_params, input_layer_size, hidden_layer_size,
num_labels, X, y, lambda_):
#NNCOSTFUNCTION Implements the neural network cost function for a two layer
#neural network which performs classification
# [J grad] = NNCOSTFUNCTION(nn_params, hidden_layer_size, num_labels, ...
# X, y, lambda) computes the cost and gradient of the neural network. The
# parameters for the neural network are "unrolled" into the vector
# nn_params and need to be converted back into the weight matrices.
#
# The returned parameter grad should be an "unrolled" vector of the
# partial derivatives of the neural network.
#
# Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
# for our 2 layer neural network
t1_len = (input_layer_size+1)*hidden_layer_size
Theta1 = nn_params[:t1_len].reshape(hidden_layer_size, input_layer_size+1)
Theta2 = nn_params[t1_len:].reshape(num_labels, hidden_layer_size+1)
m = X.shape[0]
# You need to return the following variables correctly
J = 0;
Theta1_grad = np.zeros(Theta1.shape);
Theta2_grad = np.zeros(Theta2.shape);
# ====================== YOUR CODE HERE ======================
# Instructions: You should complete the code by working through the
# following parts.
#
# Part 1: Feedforward the neural network and return the cost in the
# variable J. After implementing Part 1, you can verify that your
# cost function computation is correct by verifying the cost
# computed for lambda == 0.
#
# Part 2: Implement the backpropagation algorithm to compute the gradients
# Theta1_grad and Theta2_grad. You should return the partial derivatives of
# the cost function with respect to Theta1 and Theta2 in Theta1_grad and
# Theta2_grad, respectively. After implementing Part 2, you can check
# that your implementation is correct by running checkNNGradients
#
# Note: The vector y passed into the function is a vector of labels
# containing values from 1..K. You need to map this vector into a
# binary vector of 1's and 0's to be used with the neural network
# cost function.
#
# Hint: We recommend implementing backpropagation using a for-loop
# over the training examples if you are implementing it for the
# first time.
#
# Part 3: Implement regularization with the cost function and gradients.
#
# Hint: You can implement this around the code for
# backpropagation. That is, you can compute the gradients for
# the regularization separately and then add them to Theta1_grad
# and Theta2_grad from Part 2.
#
# =========================================================================
# Unroll gradients
gradient = np.concatenate((Theta1_grad.flat, Theta2_grad.flat))
return J, gradient
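# A possible vectorized sketch of the missing feedforward/backprop steps
# (an assumption based on the usual course handout, not the graded solution):
# a1 = np.insert(X, 0, 1, axis=1)                         # (m, n+1), bias column added
# a2 = np.insert(sigmoid(a1 @ Theta1.T), 0, 1, axis=1)    # (m, hidden+1)
# h  = sigmoid(a2 @ Theta2.T)                             # (m, num_labels)
# Y  = (y[:, None] == np.arange(1, num_labels + 1)).astype(float)  # one-hot; labels run 1..K
# J  = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m \
#      + lambda_ / (2 * m) * (np.sum(Theta1[:, 1:]**2) + np.sum(Theta2[:, 1:]**2))
# d3 = h - Y                                              # output-layer delta
# d2 = (d3 @ Theta2)[:, 1:] * sigmoid_gradient(a1 @ Theta1.T)
# Theta1_grad = d2.T @ a1 / m; Theta1_grad[:, 1:] += lambda_ / m * Theta1[:, 1:]
# Theta2_grad = d3.T @ a2 / m; Theta2_grad[:, 1:] += lambda_ / m * Theta2[:, 1:]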
lambda_ = 0 # No regularization
nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
lambda_ = 1
nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
def sigmoid_gradient(z):
#SIGMOIDGRADIENT returns the gradient of the sigmoid function
#evaluated at z
# g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
# evaluated at z. This should work regardless if z is a matrix or a
# vector. In particular, if z is a vector or matrix, you should return
# the gradient for each element.
g = np.zeros(z.shape)
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the gradient of the sigmoid function evaluated at
# each value of z (z can be a matrix, vector or scalar).
# =============================================================
return g
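# A one-line sketch of the intended body (an assumption): the derivative of the
# logistic function is sigmoid(z) * (1 - sigmoid(z)), applied element-wise:
# g = sigmoid(z) * (1 - sigmoid(z))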
sigmoid_gradient(np.array([1, -0.5, 0, 0.5, 1]))
def rand_initialize_weight(L_in, L_out):
#RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
#incoming connections and L_out outgoing connections
# W = RANDINITIALIZEWEIGHTS(L_in, L_out) randomly initializes the weights
# of a layer with L_in incoming connections and L_out outgoing
# connections.
#
# Note that W should be set to a matrix of size(L_out, 1 + L_in) as
# the column row of W handles the "bias" terms
#
# You need to return the following variables correctly
W = np.zeros((L_out, L_in))
# ====================== YOUR CODE HERE ======================
# Instructions: Initialize W randomly so that we break the symmetry while
# training the neural network.
#
# Note: The first column of W corresponds to the parameters for the bias units
#
return W
# =========================================================================
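# A common initialization sketch (an assumption, following the usual handout recipe;
# note the callers below already pass L_in with the +1 bias column included):
# epsilon_init = 0.12
# W = np.random.rand(L_out, L_in) * 2 * epsilon_init - epsilon_init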
def numerical_gradient(f, x, dx=1e-6):
perturb = np.zeros(x.size)
result = np.zeros(x.size)
for i in range(x.size):
perturb[i] = dx
result[i] = (f(x+perturb) - f(x-perturb)) / (2*dx)
perturb[i] = 0
return result
def check_NN_gradients(lambda_=0):
input_layer_size = 3
hidden_layer_size = 5
num_labels = 3
m = 5
def debug_matrix(fan_out, fan_in):
W = np.sin(np.arange(fan_out * (fan_in+1))+1) / 10
return W.reshape(fan_out, fan_in+1)
Theta1 = debug_matrix(hidden_layer_size, input_layer_size)
Theta2 = debug_matrix(num_labels, hidden_layer_size)
X = debug_matrix(m, input_layer_size - 1)
y = 1 + ((1 + np.arange(m)) % num_labels)
nn_params = np.concatenate([Theta1.flat, Theta2.flat])
cost, grad = nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
def just_cost(nn_params):
cost, grad = nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
return cost
return np.sum(np.abs(grad - numerical_gradient(just_cost, nn_params))) / grad.size
check_NN_gradients()
initial_Theta1 = rand_initialize_weight(hidden_layer_size, input_layer_size+1)
initial_Theta2 = rand_initialize_weight(num_labels, hidden_layer_size+1)
def cost_fun(nn_params):
return nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
lambda_ = 3
nn_params = np.concatenate((initial_Theta1.flat, initial_Theta2.flat))
res = scipy.optimize.minimize(cost_fun, nn_params, jac=True, method='L-BFGS-B',
options=dict(maxiter=200, disp=True))
res
res.fun
lambda_ = 1
nn_params = np.concatenate((initial_Theta1.flat, initial_Theta2.flat))
res = scipy.optimize.minimize(cost_fun, nn_params, jac=True, method='L-BFGS-B',
options=dict(maxiter=200, disp=True))
nn_params = res.x
t1_len = (input_layer_size+1)*hidden_layer_size
Theta1 = nn_params[:t1_len].reshape(hidden_layer_size, input_layer_size+1)
Theta2 = nn_params[t1_len:].reshape(num_labels, hidden_layer_size+1)
display(Theta1[:,1:], figsize=(6,6))
def predict(Theta1, Theta2, X):
#PREDICT Predict the label of an input given a trained neural network
# p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
# trained weights of a neural network (Theta1, Theta2)
m = X.shape[0]
num_labels = Theta2.shape[0]  # rows of Theta2 = number of output classes
# You need to return the following variables correctly. Remember that
# the given data labels go from 1..10, with 10 representing the digit 0!
p = np.zeros(X.shape[0])
# ====================== YOUR CODE HERE ======================
# ============================================================
return p
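# A possible forward-propagation sketch (an assumption, mirroring the cost function above):
# a1 = np.insert(X, 0, 1, axis=1)
# a2 = np.insert(sigmoid(a1 @ Theta1.T), 0, 1, axis=1)
# h  = sigmoid(a2 @ Theta2.T)
# p  = np.argmax(h, axis=1) + 1   # labels are 1-indexed (10 stands for digit 0)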
predictions = predict(Theta1, Theta2, X)
np.mean(predictions == y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Optimizing the domain-oriented decomposition
Step2: Graph query result
Step3: Visualization data
|
<ASSISTANT_TASK:>
Python Code:
import py2neo
import pandas as pd
graph= py2neo.Graph()
query=
MATCH
(s1:Subdomain)<-[:BELONGS_TO]-
(type:Type)-[r:DEPENDS_ON*0..1]->
(dependency:Type)-[:BELONGS_TO]->(s2:Subdomain)
RETURN s1.name as from, s2.name as to, COUNT(r) as x_number
result = graph.run(query).data()
df = pd.DataFrame(result)
df.head(10)
import json
json_data = df.to_dict(orient='split')['data']
with open ( "chord_data.json", mode='w') as json_file:
json_file.write(json.dumps(json_data, indent=3))
json_data[:10]
<END_TASK>
|