| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, 67–124k chars, ⌀ = nullable) | cleaned_prompt (string, 168–30.3k chars, ⌀ = nullable) |
|---|---|---|
5,000
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Embedding, Input, Flatten, Conv2D, MaxPooling2D
from sklearn.utils import shuffle
from sklearn.preprocessing import MultiLabelBinarizer
!gsutil cp 'gs://ml-design-patterns/so_data.csv' .
data = pd.read_csv('so_data.csv', names=['tags', 'original_tags', 'text'], header=0)
data = data.drop(columns=['original_tags'])
data = data.dropna()
data = shuffle(data, random_state=22)
data.head()
# Encode top tags to multi-hot
tags_split = [tags.split(',') for tags in data['tags'].values]
print(tags_split[0])
tag_encoder = MultiLabelBinarizer()
tags_encoded = tag_encoder.fit_transform(tags_split)
num_tags = len(tags_encoded[0])
print(data['text'].values[0][:110])
print(tag_encoder.classes_)
print(tags_encoded[0])
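# Sidebar (editor's sketch, not from the original notebook): MultiLabelBinarizer
# in miniature, on two hypothetical samples. Classes are sorted alphabetically,
# so the columns below correspond to ['pandas', 'python'].
mlb_demo = MultiLabelBinarizer()
print(mlb_demo.fit_transform([['python'], ['python', 'pandas']]))  # [[0 1], [1 1]]
print(mlb_demo.classes_)  # ['pandas' 'python']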
# Split our data into train and test sets
train_size = int(len(data) * .8)
print ("Train size: %d" % train_size)
print ("Test size: %d" % (len(data) - train_size))
# Split our labels into train and test sets
train_tags = tags_encoded[:train_size]
test_tags = tags_encoded[train_size:]
train_qs = data['text'].values[:train_size]
test_qs = data['text'].values[train_size:]
from tensorflow.keras.preprocessing import text
VOCAB_SIZE=400 # This is a hyperparameter, try out different values for your dataset
tokenizer = text.Tokenizer(num_words=VOCAB_SIZE)
tokenizer.fit_on_texts(train_qs)
body_train = tokenizer.texts_to_matrix(train_qs)
body_test = tokenizer.texts_to_matrix(test_qs)
# Note we're using sigmoid output with binary_crossentropy loss
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(50, input_shape=(VOCAB_SIZE,), activation='relu'))
model.add(tf.keras.layers.Dense(25, activation='relu'))
model.add(tf.keras.layers.Dense(num_tags, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# Train and evaluate the model
model.fit(body_train, train_tags, epochs=3, batch_size=128, validation_split=0.1)
print('Eval loss/accuracy:{}'.format(
model.evaluate(body_test, test_tags, batch_size=128)))
# Get some test predictions
predictions = model.predict(body_test[:3])
classes = tag_encoder.classes_
for q_idx, probabilities in enumerate(predictions):
print(test_qs[q_idx])
for idx, tag_prob in enumerate(probabilities):
if tag_prob > 0.7:
print(classes[idx], round(tag_prob * 100, 2), '%')
print('')
# First, download the data. We've made it publicly available in Google Cloud Storage
!gsutil cp gs://ml-design-patterns/mushrooms.csv .
mushroom_data = pd.read_csv('mushrooms.csv')
mushroom_data.head()
# 1 = edible, 0 = poisonous
mushroom_data.loc[mushroom_data['class'] == 'p', 'class'] = 0
mushroom_data.loc[mushroom_data['class'] == 'e', 'class'] = 1
labels = mushroom_data.pop('class')
dummy_data = pd.get_dummies(mushroom_data)
# Split the data
train_size = int(len(mushroom_data) * .8)
train_data = dummy_data[:train_size]
test_data = dummy_data[train_size:]
train_labels = labels[:train_size]
test_labels = labels[train_size:]
model = keras.Sequential([
keras.layers.Dense(32, input_shape=(len(dummy_data.iloc[0]),), activation='relu'),
keras.layers.Dense(8, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
# Since we're using sigmoid output, we use binary_crossentropy for our loss function
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_data.values.tolist(), train_labels.values.tolist())
model.evaluate(test_data.values.tolist(), test_labels.values.tolist())
# First, transform the label column to one-hot
def to_one_hot(data):
if data == 0:
return [1, 0]
else:
return [0,1]
train_labels_one_hot = train_labels.apply(to_one_hot)
test_labels_one_hot = test_labels.apply(to_one_hot)
model_softmax = keras.Sequential([
keras.layers.Dense(32, input_shape=(len(dummy_data.iloc[0]),), activation='relu'),
keras.layers.Dense(8, activation='relu'),
keras.layers.Dense(2, activation='softmax')
])
model_softmax.summary()
model_softmax.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model_softmax.fit(train_data.values.tolist(), train_labels_one_hot.values.tolist())
model_softmax.evaluate(test_data.values.tolist(), test_labels_one_hot.values.tolist())
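# Sidebar (editor's sketch, not part of the original notebook): for two classes,
# a 2-unit softmax is mathematically equivalent to a sigmoid applied to the
# difference of the logits, which is why both models above learn the same task.
logits = np.array([2.0, 0.5])  # hypothetical output-layer pre-activations
softmax_probs = np.exp(logits) / np.exp(logits).sum()
sigmoid_of_diff = 1.0 / (1.0 + np.exp(-(logits[0] - logits[1])))
assert np.isclose(softmax_probs[0], sigmoid_of_diff)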
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building a multilabel model with sigmoid output
Step2: 🥑🥑🥑
Step3: Parsing sigmoid results
Step4: Sigmoid output for binary classification
Step5: To keep things simple, we'll first convert the label column to numeric and then
Step6: Sidebar
|
5,001
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import string
import numpy as np
import pandas as pd
print pd.__version__
# Dimensions
nb_rand_var = 8
nb_dates = 220
np.random.seed(4321)
# Random choice letters
pickme = lambda x: np.random.choice(26, x, replace=False)
labels = np.array(list(string.ascii_uppercase))[pickme(nb_rand_var)]
labels
# Timeseries
ts = pd.date_range("2015-03-23", periods=nb_dates, freq="B")
ts
# A Brownian Motion
noise = np.random.randn(nb_dates, nb_rand_var)
df = pd.DataFrame(noise.cumsum(axis=0),
index=ts,
columns=labels)
df.head()
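# Sanity check (editor's sketch): differencing the cumulative sum recovers the
# original white-noise increments of the Brownian path.
assert np.allclose(df.diff().dropna().values, noise[1:])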
import matplotlib.pyplot as plt
plt.style.use("ggplot") # only for matplotlib >= 1.4
df.plot();
import seaborn as sns
with sns.axes_style("darkgrid"):
df.plot()
with sns.axes_style("ticks"):
df.plot()
from pandas_highcharts.display import display_charts
display_charts(df, title="Brownian Motion")
from pandas_highcharts.core import serialize
json_data = serialize(df, render_to="brownian", title="Brownian Motion")
type(json_data)
from IPython.core.display import display, HTML
display(HTML("""<div id="{chart_id}"></div>
<script type="text/javascript">{data}</script>""".format(chart_id="brownian", data=json_data)))
data = serialize(df, render_to="brownian", output_type="dict")
data.keys()
data["chart"]
data["legend"]
data["subtitle"] = {"text": "a subtitle here..."}
data["chart"]["type"] = "spline"
data["plotOptions"] = {"spline": {
"lineWidth": 2,
"states": {
"hover": {
"lineWidth": 3}
}}}
data['chart']['type'] = 'line'
data['chart']['zoomType'] = 'x'
data['chart']['panning'] = True
data['chart']['panKey'] = 'shift'
data["chart"]["renderTo"] = "new_brownian"
print data["chart"]
print data["subtitle"]
print data["yAxis"]
from pandas_highcharts.core import json_encode
json_data_2 = "new Highcharts.StockChart(%s);" % json_encode(data)
display(HTML("""<div id="{chart_id}"></div>
<script type="text/javascript">{data}</script>""".format(chart_id="new_brownian", data=json_data_2)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build a DataFrame with a timeseries
Step2: Plotting with matplotlib
Step3: Even if the figure is nicer than the matplotlib default style, I think it's too small, and legends don't fit when there are more than 5 or 6 variables.
Step4: The colors palette, the figure size and the legends position are OK for me.
Step5: Let's go with Highcharts
Step6: You can also retrieve the JSON data generated by pandas_highcharts via the serialize function.
Step7: Serialize the previous DataFrame with the pandas_highcharts function serialize.
Step9: And display the chart with IPython's HTML renderer and its display function
Step10: Try to change some parameters
Step11: Add a subtitle
Step12: Change the chart type
Step14: Serialize this new dict.
|
5,002
|
<ASSISTANT_TASK:>
Python Code:
# for colab
!pip install -q tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
# a small sanity check, does tf seem to work ok?
hello = tf.constant('Hello TF!')
print("This works: {}".format(hello))
# this should return True even on Colab
tf.test.is_gpu_available()
tf.test.is_built_with_cuda()
tf.executing_eagerly()
import pandas as pd
print(pd.__version__)
# local
# URL = '../data/sample4.json'
# remote
URL = 'https://raw.githubusercontent.com/DJCordhose/ux-by-tfjs/master//data/sample4.json'
df = pd.read_json(URL, typ='series')
len(df)
df.head()
X = [item['x'] for item in df]
X[0]
y = [item['y'] - 1 for item in df]
y[0]
from math import floor
def make_chunks(list_to_chunk, chunk_size):
length = len(list_to_chunk)
assert length / chunk_size == floor(length / chunk_size), "length of data must be multiple of segment length"
for chunk_start in range(0, length, chunk_size):
yield list_to_chunk[chunk_start : chunk_start + chunk_size]
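# Quick usage check for make_chunks (editor's sketch): six items in chunks of three.
assert list(make_chunks([1, 2, 3, 4, 5, 6], 3)) == [[1, 2, 3], [4, 5, 6]]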
import numpy as np
CHUNK_SIZE = 25
# only use the final segments
SEGMENTS = 2
X_expanded = []
y_expanded = []
for x_el, y_el in zip(X, y):
chunks = list(make_chunks(x_el, CHUNK_SIZE))
chunks = chunks[len(chunks) - SEGMENTS:]
labels = [y_el] * SEGMENTS
for seq, label in zip(chunks, labels):
X_expanded.append(seq)
y_expanded.append(label)
X_expanded = np.array(X_expanded)
y_expanded = np.array(y_expanded)
X_expanded.shape
X_expanded[100]
X_expanded[100][0]
y_expanded[100]
np.unique(y_expanded)
assert np.array_equal(np.unique(y_expanded), [0, 1, 2])
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, LSTM, GRU, SimpleRNN, BatchNormalization
from tensorflow.keras.models import Sequential, Model
# experiment with
# - type of RNN: SimpleRNN, LSTM, GRU
# - number of units
# - dropout
# - BatchNormalization: yes/no
n_steps = len(X_expanded[0])
n_features = len(X_expanded[0][0])
n_buttons = 3
model = Sequential()
model.add(SimpleRNN(units=50, activation='tanh', input_shape=(n_steps, n_features), name="RNN_Input",
# model.add(GRU(units=50, activation='tanh', input_shape=(n_steps, n_features), name="RNN_Input",
# recurrent_dropout makes things slow
# dropout=0.1, recurrent_dropout=0.1))
dropout=0.1))
# model.add(GRU(units=50, activation='tanh', input_shape=(n_steps, n_features), name="RNN_Input"))
model.add(BatchNormalization())
model.add(Dense(units=n_buttons, name='softmax', activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
%%time
EPOCHS = 200
BATCH_SIZE = 200
history = model.fit(X_expanded, y_expanded,
batch_size=BATCH_SIZE,
epochs=EPOCHS, verbose=0, validation_split=0.2)
loss, accuracy = model.evaluate([X_expanded], y_expanded, batch_size=BATCH_SIZE)
accuracy
%matplotlib inline
import matplotlib.pyplot as plt
# plt.yscale('log')
plt.ylabel('loss')
plt.xlabel('epochs')
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['loss', 'val_loss'])
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['accuracy', 'val_accuracy'])
model.predict([[X_expanded[0]]])
model.predict([[X_expanded[0]]]).argmax()
y_expanded[0]
y_pred = model.predict([X_expanded]).argmax(axis=1)
cm = tf.math.confusion_matrix(labels=y_expanded, predictions=y_pred)
cm
import seaborn as sns
classes = ["Left Button", "Middle Button", "Right Button"]
sns.heatmap(cm, annot=True, fmt="d", xticklabels=classes, yticklabels=classes)
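# Per-class metrics can be read off the confusion matrix (editor's sketch):
# rows of tf.math.confusion_matrix are true labels, columns are predictions.
cm_np = np.array(cm, dtype=float)
print('precision per class:', np.diag(cm_np) / cm_np.sum(axis=0))
print('recall per class:   ', np.diag(cm_np) / cm_np.sum(axis=1))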
model.save('ux.hd5')
!ls -l
!pip install -q tensorflowjs
!tensorflowjs_converter --input_format keras ux.hd5 tfjs
!ls -l tfjs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: load data
Step2: pre-process data into chunks
Step3: Recurrent Neural Networks
Step4: Convert Model into tfjs format
|
5,003
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
pd.set_option('display.float_format', lambda x: '%.2f' % x)
dtype = {
'Title': str,
'First Name': str,
'Last Name': str,
'Speciality': str,
'Institution Name': str
}
df = pd.read_csv('./data/payments.csv', dtype=dtype)
print "{:,} rows".format(len(df))
print "£{:,} total declared".format(df['Amount'].sum())
print len(df['Company Name'].unique()), 'companies'
print len(df['Organisation Name'].unique()), 'organisations'
print len(df['Speciality'].unique()), 'specialities'
print len(df['Institution Name'].unique()), 'institutions'
# df.describe(include='all')
df_hcps = df[pd.isnull(df['Organisation Name'])]
df_hcos = df[~pd.isnull(df['Organisation Name'])]
a = "{:,} payments to HCOs ({:,.2f}% of total payments)"
print a.format(len(df_hcos), 100 * len(df_hcos) / float(len(df)))
a = "{:,} payments to HCPs ({:,.2f}% of total payments)"
print a.format(len(df_hcps), 100 * len(df_hcps) / float(len(df)))
print
a = "£{:,} paid to HCOs ({:,.2f}% of total payments)"
print a.format(df_hcos['Amount'].sum(), 100 * df_hcos['Amount'].sum() / df['Amount'].sum())
a = "£{:,} paid to HCPs ({:,.2f}% of total payments)"
print a.format(df_hcps['Amount'].sum(), 100 * df_hcps['Amount'].sum() / df['Amount'].sum())
# This is what the raw payments data looks like!
df.head()
print df['Amount'].describe()
print len(df[df['Amount'] > 10000]), 'payments overall are more than £10,000'
print '\nPayments to organisations:'
print df_hcos['Amount'].describe()
print '\nPayments to individuals:'
print df_hcps['Amount'].describe()
df_hcps.sort_values('Amount', ascending=False).head()
df.sort_values('Amount').head()[['Company Name', 'Organisation Name', 'TOV Type', 'Amount']]
df.groupby(('TOV Category', 'TOV Type')).sum().sort_values('Amount', ascending=False)
df_aggregate = pd.read_csv('./data/aggregates.csv')
df_aggregate.head()
undeclared_payments = df_aggregate['Amount'].sum()
declared_payments = df['Amount'].sum()
total_payments = undeclared_payments + declared_payments
print 'Total payments undeclared:', "£{:,}".format(undeclared_payments)
print 'Total payments declared:', "£{:,}".format(declared_payments)
print 'Percentage undeclared: {:.2f}%'.format(undeclared_payments / total_payments * 100)
df_agg_ex_rd = df_aggregate[df_aggregate['TOV Category'] != 'Research & Development']
undeclared_payments_ex_rd = df_agg_ex_rd['Amount'].sum()
print 'Total payments on R&D (none of which are declared):', "£{:,}".format(undeclared_payments - undeclared_payments_ex_rd)
print 'Total payments undeclared ex R&D:', "£{:,}".format(undeclared_payments_ex_rd)
# Get aggregate figures for HCPs only.
df_agg_hcps = df_agg_ex_rd[pd.isnull(df['Organisation Name'])].groupby(('Company Name')).sum().reset_index()
df_agg_hcps = df_agg_hcps[['Company Name', 'Amount', 'No of HCP HCO in Aggregate']]
df_agg_hcps.rename(columns={'No of HCP HCO in Aggregate': 'No of HCP'}, inplace=True)
# print df_agg_hcps.head()
# Count total undeclared payments by company/category/HCP
# Note: the aggregate sheet holds the *undeclared* payments, so name the counts
# accordingly (the original variable names were swapped relative to the labels),
# and force float division (this notebook is Python 2).
undeclared_hcps = df_agg_hcps['No of HCP'].sum()
print 'No of HCP payments by company/category, undeclared:', undeclared_hcps
declared_hcps = len(df_hcps)
print 'No of HCP payments by company/category, declared:', declared_hcps
v = float(undeclared_hcps) / (undeclared_hcps + declared_hcps)
print 'Overall % of payments to HCPs undeclared: {:.2f}%'.format(v*100)
# Group the declared HCP payments by company, then merge with the aggregate data.
df_hcps_by_co = df_hcps.groupby('Company Name')['Amount'].agg(['sum','count']).reset_index()
df_hcps_by_co.rename(columns={'sum': 'Amount', 'count': 'No of HCP'}, inplace=True)
df_m = pd.merge(df_agg_hcps, df_hcps_by_co, on='Company Name', suffixes=(' Undeclared', ' Declared'))
df_m['Total HCPs'] = df_m['No of HCP Undeclared'] + df_m['No of HCP Declared']
df_m['% HCP Payments Undeclared'] = df_m['No of HCP Undeclared'] / df_m['Total HCPs'] * 100
df_m.sort_values('% HCP Payments Undeclared', ascending=False)
# Get all aggregate payments but EXCLUDE R&D.
# This is because NO R&D payments are included in the payments sheet, so
# including them in our per-company analyses will distort results.
# Lump both HCPs and HCOs together.
df_agg_by_co = df_agg_ex_rd.groupby(('Company Name')).sum().reset_index()
df_agg_by_co.rename(columns={'No of HCP HCO in Aggregate': 'No of HCP HCO'}, inplace=True)
df_agg = df_agg_by_co[['Company Name', 'Amount', 'No of HCP HCO']]
# We've already created dataframes for HCOs and HCPs: now get the sums
# and counts of these, and merge them. There's probably a better way to do this.
df_hcos_by_co = df_hcos.groupby('Company Name')['Amount'].agg(['sum','count']).reset_index()
df_hcps_by_co = df_hcps.groupby('Company Name')['Amount'].agg(['sum','count']).reset_index()
df_m = pd.merge(df_hcos_by_co, df_hcps_by_co, on='Company Name', suffixes=('_hco', '_hcp'))
df_m['Amount'] = df_m['sum_hco'] + df_m['sum_hcp']
df_m['No of HCP HCO'] = df_m['count_hco'] + df_m['count_hcp']
df_declared = df_m[['Company Name', 'Amount', 'No of HCP HCO']]
df_combined = pd.merge(df_declared, df_agg, on='Company Name',
suffixes=(' Declared', ' Undeclared'))
# Calculate totals, proportions etc.
df_combined['Total Amount'] = df_combined['Amount Declared'] + df_combined['Amount Undeclared']
df_combined['Total HCOs/HCPs'] = df_combined['No of HCP HCO Declared'] + df_combined['No of HCP HCO Undeclared']
df_combined['Proportion of names undeclared'] = df_combined['No of HCP HCO Undeclared'] / \
df_combined['Total HCOs/HCPs'] * 100
df_combined['Proportion of total amount undeclared'] = df_combined['Amount Undeclared'] / \
df_combined['Total Amount'] * 100
# Print summary stats about the proportion of undeclared payments.
# Note that this number is lower than the 77% above, because we've excluded R&D.
print 'Total % names undeclared:', "{:.2f}%".format(df_combined['No of HCP HCO Undeclared'].sum() / \
df_combined['Total HCOs/HCPs'].sum() * 100)
print 'Total % amount undeclared:', "{:.2f}%".format(df_combined['Amount Undeclared'].sum() / \
df_combined['Total Amount'].sum() * 100)
# Show all companies for which total amount > £1 million,
# sorted by the total amount undeclared.
# Again, these are flattering numbers because we've excluded R&D.
# In most cases, it looks as though the payments that are undeclared are the
# smaller payments - i.e. probably those to individuals. Sanofi seems to be an exception.
# Note how GSK is an outlier too.
df_combined.sort_values('Proportion of total amount undeclared', inplace=True, ascending=False)
df_combined[df_combined['Total Amount'] > 1000000]
# Double-check a sample row in the table above, against the raw data,
# to make sure we've got things right....
print df_agg_by_co[df_agg_by_co['Company Name'] == 'Napp Pharmaceuticals Ltd'][['Amount', 'No of HCP HCO']]
print df[df['Company Name'] == 'Napp Pharmaceuticals Ltd'].sum()['Amount']
print len(df_hcps[df_hcps['Company Name'] == 'Napp Pharmaceuticals Ltd']['Last Name'])
print len(df_hcos[df_hcos['Company Name'] == 'Napp Pharmaceuticals Ltd']['Organisation Name'])
# Save all data to CSV.
df_combined.to_csv('declared_vs_aggregated_payments_by_company.csv')
# As above, EXCLUDE R&D.
df_agg_by_cat = df_agg_ex_rd.groupby('TOV Category').sum().reset_index()
df_agg_cat = df_agg_by_cat[['TOV Category', 'Amount']]
df_by_cat = df.groupby(('TOV Category')).sum().reset_index()
df_merged_cat = pd.merge(df_agg_cat, df_by_cat, on=['TOV Category'],
suffixes=(' Aggregate', ' Declared'))
df_merged_cat['Total Amount'] = df_merged_cat['Amount Declared'] + \
df_merged_cat['Amount Aggregate']
df_merged_cat['Proportion Undeclared'] = df_merged_cat['Amount Aggregate'] / \
df_merged_cat['Total Amount']
df_merged_cat.sort_values('Proportion Undeclared', ascending=False, inplace=True)
df_merged_cat.to_csv('declared_vs_aggregated_payments_by_category.csv')
df_merged_cat
df_individual = df.groupby(('Last Name', 'First Name', 'Speciality')).sum().reset_index()
df_individual.sort_values(by='Amount', ascending=False).head(20)
df_by_co = df.groupby('Company Name').agg(['sum', 'mean', 'median', 'count'])
df_by_co.sort_values(by=('Amount', 'sum'), ascending=False).head(10)
# df[['Company Name', 'TOV Category', 'Amount']].head()
df.groupby(('Company Name', 'TOV Category')).sum().sort_values('Amount', ascending=False).head(10)
df_by_co_and_speciality = df.groupby(('Company Name', 'Speciality')).sum()\
.sort_values('Amount', ascending=False).reset_index()
total = df_by_co_and_speciality.groupby('Company Name')['Amount'].transform('sum')
df_by_co_and_speciality['% of Co Spend'] = df_by_co_and_speciality['Amount']/total * 100
print df_by_co_and_speciality.head(10)
# Validate % calculation
# print df_by_co_and_speciality[df_by_co_and_speciality['Company Name'] == 'AstraZeneca']['% of Company Total Spend'].sum()
df_by_co_and_speciality.to_csv('payments_by_co_and_speciality.csv')
df_by_org = df.groupby('Organisation Name').agg(['sum', 'mean', 'median', 'count'])
df_by_org.sort_values(by=('Amount', 'sum'), ascending=False).head(10)
df_by_spec = df.groupby('Speciality').agg(['sum', 'mean', 'median', 'count']).sort_values(by=('Amount', 'sum'), ascending=False)
print df_by_spec.head(10)
df_by_spec.to_csv('by_speciality.csv')
# Normalise the specialties
def specialty_classifier(row):
other = ['Healthcare Administration',
'Microbiology',
'Laboratory - Medical Analysis',
'Research',
'Occupational Therapist',
'Wholesaler',
'Miscellaneous']
pharmacists = ['Clinical Pharmacology', 'Pharmacist']
nurses = ['Nurse']
classification = 'HCO'
if row['Speciality'] in other:
classification = 'Other'
elif row['Speciality'] in pharmacists:
classification = 'Pharmacists'
elif row['Speciality'] in nurses:
classification = 'Nurses'
elif str(row['Speciality']) != "nan":
# XXX there must be a nicer way to filter out pd.nan values...
classification = 'Doctors'
return classification
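# Quick spot checks of the classifier on hypothetical rows (editor's sketch):
print(specialty_classifier({'Speciality': 'Nurse'}))       # Nurses
print(specialty_classifier({'Speciality': 'Cardiology'}))  # Doctors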
df['Normalised specialty'] = df.apply(specialty_classifier, axis=1)
# Re-run the analysis
df.groupby('Normalised specialty').agg(['sum', 'mean', 'median', 'count']).sort_values(by=('Amount', 'sum'), ascending=False)
df_individual = df.groupby(('Last Name', 'First Name', 'Speciality')).sum().reset_index()
df_individual.groupby('Speciality').count()[['Amount']].sort_values("Amount", ascending=False)
df_individual = df.groupby(('Last Name', 'First Name', 'Normalised specialty')).sum().reset_index()
df_individual.groupby('Normalised specialty').count()[['Amount']].sort_values("Amount", ascending=False)
df_sample = df[(df['Company Name'] == 'Genzyme') & (df['TOV Type'] == 'Fees')]
df_sample.describe(include='all')
# print len(df_sample), 'rows'
# df_sample['Amount'].sum()
%pylab inline
plt.figure()
bin_range = np.arange(0, 1100000, 10000)
df['Amount'].plot(kind='hist', bins=bin_range, color='blue', alpha=0.6)
plt.ylabel("Number of payments")
plt.xlabel("Payment size")
plt.yscale('log', nonposy='clip')
plt.title("Payments by amount")
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic statistics
Step2: Quickly calculate the breakdown between payments to individuals and payments to organisations.
Step3: Most payments are small! 50% are below £355, 75% are below £800. Only 1,164 payments are more than £10,000.
Step4: What are the biggest single payments to individuals?
Step5: Why are there a few negative payments?
Step6: Payment distribution by each TOV category/type, in the payments data.
Step7: Start to compare the declared and undeclared data
Step8: How much is missing?
Step9: Work out how many HCP payments are missing, by company
Step10: Work out how many HCOs/HCPs are missing, by company
Step11: Compare undeclared payments by category
Step12: The individuals with the most declared payments
Step13: Payments by pharma co
Step14: Payments by pharma companies by category
Step15: Payments by pharma companies by speciality
Step16: Payments by organisation
Step17: Payments by speciality
Step18: Payments by specialty, with custom groupings
Step19: Unique individuals by speciality
Step20: The same thing, with custom specialty groupings
Step21: Exploratory work to understand undeclared payments
Step22: Distribution of payments
|
5,004
|
<ASSISTANT_TASK:>
Python Code:
!pip3 install bs4
from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
h3_tags = document.find_all('h3')
print("There is", len(h3_tags), "“h3” tags in widgets2016.html.")
tel = document.find('a', {'class': 'tel'})
print("The telephone number is", tel.string)
widget_names = document.find_all('td', {'class': 'wname'})
for name in widget_names:
print(name.string)
widgets = []
# your code here
widget_infos = document.find_all('tr', {'class': 'winfo'})
for info in widget_infos:
partno = info.find('td', {'class': 'partno'})
price = info.find('td', {'class': 'price'})
quantity = info.find('td', {'class': 'quantity'})
wname = info.find('td', {'class': 'wname'})
widgets.append({'partno': partno.string, 'price': price.string, 'quantity': quantity.string, 'wname': wname.string})
# end your code
widgets
widgets = []
# your code here
widget_infos = document.find_all('tr', {'class': 'winfo'})
for info in widget_infos:
partno = info.find('td', {'class': 'partno'})
price = info.find('td', {'class': 'price'})
quantity = info.find('td', {'class': 'quantity'})
wname = info.find('td', {'class': 'wname'})
widgets.append({'partno': partno.string, 'price': float(price.string[1:]), 'quantity': int(quantity.string), 'wname': wname.string})
# end your code
widgets
total_nb_widgets = 0
for widget in widgets:
total_nb_widgets += widget['quantity']
print(total_nb_widgets)
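# Equivalent one-liner using a generator expression (editor's sketch):
print(sum(widget['quantity'] for widget in widgets))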
for widget in widgets:
if widget['price'] > 9.30:
print(widget['wname'])
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
hallowed_header = document.find('h3', text='Hallowed widgets')
sibling_table = hallowed_header.find_next_sibling()
for part in sibling_table.find_all('td', {'class': 'partno'}):
print(part.string)
category_counts = {}
# your code here
categories = document.find_all('h3')
for category in categories:
table = category.find_next_sibling('table')
widgets = table.select('td.wname')
category_counts[category.string] = len(widgets)
# end your code
category_counts
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
Step2: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
Step3: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order)
Step4: Problem set #2
Step5: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this
Step6: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Step7: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Step9: Problem set #3
Step10: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above
Step11: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Step12: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
|
5,005
|
<ASSISTANT_TASK:>
Python Code:
%run Regexp-2-NFA.ipynb
%run RegExp-Parser.ipynb
r = parse('(ab + ba)*')
r
converter = RegExp2NFA({'a', 'b'})
nfa = converter.toNFA(r)
nfa
%run FSM-2-Dot.ipynb
d = nfa2dot(nfa)
d.render(view=True)
%run NFA-2-DFA.ipynb
dfa = nfa2dfa(nfa)
dfa
d, S = dfa2dot(dfa)
S
d
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the regular expression r that is defined below is written in the style of the lecture notes, it reads
Step2: We use converter to create a non-deterministic FSM from the regular expression
Step3: I have to use the method render below, because somehow the method display is buggy
Step4: After having constructed a non-deterministic FSM, we convert it into a deterministic one
|
5,006
|
<ASSISTANT_TASK:>
Python Code:
DATA_PATH = '~/Desktop/sdss_dr7_photometry_source.csv.gz'
import itertools
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.neighbors
%matplotlib inline
PSF_COLS = ('psfMag_u', 'psfMag_g', 'psfMag_r', 'psfMag_i', 'psfMag_z')
def load_data(x_cols=PSF_COLS,
class_col='class',
class_val='Galaxy',
train_samples_num=1000000):
# Cast x_cols to list so Pandas doesn't complain…
x_cols_l = list(x_cols)
data_iter = pd.read_csv(
DATA_PATH,
iterator=True,
chunksize=100000,
usecols=x_cols_l + [class_col])
# Filter out anything that is not a galaxy without loading the whole file into memory.
data = pd.concat(chunk[chunk[class_col] == class_val]
for chunk in data_iter)
train_X = data[:train_samples_num][x_cols_l].as_matrix()
assert train_X.shape == (train_samples_num, len(x_cols))
return train_X
data = load_data()
def fit(train_X,
bandwidth=1, # By experimentation.
kernel='epanechnikov', # Resembles Gaussian within short distance, but is faster.
leaf_size=400, # For speed.
rtol=1e-3): # Decreased accuracy, but better speed.
estimator = sklearn.neighbors.KernelDensity(bandwidth=bandwidth,
kernel=kernel,
leaf_size=leaf_size,
rtol=rtol)
estimator.fit(train_X)
return estimator
kde = fit(data)
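# Sanity check (editor's sketch): score_samples returns log-densities, so
# exponentiating gives the estimated density at a query point (here, the first
# training point).
print(np.exp(kde.score_samples(data[:1])))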
def make_5D_grid(train_X,
grid_samples_per_axis=20): # Careful! This code is O(n^5) in this variable
mins = np.min(train_X, axis=0)
maxs = np.max(train_X, axis=0)
assert mins.shape == maxs.shape == (train_X.shape[1],)
# Produce the 5D grid. This is surprisingly nontrivial.
# http://stackoverflow.com/questions/28825219/
linspaces = [np.linspace(i, j, grid_samples_per_axis)
for i, j in zip(mins, maxs)]
mesh_grids = np.meshgrid(*linspaces,
indexing='ij') # Otherwise numpy swaps the first two dimensions… 😕
sample_points = np.array(mesh_grids)
return sample_points
grid = make_5D_grid(data)
def evaluate_density_at_sample_points(estimator, sample_points):
dims = sample_points.shape[0]
samples_per_axis = sample_points.shape[1]
assert sample_points.shape[1:] == (samples_per_axis,) * dims
sample_points = np.reshape(sample_points, (dims, samples_per_axis ** dims))
densities = estimator.score_samples(sample_points.T)
densities = np.reshape(densities, (samples_per_axis,) * dims)
# Convert from log densities
densities = np.exp(densities)
return densities
grid_densities = evaluate_density_at_sample_points(kde, grid)
def plot_against_one_variable(train_X, sample_points, densities,
bands=PSF_COLS,
bins=1000,
scale_coeff=2500):
dims = len(bands)
assert train_X.shape[1] == sample_points.shape[0] == dims
assert sample_points.shape[1:] == densities.shape
for i in range(dims):
fig, axes = plt.subplots()
# Make histogram.
axes.hist(train_X[:,i], # We only care about one of the five dimensions.
bins=bins,
label='Actual density')
# Make plot of estimated densities.
x_indices = tuple(0 if a != i else slice(None) # Linspace over
for a in range(dims)) # i-th dimension.
x_indices = (i,) + x_indices # Only take i-th dimension. Due to the
# above others are constant anyway.
x = sample_points[x_indices]
assert len(x.shape) == 1 # Sanity check to ensure it is 1D.
y_sum_axes = tuple(a for a in range(dims) if a != i) # Sum over all dimensions except i.
y = np.sum(densities, axis=y_sum_axes)
y *= scale_coeff
assert y.shape == x.shape
axes.plot(x, y, label='Estimated density')
# Labels
plt.ylabel('Count')
plt.xlabel('Magnitude')
plt.title(bands[i])
plt.legend()
plot_against_one_variable(data, grid, grid_densities)
def plot_against_two_variables(train_X, sample_points, densities,
bands=PSF_COLS,
bins=1000):
dims = len(bands)
assert train_X.shape[1] == sample_points.shape[0] == dims
assert sample_points.shape[1:] == densities.shape
mins = sample_points[(slice(None),) + (0,) * dims]
maxs = sample_points[(slice(None),) + (-1,) * dims]
plt.figure(figsize=(10, 40))
upto = 1
for i in range(dims):
for j in range(i + 1, dims):
plt.subplot((dims ** 2 - dims) // 2, 2, upto)
upto += 1
z_sum_axes = tuple(a for a in range(dims) if a != i and a != j) # Sum over all dimensions except i.
z = np.sum(densities, axis=z_sum_axes)
extent = [mins[i], maxs[i], mins[j], maxs[j]]
# plt.axis(extent)
plt.imshow(z.T,
cmap='hot',
interpolation='nearest',
extent=extent,
aspect='auto',
origin='lower')
plt.xlabel(bands[i])
plt.ylabel(bands[j])
plt.title('Estimated')
plt.xlim((16, 26))
plt.ylim((16, 24))
plt.subplot((dims ** 2 - dims) // 2, 2, upto)
upto += 1
plt.hexbin(train_X[:,i], train_X[:,j], gridsize=100)
plt.xlabel(bands[i])
plt.ylabel(bands[j])
plt.title('Actual')
plt.xlim((16, 26))
plt.ylim((16, 24))
plot_against_two_variables(data, grid, grid_densities)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First import the training and testing sets
Step2: Fit the training data.
Step3: Sanity checks
Step4: Two variables
|
5,007
|
<ASSISTANT_TASK:>
Python Code:
# Necessary package imports
import time
import numpy as np
%matplotlib nbagg
import matplotlib.pyplot as plt
from varanneal import va_ode # The ODE version of VarAnneal
def l96(t, x, k):
# Define this as you would any ODE system in Python, when x is a *time series*
# of states.
# Note that using loops will not cause any noticeable slowdown with PYADOLC
# (as opposed to vectorized Numpy operations on vectors), and in fact you
# should be careful which Numpy vector functions you use (all of the "allowed"
# Numpy functions are defined in the PYADOLC source code).
# Define a vector to store the elements of the ODE vector field.
# Note that using numpy.zeros_like avoids the need to overload this function for
# object vs. numerical types (important when function is being taped vs. evaluated).
dxdt = np.zeros_like(x) # will have shape (N, D)
D = x.shape[1]
for i in range(D):
dxdt[:, i] = x[:, (i-1)%D] * (x[:, (i+1)%D] - x[:, (i-2)%D]) - x[:, i] + k
return dxdt
# fancy footwork version below
#return np.roll(x,1,1) * (np.roll(x,-1,1) - np.roll(x,2,1)) - x + k
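# Editor's sanity check (sketch): the loop formulation above agrees with the
# vectorized np.roll formulation on random input (20 components assumed here).
xtest = np.random.randn(7, 20)
dxdt_roll = np.roll(xtest, 1, 1) * (np.roll(xtest, -1, 1) - np.roll(xtest, 2, 1)) - xtest + 8.17
assert np.allclose(l96(0.0, xtest, 8.17), dxdt_roll)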
# Model system dimension
D = 20
# Measured variable indices
Lidx = [0, 2, 4, 6, 8, 10, 14, 16]
# RM, RF0
RM = 1.0 / (0.5**2)
RF0 = 4.0e-6
# alpha, and beta ladder
alpha = 1.5
beta_array = np.linspace(0, 100, 101)
g0 = RF0/RM
gammas_all = g0 * alpha**beta_array
# The data file included with this example contains observations of all D=20
# components. We need to load it and extract just the measured components.
data = np.load("Lorenz96/data/l96_D20_k8p17_data_dt0p05_N161_sm0p5.npy")
# Extract observation times
times_data = data[:, 0]
dt_data = times_data[1] - times_data[0]
N_data = len(times_data)
# Get out just the data
data = data[:, 1:]
data = data[:, Lidx]
# model state discretization
freq_mod = 1.0 # how often to put down a state variable
dt_model = dt_data / freq_mod
if freq_mod == 1.0:
N_model = N_data
else:
N_model = int(N_data * freq_mod) - 1
# State variables
# This should be an array with N_f elements, where element n_f is a D-dimensional
# vector. In other words, this is an array of state vectors at all "model times".
X0 = (20.0*np.random.rand(N_model * D) - 10.0).reshape((N_model, D))
# Parameters
Pidx = [0] # indices of estimated parameters
# Initial guess
P0 = np.array([4.0 * np.random.rand() + 6.0]) # Static parameter
#Pinit = 4.0 * np.random.rand(N_model, 1) + 6.0 # Time-dependent parameter
# Initialize Annealer
anneal_l96 = va_ode.Annealer()
# Set the Lorenz 96 model
anneal_l96.set_model(l96, D)
# Load the data into the Annealer object
anneal_l96.set_data(data, t=times_data)
# First set some options for the optimization.
# The full list of options can be found in the scipy.optimization package documentation.
BFGS_options = {'gtol':1.0e-8, 'ftol':1.0e-8, 'maxfun':1000000, 'maxiter':1000000}
tstart = time.time() # time how long VA takes
# Annealer.anneal() executes VA for all beta values (defined above)
# Note the init_to_data option: this initializes the measured variables to the data.
anneal_l96.anneal(X0, P0, alpha, beta_array, RM, RF0, Lidx, Pidx, dt_model=dt_model,
init_to_data=True, disc='SimpsonHermite', method='L-BFGS-B',
opt_args=BFGS_options, adolcID=0)
print("\nADOL-C annealing completed in %f s."%(time.time() - tstart))
anneal_l96.save_paths("Lorenz96/results/paths_D20_L8_N161.npy")
anneal_l96.save_params("Lorenz96/results/params_D20_L8_N161.npy")
anneal_l96.save_action_errors("Lorenz96/results/aerr_D20_L8_N161.npy")
# Load path & parameter estimates, and action curves
allpaths = np.load("Lorenz96/results/paths_D20_L8_N161.npy")
allparam = np.load("Lorenz96/results/params_D20_L8_N161.npy")
aerr = np.load("Lorenz96/results/aerr_D20_L8_N161.npy")
# Load the true solution
true_soln = np.load("Lorenz96/data/l96_D20_k8p17_truesoln_dt0p05_N161.npy")
beta_show = 50 # plot estimates at this beta value
plot_idx_meas = 1 # measured component index (time has index 0)
plot_idx_meas_dataidx = 0 # measured component index in data array
plot_idx_unmeas = 2 # unmeasured component index (time has index 0)
# plot all path estimates at this beta simultaneously
fig,ax = plt.subplots(2, 1, figsize=(5, 4.5), sharex=True)
fig.set_tight_layout(True)
tplot = allpaths[beta_show, :, 0]
# Observed variable
# plot the data
ax[0].plot(tplot, data[:, plot_idx_meas_dataidx], color="dodgerblue", label="Data", marker=".", ms=3, lw=0)#, alpha=0.7)#, lw=1.5)
# plot the estimate
ax[0].plot(tplot, allpaths[beta_show, :, plot_idx_meas], color="C1", label="Estimate")#, alpha=1.0, lw=2.0)
# plot the true solution
ax[0].plot(tplot, true_soln[:, plot_idx_meas], color="black", label="True", ls="--")#, lw=1.5, alpha=0.7)
ax[0].set_xlim(tplot[0], tplot[-1])
ax[0].set_ylabel(r"$x_%d$ (observed)"%(plot_idx_meas))
ax[0].set_title(r"L96 state estimates ($L=8$, $R_f/R_m = %.1E$)"%(RF0/RM * alpha**beta_show))
h,l = ax[0].get_legend_handles_labels()
ax[0].legend(h,l)
# Unobserved variable
# plot the estimate
ax[1].plot(tplot, allpaths[beta_show, :, plot_idx_unmeas], color="C1")#, alpha=1.0, lw=2.0)
# plot the true solution
ax[1].plot(tplot, true_soln[:, plot_idx_unmeas], color="black", ls="--")#, alpha=0.7)#, lw=1.5)
ax[1].set_xlim(tplot[0], tplot[-1])
ax[1].set_ylabel(r"$x_%d$ (unobserved)"%(plot_idx_unmeas))
ax[1].set_xlabel("Time")
plt.show()
#fig.savefig("/home/prozdeba/CloudStation/write-ups/dynamical_reg_L96_20170123/figs/L96_estimates_L7.pdf")
fig,ax = plt.subplots(1, 1, figsize=(3,3))
fig.set_tight_layout(True)
ax.plot(gammas_all, allparam, color="C1", label="Estimate")#, lw=0, marker='.', ms=3)
ax.axhline(y=8.17, color="black", label="True", ls="--")
ax.set_xscale('log')
ax.set_xlim(gammas_all[0], gammas_all[-1])
ax.set_xlabel(r"$R_f/R_m$")
ax.set_ylabel("$k$")
ax.set_title("L96 parameter")
h,l = ax.get_legend_handles_labels()
ax.legend(h,l)
plt.show()
fig,ax = plt.subplots(1, 3, figsize=(9, 3), sharey=True)
fig.set_tight_layout(True)
ymin = 1.0e20
ymax = 0.0
plotlw = 1.0
plotalpha = 1
#plotcolors = ["C0", "C0", "C0"]
plotcolors = ["black", "black", "black"]
action_vals = aerr[:, 1]
ax[0].plot(gammas_all[:], action_vals, lw=plotlw, color=plotcolors[0], alpha=plotalpha)
ax[0].set_xlabel(r"$R_f/R_m$")
ax[0].set_ylabel("Action")
ax[0].axhline(y=1, lw=1, ls="--", color="C3")#, alpha=.7)
measerr_vals = aerr[:, 2]
ax[1].plot(gammas_all[:], measerr_vals, lw=plotlw, color=plotcolors[1], alpha=plotalpha)
ax[1].set_xlabel(r"$R_f/R_m$")
ax[1].set_ylabel("Meas. error")
moderr_vals = aerr[:, 3]
ax[2].plot(gammas_all[:], moderr_vals, lw=plotlw, color=plotcolors[2], alpha=plotalpha)
ax[2].set_xlabel(r"$R_f/R_m$")
ax[2].set_ylabel("Model error")
fig.suptitle("L96 Action & Errors ($L=8$, single data set)", y=1.0)
for i in range(3):
ax[i].set_yscale('log')
ax[i].set_xscale('log')
ax[i].set_xlim(gammas_all[0], gammas_all[-1])
#ax[i].set_ylim(.001, 1.0e2)
plt.show()
# Necessary package imports
import time
import numpy as np
%matplotlib nbagg
import matplotlib.pyplot as plt
from varanneal import va_ode # The ODE version of VarAnneal
# rate functions
def x_inf(V, Vt, Vs):
return 0.5*(1.0 + np.tanh((V - Vt)/Vs))
def x_tau(V, Vt, Vs, t1, t2):
return t1 + t2*(1.0 - np.tanh((V - Vt)/Vs)**2)
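# Quick checks of the rate functions (editor's sketch): x_inf is a sigmoid in V
# with value 0.5 at V = Vt, and x_tau takes its maximum t1 + t2 there.
assert np.isclose(x_inf(-40.0, -40.0, 15.0), 0.5)
assert np.isclose(x_tau(-40.0, -40.0, 15.0, 0.1, 0.4), 0.5)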
# ODE system
def nakl(t, x, pstim):
# Define this as you would any ODE system in Python, when x is a *time series*
# of states.
# Now define a vector to store the elements of the ODE vector field.
# Note that using numpy.zeros_like avoids the need to overload this function for
# object vs. numerical types (important when function is being taped vs. evaluated).
dxdt = np.zeros_like(x) # will have shape (N, D)
# If there is a stimulus, pstim (third argument) is assumed to be a tuple like
# (parameters, stimlulus time series)
p,Iext = pstim
# name the states and parameters
V, m, h, n = (x[:, 0], x[:, 1], x[:, 2], x[:, 3])
gNa, gK, gL = (p[0], p[1], p[2])
ENa, EK, EL = (p[3], p[4], p[5])
Vtm, Vsm, t1m, t2m = (p[6], p[7], p[8], p[9])
Vth, Vsh, t1h, t2h = (p[10], p[11], p[12], p[13])
Vtn, Vsn, t1n, t2n = (p[14], p[15], p[16], p[17])
dxdt[:, 0] = gNa * m**3 * h * (ENa - V) + gK * n**4 * (EK - V) + gL * (EL - V) + Iext
dxdt[:, 1] = (x_inf(V, Vtm, Vsm) - m) / x_tau(V, Vtm, Vsm, t1m, t2m)
dxdt[:, 2] = (x_inf(V, Vth, Vsh) - h) / x_tau(V, Vth, Vsh, t1h, t2h)
dxdt[:, 3] = (x_inf(V, Vtn, Vsn) - n) / x_tau(V, Vtn, Vsn, t1n, t2n)
return dxdt
# Model system dimension
D = 4
# Measured variable indices
Lidx = [0]
# RM and RF0
RM = 1.0
# Note that we're considering a *different* RF for each model component.
# Definining it this way treats RF like a diagonal matrix.
RF0 = [1.0E-8, 1.0E-4, 1.0E-4, 1.0E-4]
# alpha, and beta ladder
alpha = 1.1
beta_all = np.linspace(0, 400, 401)
gamma0 = RF0[0]/RM
gamma_all = gamma0 * alpha**beta_all
# The data file included with this example contains the simulated observations.
# We need to load it and extract just the measured component (the voltage).
data = np.load("NaKL/data/NaKL_Vdata_dt0p02_N6001_sm1p0.npy")
# Extract observation times
times_data = data[:, 0]
dt_data = times_data[1] - times_data[0]
N_data = len(times_data)
# Get out just the data
data = data[:, 1:]
data = data[:, Lidx]
###################################
# Stimulus
stim = np.load("NaKL/data/NaKL_stim_dt0p02_N6001.npy")
stim = stim[:, 1]
# model state discretization
freq_mod = 1.0 # how often to put down a state variable
dt_model = dt_data / freq_mod
if freq_mod == 1.0:
N_model = N_data
else:
N_model = int(N_data * freq_mod) - 1
# State variables
# This should be an array with N_f elements, where element n_f is a D-dimensional
# vector. In other words, this is an array of state vectors at all "model times".
Xinit = 0.2*np.random.rand(N_model, D) + 0.4
# Note we didn't rescale V init, to reflect its wider dynamical range of -100 to +100.
# Later we're going to set it equal to the data anyway!
##############################
# Parameters
Pidx = range(18) # indices of estimated parameters
#Pidx = [] # leave empty to estimate no parameters
# Initial guess
# Start by defining bounds for initial guesses (later used for optimization bounds)
Pb = []
Pb.append([60.0, 180.0]) # gNa_b
Pb.append([10.0, 30.0]) # gK_b
Pb.append([0.15, 0.45]) # gL_b
Pb.append([47.5, 52.5]) # ENa_b
Pb.append([-80.85, -73.15]) # EK_b
Pb.append([-56.7, -51.3]) # EL_b
Pb.append([-42.0, -38.0]) # Vtm_b
Pb.append([14.25, 15.75]) # Vsm_b
Pb.append([0.095, 0.105]) # t1m_b
Pb.append([0.38, 0.42]) # t2m_b
Pb.append([-63.0, -57.0]) # Vth_b
Pb.append([-15.75, -14.25]) # Vsh_b
Pb.append([0.95, 1.05]) # t1h_b
Pb.append([6.65, 7.35]) # t2h_b
Pb.append([-57.75, -52.25]) # Vtn_b
Pb.append([28.5, 31.5]) # Vsn_b
Pb.append([0.95, 1.05]) # t1n_b
Pb.append([4.75, 5.25]) # t2n_b
# Draw initial guesses from these intervals
Pinit = np.zeros(len(Pidx))
for i,b in enumerate(Pb):
r = b[1] - b[0]
Pinit[i]= r*np.random.rand() + b[0]
Pinit = np.array(Pinit)
#Pinit = np.load("NaKL/NaKL_trueparam_dt0p02_N6001.npy") # use this to load in true parameters
# Initialize Annealer
anneal_nakl = va_ode.Annealer()
# Set the NaKL model
anneal_nakl.set_model(nakl, D)
# Load the data and stimulus into the Annealer object
anneal_nakl.set_data(data, stim=stim, t=times_data)
# First set some options for the optimization and annealing loop
# Bounds
bounds = [[-100.0, 100.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]
for i in xrange(len(Pidx)):
bounds.append(Pb[i])
#bounds.append([None, None])
# The full list of BFGS options can be found in the scipy.optimization package documentation.
BFGS_options = {'gtol':1.0e-8, 'ftol':1.0e-8, 'maxfun':1000000, 'maxiter':1000000}
# Set up dictionaries with necessary info for tracking progress
# Don't pass these as arguments to anneal() if you don't want to track progress
track_paths = {'filename':"NaKL/results/inprog_paths.npy"}
track_params = {'filename':"NaKL/results/inprog_params.npy"}
track_action_errors = {'filename':"NaKL/results/inprog_aerr.npy"}
tstart = time.time() # time how long VA takes
# Annealer.anneal() executes VA for all beta values (defined above)
# Note the init_to_data option: this initializes the measured variables to the data.
anneal_nakl.anneal(Xinit, Pinit, alpha, beta_all, RM, RF0, Lidx, Pidx, dt_model=dt_model,
init_to_data=True, disc='SimpsonHermite', method='L-BFGS-B',
opt_args=BFGS_options, adolcID=0, bounds=bounds, track_paths=track_paths,
track_params=track_params, track_action_errors=track_action_errors)
print("\nADOL-C annealing completed in %f s."%(time.time() - tstart))
anneal_nakl.save_paths("NaKL/results/paths.npy")
anneal_nakl.save_params("NaKL/results/params.npy")
anneal_nakl.save_action_errors("NaKL/results/aerr.npy")
# Load path & parameter estimates, and action curves
allpaths = np.load("NaKL/results/paths.npy")
allparam = np.load("NaKL/results/params.npy")
aerr = np.load("NaKL/results/aerr.npy")
# Load the true solution and parameters
true_soln = np.load("NaKL/data/NaKL_truesoln_dt0p02_N6001.npy")
true_soln_allg = np.resize(true_soln, allpaths.shape)
true_param = np.load("NaKL/data/NaKL_trueparam_dt0p02_N6001.npy")
beta_show = 180 # plot estimates at this beta value
# plot all path estimates at this beta simultaneously
fig,ax = plt.subplots(4, 1, figsize=(6, 8), sharex=True)
fig.set_tight_layout(True)
tplot = allpaths[beta_show, :, 0]
# Observed variable (voltage)
# plot the data
ax[0].plot(tplot, data[:, 0], color="dodgerblue", label="Data", marker=".", ms=3, lw=0)#, alpha=0.7)#, lw=1.5)
# plot the estimate
ax[0].plot(tplot, allpaths[beta_show, :, 1], color="C1", label="Estimate")#, alpha=1.0, lw=2.0)
# plot the true solution
ax[0].plot(tplot, true_soln[:, 1], color="black", label="True", ls="--")#, lw=1.5, alpha=0.7)
ax[0].set_xlim(tplot[0], tplot[-1])
ax[0].set_ylabel(r"$V$ (observed)")
ax[0].set_title(r"NaKL state estimates ($R_f/R_m = %.1E$)"%(RF0[0]/RM * alpha**beta_show))
h,l = ax[0].get_legend_handles_labels()
ax[0].legend(h,l)
# Unobserved variables (gating variables)
gvnames = ["m", "h", "n"]
for i in [1, 2, 3]:
# plot the estimate (column 0 of allpaths is time, so state i is column i+1)
ax[i].plot(tplot, allpaths[beta_show, :, i+1], color="C1")#, alpha=1.0, lw=2.0)
# plot the true solution
ax[i].plot(tplot, true_soln[:, i+1], color="black", ls="--")#, alpha=0.7)#, lw=1.5)
ax[i].set_xlim(tplot[0], tplot[-1])
ax[i].set_ylabel(r"$%s$ (unobserved)"%(gvnames[i-1]))
#ax[1].set_xlabel("Time")
ax[3].set_xlabel("Time")
plt.show()
#fig.savefig("/home/prozdeba/CloudStation/write-ups/dynamical_reg_L96_20170123/figs/L96_estimates_L7.pdf")
fig,ax = plt.subplots(1, 3, figsize=(9, 3), sharey=True)
fig.set_tight_layout(True)
ymin = 1.0e20
ymax = 0.0
plotlw = 1.0
plotalpha = 1
#plotcolors = ["C0", "C0", "C0"]
plotcolors = ["black", "black", "black"]
action_vals = aerr[:, 1]
ax[0].plot(gamma_all[:], action_vals, lw=plotlw, color=plotcolors[0], alpha=plotalpha)
ax[0].set_xlabel(r"$R_f/R_m$")
ax[0].set_ylabel("Action")
ax[0].axhline(y=1, lw=1, ls="--", color="C3")#, alpha=.7)
measerr_vals = aerr[:, 2]
ax[1].plot(gammas_all[:], measerr_vals, lw=plotlw, color=plotcolors[1], alpha=plotalpha)
ax[1].set_xlabel(r"$R_f/R_m$")
ax[1].set_ylabel("Meas. error")
moderr_vals = aerr[:, 3]
ax[2].plot(gammas_all[:], moderr_vals, lw=plotlw, color=plotcolors[2], alpha=plotalpha)
ax[2].set_xlabel(r"$R_f/R_m$")
ax[2].set_ylabel("Model error")
fig.suptitle("NaKL Action & Errors (single data set)", y=1.0)
for i in range(3):
ax[i].set_yscale('log')
ax[i].set_xscale('log')
ax[i].set_xlim(gamma_all[0], gamma_all[-1])  # this NaKL section defines gamma_all, not gammas_all
#ax[i].set_ylim(.001, 1.0e2)
plt.show()
# Necessary package imports
import time
import numpy as np
%matplotlib nbagg
import matplotlib.pyplot as plt
import matplotlib.colors as mplcolors  # needed for LogNorm in the parameter plot below
from varanneal import va_nnet
def sigmoid(x, W, b):
linpart = np.dot(W, x) + b
return 1.0 / (1.0 + np.exp(-linpart))
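# Quick check (editor's sketch): with zero weights and biases the layer outputs 0.5.
assert np.allclose(sigmoid(np.zeros(3), np.zeros((2, 3)), np.zeros(2)), 0.5)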
N = 3 # Total number of layers
D_in = 784 # Number of neurons in the input layer
D_out = 10 # Number of neurons in the output layer
D_hidden = 30 # Number of neurons in the hidden layers
# Network structure is stored in a list, where element i is the number
# of neurons in layer i.
structure = np.zeros(N, dtype='int')
structure[0] = D_in # 3 neurons in the input layer
structure[N-1] = D_out # 2 neurons in the output layer
for i in range(1, N-1):
structure[i] = D_hidden # 5 neurons in the hidden layers
Lidx = [np.linspace(0, D_in-1, D_in, dtype='int'), np.linspace(0, D_out-1, D_out, dtype='int')]
# RM, RF0
RM = 1.0
RF0 = 1.0e-8 * RM * float(np.sum(structure) - structure[0]) / float(structure[0] + structure[-1])
g0 = RF0 / RM
# alpha, and beta ladder
alpha = 1.1
beta_array = np.linspace(0, 435, 436)
gammas_all = g0 * alpha**beta_array
M = 2 # number of training examples
data_in = np.load("MNIST/MNIST_data/imtrain_norm.npy")[:M]
data_out = np.load("MNIST/MNIST_data/labtrain.npy")[:M]
np.random.seed(27509436)
# Neuron states
Xin = np.random.randn(D_in)
Xin = (Xin - np.average(Xin)) / np.std(Xin)
X0 = np.copy(Xin)
for n in xrange(N-2):
X0 = np.append(X0, 0.2*np.random.rand(D_hidden) + 0.4)
X0 = np.append(X0, 0.2*np.random.rand(D_out) + 0.4)
for m in xrange(M - 1):
Xin = np.random.randn(D_in)
Xin = (Xin - np.average(Xin)) / np.std(Xin)
X0 = np.append(X0, Xin)
for n in xrange(N-2):
X0 = np.append(X0, 0.2*np.random.rand(D_hidden) + 0.4)
X0 = np.append(X0, 0.2*np.random.rand(D_out) + 0.4)
X0 = np.array(X0).flatten()
# Parameters
NP = np.sum(structure[1:]*structure[:-1] + structure[1:])
P0 = np.array([], dtype=np.float64)
W_i0 = 0
W_if = structure[0]*structure[1]
b_i0 = W_if
b_if = b_i0 + structure[1]
# Set up Pidx so that only weights, not biases, are estimated
for n in xrange(N - 1):
if n == 0:
Pidx = np.arange(W_i0, W_if, 1, dtype='int')
else:
Pidx = np.append(Pidx, np.arange(W_i0, W_if, 1, dtype='int'))
if n == 0:
P0 = np.append(P0, (2.0*np.random.rand(structure[n]*structure[n+1]) - 1.0) / D_in)
else:
P0 = np.append(P0, (2.0*np.random.rand(structure[n]*structure[n+1]) - 1.0) / D_hidden)
P0 = np.append(P0, np.zeros(structure[n+1]))
if n < N - 2:
W_i0 = b_if
W_if = W_i0 + structure[n+1]*structure[n+2]
b_i0 = W_if
b_if = b_i0 + structure[n+2]
P0 = np.array(P0).flatten()
Pidx = np.array(Pidx).flatten().tolist()
# Initialize Annealer
anneal1 = va_nnet.Annealer()
# Set the network structure
anneal1.set_structure(structure)
# Set the activation function
anneal1.set_activation(sigmoid)
# Set the input and output data
anneal1.set_input_data(data_in)
anneal1.set_output_data(data_out)
BFGS_options = {'gtol':1.0e-12, 'ftol':1.0e-12, 'maxfun':1000000, 'maxiter':1000000}
tstart = time.time()
anneal1.anneal(X0, P0, alpha, beta_array, RM, RF0, Pidx, Lidx=Lidx,
method='L-BFGS-B', opt_args=BFGS_options, adolcID=0)
print("\nADOL-C annealing completed in %f s."%(time.time() - tstart))
anneal1.save_io("MNIST/results/io.npy")
anneal1.save_Wb("MNIST/results/W.npy", "MNIST/results/b.npy")
anneal1.save_action_errors("MNIST/results/action_errors.npy")
# Load action curves
aerr = np.load("MNIST/results/action_errors.npy")
fig,ax = plt.subplots(1, 3, figsize=(6.75, 2.1), sharey=True)
fig.set_tight_layout(True)
ymin = 1.0e20
ymax = 0.0
plotlw = 1.0
plotalpha = .7
#plotcolors = ["C0", "C0", "C0"]
plotcolors = ["black", "black", "black"]
action_vals = aerr[:, 1]
ax[0].plot(gammas_all[:], action_vals, lw=plotlw, color=plotcolors[0], alpha=plotalpha)
ax[0].set_xlabel(r"$R_f/R_m$")
ax[0].set_ylabel("Action")
ax[0].axhline(y=1, lw=1, ls="--", color="C3", alpha=.7)
measerr_vals = aerr[:, 2]
ax[1].plot(gammas_all[:], measerr_vals, lw=plotlw, color=plotcolors[1], alpha=plotalpha)
ax[1].set_xlabel(r"$R_f/R_m$")
ax[1].set_ylabel("Meas. error")
moderr_vals = aerr[:, 3]
ax[2].plot(gammas_all[:], moderr_vals, lw=plotlw, color=plotcolors[2], alpha=plotalpha)
ax[2].set_xlabel(r"$R_f/R_m$")
ax[2].set_ylabel("Model error")
fig.suptitle("L96 action levels ($L=8$, single data set)", y=1.0)
for i in range(3):
ax[i].set_yscale('log')
ax[i].set_xscale('log')
ax[i].set_xlim(gammas_all[0], gammas_all[-1])
#ax[i].set_ylim(.001, 1.0e2)
plt.show()
#structure = np.array([25, 3, 4])
Nbias = np.sum(structure[1:])
Nweight = np.sum(structure[1:] * structure[:-1])
sortedp = []
West = np.load("MNIST/results/W.npy")
best = np.load("MNIST/results/b.npy")
for i in xrange(len(gammas_all)):
#West = np.load("W.npy")
#best = np.load("b.npy")
for j, West_n in enumerate(West[i]):
if j == 0:
pest = West_n.flatten()#, best_n.flatten())
else:
pest = np.append(pest, West_n.flatten())
for j, best_n in enumerate(best[i]):
pest = np.append(pest, best_n.flatten())
sortedp.append(pest.flatten())
sortedp = np.array(sortedp)
fig,ax = plt.subplots(1, 1, figsize=(5,4))
fig.set_tight_layout(True)
nrange = range(Nweight + Nbias)
surf = ax.pcolormesh(gammas_all, nrange, np.abs(sortedp.T), rasterized=True, cmap='jet', norm=mplcolors.LogNorm())#, vmin=1.0E-7, vmax=1.0E3)
#surf = ax.pcolormesh(g, nrange, sortedp.T, rasterized=True, cmap='jet')#, norm=mplcolors.LogNorm())
ax.set_xscale("log")
ax.set_xlabel(r"$R_f/R_m$")
ax.set_ylabel(r"Parameter metaindex")
cbar = fig.colorbar(surf, ax=ax)
cbar.set_label(r"$W$")
ax.set_xlim((gammas_all[0], gammas_all[-1]))
ax.set_ylim((nrange[0], nrange[-1]))
#ax.set_title("Est. err. vs. data, sec%d mem%d"%(secID, ensID))
plt.show()
#fig.savefig("/home/prozdeba/Desktop/plots_20170801/nnet/params_M%d.pdf"%(Ntraining,))
#plt.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Action/annealing (hyper)parameters
Step3: Load observed data
Step4: Set $\Delta t_f$ based on $\Delta t$.
Step5: Initial path/parameter guesses
Step6: Use VA to estimate states and parameters
Step7: Run VA
Step8: Save action, constituent errors, and state/parameter estimates to file.
Step9: Plot the results
Step10: Plot parameter estimate over course of annealing
Step11: Plot the action
Step12: ODE Example 2
Step13: In this example, observations of the voltage trace of a neuron stimulated with a known input current (known as a current clamp protocol) are simulated in a twin experiment. A "simple" Hodgkin-Huxley model (called NaKL) is integrated forward in time with a known time-dependent stimulus, and the solution is sampled at regular intervals with some random noise to simulate noise in a real electrode.
Step14: Action/annealing (hyper)parameters
Step15: Load observation & stimulus data
Step16: Set $\Delta t_f$ based on $\Delta t$.
Step17: Initial path/parameter guesses
Step18: Initialize VarAnneal object to estimate states and parameters
Step19: Run VA
Step20: Save action, constituent errors, and state/parameter estimates to file.
Step21: Plot the results
Step22: Plot all state variable estimates
Step23: Plot the action
Step24: Neural Network Example 1
Step25: Data
Step26: Network structure
Step27: Action/annealing parameters
Step28: Load MNIST data set (images and labels)
Step29: Initial neuron activation/weight guesses
Step30: Use VA to train the network weights
Step31: Run VA using L-BFGS-B
Step32: Save the output
Step33: Plot the results
Step34: Parameter estimates
|
5,008
|
<ASSISTANT_TASK:>
Python Code:
# First let's install the module
!pip install thermocouples_reference
from thermocouples_reference import thermocouples
typeK = thermocouples['K']
print(typeK)
print(typeK.emf_mVC(42, Tref=0))
print(typeK.emf_mVC([-3.14159, 42, 54], Tref=0))
print(typeK.inverse_CmV(1.1, Tref=23.0))
# check result
print(typeK.emf_mVC(49.907928030075773, Tref=23.0))
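# A second check (a sketch of the same idea): by the law of intermediate
# temperatures, cold-junction compensation is just a subtraction of emfs,
# so emf(50 C vs 23 C) = emf(50 C vs 0 C) - emf(23 C vs 0 C) ~ 1.1 mV
print(typeK.emf_mVC(50, Tref=0) - typeK.emf_mVC(23, Tref=0))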
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below, the first computation shows that the type K thermocouple emf at 42 °C, with reference junction at 0 °C, is 1.694 mV (compare to the NIST table); the second calculation shows how passing in an array applies the function to each element, in the style of numpy.
Step2: An inverse lookup function is provided that you can use to get a temperature out of a measured voltage, including cold junction compensation effects. If we put our type K thermocouple into a piece of spam and we read 1.1 mV, using our voltmeter at room temperature (23 °C), then the spam is at 50 °C.
|
5,009
|
<ASSISTANT_TASK:>
Python Code:
import sys
def function(): pass
print type(1)
print type("")
print type([])
print type({})
print type(())
print type(object)
print type(function)
print type(sys)
# first.py
class First:
pass
fr = First()
print type(fr)
print type(First)
class Dog:
def __init__(self, name):
self.name = name
phydoux = Dog('Phydoux')
lucky = Dog('Lucky')
print phydoux.name
print lucky.name
class Dynamic:
pass
d = Dynamic()
d.name = "Dynamic"
print d.name
class Circle:
pi = 3.141592
def __init__(self, radius=1):
self.radius = radius
def area(self):
return self.radius * self.radius * Circle.pi
def setRadius(self, radius):
self.radius = radius
def getRadius(self):
return self.radius
c = Circle()
c.setRadius(5)
print c.getRadius()
print c.area()
class Book:
def __init__(self, title, author, pages):
print "A book is created"
self.title = title
self.author = author
self.pages = pages
def __str__(self):
return "Title:%s , author:%s, pages:%s " % \
(self.title, self.author, self.pages)
def __len__(self):
return self.pages
def __del__(self):
print "A book is destroyed"
book = Book("Inside Steve's Brain", "Leander Kahney", 304)
print book
print len(book)
del book
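# A small sketch (not in the original example): the same special-method hooks
# support arithmetic operators such as __add__() and __sub__()
class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y)
    def __sub__(self, other):
        return Vector(self.x - other.x, self.y - other.y)
    def __str__(self):
        return "Vector(%s, %s)" % (self.x, self.y)
v = Vector(2, 3) + Vector(1, 1)
print v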
class Animal:
def __init__(self):
print "Animal created"
def whoAmI(self):
print "Animal"
def eat(self):
print "Eating"
class Dog(Animal):
def __init__(self):
Animal.__init__(self)
print "Dog created"
def whoAmI(self):
print "Dog"
def bark(self):
print "Woof!"
d = Dog()
d.whoAmI()
d.eat()
d.bark()
class Animal:
def __init__(self, name=''):
self.name = name
def talk(self):
pass
class Cat(Animal):
def talk(self):
print "Meow!"
class Dog(Animal):
def talk(self):
print "Woof!"
a = Animal()
a.talk()
c = Cat("Missy")
c.talk()
d = Dog("Rocky")
d.talk()
class Encapsulation(object):
def __init__(self, a, b, c):
self.public = a
self._protected = b
self.__private = c
x = Encapsulation(11,13,17)
x.public
x._protected
x._protected = 23
x._protected
x.__private
>>> x._Encapsulation__private
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python programs can have different styles
Step2: This is our first class. The body of the class is left empty for now. It is a convention to give classes a name that starts with a capital letter.
Step3: In this code example, we have a Dog class. The special method __init__() is called automatically right after the object has been created.
Step4: We define and create an empty Dynamic class. The line of code
Step5: In the code example, we have a Circle class. We define three new methods.
Step6: In our code example, we have a Book class. Here we introduce four special methods: __init__(), __str__(), __len__() and __del__(). You can imagine how similar special methods can be used to implement __add__(), __sub__(), etc.; a small sketch of that idea appears right after the Book example in the code above.
Step7: In this example, we have two classes
Step8: Here we have two species
Step9: The following interactive session shows the behaviour of public, protected and private members
Step10: Note that if you try accessing __private from outside the class, you'll get an error. But you can still stumble upon something like this
|
5,010
|
<ASSISTANT_TASK:>
Python Code:
# import essentia in streaming mode
import essentia
import essentia.streaming as es
# import matplotlib for plotting
import matplotlib.pyplot as plt
import numpy as np
# algorithm parameters
framesize = 1024
hopsize = 256
inputFilename = 'singing-female.wav'
outputFilename = 'singing-female-stft.wav'
# create an audio loader and import audio file
out = np.array(0)
loader = es.MonoLoader(filename = inputFilename, sampleRate = 44100)
pool = essentia.Pool()
# algorithm instantation
fcut = es.FrameCutter(frameSize = framesize, hopSize = hopsize, startFromZero = False)
w = es.Windowing(type = "hann")
fft = es.FFT(size = framesize)
ifft = es.IFFT(size = framesize)
overl = es.OverlapAdd(frameSize = framesize, hopSize = hopsize, gain = 1./framesize)
awrite = es.MonoWriter(filename = outputFilename, sampleRate = 44100)
loader.audio >> fcut.signal
fcut.frame >> w.frame
w.frame >> fft.frame
fft.fft >> ifft.fft
ifft.frame >> overl.frame
overl.signal >> awrite.audio
overl.signal >> (pool, 'audio')
essentia.run(loader)
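# A quick sanity check (a sketch): the pool accumulated the resynthesized
# samples, so we can inspect the result alongside the file written above
result = pool['audio']
print(result.shape)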
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After importing Essentia library, let's import other numerical and plotting tools
Step2: Define the parameters of the STFT workflow
Step3: Specify input and output audio filenames
Step4: Define algorithm chain for frame-by-frame process
Step5: Now we set the algorithm network and store the processed audio samples in the output file
Step6: Finally we run the process that will store an output file in a WAV file
|
5,011
|
<ASSISTANT_TASK:>
Python Code:
from keras.datasets import mnist
(X_raw, y_raw), (X_raw_test, y_raw_test) = mnist.load_data()
n_train, n_test = X_raw.shape[0], X_raw_test.shape[0]
import matplotlib.pyplot as plt
import random
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
for i in range(15):
plt.subplot(3, 5, i+1)
index = random.randint(0, n_train-1)
plt.title(str(y_raw[index]))
plt.imshow(X_raw[index], cmap='gray')
plt.axis('off')
import numpy as np
n_class, n_len, width, height = 11, 5, 28, 28
def generate_dataset(X, y):
X_len = X.shape[0]
X_gen = np.zeros((X_len, height, width*n_len, 1), dtype=np.uint8)
y_gen = [np.zeros((X_len, n_class), dtype=np.uint8) for i in range(n_len)]
    # randomly pick up to 5 digits and stitch them into a new image
for i in range(X_len):
for j in range(n_len):
            # with a fixed probability, let a blank replace the digit
if 9 < random.randint(0,11):
y_gen[j][i][10] = 1
else:
index = random.randint(0, X_len-1)
X_gen[i,:,j*width:(j+1)*width, 0] = X[index]
y_gen[j][i][y[index]] = 1
return X_gen, y_gen
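# X_gen: (N, 28, 140, 1) images with 5 digit slots; y_gen: a list of 5 one-hot
# arrays of shape (N, 11), where class index 10 stands for "blank"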
X_train, y_train = generate_dataset(X_raw, y_raw)
X_test, y_test = generate_dataset(X_raw_test, y_raw_test)
print(y_train[0].shape)
# display the generated images
for i in range(15):
plt.subplot(5, 3, i+1)
index = random.randint(0, n_test-1)
title = ''
for j in range(n_len):
title += str(np.argmax(y_test[j][index])) + ','
plt.title(title)
plt.imshow(X_test[index][:,:,0], cmap='gray')
plt.axis('off')
# import the required model components
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers.core import Dropout
from keras.optimizers import SGD
from keras.models import Model
from keras.layers import Input, Dense
def train_model(X_train, y_train):
main_input = Input(shape=(28, 140, 1), name='main_input')
    # convolutional layers
x = Convolution2D(32, 3, 3, activation="relu")(main_input)
x = MaxPooling2D(pool_size = (2, 2))(x)
x = Convolution2D(32, 3, 3, activation="relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Convolution2D(32, 3, 3 ,activation="relu")(x)
x = Flatten()(x)
x = Dropout(0.4)(x)
    # a separate softmax head for each of the 5 digit positions
y_out = [Dense(11, activation='softmax')(x) for i in range(n_len)]
    # use the functional (Model) API to output the 5-digit sequence
model = Model(input=main_input, output=y_out)
model.compile(optimizer='rmsprop', loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, batch_size=1024, nb_epoch=8, shuffle=True, verbose=1, validation_split=0.2)
return model
model = train_model(X_train, y_train)
from keras.utils.visualize_util import plot, model_to_dot
from IPython.display import Image, SVG
# visualize the network model
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
# test the model
prediction = model.predict(X_test)
# compute sequence-level accuracy on the test set
def evaluate(model):
right = 0
error = 0
    for i in range(1000): # only the first 1000 test sequences are checked
right_num = 0
for j in range(5):
y_pred_num = np.argmax(prediction[j][i])
y_test_num = np.argmax(y_test[j][i])
if y_pred_num == y_test_num:
right_num += 1
if right_num == 5:
right += 1
else:
error += 1
return float(right)/(right+error)
evaluate(model)
def get_result(result):
    # decode the one-hot encoding
resultstr = ''
for i in range(n_len):
resultstr += str(np.argmax(result[i])) + ','
return resultstr
index = random.randint(0, n_test-1)
y_pred = model.predict(X_test[index].reshape(1, height, width*n_len, 1))
plt.title('real: %s\npred:%s'%(get_result([y_test[x][index] for x in range(n_len)]), get_result(y_pred)))
plt.imshow(X_test[index,:,:,0], cmap='gray')
plt.axis('off')
model.save_weights('model.h5')
with open('model.json', 'w') as f:
f.write(model.to_json())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize MNIST
Step2: Exercise: synthesize the data
Step3: Question 1
Step4: Question 2
Step5: Question 3
Step6: Question 4
Step7: Save the model
|
5,012
|
<ASSISTANT_TASK:>
Python Code:
# imports
import sys # for stderr
import numpy as np
import pandas as pd
import sklearn as skl
from sklearn import metrics
import matplotlib.pyplot as plt
%matplotlib inline
# settings
plt.style.use('ggplot')
# plt.rcParams['figure.figsize'] = (10.0, 10.0)
# pd.set_option('display.max_rows', 50)
# pd.set_option('display.max_columns', 50)
# versions
print(pd.datetime.now())
print('Python: '+sys.version)
print('numpy: '+np.__version__)
print('pandas: '+pd.__version__)
print('sklearn: '+skl.__version__)
import psycopg2
# import sqlite3
db = psycopg2.connect("dbname='ccancer' user='paulperry' host='localhost' password=''")
#db = sqlite3.connect('../input/database.sqlite')
import datetime
start = datetime.datetime.now()
print(start)
fdir = './features/'
train_file = './input/patients_train.csv.gz'
train = pd.read_csv(train_file)
train.set_index('patient_id', inplace=True)
train.drop('patient_gender', axis = 1, inplace = True )
train_exclude = pd.read_csv('./input/train_patients_to_exclude.csv', header=None, names=['patient_id'])
train.drop(train_exclude.patient_id, inplace=True)
original_train_rows = train.shape[0]
print(train.shape)
train[:5]
test_file = './input/patients_test.csv.gz'
test = pd.read_csv(test_file)
test.set_index('patient_id', inplace=True)
test.drop( 'patient_gender', axis = 1, inplace = True )
test_exclude = pd.read_csv('./input/test_patients_to_exclude.csv', header=None, names=['patient_id'])
test.drop(test_exclude.patient_id, inplace=True)
original_test_rows = test.shape[0]
print(test.shape)
test[:5]
train.is_screener.mean()
age = pd.DataFrame([train.groupby('patient_age_group').is_screener.count(),
train.groupby('patient_age_group').is_screener.mean()])
age = age.T
age.columns = ['count','screened']
age
# train.groupby('ethinicity').is_screener.count()
train.groupby('ethinicity').is_screener.mean()
#train.groupby('household_income').is_screener.count()
train.groupby('household_income').is_screener.mean()
#train.groupby('education_level').is_screener.count()
train.groupby('education_level').is_screener.mean()
medical_exam = pd.read_sql_query("select * from diagnosis where diagnosis_code in \
('V70', 'V70.0', 'V70.1', 'V70.2', 'V70.3', 'V70.4', 'V70.5', 'V70.6', 'V70.7', 'V70.8', 'V70.9' ) \
and patient_id in (select patient_id from patients_train );", db)
# use the cached copy of the same query result
medical_exam = pd.read_csv('./features/train_medical_exam.csv')
medical_exam.drop_duplicates('patient_id', inplace=True)
medical_exam.shape[0]
medical_exam.shape[0] / float(train.shape[0])
train.loc[medical_exam.patient_id].is_screener.mean()
train.loc[~train.index.isin(medical_exam.patient_id)].is_screener.mean()
gyn_exam = pd.read_sql_query("select t1.patient_id, claim_type, diagnosis_date, diagnosis_code from diagnosis t1 \
right join patients_train t2 on (t1.patient_id=t2.patient_id ) \
where diagnosis_code in ('V72.3', 'V72.31');", db)
gyn_exam[:5]
gyn_exam.drop_duplicates('patient_id', inplace=True)
gyn_exam.shape[0]
gyn_exam.shape[0] / float(train.shape[0])
train.loc[gyn_exam.patient_id].is_screener.mean()
no_gyn_exam = train.loc[~train.index.isin(gyn_exam.patient_id)]
no_gyn_exam.is_screener.mean()
medical_and_gyn_exam_df = train.loc[medical_exam.loc[medical_exam.patient_id.isin(gyn_exam.patient_id)].patient_id]
medical_and_gyn_exam = medical_and_gyn_exam_df.is_screener.mean()
medical_and_no_gyn_exam_ids = set(medical_exam.patient_id) - set(gyn_exam.patient_id)
medical_and_no_gyn_exam = train.loc[medical_and_no_gyn_exam_ids].is_screener.mean()
no_medical_and_gyn_exam_ids = set(gyn_exam.patient_id) - set(medical_exam.patient_id)
no_medical_and_gyn_exam = train.loc[no_medical_and_gyn_exam_ids].is_screener.mean()
no_medical_and_no_gyn_exam_ids = set(train.index) - (set(medical_exam.patient_id) | set(gyn_exam.patient_id))
no_medical_and_no_gyn_exam = train.loc[no_medical_and_no_gyn_exam_ids].is_screener.mean()
mat = [[medical_and_gyn_exam, medical_and_no_gyn_exam], [no_medical_and_gyn_exam, no_medical_and_no_gyn_exam]]
# Pick a colormap
# I'd like to set alpha=0.2 , but haven't figured it out yet
cmap='RdYlGn'
mat_df = pd.DataFrame(mat, index=['Medical Exam', 'No Medical Exam'], columns=['Gyn Exam', 'No Gyn Exam'])
mat_df
df = train.loc[no_medical_and_no_gyn_exam_ids].groupby(['patient_age_group',
'household_income']).is_screener.mean().unstack()
df.style.background_gradient(cmap, axis=0, low=.5, high=1)
df = train.loc[no_medical_and_no_gyn_exam_ids].groupby(['patient_age_group',
'ethinicity']).is_screener.mean().unstack()
df.style.background_gradient(cmap, axis=0, low=.5, high=1)
df = train.loc[no_medical_and_no_gyn_exam_ids].groupby(['patient_age_group',
'education_level']).is_screener.mean().unstack()
df.style.background_gradient(cmap, axis=0, low=.5, high=1)
pregnancy = pd.read_sql_query("select t1.patient_id, claim_type, diagnosis_date, diagnosis_code from diagnosis t1 \
right join patients_train t2 on (t1.patient_id=t2.patient_id) where diagnosis_code in \
('V22.0','V22.1','V22.2','V24.2','V25.2','V27.0','V28.3','V70.0','V74.5');", db)
pregnancy.shape
pregnancy.to_csv('train_pregnant.csv', index=False)
pregnancy.drop_duplicates('patient_id', inplace=True)
pregnancy.shape
train.loc[pregnancy.patient_id].is_screener.mean()
train.loc[~train.index.isin(pregnancy.patient_id)].is_screener.mean()
df = train.loc[pregnancy.patient_id].groupby(['patient_age_group',
'household_income']).is_screener.mean().unstack()
df.style.background_gradient(cmap, axis=0, low=.5, high=1)
df = train.loc[~train.index.isin(pregnancy.patient_id)].groupby(['patient_age_group',
'household_income']).is_screener.mean().unstack()
df.style.background_gradient(cmap, axis=0, low=.5, high=1)
df = train.loc[pregnancy.patient_id].groupby(['patient_age_group',
'ethinicity']).is_screener.mean().unstack()
df.style.background_gradient(cmap, axis=0, low=.5, high=1)
df = train.loc[~train.index.isin(pregnancy.patient_id)].groupby(['patient_age_group',
'ethinicity']).is_screener.mean().unstack()
df.style.background_gradient(cmap, axis=0, low=.5, high=1)
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Point, Polygon, MultiPoint, MultiPolygon
from matplotlib.collections import PatchCollection
import geopandas as gpd
print('geopandas: '+gpd.__version__)
# settings
plt.rcParams['figure.figsize']=(16,14)
# US Bounding box
minx, miny, maxx, maxy = -125,22,-65,50 # USA bounding box
usbbox = Polygon([(minx,miny),(minx,maxy),(maxx, maxy),(maxx,miny)])
# We grabbed the USA shape files from : https://github.com/matplotlib/basemap/tree/master/examples
# or state files from here: https://www.census.gov/geo/maps-data/data/cbf/cbf_state.html
# We took the CBSA files from https://www.census.gov/geo/maps-data/data/cbf/cbf_msa.html
state_pct = pd.read_csv('./features/state_screen_percent.csv')
state_pct.set_index('patient_state', inplace=True)
state_pct[:5]
states_key = pd.read_csv('./datastudy/state.csv')
states_key.set_index('name_long', inplace=True)
states_key[:5]
state_file = './datastudy/st99_d00.shp'
#bbox=(-119,22,64,49)
bbox=(-125,22,-65,50)
kw = dict(bbox=bbox)
states_map = gpd.GeoDataFrame.from_file(state_file, **kw)
#state_file = 'cb_2014_us_state_500k.shp'
# states = gpd.GeoDataFrame.from_file(state_file)
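# convert_3D_2D is not defined anywhere in this notebook; a minimal sketch of
# the usual helper (using the shapely Polygon/MultiPolygon imports above),
# which drops the Z coordinate so newer geopandas versions can plot the shapes
def convert_3D_2D(geometry):
    new_geo = []
    for p in geometry:
        if p.has_z:
            if p.geom_type == 'Polygon':
                new_geo.append(Polygon([xy[:2] for xy in list(p.exterior.coords)]))
            elif p.geom_type == 'MultiPolygon':
                new_geo.append(MultiPolygon(
                    [Polygon([xy[:2] for xy in list(ap.exterior.coords)]) for ap in p]))
        else:
            new_geo.append(p)
    return new_geo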
if gpd.__version__ != '0.1.0.dev-': # temp fix for newer geopandas versions
    states_map.geometry = convert_3D_2D(states_map.geometry)
states_map = states_map.merge(states_key, left_on='NAME', right_index=True, how='left')
states_map = states_map.merge(state_pct, left_on='name_short', right_index=True, how='left')
states_map = states_map[states_map.geometry.within(usbbox)]
# plt.figure(figsize=(12,10))
ax = states_map[states_map.state_pct.notnull()].plot(column='state_pct', scheme='QUANTILES', k=5, colormap='RdYlGn',
alpha=0.9, linewidth=0.1, legend=True)
ax.axis('off')
plt.title('Percentage screened by State')
plt.show()
cbsa_pct = pd.read_csv('./features/cbsa_pct.csv.gz')
cbsa_pct.set_index('cbsa', inplace=True)
cbsa_pct[:5]
cbsa = gpd.GeoDataFrame.from_file('./datastudy/cb_2014_us_cbsa_500k.shp')
cbsa.GEOID = cbsa.GEOID.astype(float)
cbsa.set_index('GEOID', inplace=True)
cbsa.shape
cbsa['pct_screened'] = cbsa_pct.pct_screened
cbsa = cbsa[cbsa.geometry.within(usbbox)]
ax = states_map.plot(column=None, color='white', linewidth=0.1)
ax = cbsa[cbsa.pct_screened.notnull()].plot(column='pct_screened', scheme='QUANTILES', k=5, colormap='RdYlGn',
alpha=0.9, ax=ax, linewidth=0.1, legend=True)
ax.axis('off')
plt.title('Percentage screened by CBSA')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overall screening percentage
Step2: Screening by Age, Ethnicity, Household income, and Education level
Step3: Interaction with the Medical System
Step4: There are 1,157,817 patients in the train set, of which 498,157 have had a medical exam during the entire 7 years of the data.
Step5: So less than half of the population has had a regular medical exam
Step6: But the average is 66% for patients who have had any medical exam
Step7: And only 47% if they have never had one
Step8: Patient was referred to a Gynecological Exam
Step9: More than half the patients in this database have had a gynecological exam
Step10: And if they have had an exam then the likelihood that they have been screened is high
Step11: And quite low if they have not had a gynecological exam
Step12: We can look at whether a patient has had a medical exam and a gynecological exam
Step13: Looking at each of the demographics separately, at the aggregate level, for the patients that
Step14: Patient was pregnant or related
Step15: Patients that had one of the above diagnoses were screened with the following likelihood
Step16: And those that were not
Step17: Looking at these pregnancy diagnoses by demographics
Step18: Versus the patients who have no such diagnosis
Step19: Again, by ethnicity, for patients that had such a diagnosis
Step20: And those patients that had no such diagnosis
Step21: Patient has other highly predictive diagnosis
Step22: Map States
Step23: By CBSA
|
5,013
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from IPython.display import Image
Image('images/decision-tree.png')
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
from IPython.display import Image
Image('images/decision_tree.png')
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier().fit(X, y)
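# A quick check (a sketch): the fitted tree stays shallow, so each prediction
# answers only a handful of binary questions -- cost grows with depth, not size
print(tree.tree_.max_depth)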
def visualize_classifier(model, X, y, ax=None, cmap='rainbow'):
ax = ax or plt.gca()
# Plot the training points
ax.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=cmap,
clim=(y.min(), y.max()), zorder=3)
ax.axis('tight')
ax.axis('off')
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# fit the estimator
model.fit(X, y)
xx, yy = np.meshgrid(np.linspace(*xlim, num=200),
np.linspace(*ylim, num=200))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
# Create a color plot with the results
n_classes = len(np.unique(y))
contours = ax.contourf(xx, yy, Z, alpha=0.3,
levels=np.arange(n_classes + 1) - 0.5,
cmap=cmap, clim=(y.min(), y.max()),
zorder=1)
ax.set(xlim=xlim, ylim=ylim)
visualize_classifier(DecisionTreeClassifier(), X, y)
from helper.interactive_tree import plot_tree_interactive
plot_tree_interactive(X, y);
from IPython.display import Image
Image('images/decision-tree-overfitting.png')
from helper.interactive_tree import randomized_tree_interactive
randomized_tree_interactive(X, y)
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
tree = DecisionTreeClassifier()
bag = BaggingClassifier(tree, n_estimators=100, max_samples=0.8, random_state=1)
bag.fit(X, y)
visualize_classifier(bag, X, y)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_classifier(model, X, y);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Motivating Random Forests
Step2: The binary splitting makes this extremely efficient (given a proper tree). Why?
Step3: A simple decision tree built on this data will iteratively split the data along one or the other axis.
Step4: Why is the uppermost branch not split?
Step5: What do you think is the best classifier?
Step6: In some regions the trees produce consistent results
Step7: Ensembles of Estimators
Step8: We've built a Random Forest by hand. But Scikit-Learn comes with a RandomForestClassifier estimator which is easier to handle
|
5,014
|
<ASSISTANT_TASK:>
Python Code:
all_data_list = []
for year in range(1990,2017):
data = pd.read_csv('{}_Output.csv'.format(year), header=None)
all_data_list.append(data) # list of dataframes
data = pd.concat(all_data_list, axis=0)
data.columns = ['id','date','headline', 'lead']
data.head()
data.shape
data.dropna(inplace=True)
data.shape
data['yearmonth'] = map(lambda x: x[:7], data.date)
def getStems(lead):
stemmer = nltk.stem.SnowballStemmer("english")
tokens = nltk.word_tokenize(''.join(ch for ch in lead if ch not in set(string.punctuation)))
return map(lambda x: stemmer.stem(x.decode('utf-8')), tokens)
def joinstems(stemlist):
return ' '.join(stem for stem in stemlist)
stems = map(getStems, data.lead)
data['leadstems'] = map(joinstems, stems)
data.head()
def getBigrams(stems):
stems = stems.split()
finder = nltk.collocations.BigramCollocationFinder.from_words(stems)
return(finder.ngram_fd.items())
bigramList = map(getBigrams, data.leadstems)
flatBigrams = [item for sublist in bigramList for item in sublist]
def reduceByKey(toIterate): # For us, the bigram is the key
    d = defaultdict(int)
    for key, value in toIterate:
        d[key] += value
    return list(d.items())
summedBigrams = reduceByKey(flatBigrams)
with open("summedBigrams.pkl", "wb") as output_file:
pickle.dump(summedBigrams, output_file)
with open("summedBigrams.pkl", "rb") as input_file:
summedBigrams2 = pickle.load(input_file)
def xLogX(x):
if x==0:
return 0
else:
return x * np.log(x)
def entropy(things):
    theSum = 0
    theResult = 0
    for thing in things:
        theResult += xLogX(thing)
        theSum += thing
    return xLogX(theSum) - theResult
def logLikelihoodRatio(k11, k12, k21, k22):
    rowEntropy = entropy([k11 + k12, k21 + k22])
    columnEntropy = entropy([k11 + k21, k12 + k22])
    matrixEntropy = entropy([k11, k12, k21, k22])
    return 2.0 * (rowEntropy + columnEntropy - matrixEntropy)
def getLLR(summedBigram):
k11 = summedBigram[1]
stem1 = summedBigram[0][0]
stem2 = summedBigram[0][1]
k12 = bigrams1Dict[stem1] - k11
k21 = bigrams2Dict[stem2] - k11
k22 = totalBigrams - k11 - k12 - k21
return(logLikelihoodRatio(k11, k12, k21, k22))
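# A small worked example (illustrative counts, not from the corpus): the four
# contingency cells for a candidate bigram (A, B) are
#   k11 = N(A B), k12 = N(A, not B), k21 = N(not A, B), k22 = N(not A, not B)
# A pair that co-occurs far more often than chance gets a much larger ratio:
print logLikelihoodRatio(100, 10, 10, 100000) # strongly associated pair
print logLikelihoodRatio(10, 1000, 1000, 100000) # essentially independent pair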
bigrams1 = map(lambda x: (x[0][0], x[1]), summedBigrams)
summedBigrams1= reduceByKey(bigrams1)
bigrams2 = map(lambda x: (x[0][1], x[1]), summedBigrams)
summedBigrams2= reduceByKey(bigrams2)
bigrams1Dict = dict(summedBigrams1)
bigrams2Dict = dict(summedBigrams2)
onlyValues = np.array(map(lambda x: x[1], summedBigrams))
totalBigrams = onlyValues.sum()
LLRs = map(getLLR, summedBigrams)
plt.hist(LLRs, bins=200000);
plt.xlim((0,100))
plt.xlabel('Dunning Log-likelihood Ratio')
plt.ylabel('Counts')
plt.title('Distribution of Bigram Log-Likelihood Ratios')
print 'Most likely bigram: {}'.format(np.array(summedBigrams)[np.argmax(np.array(LLRs))][0])
justBigrams = map(lambda x: x[0], summedBigrams)
bigramLLR = zip(justBigrams,LLRs)
with open("bigramLLR.pkl", "wb") as output_file:
cPickle.dump(bigramLLR, output_file)
with open("bigramLLR.pkl", "rb") as input_file:
bigramLLR = cPickle.load(input_file)
bigramVocab = np.array(summedBigrams)[list(np.where(np.array(LLRs)>1500)[0])]
len(bigramVocab)
stopwords = ['or','the','a','an','and','as','at','it','this','that','he','I','she','of','is','was','are','be','did','do','has','had','have','with','for','from', 'to', 'on', 'in', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday', 'yesterday', 'tomorrow','year']
bigramVocab = [bigram for bigram in bigramVocab if (set(bigram[0]).isdisjoint(stopwords))]
bigramVocab = map(lambda x: x[0][0]+' '+x[0][1], list(bigramVocab))
countVec = CountVectorizer(stop_words='english', max_df=0.8, min_df=.005, strip_accents='unicode')
wordMatrix = countVec.fit_transform(data.leadstems)
unigramVocab = countVec.get_feature_names()
totalVocab = unigramVocab + bigramVocab
countVecAll = CountVectorizer(ngram_range = (1,2), vocabulary = totalVocab)
wordMatrix = countVecAll.fit_transform(data.leadstems)
def save_sparse_csr(filename,array):
np.savez(filename,data = array.data ,indices=array.indices,
indptr =array.indptr, shape=array.shape )
def load_sparse_csr(filename):
loader = np.load(filename)
return scipy.sparse.csr_matrix(( loader['data'], loader['indices'], loader['indptr']),
shape = loader['shape'])
save_sparse_csr('bigramWordMatrix.npz',wordMatrix)
grouped = data.groupby('yearmonth')
wordsByMonth = np.zeros((len(grouped.groups.keys()),wordMatrix.shape[1]))
for i, month in enumerate(np.sort(grouped.groups.keys())):
wordsByMonth[i] = np.sum(wordMatrix[grouped.get_group(month).index], axis=0)
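# wordsByMonth[i, j] holds the number of occurrences of vocabulary term j
# in the i-th month (months sorted ascending)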
fig = plt.figure(figsize=(11,5))
ax = fig.add_subplot(111)
ax.plot(range(len(grouped)),wordsByMonth[:,103])
ax.set_title('Occurrences of "{}"'.format(totalVocab[103]))
ax.set_ylabel('Occurrences')
ax.set_xlabel('Time')
ax.set_xlim((0,len(grouped)))
ax.set_xticklabels(list(np.arange(1960,2018,8)));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Removing missing data
Step2: Adding 'yearmonth'
Step3: Stemming
Step4: Extracting Unigrams and Bigrams
Step5: It's useful to be able to save and load this result since it takes a few minutes to run.
Step6: Calculating Dunning Log-Likelihood Ratio
Step7: It is also useful to have a dictionary of the number of times each word appears in any general bigram so that we can find the $N(A, not\,B)$ and like terms
Step8: Finally, we can get the DLLR statistic for each bigram.
Step9: Just as a sanity check, it is good to note that "New York" is most likely to be a bigram according to our metric. This makes sense as "New" and "York" are way more likely to occur together than separately, especially in the New York Times.
Step10: Using the Bigram Vocabulary
Step11: Much like with unigrams, it makes sense to eliminate bigrams that contain stopwords as those are not likely to have any meaning in our future analysis.
Step12: Lastly, we will stitch together just the stems from the bigram vocabulary list so that they are exactly how they will appear in the stemmed lead paragraphs.
Step13: Count Vectorizing
Step14: Then, we can build a count vectorizer that takes as an argument the particular vocabulary that we are looking for.
Step15: We also made a couple of helper functions to save and load the sparse arrays that come out of the count vectorizer.
Step16: Monthly Aggregates
Step17: As an example of the monthly counts by word, we can see how often "America" appears
|
5,015
|
<ASSISTANT_TASK:>
Python Code:
df['Gender'] = df['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
df.head()
df['Age'].dropna().hist(bins=16, range=(0,80), alpha = .5)
P.show()
median_ages = np.zeros((2,3))
median_ages
for i in range(0, 2):
for j in range(0, 3):
median_ages[i,j] = df[(df['Gender'] == i) & \
(df['Pclass'] == j+1)]['Age'].dropna().median()
median_ages
df['AgeFill'] = df['Age']
df.head()
df[df['Age'].isnull()][['Gender','Pclass','Age','AgeFill']].head(10)
for i in range(0, 2):
for j in range(0, 3):
df.loc[ (df.Age.isnull()) & (df.Gender == i) & (df.Pclass == j+1),\
'AgeFill'] = median_ages[i,j]
df[ df['Age'].isnull() ][['Gender','Pclass','Age','AgeFill']].head(10)
df['AgeIsNull'] = pd.isnull(df.Age).astype(int)
df.describe()
df[df['Embarked'].isnull()][['Fare','Pclass','AgeFill','Embarked']].head(10)
df['Port'] = df['Embarked']
df.loc[df.Embarked.isnull(), 'Port'] = ['S', 'S']
df['Port'] = df['Port'].map( {'S': 1, 'Q': 2, 'C': 3} ).astype(int)
df.head()
df['FamilySize'] = df['SibSp'] + df['Parch']
df['Age*Class'] = df.AgeFill * df.Pclass
df.describe()
df['Age*Class'].hist(bins=16, alpha=.5)
P.show()
df.dtypes[df.dtypes.map(lambda x: x=='object')]
# drop PassengerId too, so that Survived ends up as column 0 of df.values
df = df.drop(['PassengerId', 'Name', 'Sex', 'Age', 'Ticket', 'Cabin', 'Embarked'], axis=1)
df.info()
df.describe()
train_data = df.values
train_data
df_test = pd.read_csv('data/titanic-kaggle/test.csv', header=0)
df_test.describe()
df_test['Gender'] = df_test['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
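# The notebook never builds test_data before calling forest.predict() below;
# a minimal sketch mirroring the training-set feature engineering above
# (assumes df_test has the same raw columns as the training file, minus Survived)
ids = df_test['PassengerId'].values # keep for a submission file
df_test['AgeFill'] = df_test['Age']
for i in range(0, 2):
    for j in range(0, 3):
        df_test.loc[ (df_test.Age.isnull()) & (df_test.Gender == i) & (df_test.Pclass == j+1),
                     'AgeFill'] = median_ages[i,j]
df_test['AgeIsNull'] = pd.isnull(df_test.Age).astype(int)
df_test['Port'] = df_test['Embarked'].fillna('S').map( {'S': 1, 'Q': 2, 'C': 3} ).astype(int)
df_test['FamilySize'] = df_test['SibSp'] + df_test['Parch']
df_test['Age*Class'] = df_test.AgeFill * df_test.Pclass
# the Kaggle test set has one missing Fare; fill it with the median fare
df_test['Fare'] = df_test['Fare'].fillna(df_test['Fare'].median())
test_data = df_test.drop(['PassengerId', 'Name', 'Sex', 'Age', 'Ticket', 'Cabin', 'Embarked'],
                         axis=1).values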
# Create the random forest object which will include all the parameters
# for the fit
forest = RandomForestClassifier(n_estimators = 100)
# Fit the training data to the Survived labels and create the decision trees
forest = forest.fit(train_data[0::,1::],train_data[0::,0])
# Take the same decision trees and run it on the test data
output = forest.predict(test_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fill missing Age values
Step2: Fill missing Embarked
Step3: Feature Engineering
|
5,016
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# -*- coding:utf-8 -*-
from __future__ import print_function
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# load the data
data = pd.read_csv('example/k0901.csv')
data
# set up the explanatory variables
X = data[['X', 'D']]
X = sm.add_constant(X)
X
# set up the dependent variable
Y = data['Y']
Y
# run OLS (Ordinary Least Squares)
model = sm.OLS(Y,X)
results = model.fit()
print(results.summary())
# split the data by dummy value
data_d0 = data[data["D"] == 0]
data_d1 = data[data["D"] == 1]
# generate the plot
plt.plot(data["X"], data["Y"], 'o', label="data")
plt.plot(data_d0.X, results.fittedvalues[data_d0.index], label="D=0")
plt.plot(data_d1.X, results.fittedvalues[data_d1.index], label="D=1")
plt.xlim(min(data["X"])-1, max(data["X"])+1)
plt.ylim(min(data["Y"])-1, max(data["Y"])+1)
plt.title('9-1: Dummy Variable')
plt.legend(loc=2)
plt.show()
# load the data
data = pd.read_csv('example/k0902.csv')
data
# set up the explanatory variables
X = data[['X', 'D', 'DX']]
X = sm.add_constant(X)
X
# set up the dependent variable
Y = data['Y']
Y
# run OLS (Ordinary Least Squares)
model = sm.OLS(Y,X)
results = model.fit()
print(results.summary())
# split the data by dummy value
data_d0 = data[data["D"] == 0]
data_d1 = data[data["D"] == 1]
# generate the plot
plt.plot(data["X"], data["Y"], 'o', label="data")
plt.plot(data_d0.X, results.fittedvalues[data_d0.index], label="D=0")
plt.plot(data_d1.X, results.fittedvalues[data_d1.index], label="D=1")
plt.xlim(min(data["X"])-1, max(data["X"])+1)
plt.ylim(min(data["Y"])-1, max(data["Y"])+1)
plt.title('9-2: Dummy Variable')
plt.legend(loc=2)
plt.show()
# build the OLS model without the dummy variables
X = data[['X']]
X = sm.add_constant(X)
model2 = sm.OLS(Y,X)
results2 = model2.fit()
# anova(Analysis of Variance)
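# comparing the restricted model (no dummy terms) with the full model is a
# joint F-test on D and DX -- a Chow-type test for structural change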
print(sm.stats.anova_lm(results2, results))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 9-2, "Coefficient (slope) dummies"
Step2: Example 9-3, "Testing for structural change with a t-test"
|
5,017
|
<ASSISTANT_TASK:>
Python Code:
# Import packages
%run startup.py
bf = Session(host="localhost")
# Initialize the example network and snapshot
NETWORK_NAME = "example_network"
BASE_SNAPSHOT_NAME = "base"
SNAPSHOT_PATH = "networks/failure-analysis"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=BASE_SNAPSHOT_NAME, overwrite=True)
# Fork a new snapshot with London deactivated
FAIL_LONDON_SNAPSHOT_NAME = "fail_london"
bf.fork_snapshot(BASE_SNAPSHOT_NAME, FAIL_LONDON_SNAPSHOT_NAME, deactivate_nodes=["london"], overwrite=True)
# Get the answer of a traceroute question from Paris to the PoP's prefix
pop_prefix = "2.128.0.0/24"
tr_answer = bf.q.traceroute(
startLocation="paris",
headers=HeaderConstraints(dstIps=pop_prefix),
maxTraces=1
).answer(FAIL_LONDON_SNAPSHOT_NAME)
# Display the result in a pretty form
show(tr_answer.frame())
# Get the answer to the differential reachability question given two snapshots
diff_reachability_answer = bf.q.differentialReachability(
headers=HeaderConstraints(dstIps=pop_prefix), maxTraces=1).answer(
snapshot=FAIL_LONDON_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME)
# Display the results
show(diff_reachability_answer.frame())
# Fix for demonstration purpose
random.seed(0)
max_iterations = 5
# Get all links in the network
links = bf.q.edges().answer(BASE_SNAPSHOT_NAME).frame()
for i in range(max_iterations):
# Get two links at random
failed_link1_index = random.randint(0, len(links) - 1)
failed_link2_index = random.randint(0, len(links) - 1)
# Fork a snapshot with the link failures
FAIL_SNAPSHOT_NAME = "fail_snapshot"
bf.fork_snapshot(
BASE_SNAPSHOT_NAME,
FAIL_SNAPSHOT_NAME,
deactivate_interfaces=[links.loc[failed_link1_index].Interface,
links.loc[failed_link2_index].Interface],
overwrite=True)
# Run a differential reachability question
answer = bf.q.differentialReachability(
headers=HeaderConstraints(dstIps=pop_prefix)
).answer(
snapshot=FAIL_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME
)
# A non-empty returned answer means changed forwarding behavior
# We print the bad failure scenario and exit
if len(answer.frame()) > 0:
show(links.iloc[[failed_link1_index, failed_link2_index]])
break
show(answer.frame())
diff_routes = bf.q.bgpRib(network="2.128.0.0/16").answer(snapshot=FAIL_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME)
diff_routes
# View all defined structres on 'hongkong'
bf.q.definedStructures(nodes="hongkong").answer()
# See the config lines where the route map as1_to_as3 is defined
!cat networks/failure-analysis/configs/hongkong.cfg | head -121 | tail -4
# See the config lines where the access list '102' is defined
!cat networks/failure-analysis/configs/hongkong.cfg | head -118 | tail -5
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: bf.fork_snapshot
Step2: In the code, bf.fork_snapshot accepts four parameters
Step3: Great! We have confirmed that Paris can still reach the PoP via Asia even when London has failed.
Step4: We see from the result that the failure of London would in fact permit a flow that was originally being blocked by the AS1_TO_AS2 ACL on New York. This difference reveals a potential security vulnerability! Luckily, Batfish allows us to catch and fix it before something bad happens in production. Similarly, if there were flows that were carried in BASE_SNAPSHOT_NAME but dropped in FAIL_LONDON_SNAPSHOT_NAME (an availability issue), Batfish would have caught it.
Step5: We see that there is a failure scenario to which the network is not robust, that is, the failure will lead to a change in the forwarding behavior of at least some flows. This scenario is the failure of two links that connect Seattle to Philadelphia and San Francisco. This is unexpected because Seattle has another link that connects it to the rest of the network and should generally be available for traffic.
Step6: We see that when the links fail, if we ignore flows that end in Seattle (whose links have failed), a general pattern is that Asia loses connectivity to the US. Given the network topology, this is quite surprising because after those failures we would have expected Asia to be able to reach the US via Europe.
Step7: We see that routers in Asia (Hongkong, Singapore, and Tokyo) and Seattle do not have any BGP routes to the prefix in the failure snapshot, which they did in the reference snapshot. The missing route in Seattle can be explained via missing routes in Asia since Seattle depended on Asia after losing its two other links.
Step8: We see that the route map as1_to_as3 is defined on lines 119 and 120. Now we can quickly navigate to those lines in the config file, as shown below.
Step9: We see that the route map is denying routes that match the access-list '102.' Let's look at the definition of this list, which is on lines 115-117 per the defined structures list above.
|
5,018
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np # numerical computing module
import matplotlib.pyplot as plt # plotting module
import pandas as pd # data handling module
# this line makes plots appear inline in the notebook
%matplotlib inline
xurl="http://spreadsheets.google.com/pub?key=phAwcNAVuyj2tPLxKvvnNPA&output=xls"
df=pd.read_excel(xurl)
print("Tamano completo es %s"%str(df.shape))
df.head()
df = df.rename(columns={'Life expectancy with projections. Yellow is IHME': 'Life expectancy'})
df.index=df['Life expectancy']
df=df.drop('Life expectancy',axis=1)
df=df.transpose()
df.head()
df['Mexico'].plot()
print("== Esperanza de Vida en Mexico ==")
subdf=df[ df.index >= 1890 ]
subdf=subdf[ subdf.index <= 1955 ]
subdf['Mexico'].plot()
plt.title("Esperanza de Vida en Mexico entre 1890 y 1955")
plt.show()
df['Mexico'].plot()
plt.xlim(1890,1955)
plt.title("Esperanza de Vida en Mexico entre 1890 y 1955")
plt.show()
df[['Mexico','United States','Canada']].plot()
plt.title("Esperanza de Vida en Norte-America")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create plots
Step2: Cleaning up the data
Step3: Now we can see life expectancy in Mexico over time
Step4: From this visualization we see that life expectancy has been rising since 1900; we also see a lot of movement between 1890 and 1950, right when Mexico was going through many wars.
Step5: Or, with less fuss, we can restrict the range of our plot with xlim (the limits of the X axis)
Step6: It is also important to see how this compares with other countries; we can compare against all of North America
|
5,019
|
<ASSISTANT_TASK:>
Python Code:
from k2datascience import classification
from k2datascience import plotting
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
weekly = classification.Weekly()
weekly.data.info()
weekly.data.describe()
weekly.data.head()
plotting.correlation_heatmap_plot(
data=weekly.data, title='Weekly Stockmarket')
plotting.correlation_pair_plot(
weekly.data, title='Weekly Stockmarket')
weekly.logistic_regression(data=weekly.data)
weekly.logistic_model.summary()
weekly.confusion
print(weekly.classification)
weekly.logistic_regression(data=weekly.x_train)
weekly.logistic_model.summary()
weekly.confusion
print(weekly.classification)
weekly.categorize(weekly.x_test)
weekly.calc_prediction(weekly.y_test, weekly.prediction_nom)
weekly.confusion
print(weekly.classification)
weekly.lda()
weekly.confusion
print(weekly.classification)
weekly.qda()
weekly.confusion
print(weekly.classification)
weekly.knn()
weekly.confusion
print(weekly.classification)
auto = classification.Auto()
auto.data.info()
auto.data.describe()
auto.data.head()
plotting.correlation_heatmap_plot(
data=auto.data, title='Auto')
plotting.correlation_pair_plot(
data=auto.data, title='Auto')
auto.box_plots()
auto.x_train.info()
auto.y_train.head()
auto.x_test.info()
auto.y_test.head()
auto.classify_data(model='LDA')
auto.confusion
print(auto.classification)
auto.classify_data(model='QDA')
auto.confusion
print(auto.classification)
auto.classify_data(model='LR')
auto.confusion
print(auto.classification)
auto.accuracy_vs_k()
auto.classify_data(model='KNN', n=13)
auto.confusion
print(auto.classification)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: FINDINGS
Step3: FINDINGS
Step4: FINDINGS
Step5: FINDINGS
Step6: FINDINGS
Step7: FINDINGS
Step8: FINDINGS
Step9: FINDINGS
Step10: 2. Explore the data graphically in order to investigate the association
Step11: FINDINGS
Step12: 4. Perform LDA on the training data in order to predict mpg01
Step13: 5. Perform QDA on the training data in order to predict mpg01
Step14: 6. Perform logistic regression on the training data in order to predict
Step15: 7. Perform KNN on the training data, with several values of K, in
|
5,020
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-cm6-1-hr', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
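# Hypothetical illustration for a boolean property -- note the bare (unquoted) value:
# DOC.set_value(True)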
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil Layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
5,021
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
from pandas.plotting import scatter_matrix  # pandas.tools was removed; modern import path
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import matplotlib.colors as colors
import xgboost as xgb
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score
from classification_utilities import display_cm, display_adj_cm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import validation_curve
from sklearn.datasets import load_svmlight_files
from sklearn.model_selection import StratifiedKFold
from sklearn.datasets import make_classification
from xgboost.sklearn import XGBClassifier
from scipy.sparse import vstack
seed = 123
np.random.seed(seed)
import pandas as pd
filename = './facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head(10)
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data.info()
training_data.describe()
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',
'#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')
sns.heatmap(training_data.corr(), vmax=1.0, square=True)
import xgboost as xgb
X_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 )
Y_train = training_data['Facies' ] - 1
dtrain = xgb.DMatrix(X_train, Y_train)
train = X_train.copy()
train['Facies']=Y_train
train.head()
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
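# Minimal usage sketch for the two metrics above (names y_true/y_pred are
# illustrative, not from the original; labels are the zero-based facies codes):
# conf = confusion_matrix(y_true, y_pred, labels=np.arange(9))
# print(accuracy(conf), accuracy_adjacent(conf, adjacent_facies))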
target='Facies'
def modelfit(alg, dtrain, features, useTrainCV=True,
cv_fold=10,early_stopping_rounds = 50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgb_param['num_class']=9
xgtrain = xgb.DMatrix(train[features].values,label = train[target].values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=
alg.get_params()['n_estimators'],nfold=cv_fold,
metrics='merror',early_stopping_rounds = early_stopping_rounds)
alg.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
alg.fit(dtrain[features], dtrain[target],eval_metric='merror')
#Predict training set:
dtrain_prediction = alg.predict(dtrain[features])
dtrain_predprob = alg.predict_proba(dtrain[features])[:,1]
#Print model report
print ("\nModel Report")
print ("Accuracy : %.4g" % accuracy_score(dtrain[target],
dtrain_prediction))
print ("F1 score (Train) : %f" % f1_score(dtrain[target],
dtrain_prediction,average='weighted'))
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar',title='Feature Importances')
plt.ylabel('Feature Importance Score')
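# modelfit() is invoked below as modelfit(xgb1, train, features): it first runs
# xgb.cv with early stopping to pick n_estimators, then refits on the full
# training set and reports accuracy, weighted F1, and feature importances.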
features =[x for x in X_train.columns]
features
from xgboost import XGBClassifier
xgb1 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=5,
min_child_weight=1,
gamma = 0,
subsample=0.8,
colsample_bytree=0.8,
objective='multi:softmax',
nthread =4,
seed = 123,
)
modelfit(xgb1, train, features)
xgb1
from sklearn.model_selection import GridSearchCV
param_test1={
'max_depth':range(3,10,2),
'min_child_weight':range(1,6,2)
}
gs1 = GridSearchCV(xgb1,param_grid=param_test1,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs1.fit(train[features],train[target])
gs1.grid_scores_, gs1.best_params_,gs1.best_score_
param_test2={
'max_depth':[8,9,10],
'min_child_weight':[1,2]
}
gs2 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=5,
min_child_weight=1, n_estimators=290, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test2,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs2.fit(train[features],train[target])
gs2.grid_scores_, gs2.best_params_,gs2.best_score_
gs2.best_estimator_
param_test3={
'gamma':[i/10.0 for i in range(0,5)]
}
gs3 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=370, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test3,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs3.fit(train[features],train[target])
gs3.grid_scores_, gs3.best_params_,gs3.best_score_
xgb2 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.8,
colsample_bytree=0.8,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb2,train,features)
xgb2
param_test4={
'subsample':[i/10.0 for i in range(6,10)],
'colsample_bytree':[i/10.0 for i in range(6,10)]
}
gs4 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test4,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs4.fit(train[features],train[target])
gs4.grid_scores_, gs4.best_params_,gs4.best_score_
param_test4b={
'subsample':[i/10.0 for i in range(5,7)],
}
gs4b = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test4b,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs4b.fit(train[features],train[target])
gs4b.grid_scores_, gs4b.best_params_,gs4b.best_score_
param_test5={
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
gs5 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.6),param_grid=param_test5,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs5.fit(train[features],train[target])
gs5.grid_scores_, gs5.best_params_,gs5.best_score_
param_test6={
'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]
}
gs6 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.6),param_grid=param_test6,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs6.fit(train[features],train[target])
gs6.grid_scores_, gs6.best_params_,gs6.best_score_
xgb3 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.05,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb3,train,features)
xgb3
model = XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, missing=None, n_estimators=122, nthread=4,
objective='multi:softprob', reg_alpha=0.05, reg_lambda=1,
scale_pos_weight=1, seed=123, silent=True, subsample=0.6)
model.fit(X_train, Y_train)
xgb.plot_importance(model)
xgb4 = XGBClassifier(
learning_rate = 0.01,
n_estimators=5000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.05,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb4,train,features)
xgb4
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
# Split data for training and testing
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
Y_train = data['Facies' ] - 1
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
# Final recommended model based on the extensive parameters search
model_final = XGBClassifier(base_score=0.5, colsample_bylevel=1,
colsample_bytree=0.8, gamma=0.2,
learning_rate=0.01, max_delta_step=0, max_depth=9,
min_child_weight=1, missing=None, n_estimators=432, nthread=4,
objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,
scale_pos_weight=1, seed=123, silent=1,
subsample=0.6)
# Train the model based on training data
model_final.fit( train_X , train_Y , eval_metric = 'merror' )
# Predict on the test set
predictions = model_final.predict(test_X)
# Print report
print ("\n------------------------------------------------------")
print ("Validation on the leaving out well " + well_names[i])
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print ("\nConfusion Matrix Results")
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Split data for training and testing
X_train_all = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
Y_train_all = data['Facies' ] - 1
X_train_all = X_train_all.drop(['Well Name'], axis = 1)
# Final recommended model based on the extensive parameters search
model_final = XGBClassifier(base_score=0.5, colsample_bylevel=1,
colsample_bytree=0.8, gamma=0.2,
learning_rate=0.01, max_delta_step=0, max_depth=9,
min_child_weight=1, missing=None, n_estimators=432, nthread=4,
objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,
scale_pos_weight=1, seed=123, silent=1,
subsample=0.6)
# Train the model based on training data
model_final.fit(X_train_all , Y_train_all , eval_metric = 'merror' )
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
Y_train = data['Facies' ] - 1
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
#print(test_Y)
predictions = model_final.predict(test_X)
# Print report
print ("\n------------------------------------------------------")
print ("Validation on the leaving out well " + well_names[i])
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print ("\nConfusion Matrix Results")
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Predict facies of unclassified data
Y_predicted = model_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1
# Store the prediction
test_data.to_csv('Prediction3.csv')
test_data
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Preparation and Model Selection
Step2: The accuracy function and accuracy_adjacent function are defined in the following to quantify the prediction correctness.
Step3: Before proceeding further, we define a function that will help us create XGBoost models and perform cross-validation.
Step4: General Approach for Parameter Tuning
Step5: Step 2
Step6: Step 3
Step7: Step 4
Step8: Step 5
Step9: Step 6
Step10: Cross Validation
Step11: Model from all data set
Step12: Use final model to predict the given test data set
|
5,022
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn
from sklearn import neighbors, datasets
import pylab as pl
seaborn.set()
iris = datasets.load_iris()
X, y = iris.data, iris.target
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
X_reduced = pca.transform(X)
print("Reduced dataset shape:", X_reduced.shape)
import pylab as pl
pl.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y,
cmap='RdYlBu')
print("Meaning of the 2 components:")
for component in pca.components_:
print(" + ".join("%.3f x %s" % (value, name)
for value, name in zip(component,
iris.feature_names)))
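# Quick sanity check (assumed addition): how much of the total variance each
# principal component captures for the iris data.
print("Explained variance ratio:", pca.explained_variance_ratio_)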
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_ratio_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
clf = PCA(0.95) # keep 95% of variance
X_trans = clf.fit_transform(X)
print(X.shape)
print(X_trans.shape)
X_new = clf.inverse_transform(X_trans)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)
plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)
plt.axis('equal');
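# Assumed follow-up check: the mean squared reconstruction error quantifies the
# information discarded by keeping only 95% of the variance.
print("Reconstruction MSE:", np.mean((X - X_new) ** 2))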
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dimensionality Reduction
Step2: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution
Step3: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
Step4: Isomap
|
5,023
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import kgof
import kgof.data as data
import kgof.density as density
import kgof.goftest as gof
import kgof.kernel as ker
import kgof.util as util
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
# font options
font = {
#'family' : 'normal',
#'weight' : 'bold',
'size' : 18
}
plt.rc('font', **font)
plt.rc('lines', linewidth=2)
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
# true p
seed = 13
d = 1  # data dimensionality; the value was elided in the original -- 1 is an assumed placeholder
# sample
n = 800
mean = np.zeros(d)
variance = 1.0
qmean = mean.copy()
qmean[0] = 0
qvariance = variance
p = density.IsotropicNormal(mean, variance)
ds = data.DSIsotropicNormal(qmean, qvariance)
# ds = data.DSLaplace(d=d, loc=0, scale=1.0/np.sqrt(2))
dat = ds.sample(n, seed=seed+1)
X = dat.data()
# Test
alpha = 0.01
# Gaussian kernel with median heuristic
sig2 = util.meddistance(X, subsample=1000)**2
k = ker.KGauss(sig2)
# inverse multiquadric kernel
# From Gorham & Mackey 2017 (https://arxiv.org/abs/1703.01717)
# k = ker.KIMQ(b=-0.5, c=1.0)
bootstrapper = gof.bootstrapper_rademacher
kstein = gof.KernelSteinTest(p, k, bootstrapper=bootstrapper,
alpha=alpha, n_simulate=500, seed=seed+1)
kstein_result = kstein.perform_test(dat, return_simulated_stats=True,
return_ustat_gram=True)
kstein_result
#kstein.compute_stat(dat)
print('p-value: ', kstein_result['pvalue'])
print('reject H0: ', kstein_result['h0_rejected'])
sim_stats = kstein_result['sim_stats']
plt.figure(figsize=(10, 4))
plt.hist(sim_stats, bins=20, density=True);  # `normed` was removed from matplotlib
plt.stem([kstein_result['test_stat']], [0.03], 'r-o', label='Stat')
plt.legend()
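# Under H0 the bootstrapped statistics above approximate the null distribution of
# the kernel Stein statistic; the test rejects when the observed statistic lands
# in the upper tail of that histogram.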
from scipy.spatial.distance import squareform, pdist
def simulatepm(N, p_change):
'''
Simulate a +/-1 sequence of length N whose sign flips with probability p_change
at each step (the multiplier process for the wild bootstrap used below).
:param N: number of entries to generate
:param p_change: probability of a sign change between consecutive entries
:return: array of length N with entries in {-1, +1}
'''
X = np.zeros(N) - 1
change_sign = np.random.rand(N) < p_change
for i in range(N):
if change_sign[i]:
X[i] = -X[i - 1]
else:
X[i] = X[i - 1]
return X
class _GoodnessOfFitTest:
def __init__(self, grad_log_prob, scaling=1):
#scaling is the sigma^2 as in exp(-|x_y|^2/2*sigma^2)
self.scaling = scaling*2
self.grad = grad_log_prob
# construct (slow) multiple gradient handle if efficient one is not given
def grad_multiple(self, X):
#print self.grad
return np.array([(self.grad)(x) for x in X])
def kernel_matrix(self, X):
# check for stupid mistake
assert X.shape[0] > X.shape[1]
sq_dists = squareform(pdist(X, 'sqeuclidean'))
K = np.exp(-sq_dists/ self.scaling)
return K
def gradient_k_wrt_x(self, X, K, dim):
X_dim = X[:, dim]
assert X_dim.ndim == 1
differences = X_dim.reshape(len(X_dim), 1) - X_dim.reshape(1, len(X_dim))
return -2.0 / self.scaling * K * differences
def gradient_k_wrt_y(self, X, K, dim):
return -self.gradient_k_wrt_x(X, K, dim)
def second_derivative_k(self, X, K, dim):
X_dim = X[:, dim]
assert X_dim.ndim == 1
differences = X_dim.reshape(len(X_dim), 1) - X_dim.reshape(1, len(X_dim))
sq_differences = differences ** 2
return 2.0 * K * (self.scaling - 2 * sq_differences) / self.scaling ** 2
def get_statistic_multiple_dim(self, samples, dim):
num_samples = len(samples)
log_pdf_gradients = self.grad_multiple(samples)
# n x 1
log_pdf_gradients = log_pdf_gradients[:, dim]
# n x n
K = self.kernel_matrix(samples)
assert K.shape[0]==K.shape[1]
# n x n
gradient_k_x = self.gradient_k_wrt_x(samples, K, dim)
assert gradient_k_x.shape[0] == gradient_k_x.shape[1]
# n x n
gradient_k_y = self.gradient_k_wrt_y(samples, K, dim)
# n x n
second_derivative = self.second_derivative_k(samples, K, dim)
assert second_derivative.shape[0] == second_derivative.shape[1]
# use broadcasting to mimic the element wise looped call
pairwise_log_gradients = log_pdf_gradients.reshape(num_samples, 1) \
* log_pdf_gradients.reshape(1, num_samples)
A = pairwise_log_gradients * K
B = gradient_k_x * log_pdf_gradients
C = (gradient_k_y.T * log_pdf_gradients).T
D = second_derivative
V_statistic = A + B + C + D
#V_statistic = C
stat = num_samples * np.mean(V_statistic)
return V_statistic, stat
def compute_pvalues_for_processes(self, U_matrix, change_prob, num_bootstrapped_stats=300):
N = U_matrix.shape[0]
bootstrapped_stats = np.zeros(num_bootstrapped_stats)
with util.NumpySeedContext(seed=10):
for proc in range(num_bootstrapped_stats):
# W = np.sign(orsetinW[:,proc])
W = simulatepm(N, change_prob)
WW = np.outer(W, W)
st = np.mean(U_matrix * WW)
bootstrapped_stats[proc] = N * st
stat = N * np.mean(U_matrix)
return float(np.sum(bootstrapped_stats > stat)) / num_bootstrapped_stats
def is_from_null(self, alpha, samples, change_prob):
dims = samples.shape[1]
boots = 10 * int(dims / alpha)
num_samples = samples.shape[0]
U = np.zeros((num_samples, num_samples))
for dim in range(dims):
U2, _ = self.get_statistic_multiple_dim(samples, dim)
U += U2
p = self.compute_pvalues_for_processes(U, change_prob, boots)
return p, U
#sigma = np.array([[1, 0.2, 0.1], [0.2, 1, 0.4], [0.1, 0.4, 1]])
def grad_log_correlated(x):
#sigmaInv = np.linalg.inv(sigma)
#return - np.dot(sigmaInv.T + sigmaInv, x) / 2.0
return -(x-mean)/variance
#me = _GoodnessOfFitTest(grad_log_correlated)
qm = _GoodnessOfFitTest(grad_log_correlated, scaling=sig2)
#X = np.random.multivariate_normal([0, 0, 0], sigma, 200)
p_val, U = qm.is_from_null(0.05, X, 0.1)
print(p_val)
plt.imshow(U, interpolation='none')
plt.colorbar()
# U-statistic matrix from the new implementation
H = kstein_result['H']
plt.imshow(H, interpolation='none')
plt.colorbar()
plt.imshow(U-H, interpolation='none')
plt.colorbar()
x = np.random.randint(1, 5, 5)
y = np.random.randint(1, 3, 3)
x
y
x[:, np.newaxis] - y[np.newaxis, :]
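# The broadcasting above is the same trick gradient_k_wrt_x uses to build its
# pairwise difference matrix: a (5,1) array minus a (1,3) array broadcasts to (5,3).
print((x[:, np.newaxis] - y[np.newaxis, :]).shape)  # -> (5, 3)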
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem
Step2: Test original implementation
Step3:
|
5,024
|
<ASSISTANT_TASK:>
Python Code:
from pomegranate import *
import numpy as np
model = NaiveBayes( MultivariateGaussianDistribution, n_components=2 )
X = np.array([[ 6, 180, 12 ],
[ 5.92, 190, 11 ],
[ 5.58, 170, 12 ],
[ 5.92, 165, 10 ],
[ 6, 160, 9 ],
[ 5, 100, 6 ],
[ 5.5, 100, 8 ],
[ 5.42, 130, 7 ],
[ 5.75, 150, 9 ],
[ 5.5, 140, 8 ]])
y = np.array([ 0, 0, 0, 0, 0, 1, 1, 1, 1, 1 ])
model.fit( X, y )
data = np.array([[ 5.75, 130, 8 ]])
for sample, probs in zip( data, model.predict_proba( data ) ):
print "Height {}, weight {}, and foot size {} is {:.3}% male, {:.3}% female.".format( sample[0], sample[1], sample[2], 100*probs[0], 100*probs[1] )
for sample, result in zip( data, model.predict( data ) ):
print "Person with height {}, weight {}, and foot size {} is {}".format( sample[0], sample[1], sample[2], "female" if result else "male" )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since we are simply using two Multivariate Gaussian Distributions, our Naive Bayes model is very simple to initialize.
Step2: Of course currently our model is unitialized and needs data in order to be able to classify people as male or female. So let's create the data. For multivariate distributions, the training data set has to be specified as a list of lists with each entry being a single case for the data set. We will specify males as a 0 and females with a 1.
Step3: Now we can fit our Naive Bayes model to the set of data.
Step4: Now let's test our model on the following sample.
Step5: First the probability of the data occurring under each model.
Step6: We can see that the probability that the sample is a female is significantly larger than the probability that it is male. Logically when we classify the data as either male (0) or female (1) we get the output
|
5,025
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a "Sebastian Raschka" -d -v
import pandas as pd
import numpy as np
import matplotlib.ticker as ticker
np.random.seed(123)
variables = ['A','B','C','X','Y','Z']
labels = ['ID_0','ID_1','ID_2','ID_3','ID_4','ID_5','ID_6',
'ID_7','ID_8','ID_9','ID_10']
X = np.random.random_sample([len(labels),len(variables)])*10
df = pd.DataFrame(X, columns=variables, index=labels)
df
from scipy.spatial.distance import pdist,squareform
row_dist = pd.DataFrame(squareform(pdist(df, metric='euclidean')), columns=labels, index=labels)
row_dist
from scipy.cluster.hierarchy import linkage
row_clusters = linkage(row_dist, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
row_clusters = linkage(pdist(df, metric='euclidean'), method='complete')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
row_clusters = linkage(df.values, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
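# Note: only the last two calls produce a correct linkage. Passing the squareform
# distance matrix makes linkage() treat each row as an observation, silently
# computing distances between distances -- which is why variant (a) is flagged
# as incorrect in the accompanying text.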
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.cluster.hierarchy import dendrogram
row_dendr = dendrogram(row_clusters, labels=labels)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(df, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
tick_spacing = 1
ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
ax.yaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
ax.set_xticklabels([''] + list(df.columns))
ax.set_yticklabels([''] + list(df.index))
plt.show()
row_dendr['leaves']
# reorder rows with respect to the clustering
row_dendr = dendrogram(row_clusters, labels=labels, no_plot=True)
df_rowclust = df.iloc[row_dendr['leaves']]  # .iloc replaces the removed pandas .ix indexer
# plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(df_rowclust,
interpolation='nearest',
cmap='hot_r')
fig.colorbar(cax)
tick_spacing = 1
ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
ax.yaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
ax.set_xticklabels([''] + list(df_rowclust.columns))
ax.set_yticklabels([''] + list(df_rowclust.index))
plt.show()
from scipy.cluster import hierarchy
# makes dendrogram black (1)
hierarchy.set_link_color_palette(['black'])
# plot row dendrogram
fig = plt.figure(figsize=(8,8))
axd = fig.add_axes([0.09,0.1,0.2,0.6])
row_dendr = dendrogram(row_clusters, orientation='left',
color_threshold=np.inf, ) # makes dendrogram black (2))
# reorder data with respect to clustering
df_rowclust = df.iloc[row_dendr['leaves'][::-1]]
axd.set_xticks([])
axd.set_yticks([])
# remove axes spines from dendrogram
for i in axd.spines.values():
i.set_visible(False)
# plot heatmap
axm = fig.add_axes([0.20, 0.1, 0.6, 0.6]) # x-pos, y-pos, width, height
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
tick_spacing = 1
axm.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
axm.yaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
plt.show()
# Compute pairwise distances for columns
col_dists = pdist(df.T, metric='euclidean')
col_clusters = linkage(col_dists, method='complete')
# plot column dendrogram
fig = plt.figure(figsize=(8,8))
axd2 = fig.add_axes([0.38,0.74,0.36,0.10])
col_dendr = dendrogram(col_clusters, orientation='top',
color_threshold=np.inf) # makes dendrogram black)
axd2.set_xticks([])
axd2.set_yticks([])
# plot row dendrogram
axd1 = fig.add_axes([0.09,0.1,0.2,0.6])
row_dendr = dendrogram(row_clusters, orientation='left',
count_sort='ascending',
color_threshold=np.inf) # makes dendrogram black
axd1.set_xticks([])
axd1.set_yticks([])
# remove axes spines from dendrogram
for i,j in zip(axd1.spines.values(), axd2.spines.values()):
i.set_visible(False)
j.set_visible(False)
# reorder columns and rows with respect to the clustering
df_rowclust = df.iloc[row_dendr['leaves'][::-1]]
df_rowclust.columns = df_rowclust.columns[col_dendr['leaves']]
# plot heatmap
axm = fig.add_axes([0.20,0.1,0.6,0.6])
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
tick_spacing = 1
axm.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
axm.yaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
plt.show()
from scipy.cluster import hierarchy
# makes dendrogram black (1)
hierarchy.set_link_color_palette(['black'])
# plot row dendrogram
fig = plt.figure(figsize=(8,8))
axd = fig.add_axes([0.09,0.1,0.2,0.6])
row_dendr = dendrogram(row_clusters, orientation='left',
labels=labels,
color_threshold=np.inf, ) # makes dendrogram black (2))
axd.set_xticks([])
# uncomment to hide dendrogram labels
#axd.set_yticks([])
# remove axes spines from dendrogram
for i in axd.spines.values():
i.set_visible(False)
# reorder columns and rows with respect to the clustering
df_rowclust = df.iloc[row_dendr['leaves'][::-1]]
# plot heatmap
axm = fig.add_axes([0.20,0.1,0.6,0.6]) # x-pos, y-pos, width, height
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([])
tick_spacing = 1
axm.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
axm.yaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hierarchical Agglomerative Clustering - Complete Linkage Clustering
Step2: <br>
Step3: <br>
Step4: b) Condensed distance matrix (correct)
Step5: c) Input sample matrix (correct)
Step6: <br>
Step7: <br>
Step8: Thus, in order to sort the DataFrame according to the clustering, we can simply use the 'leaves' as indices like so
Step9: <br>
Step10: <br>
Step11: <br>
|
5,026
|
<ASSISTANT_TASK:>
Python Code:
from landlab import RasterModelGrid
import numpy as np
mg = RasterModelGrid((4, 4))
mg.status_at_node
mg.imshow(mg.status_at_node)
mg.status_at_node[2] = mg.BC_NODE_IS_CLOSED
mg.imshow(mg.status_at_node, color_for_closed="blue")
mg.set_status_at_node_on_edges(
right=mg.BC_NODE_IS_CLOSED,
top=mg.BC_NODE_IS_FIXED_VALUE,
left=mg.BC_NODE_IS_CLOSED,
bottom=mg.BC_NODE_IS_FIXED_VALUE,
)
# the same thing could be done as ...
# mg.set_status_at_node_on_edges(right=4, top=1, left=4, bottom=1)
mg.imshow(mg.status_at_node, color_for_closed="blue")
mg1 = RasterModelGrid((4, 4), 1.0)
mg1.set_closed_boundaries_at_grid_edges(True, False, True, False)
mg1.imshow(mg1.status_at_node, color_for_closed="blue")
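# set_closed_boundaries_at_grid_edges takes booleans in (right, top, left, bottom)
# order, so the call above closes the right and left edges while keeping the top
# and bottom edges as fixed-value boundaries.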
mg2 = RasterModelGrid((4, 4), 1.0)
mg2.set_looped_boundaries(True, False)
mg2.imshow(mg2.status_at_node)
mg2.set_closed_boundaries_at_grid_edges(True, False, True, False)
mg2.imshow(mg2.status_at_node, color_for_closed="Blue")
mg3 = RasterModelGrid((4, 4), 1.0)
mg3.status_at_node[mg3.y_of_node == 0] = mg3.BC_NODE_IS_FIXED_GRADIENT
mg3.status_at_node[mg3.y_of_node == 3] = mg3.BC_NODE_IS_FIXED_GRADIENT
mg3.imshow(mg3.status_at_node, color_for_closed="Blue")
# there are no closed boundaries so we didn't need the color_for_closed option,
# but no problem if you accidentally include it!
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instantiate a grid.
Step2: The node boundary condition options are
Step3: The default conditions are for the perimeter to be fixed value (status of 1) and the interior nodes to be core (status of 0).
Step4: Now let's choose one node on the perimeter to be closed.
Step5: We could set the boundary condition at each node individually, or at groups of nodes (e.g. where the x_of_node value is greater than some specified value). But in many cases we just want to set the edges in one way or another. There are some functions for setting the boundary conditions around the perimeter of a raster. (Remember that initially all of the perimeter nodes are mg.BC_NODE_IS_FIXED_VALUE by default.)
Step6: There are multiple ways to set edge boundary conditions. If the above isn't intuitive to you, keep reading.
Step7: Now let's try setting looped boundaries using set_looped_boundaries.
Step8: Note that this has the right and left edges as mg.BC_NODE_IS_FIXED_VALUE (status value of 1).
Step9: Note that there are not methods for setting mg.BC_NODE_IS_FIXED_GRADIENT conditions on the boundary edges. But we can do that. We could use set_status_at_node_on_edges. Below is another way to do this.
|
5,027
|
<ASSISTANT_TASK:>
Python Code:
mps_to_mmph = 1000 * 3600
from cmt.components import Meteorology
met = Meteorology()
%cd input
met.initialize('meteorology-P-linear.cfg')
bprecip = met.get_value('atmosphere_water__precipitation_leq-volume_flux')
print(type(bprecip))
print(bprecip.size)
print(bprecip.shape)
bprecip * mps_to_mmph
time = [met.get_current_time()]
flux = [bprecip.max() * mps_to_mmph]
count = 1
while met.get_current_time() < met.get_end_time():
met.update(met.get_time_step()*count)
time.append(met.get_current_time())
flux.append(met.get_value('atmosphere_water__precipitation_leq-volume_flux').max() * mps_to_mmph)
count += 1
time
flux
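# Illustrative extra (matplotlib assumed available; time units come from the
# model configuration): plot the flux time series collected above.
import matplotlib.pyplot as plt
plt.plot(time, flux)
plt.xlabel('model time')
plt.ylabel('max precipitation flux (mm/hr)')
plt.show()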
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the Babel-wrapped Meteorology component and create an instance
Step2: Initialize the model.
Step3: The initial model precipitation volume flux is the first value from precip_rates.txt
Step4: Advance the model to the end, saving the model time and output P values (converted back to mm/hr for convenience) at each step
Step5: Check the time and flux values (noting that I've included the time = 0.0 value here)
|
5,028
|
<ASSISTANT_TASK:>
Python Code:
import sympy as sp
sp.init_printing(use_latex=True)
from sympy.matrices import zeros
tau_m, tau_s, C, h = sp.symbols('tau_m, tau_s, C, h')
A = sp.Matrix([[-1/tau_s,0,0],[1,-1/tau_s,0],[0,1/C,-1/tau_m]])
PA = sp.simplify(sp.exp(A*h))
PA
As = sp.Matrix([[-1/tau_m,0,0],[1,-1/tau_m,0],[0,1/C,-1/tau_m]])
As
PAs = sp.simplify(sp.exp(As*h))
PAs
PA_32 = PA.row(2).col(1)[0]
sp.limit(PA_32, tau_s, tau_m)
PA_32_series = PA_32.series(x=tau_s,x0=tau_m,n=2)
PA_32_series
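# Illustrative numeric check (parameter values assumed, not from the original
# notebook): the exact propagator entry approaches the singular limit as
# tau_s -> tau_m. Exact rationals are used to avoid cancellation before evalf.
vals = {tau_m: 10, C: 250, h: sp.Rational(1, 10)}
limit_val = sp.limit(PA_32, tau_s, tau_m).subs(vals)
near_val = PA_32.subs(vals).subs(tau_s, 10 + sp.Rational(1, 10**6))
print(sp.N(limit_val, 15), sp.N(near_val, 15))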
import nest
import numpy as np
import pylab as pl
taum = 10.
C_m = 250.
# array of distances between tau_m and tau_ex
epsilon_array = np.hstack(([0.],10.**(np.arange(-6.,1.,1.))))[::-1]
dt = 0.1
fig = pl.figure(1)
NUM_COLORS = len(epsilon_array)
cmap = pl.get_cmap('gist_ncar')
maxVs = []
for i,epsilon in enumerate(epsilon_array):
nest.ResetKernel() # reset simulation kernel
nest.SetKernelStatus({'resolution':dt})
# Current based alpha neuron
neuron = nest.Create('iaf_psc_alpha')
neuron.set(C_m=C_m, tau_m=taum, t_ref=0., V_reset=-70., V_th=1e32,
tau_syn_ex=taum+epsilon, tau_syn_in=taum+epsilon, I_e=0.)
# create a spike generator
spikegenerator_ex = nest.Create('spike_generator')
spikegenerator_ex.spike_times = [50.]
# create a voltmeter
vm = nest.Create('voltmeter', params={'interval':dt})
## connect spike generator and voltmeter to the neuron
nest.Connect(spikegenerator_ex, neuron, 'all_to_all', {'weight':100.})
nest.Connect(vm, neuron)
# run simulation for 200ms
nest.Simulate(200.)
# read out recording time and voltage from voltmeter
times = vm.get('events','times')
voltage = vm.get('events', 'V_m')
# store maximum value of voltage trace in array
maxVs.append(np.max(voltage))
# plot voltage trace
if epsilon == 0.:
pl.plot(times,voltage,'--',color='black',label='singular')
else:
pl.plot(times,voltage,color = cmap(1.*i/NUM_COLORS),label=str(epsilon))
pl.legend()
pl.xlabel('time t (ms)')
pl.ylabel('voltage V (mV)')
fig = pl.figure(2)
pl.semilogx(epsilon_array,maxVs,color='red',label='maxV')
#show singular solution as horizontal line
pl.semilogx(epsilon_array,np.ones(len(epsilon_array))*maxVs[-1],color='black',label='singular')
pl.xlabel('epsilon')
pl.ylabel('max(voltage V) (mV)')
pl.legend()
pl.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For alpha-shaped currents we have
Step2: Non-singular case ($\tau_m\neq \tau_s$)
Step3: Note that the entry in the third line and the second column $A_{32}$ would also appear in the propagator matrix in case of an exponentially shaped current
Step4: The propagator is
Step5: Numeric stability of propagator elements
Step6: 2. The Taylor-series up to the second order of the function $PA_{32}(\tau_s)$ is
Step7: Therefore we have
Step8: Neuron, simulation and plotting parameters
Step9: Loop through epsilon array
Step10: Show maximum values of voltage traces
|
5,029
|
<ASSISTANT_TASK:>
Python Code:
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
HOST = "app.verta.ai"
PROJECT_NAME = "Wine Multiclassification"
EXPERIMENT_NAME = "Boosted Trees"
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import datasets
from sklearn import model_selection
import xgboost as xgb
data = datasets.load_wine()
X = data['data']
y = data['target']
dtrain = xgb.DMatrix(X, label=y)
df = pd.DataFrame(np.hstack((X, y.reshape(-1, 1))),
columns=data['feature_names'] + ['species'])
df.head()
grid = model_selection.ParameterGrid({
'eta': [0.5, 0.7],
'max_depth': [1, 2, 3],
'num_class': [10],
})
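# Quick illustrative sanity check (not in the original notebook): ParameterGrid
# yields one dict per hyperparameter combination, 2 x 3 x 1 = 6 here.
for hp in grid:
    print(hp)  # e.g. {'eta': 0.5, 'max_depth': 1, 'num_class': 10}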
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
def run_experiment(hyperparams):
run = client.set_experiment_run()
# log hyperparameters
run.log_hyperparameters(hyperparams)
# run cross validation on hyperparameters
cv_history = xgb.cv(hyperparams, dtrain,
nfold=5,
metrics=("merror", "mlogloss"))
# log observations from each iteration
for _, iteration in cv_history.iterrows():
for obs, val in iteration.items():  # .items() replaces the deprecated .iteritems()
run.log_observation(obs, val)
# log error from final iteration
final_val_error = iteration['test-merror-mean']
run.log_metric("val_error", final_val_error)
print("{} Mean error: {:.4f}".format(hyperparams, final_val_error))
# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in grid:
run_experiment(hyperparams)
best_run = expt.expt_runs.sort("metrics.val_error", descending=False)[0]
print("Validation Error: {:.4f}".format(best_run.get_metric("val_error")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
model = xgb.XGBClassifier(**best_hyperparams)
model.fit(X, y)
train_acc = model.score(X, y)
best_run.log_metric("train_acc", train_acc)
print("Training accuracy: {:.4f}".format(train_acc))
# create deployment artifacts
model_api = ModelAPI(X, model.predict(X))
requirements = ["scikit-learn", "xgboost"]
best_run.log_model(model, model_api=model_api)
best_run.log_requirements(requirements)
best_run
from verta._demo_utils import DeployedModel
deployed_model = DeployedModel(HOST, best_run.id)
for x in itertools.cycle(np.random.permutation(X).tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This example features
Step2: Imports
Step3: Log Workflow
Step4: Prepare Hyperparameters
Step5: Instantiate Client
Step6: Run Validation
Step7: Revisit Workflow
Step8: Train on Full Dataset
Step9: Calculate and Log Accuracy on Full Training Set
Step10: Log Model for Deployment
Step11: Make Live Predictions
Step12: Load Deployed Model
Step13: Query Deployed Model
|
5,030
|
<ASSISTANT_TASK:>
Python Code:
# import what we need explicitly (the symbols call below would shadow sympy.abc names anyway)
from sympy import symbols, Matrix, zeros, simplify
rho, u, c = symbols('rho u c')
A = Matrix([[u, rho, 0], [0, u, rho**-1], [0, c**2 * rho, u]])
A
A.eigenvals()
R = A.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it
r = []
lam = []
for (ev, _, rtmp) in R:
r.append(rtmp[0])
lam.append(ev)
# we can normalize them anyway we want, so let's make the first entry 1
for n in range(len(r)):
v = r[n]
r[n] = v/v[0]
r[0]
lam[0]
r[1]
lam[1]
r[2]
lam[2]
R = zeros(3,3)
R[:,0] = r[1]
R[:,1] = r[0]
R[:,2] = r[2]
R
B = A.transpose()
B
L = B.eigenvects()
l = []
laml = []
for (ev, _, ltmp) in L:
l.append(ltmp[0].transpose())
laml.append(ev)
for n in range(len(l)):
if lam[n] == laml[n]:
ltmp = l[n]
p = ltmp.dot(r[n])
l[n] = ltmp/p
l[0]
l[1]
l[2]
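# Illustrative check (not part of the original derivation; assumes the eigenvalue
# orderings matched in the normalization loop above): the left and right
# eigenvectors should satisfy l^(mu) . r^(nu) = delta_{mu nu}, so this matrix
# should simplify to the 3x3 identity.
Matrix(3, 3, lambda i, j: simplify(l[i].dot(r[j])))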
ps = symbols('p_s')
As = Matrix([[u, rho, 0], [c**2/rho, u, ps/rho], [0, 0, u]])
As
As.eigenvals()
R = As.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it
r = []
lam = []
for (ev, _, rtmp) in R:
r.append(rtmp[0])
lam.append(ev)
# we can normalize them anyway we want, so let's make the first entry 1
for n in range(len(r)):
v = r[n]
r[n] = v/v[0]
r[0], lam[0]
r[1], lam[1]
r[2], lam[2]
Bs = As.transpose()
L = Bs.eigenvects()
l = []
laml = []
for (ev, _, ltmp) in L:
l.append(ltmp[0].transpose())
laml.append(ev)
for n in range(len(l)):
if lam[n] == laml[n]:
ltmp = l[n]
p = ltmp.dot(r[n])
l[n] = ltmp/p
simplify(l[0])
l[1]
l[2]
rho, u, v, c = symbols('rho u v c')
A = Matrix([[u, rho, 0, 0], [0, u, 0, rho**-1], [0,0, u, 0], [0, c**2 * rho, 0, u]])
A
A.eigenvals()
R = A.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it
r = []
lam = []
for (ev, _, rtmp) in R:
for rv in rtmp:
r.append(rv)
lam.append(ev)
# we can normalize them anyway we want, so let's make the first entry 1
for n in range(len(r)):
v = r[n]
if not v[0] == 0:
r[n] = v/v[0]
r[0], lam[0]
r[1], lam[1]
r[2], lam[2]
r[3], lam[3]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The eigenvalues are the speeds at which information propagates. SymPy returns them as a dictionary mapping each eigenvalue to its multiplicity.
Step2: The right eigenvectors are what SymPy gives natively. For a given eigenvalue, $\lambda$, these satisfy $A r = \lambda r$.
Step3: 0-th right eigenvector
Step4: this corresponds to the eigenvalue
Step5: 1-st right eigenvector
Step6: this corresponds to the eigenvalue
Step7: 2-nd right eigenvector
Step8: this corresponds to the eigenvalue
Step9: Here they are as a matrix, $R$, in order from smallest to largest eigenvalue
Step10: Left Eigenvectors
Step11: Traditionally, we normalize these such that $l^{(\mu)} \cdot r^{(\nu)} = \delta_{\mu\nu}$
Step12: 0-th left eigenvector
Step13: 1-st left eigenvector
Step14: 2-nd left eigenvector
Step15: Entropy formulation
Step16: left eigenvectors
Step17: normalization
Step18: 2-d system
|
5,031
|
<ASSISTANT_TASK:>
Python Code:
import gammalib
import ctools
import cscripts
%matplotlib inline
import matplotlib.pyplot as plt
caldb = 'prod2'
irf = 'South_0.5h'
emin = 0.1 # TeV
emax = 160.0 # TeV
evfile = 'events.fits'
obssim = ctools.ctobssim()
obssim['ra'] = 83.63
obssim['dec'] = 22.51
obssim['rad'] = 5.0
obssim['tmin'] = 0
obssim['tmax'] = 3600
obssim['emin'] = emin
obssim['emax'] = emax
obssim['caldb'] = caldb
obssim['irf'] = irf
obssim['inmodel'] = '$CTOOLS/share/models/crab_beyond.xml'
obssim['outevents'] = evfile
obssim.execute()
skymap = ctools.ctskymap()
skymap['inobs'] = evfile
skymap['emin'] = emin
skymap['emax'] = emax
skymap['nxpix'] = 40
skymap['nypix'] = 40
skymap['binsz'] = 0.02
skymap['proj'] = 'TAN'
skymap['coordsys'] = 'CEL'
skymap['xref'] = 83.63
skymap['yref'] = 22.01
skymap['bkgsubtract'] = 'IRF'
skymap['caldb'] = caldb
skymap['irf'] = irf
skymap.run()
# Slightly smooth the map for display to suppress statistical fluctuations
skymap.skymap().smooth('GAUSSIAN',0.02)
from matplotlib.colors import SymLogNorm
# The SymLogNorm scale is a Log scale for both positive and negative values
# and is linear within a certain range around 0
ax = plt.subplot()
plt.imshow(skymap.skymap().array(),origin='lower',
extent=[83.63+0.02*20,83.63-0.02*20,22.01-0.02*20,22.01+0.02*20],
# boundaries of the coord grid
norm=SymLogNorm(1)) # the scale will be linear within +-1 count
ax.set_xlabel('R.A. (deg)')
ax.set_ylabel('Dec (deg)')
cbar = plt.colorbar()
cbar.set_label('Counts')
like = ctools.ctlike()
like['inobs'] = evfile
like['caldb'] = caldb
like['irf'] = irf
like['inmodel'] = '$CTOOLS/share/models/crab.xml'
like.run()
print(like.opt())
like1 = like.opt().value()
print(like.obs().models())
resmap = cscripts.csresmap(like.obs())
resmap['algorithm'] = 'SIGNIFICANCE'
resmap['emin'] = emin
resmap['emax'] = emax
resmap['nxpix'] = 40
resmap['nypix'] = 40
resmap['binsz'] = 0.02
resmap['proj'] = 'TAN'
resmap['coordsys'] = 'CEL'
resmap['xref'] = 83.63
resmap['yref'] = 22.01
resmap.run()
def plot_residuals(resid):
# Slightly smooth the map for display to suppress statistical fluctuations
resid.smooth('GAUSSIAN',0.04)
# Plotting
fig = plt.figure()
ax = plt.subplot()
plt.imshow(resid.array(),origin='lower',
cmap='bwr',vmin=-3,vmax=3,
extent=[83.63+0.02*20,83.63-0.02*20,22.01-0.02*20,22.01+0.02*20])
# Boundaries of the coord grid
ax.set_xlabel('R.A. (deg)')
ax.set_ylabel('Dec (deg)')
cbar = plt.colorbar()
cbar.set_label(r'Significance ($\sigma$)')  # raw string avoids the invalid \s escape
plot_residuals(resmap._resmap)
newpntsrc = gammalib.GModelSky(gammalib.GModelSpatialPointSource(83.7,21.9),
gammalib.GModelSpectralPlaw(1.e-17,-2.,gammalib.GEnergy(3.e5,'MeV')),
gammalib.GModelTemporalConst(1))
newpntsrc.name('Src1')
newpntsrc['RA'].free()
newpntsrc['DEC'].free()
like.obs().models().append(newpntsrc)
like.run()
like2 = like.opt().value()
ts = -2.0 * (like2 - like1)
print(ts)
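# Illustrative follow-up (assumes scipy is available): the chance probability of
# this TS value for 4 additional degrees of freedom (RA, Dec, Prefactor, Index).
from scipy.stats import chi2
print(chi2.sf(ts, df=4))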
resmap.models(like.obs().models())
resmap.run()
plot_residuals(resmap._resmap)
like.obs().models()['Src1']['RA'].fix()
like.obs().models()['Src1']['DEC'].fix()
resspec = cscripts.csresspec(like.obs())
resspec['algorithm'] = 'SIGNIFICANCE'
resspec['mask'] = True
resspec['ra'] = 83.63
resspec['dec'] = 22.01
resspec['rad'] = 0.2
resspec['components'] = True
resspec['outfile'] = 'resspec_1.fits'
resspec.execute()
import sys
import os
sys.path.append(os.environ['CTOOLS']+'/share/examples/python/')
from show_residuals import plot_residuals
plot_residuals('resspec_1.fits','',0)
crab = like.obs().models()['Crab']
expplaw = gammalib.GModelSpectralExpPlaw()
expplaw['Prefactor'].value(crab['Prefactor'].value())
expplaw['Index'].value(crab['Index'].value())
expplaw['PivotEnergy'].value(crab['PivotEnergy'].value())
expplaw['CutoffEnergy'].value(1.e6)
crab.spectral(expplaw)
like.run()
like3 = like.opt().value()
ts = -2.0 * (like3 - like2)
print(ts)
print(crab)
resspec = cscripts.csresspec(like.obs())
resspec['algorithm'] = 'SIGNIFICANCE'
resspec['mask'] = True
resspec['ra'] = 83.63
resspec['dec'] = 22.01
resspec['rad'] = 0.2
resspec['components'] = True
resspec['outfile'] = 'resspec_2.fits'
resspec.execute()
plot_residuals('resspec_2.fits','',0)
expplaw2 = like.obs().models()['Crab'].spectral().clone()
newcomp = gammalib.GModelSpectralPlaw(1.e-18,-2.,gammalib.GEnergy(1,'TeV'))
comp_spec = gammalib.GModelSpectralComposite()
comp_spec.append(expplaw2)
comp_spec.append(newcomp)
like.obs().models()['Crab'].spectral(comp_spec)
like.run()
like4 = like.opt().value()
ts = -2.0 * (like4 - like3)
print(ts)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now import the matplotlib package for plotting.
Step2: Simulated dataset
Step3: Now proceed to simulate the dataset. It consists of an hour of observations of the Crab nebula region, as usual pointed at a slightly offset position from the target. The input model is different from the one you have been using so far and contains some surprises. Don't look at it until you have completed the exercises at the end of the tutorial.
Step4: You have saved the events on disk in the file events.fits. In this way you can easily re-use the code below by substituting events.fits with your own event list or observation definition XML file.
Step5: Model fitting and residual inspection
Step6: As usual it is wise to look at the output from the optimizer. Store the best-fit minus log-likelihood value for later usage.
Step7: The fit has properly converged after 10 iterations. Let's take a look at the best-fit model.
Step8: The background model reproduces well the data, since its best-fit normalization is 1 and the index/tilt is 0. The Crab nebula has a best-fit spectral index of 2.28.
Step9: To easily inspect the residual maps throughout the tutorial, define a function to plot them, and apply it to the latest run.
Step10: Adding model components
Step11: You have defined a sky model object that has three components
Step12: Finally append the new source to the model container
Step13: and fit the model including the new source to the data.
Step14: Does the addition of the new source provide a better fit to the data? You can quantify this using the test statistic (TS) given by twice the log-likelihood difference.
Step15: TS is expected to be distributed as a $\chi^2_n$ with n degrees of freedom, where n is the additional number of degrees of freedom in the model including the new source, in our case 4 (RA, Dec, Prefactor, and Index). The integral of the $\chi^2_n$ from TS to $\infty$ is the chance probability that the likelihood improved by that much due to statistical fluctuations. A large value, like the one we got, means that the chance probability is very low, thus we are likely to have found a new source.
Step16: The addition of the new point source has flattened the spatial residuals, even though the fit is obviously not perfect yet. From now on fix the position of Src1.
Step17: Modifying the spectral models
Step18: We can use an example script to display the residuals.
Step19: From the residuals it is clear that the model does not reproduce the data well. There is an excess at low energies and the model overshoots the data at high energies. You may try to change the spectral model for the Crab nebula.Try for example an exponentially cutoff power-law rather than a simple power law. Let's use as starting values for the Prefactor and Index the current best-fit values and tentatively set the cutoff energy at 1 TeV.
Step20: Now fit the modified model to the data
Step21: and use TS to quantify the model improvement (in this case there is only one additional degree of freedom, the cutoff energy).
Step22: The model improvement was again significant. Let's look at the fit results for the Crab nebula.
Step23: The best-fit cutoff energy is 3.2 TeV, which seems consistent with the behavior in the energy-bands residual maps above. Note that this has changed significantly the Index value, that now is 1.82. Check the residual spectrum again to make sure that the residuals are reduced.
Step24: Now you actually have an excess at the highest energies. In spite of the significant likelihood improvement the model is not quite perfect yet. Is there perhaps a new spectral component appearing at high energies? Test if the addition of a power-law spectral component with a hard index (2) improves things. To do this, add the hard spectral component to the exponentially-cutoff power law using a composite spectral model.
Step25: Let's fit this new model to the data and check the likelihood improvement.
|
5,032
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import sys
import os  # needed by the path manipulation below
sys.path.append(os.path.expanduser("~/nta/nupic.research/projects/"))
# general imports
import numpy as np
# torch imports
import torch
import torch.optim as optim
import torch.optim.lr_scheduler as schedulers
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchsummary import summary
# nupic research imports
from nupic.research.frameworks.pytorch.image_transforms import RandomNoise
from nupic.torch.modules import KWinners
# local library
from dynamic_sparse.networks import *
from dynamic_sparse.models import *
from dynamic_sparse.common.utils import *
# other imports
import math
# plotting
import matplotlib.pyplot as plt
from matplotlib import rcParams
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
rcParams['figure.figsize'] = (12,6)
PATH_TO_WHERE_DATASET_WILL_BE_SAVED = PATH = "~/nta/datasets"
# load dataset
config = (dict(
dataset_name="MNIST",
data_dir="~/nta/datasets",
test_noise=True
))
dataset = Dataset(config)
# load a regular network
network = MLP()
# load a regular base model for training
model = BaseModel(network=network)
model.setup()
%%time
large_dense = model.train(dataset, 3, test_noise=False);
# load a regular network
network = MLP()
# load a regular base model for training
config = dict(
debug_sparse=True,
on_perc=0.1,
)
model = SparseModel(network=network, config=config)
model.setup()
%%time
large_sparse = model.train(dataset, 3, test_noise=False);
results = large_sparse
h, w = math.ceil(len(results)/4), 4
combinations = []
for i in range(h):
for j in range(w):
combinations.append((i,j))
fig, axs = plt.subplots(h, w, gridspec_kw={'hspace': 0.5, 'wspace': 0.5})
fig.set_size_inches(16,16)
for (i, j), k in zip(combinations[:len(results)], sorted(results.keys())):
axs[i, j].plot(range(len(results[k])), results[k])
axs[i, j].set_title(k)
# load a regular network
network = MLPHeb()
# load a regular base model for training
config = dict(
weight_prune_perc=0.15,
hebbian_prune_perc=0.60,
pruning_es=False,
pruning_active=True,
pruning_interval=1,
debug_sparse=True,
on_perc=0.1,
)
model = DSNNMixedHeb(network=network, config=config)
model.setup()
# debug sparsity
# [b/a for a,b in zip([784*400, 400*400, 400*400, 400*10], model.num_params)]
%%time
dynamic_sparse = model.train(dataset, 3, test_noise=False);
# results.keys(), len(results)
# results
results = dynamic_sparse
h, w = math.ceil(len(results)/4), 4
combinations = []
for i in range(h):
for j in range(w):
combinations.append((i,j))
fig, axs = plt.subplots(h, w, gridspec_kw={'hspace': 0.5, 'wspace': 0.5})
fig.set_size_inches(16,16)
for (i, j), k in zip(combinations[:len(results)], sorted(results.keys())):
axs[i, j].plot(range(len(results[k])), results[k])
axs[i, j].set_title(k)
# load a regular network
config = dict(
hidden_sizes = [12,16,23]
)
network = MLP(config=config)
# load a regular base model for training
model = BaseModel(network=network)
model.setup()
%%time
small_dense = model.train(dataset, 3, test_noise=False);
from matplotlib import rcParams
rcParams['image.cmap'] = 'Accent'
plt.figure(figsize=(14,7))
names = ['large_dense', 'large_sparse', 'dynamic_sparse', 'small_dense']
for name, res in zip(names, [large_dense, large_sparse, dynamic_sparse, small_dense]):
plt.plot(res['train_acc'], label=name)
plt.legend()
plt.ylim((0.85,1.0));
plt.figure(figsize=(14,7))
names = ['large_dense', 'large_sparse', 'dynamic_sparse', 'small_dense']
for name, res in zip(names, [large_dense, large_sparse, dynamic_sparse, small_dense]):
plt.plot(res['val_acc'], label=name)
plt.legend()
plt.ylim((0.85,1.0));
from models import *
from networks import *
# load dataset
config = (dict(
dataset_name="MNIST",
data_dir="~/nta/datasets",
test_noise=True
))
dataset = Dataset(config)
test_noise = True
use_kwinners = True
epochs = 3
on_perc = 0.1
# large dense
config = dict(hidden_sizes=[100,100,100], use_kwinners=use_kwinners)
network = MLP(config=config)
model = BaseModel(network=network)
model.setup()
print("\nLarge Dense")
large_dense = model.train(dataset, epochs, test_noise=test_noise);
# large sparse
config = dict(hidden_sizes=[100,100,100], use_kwinners=use_kwinners)
network = MLP(config=config)
config = dict(debug_sparse=True, on_perc=on_perc)
model = SparseModel(network=network, config=config)
model.setup()
print("\nLarge Sparse")
large_sparse = model.train(dataset, epochs, test_noise=test_noise);
# dynamic sparse
config = dict(hidden_sizes=[100,100,100], use_kwinners=use_kwinners)
network = MLPHeb(config=config)
config = dict(
weight_prune_perc=0.15,
hebbian_prune_perc=0.60,
pruning_es=False,
pruning_active=True,
pruning_interval=1,
debug_sparse=True,
on_perc=on_perc,
hebbian_grow=False
)
model = DSNNMixedHeb(network=network, config=config)
model.setup()
print("\nDynamic Sparse")
dynamic_sparse = model.train(dataset, epochs, test_noise=test_noise);
# small dense
config = dict(hidden_sizes=[12,16,23], use_kwinners=use_kwinners)
network = MLP(config=config)
# load a regular base model for training
model = BaseModel(network=network)
model.setup()
print("\nSmall Dense")
small_dense = model.train(dataset, epochs, test_noise=test_noise);
plt.figure(figsize=(14,7))
names = ['large_dense', 'large_sparse', 'dynamic_sparse', 'small_dense']
for name, res in zip(names, [large_dense, large_sparse, dynamic_sparse, small_dense]):
plt.plot(res['train_acc'], label=name)
plt.legend()
plt.ylim((0.85,1.0));
plt.figure(figsize=(14,7))
names = ['large_dense', 'large_sparse', 'dynamic_sparse', 'small_dense']
for name, res in zip(names, [large_dense, large_sparse, dynamic_sparse, small_dense]):
plt.plot(res['val_acc'], label=name)
plt.legend()
plt.ylim((0.85,1.0));
plt.figure(figsize=(14,7))
names = ['large_dense', 'large_sparse', 'dynamic_sparse', 'small_dense']
for name, res in zip(names, [large_dense, large_sparse, dynamic_sparse, small_dense]):
plt.plot(res['noise_acc'], label=name)
plt.legend()
plt.ylim((0.85,1.0));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Large dense
Step2: Large sparse
Step3: Large dynamic sparse
Step4: Small dense
Step5: Comparing all
Step6: Test with kwinners
Step7: test_noise = True
|
5,033
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-4, 4, 0.02)
y = np.exp(-(x * x)/2)
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
def f(x):
return np.exp(-x*x/2)
# first derivative
def f_d(x):
return -x * f(x)
# second derivative
def f_d_d(x):
return (x*x-1) * f(x)
# z is the approximation at x=2
a = 1.2
x = np.arange(-4, 4, 0.02)
z = f(a) + (x-a)*f_d(a) + (x-a)*(x-a)*f_d_d(a)/2
plt.plot(x, y, x, z)
#plt.axis([1, 3, 0, 0.5])
plt.show()
# update xn
a = 1.2
xn = a - f_d(a)/f_d_d(a)
print(xn)
a = 0.9
z = f(a) + (x-a)*f_d(a) + (x-a)*(x-a)*f_d_d(a)/2
plt.plot(x, y, x, z)
#plt.axis([1, 3, 0, 0.5])
plt.show()
# update xn
xn = a - f_d(a)/f_d_d(a)
print(xn)
a = -0.3
z = f(a) + (x-a)*f_d(a) + (x-a)*(x-a)*f_d_d(a)/2
plt.plot(x, y, x, z)
#plt.axis([1, 3, 0, 0.5])
plt.show()
# update xn
xn = a - f_d(a)/f_d_d(a)
print(xn)
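# Illustrative extra (starting points assumed): iterating the Newton update
# shows convergence toward the maximum at x=0 when |a| < 1/sqrt(2) and
# movement away from it when |a| > 1.
for a0 in [0.5, 1.2]:
    xn = a0
    for _ in range(5):
        xn = xn - f_d(xn)/f_d_d(xn)
    print(a0, xn)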
def g(x):
return np.abs(x) - np.abs(x**3/(x*x - 1.0))
p = np.arange(0,0.73, 0.02)
q = g(p)
plt.plot(p, q)
plt.show()
print(g(0.5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $ f(x) = e^{- \frac{x^2}{2} }$ is an un-normalized gaussian distribution whose maximum is at x=0
Step2: The <b>Taylor series</b> (quadratic) approximation to the function at x=a is $f(a) + (x-a) f^{'}(a)+ (x-a)^2 f^{"}(a)/2$
Step3: Quadratic approximation at x=a when a>1 or a<-1
Step4: The Newton update is $ x_{n+1} = x_n - \frac{f^{'}(a)}{f^{"}(a)}$
Step5: Quadratic approximation when $1/\sqrt2 < a < 1 $ or $ -1 < a < -1/\sqrt2$ <br>
Step6: When |a| < $1/\sqrt 2$
Step7: Where does the value $1/\sqrt 2$ come from?
|
5,034
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv("https://github.com/chris1610/pbpython/blob/master/data/sales_data_types.csv?raw=True")
df
df.info()
df['2016'] + df['2017']
df['Customer Number'].astype('int')
df.dtypes
df["Customer Number"] = df['Customer Number'].astype('int')
df.dtypes
df
df['Jan Units'].astype('int')
df['2016'].astype('float')
df['Active'].astype('bool')
# astype can take a dictionary of column names and data types
df.astype({'Customer Number': 'int', 'Customer Name': 'str'}).dtypes
def convert_currency(val):
"""Convert the string number value to a float
- Remove $
- Remove commas
- Convert to float type
"""
new_val = val.replace(',','').replace('$', '')
return float(new_val)
def convert_percent(val):
"""Convert the percentage string to an actual floating point percent"""
new_val = val.replace('%', '')
return float(new_val) / 100
df['2016'].apply(convert_currency)
df['2017'].apply(convert_currency)
df['2016'].apply(lambda x: x.replace('$', '').replace(',', '')).astype('float')
# Assign the converted values back to the columns
df['2016'] = df['2016'].apply(convert_currency)
df['2017'] = df['2017'].apply(convert_currency)
df['Percent Growth'].apply(lambda x: x.replace('%', '')).astype('float') / 100
df['Percent Growth'] = df['Percent Growth'].apply(convert_percent)
df.dtypes
# Let's look at the data so far
df
pd.to_numeric(df['Jan Units'], errors='coerce')
# Fill in the NaN with 0
pd.to_numeric(df['Jan Units'], errors='coerce').fillna(0)
df["Jan Units"] = pd.to_numeric(df['Jan Units'], errors='coerce').fillna(0)
pd.to_datetime(df[['Month', 'Day', 'Year']])
df["Start_Date"] = pd.to_datetime(df[['Month', 'Day', 'Year']])
# Check out the dataframe
df
df["Active"] = np.where(df["Active"] == "Y", True, False)
df
df.dtypes
df_2 = pd.read_csv("https://github.com/chris1610/pbpython/blob/master/data/sales_data_types.csv?raw=True",
dtype={'Customer Number':'int'},
converters={'2016':convert_currency,
'2017': convert_currency,
'Percent Growth': convert_percent,
'Jan Units': lambda x: pd.to_numeric(x, errors='coerce'),
'Active': lambda x: np.where(x == "Y", True, False)
})
df_2.dtypes
df_2
# This can not be applied at the time the data is read in
df_2["Start_Date"] = pd.to_datetime(df_2[['Month', 'Day', 'Year']])
df_2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use df.info and df.dtypes to look at the types that pandas automatically infers based on the data
Step2: df.dtypes
Step3: The simplest way to to convert to a type is using astype.
Step4: The code above does not alter the original dataframe
Step5: Assign the new integer customer number back to the original frame and check the type
Step6: The data all looks good for the Customer Number.
Step7: In a similar manner we get an error if we try to convert the sales column
Step8: We can try to use astype with a bool type but that does not give expected results
Step11: In order to convert the currency and percentages, we need to use custom functions
Step12: Use apply to convert the 2016 and 2017 columns to floating point numbers
Step13: We could use a lambda function as well but it may be more difficult for new users to understand
Step14: Use a lambda function to convert the percentage strings to numbers
Step15: pd.to_numeric is another option for handling column conversions when invalid values are included
Step16: Make sure to populate the original column of data
Step17: pd.to_datetime is very useful for working with date conversions
Step18: Use np.where to convert the active column to a boolean
Step19: Many of the examples shown above can be used when reading in data using dtypes or converters arguments
|
5,035
|
<ASSISTANT_TASK:>
Python Code:
import sklearn
import mglearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
X, y = mglearn.datasets.make_forge()
fig, axes = plt.subplots(1, 2, figsize=(10,3))
for model, ax in zip([LinearSVC(), LogisticRegression()], axes):
clf = model.fit(X, y)
mglearn.plots.plot_2d_separator(clf, X, fill=False, eps=0.5,
ax=ax, alpha=0.7)
mglearn.discrete_scatter(X[:,0], X[:,1], y, ax=ax)
ax.set_title("{}".format(clf.__class__.__name__))
ax.set_xlabel("Feature 0")
ax.set_ylabel("Feature 1")
axes[0].legend(loc=4)
mglearn.plots.plot_linear_svc_regularization()
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print(cancer.keys())
print(cancer['target_names'])
print(cancer['feature_names'])
type(cancer)
cancer.data.shape
cancer_df = pd.DataFrame(cancer.data, columns=cancer.feature_names)  # X_train is not defined yet at this point
cancer_df.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=42)
logreg = LogisticRegression().fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg.score(X_test, y_test)))
logreg100 = LogisticRegression(C=100).fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg100.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg100.score(X_test, y_test)))
logreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg001.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg001.score(X_test, y_test)))
plt.plot(logreg.coef_.T, 'o', label="C=1")
plt.plot(logreg100.coef_.T, '^', label="C=100")
plt.plot(logreg001.coef_.T, 'v', label="C=0.01")
plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
plt.hlines(0,0, cancer.data.shape[1])
plt.ylim(-5, 5)
plt.xlabel("Coefficient Index")
plt.xlabel("Coefficient Magnitude")
plt.legend()
for C, marker in zip([0.01, 1, 100], ['v', 'o', '^']):
lr_l1 = LogisticRegression(C=C, penalty="l1").fit(X_train, y_train)
print("Training accuracy of L1 logreg with C={:.3f}: {:.2f}".format(
C, lr_l1.score(X_train, y_train)))
print("Test accuracy of L1 logreg with C={:.3f}: {:.2f}".format(
C, lr_l1.score(X_test, y_test)))
plt.plot(lr_l1.coef_.T, marker, label="C={:.3f}".format(C))
plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
plt.hlines(0,0, cancer.data.shape[1])
plt.xlabel("Coefficient Index")
plt.xlabel("Coefficient Magnitude")
plt.ylim(-5, 5)
plt.legend()
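# Illustrative follow-up sketch: L1 regularization drives coefficients exactly
# to zero, so we can count how many features each model actually uses.
for C in [0.01, 1, 100]:
    lr_l1 = LogisticRegression(C=C, penalty="l1").fit(X_train, y_train)
    print("C={}: {} nonzero coefficients".format(C, np.sum(lr_l1.coef_ != 0)))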
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Figure 1. Decision boundaries of linear SVM and logistic regression on forge data with default parameters
Step2: Figure 2.
Step3: Logistic Regression
Step4: Regularization Parameter settings
Step5: Use lower value of C to fit more 'regularized' model
Step6: Plot Coefficients of Logistic Regression for different values of C
Step7: Creating More Interpretable Model
|
5,036
|
<ASSISTANT_TASK:>
Python Code:
import graphlab as gl
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# import wiki data
wiki = gl.SFrame('people_wiki.gl/')
wiki
wiki_docs = gl.text_analytics.count_words(wiki['text'])
wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)
topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)
topic_model
topic_model = gl.load_model('lda_assignment_topic_model')
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]
temp = topic_model.get_topics(num_words=50)
temp['score'][20:30].sum()
themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \
'art and publishing','Business','international athletics','Great Britain and Australia','international music']
for i in range(10):
plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])
plt.xlabel('Word rank')
plt.ylabel('Probability')
plt.title('Probabilities of Top 100 Words in each Topic')
top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)]
ind = np.arange(10)
width = 0.5
fig, ax = plt.subplots()
ax.bar(ind-(width/2),top_probs,width)
ax.set_xticks(ind)
plt.xlabel('Topic')
plt.ylabel('Probability')
plt.title('Total Probability of Top 10 Words in each Topic')
plt.xlim(-0.5,9.5)
plt.ylim(0,0.15)
plt.show()
obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])
pred1 = topic_model.predict(obama, output_type='probability')
pred2 = topic_model.predict(obama, output_type='probability')
print(gl.SFrame({' topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))
def average_predictions(model, test_document, num_trials=100):
avg_preds = np.zeros((model.num_topics))
for i in range(num_trials):
avg_preds += model.predict(test_document, output_type='probability')[0]
avg_preds = avg_preds/num_trials
result = gl.SFrame({' topics':themes, 'average predictions':avg_preds})
result = result.sort('average predictions', ascending=False)
return result
print(average_predictions(topic_model, obama, 100))
bush = gl.SArray([wiki_docs[int(np.where(wiki['name']=='George W. Bush')[0])]])
print(average_predictions(topic_model, bush, 100))
gerrard = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Steven Gerrard')[0])]])
print(average_predictions(topic_model, gerrard, 100))
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])
wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],
method='brute_force', distance='cosine')
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
rets = model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)
list(rets['reference_label']).index('Mariano Rivera')
rets2 = model_lda_rep.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)
list(rets2['reference_label']).index('Mariano Rivera')
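# Illustrative side-by-side comparison of the two ranks computed above: a lower
# rank means the representation places Mariano Rivera closer to Alex Rodriguez
# among the 5000 nearest neighbors.
print('tf-idf rank:', list(rets['reference_label']).index('Mariano Rivera'))
print('LDA rank:', list(rets2['reference_label']).index('Mariano Rivera'))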
topic_model
tpm_low_alpha = gl.load_model('lda_low_alpha')
tpm_high_alpha = gl.load_model('lda_high_alpha')
a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1]
b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1]
c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1]
ind = np.arange(len(a))
width = 0.3
def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab):
fig = plt.figure()
ax = fig.add_subplot(111)
b1 = ax.bar(ind, a, width, color='lightskyblue')
b2 = ax.bar(ind+width, b, width, color='lightcoral')
b3 = ax.bar(ind+(2*width), c, width, color='gold')
ax.set_xticks(ind+width)
ax.set_xticklabels(range(10))
ax.set_ylabel(ylab)
ax.set_xlabel(xlab)
ax.set_ylim(0,ylim)
ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param])
plt.tight_layout()
param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha',
xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article')
paul = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]])
preds = average_predictions(tpm_low_alpha, paul, 100)
(np.array(preds['average predictions'] > 0.3, dtype=int) | np.array(preds['average predictions'] < 0.05, dtype=int)).sum()
preds2 = average_predictions(tpm_high_alpha, paul, 100)
(np.array(preds2['average predictions'] > 0.3, dtype=int) | np.array(preds2['average predictions'] < 0.05, dtype=int)).sum()
del tpm_low_alpha
del tpm_high_alpha
tpm_low_gamma = gl.load_model('lda_low_gamma')
tpm_high_gamma = gl.load_model('lda_high_gamma')
a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
ind = np.arange(len(a))
width = 0.3
param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',
xlab='Topics (sorted by weight of top 100 words)',
ylab='Total Probability of Top 100 Words')
param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',
xlab='Topics (sorted by weight of bottom 1000 words)',
ylab='Total Probability of Bottom 1000 Words')
nums = [len(tpm_low_gamma.get_topics([x], num_words=100000, cdf_cutoff=0.5)) for x in range(10)]
np.array(nums).mean()
nums2 = [len(tpm_high_gamma.get_topics([x], num_words=100000, cdf_cutoff=0.5)) for x in range(10)]
np.array(nums2).mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be.
Step2: Model fitting and interpretation
Step3: GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
Step4: It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will
Step5: Identifying topic themes by top words
Step6: We propose the following themes for each topic
Step7: Measuring the importance of top words
Step8: In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!
Step9: Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.
Step10: To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document
Step11: Quiz Question
Step12: Quiz Question
Step13: Next we add the TF-IDF document representations
Step14: For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model
Step15: Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist
Step16: Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents.
Step17: Understanding the role of LDA model hyperparameters
Step18: We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model
Step19: Changing the hyperparameter alpha
Step20: Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.
Step21: Quiz Question
Step22: From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary.
Step23: Quiz Question
|
5,037
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from sympy import init_printing
from sympy import symbols, as_finite_diff, solve, latex
from sympy import Function, Eq
fg, f0, f1, f2 = symbols('f_g, f_0, f_1, f_2')
z, h = symbols('z, h')
a, b = symbols('a, b')
f = Function('f')
init_printing()
extraPolate = Eq(fg, 16*a/5 - 3*f0 + f1 - f2/5)
display(extraPolate)
eq1 = Eq(0, extraPolate.rhs - extraPolate.lhs)
display(eq1)
deriv = as_finite_diff(f(z).diff(z), [z-h/2, z+h/2])
deriv = Eq(b ,deriv.subs([(f(z-h/2), fg),\
(f(z+h/2), f0),\
]).together())
display(deriv)
eq2 = Eq(0, deriv.rhs - deriv.lhs)
display(eq2)
full = Eq(eq1.rhs, eq2.rhs)
display(full)
fullSolvedForFg = Eq(fg, solve(full, fg)[0].collect(symbols('f_0, f_1, f_2, h'), exact=True).simplify())
display(fullSolvedForFg)
print(latex(fullSolvedForFg))
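# Illustrative sanity check (test function assumed, not from the original
# notebook): for f(z) = z**2 on nodes at h/2, 3h/2, 5h/2 we have a = f(0) = 0
# and b = f'(0) = 0 exactly, so the ghost value should equal f(-h/2) = h**2/4.
check = fullSolvedForFg.rhs.subs({f0: (h/2)**2, f1: (3*h/2)**2,
                                  f2: (5*h/2)**2, a: 0, b: 0})
display(check.simplify())  # expected output: h**2/4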
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Extrapolation of $f(0) = a$ to the ghost point (see ghost4thOrder for the calculation) yields
Step2: Which can be rewritten to
Step3: Furthermore a second order FD of $\partial_z f\big|_0 = b$ reads
Step4: Which can be rewritten to
Step5: Thus
|
5,038
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
text_set = set(text)
vocab_to_int = dict((word, index) for index, word in enumerate(text_set))
int_to_vocab = dict((index, word) for index, word in enumerate(text_set))
return vocab_to_int, int_to_vocab
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
"""Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punct_tokens = {"." : "||period||",
"," : "||comma||",
"\"" : "||quotation_mark||",
";" : "||semicolon||",
"!" : "||exclamation_mark||",
"?" : "||question_mark||",
"(" : "||left_parentheses||",
")" : "||right_parentheses||",
"--" : "||dash||",
"\n" : "||return||"}
return punct_tokens
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_tokenize(token_lookup)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""DON'T MODIFY ANYTHING IN THIS CELL"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
"""Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return input, targets, learning_rate
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
"""Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initial state)
"""
# TODO: Implement Function
n_layers = 2
keep_prob = 0.6
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)  # use keep_prob defined above instead of a hard-coded 0.5
cell = tf.contrib.rnn.MultiRNNCell([drop] * n_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
"""Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embedded_input = tf.nn.embedding_lookup(embedding, input_data)
return embedded_input
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
"""Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
"""Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed_dim = 200
embed_input = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed_input)
logits = tf.contrib.layers.fully_connected(outputs,
vocab_size,
activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(mean=0.0, stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits, final_state
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
"""Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_elements = len(int_text)
n_batches = n_elements // (batch_size * seq_length)
x_data = np.array(int_text[: n_batches * batch_size * seq_length])
y_data = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
batches = np.array(list(zip(x_batches, y_batches)))
return batches
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_get_batches(get_batches)
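# Illustrative toy check (not part of the project template): with fifteen word
# ids, batch_size=2 and seq_length=3 we expect two batches of (input, target)
# pairs, each of shape (2, 3).
example_batches = get_batches(list(range(15)), 2, 3)
print(example_batches.shape)  # expected: (2, 2, 2, 3)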
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 1024
# Sequence Length
seq_length = 15
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 34
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
save_dir = './save'
"""DON'T MODIFY ANYTHING IN THIS CELL"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
"""Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
input_tensor = loaded_graph.get_tensor_by_name("input:0")
initial_state_tensor = loaded_graph.get_tensor_by_name("initial_state:0")
final_state_tensor = loaded_graph.get_tensor_by_name("final_state:0")
probs_tensor = loaded_graph.get_tensor_by_name("probs:0")
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
"""Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
predicted_word = np.random.choice(list(int_to_vocab.values()),p=probabilities)
return predicted_word
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
5,039
|
<ASSISTANT_TASK:>
Python Code:
# DenseVector / SparseVector come from pyspark.ml.linalg (import assumed here;
# the source notebook likely imported them in an earlier, omitted cell)
from pyspark.ml.linalg import DenseVector, SparseVector

dv = DenseVector([1.0, 0., 0., 0., 4.5, 0])
dv
sv = SparseVector(6, {0:1.0, 4:4.5})
sv
DenseVector(sv.toArray())
active_elements_dict = {index: value for index, value in enumerate(dv) if value != 0}
active_elements_dict
SparseVector(len(dv), active_elements_dict)
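# A SparseVector is defined by three components -- its size, the indices of
# its non-zero entries, and their values -- which can be inspected directly:
print(sv.size, sv.indices, sv.values)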
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Three components of a sparse vector
Step2: Convert sparse vector to dense vector
Step3: Convert dense vector to sparse vector
|
5,040
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(777)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10, 6)
noise_level = 0.1
def f(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
# Plot f(x) + contours
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
[fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])),
alpha=.2, fc="r", ec="None")
plt.legend()
plt.grid()
plt.show()
from skopt import gp_minimize
from skopt.acquisition import gaussian_lcb
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
# Note that we have fixed the hyperparameters of the kernel, because it is
# sufficient for this easy problem.
gp = GaussianProcessRegressor(kernel=Matern(length_scale_bounds="fixed"),
alpha=noise_level**2, random_state=0)
res = gp_minimize(f, # the function to minimize
[(-2.0, 2.0)], # the bounds on each dimension of x
x0=[0.], # the starting point
acq="LCB", # the acquisition function (optional)
base_estimator=gp, # a GP estimator (optional)
n_calls=15, # the number of evaluations of f including at x0
n_random_starts=0, # the number of random initialization points
random_state=777)
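# LCB (lower confidence bound) balances exploitation and exploration:
# LCB(x) = mu_GP(x) - kappa * sigma_GP(x), and the point minimizing LCB over
# the search space is chosen as the next query.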
"x^*=%.4f, f(x^*)=%.4f" % (res.x[0], res.fun)
for key, value in sorted(res.items()):
print(key, "=", value)
print()
from skopt.plots import plot_convergence
plot_convergence(res)
plt.rcParams["figure.figsize"] = (20, 20)
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = np.array([f(x_i, noise_level=0.0) for x_i in x])
# Plot first five iterations.
for n_iter in range(5):
gp = res.models[n_iter]
curr_x_iters = res.x_iters[: n_iter+1]
curr_func_vals = res.func_vals[: n_iter+1]
# Plot true function.
plt.subplot(5, 2, 2*n_iter+1)
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([fx - 1.9600 * noise_level, fx[::-1] + 1.9600 * noise_level]),
alpha=.2, fc="r", ec="None")
# Plot GP(x) + contours
y_pred, sigma = gp.predict(x, return_std=True)
plt.plot(x, y_pred, "g--", label=r"$\mu_{GP}(x)$")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.2, fc="g", ec="None")
# Plot sampled points
plt.plot(curr_x_iters, curr_func_vals,
"r.", markersize=15, label="Observations")
plt.title(r"$x_{%d} = %.4f, f(x_{%d}) = %.4f$" % (
n_iter, res.x_iters[n_iter][0], n_iter, res.func_vals[n_iter]))
plt.grid()
if n_iter == 0:
plt.legend(loc="best", prop={'size': 8}, numpoints=1)
plt.subplot(5, 2, 2*n_iter+2)
acq = gaussian_lcb(x, gp)
plt.plot(x, acq, "b", label="LCB(x)")
plt.fill_between(x.ravel(), -2.0, acq.ravel(), alpha=0.3, color='blue')
next_x = np.asarray(res.x_iters[n_iter + 1])
next_acq = gaussian_lcb(next_x.reshape(-1, 1), gp)
plt.plot(next_x[0], next_acq, "bo", markersize=10, label="Next query point")
plt.grid()
if n_iter == 0:
plt.legend(loc="best", prop={'size': 12}, numpoints=1)
plt.suptitle("Sequential model-based minimization using gp_minimize.", fontsize=20)
plt.show()
# Plot f(x) + contours
plt.rcParams["figure.figsize"] = (10, 6)
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
[fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])),
alpha=.2, fc="r", ec="None")
# Plot GP(x) + contours
gp = res.models[-1]
y_pred, sigma = gp.predict(x, return_std=True)
plt.plot(x, y_pred, "g--", label=r"$\mu_{GP}(x)$")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.2, fc="g", ec="None")
# Plot sampled points
plt.plot(res.x_iters,
res.func_vals,
"r.", markersize=15, label="Observations")
# Plot LCB(x) + next query point
acq = gaussian_lcb(x, gp)
plt.plot(x, gaussian_lcb(x, gp), "b", label="LCB(x)")
next_x = np.argmin(acq)
plt.plot([x[next_x]], [acq[next_x]], "b.", markersize=15, label="Next query point")
plt.title(r"$x^* = %.4f, f(x^*) = %.4f$" % (res.x[0], res.fun))
plt.legend(loc="best")
plt.grid()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem statement
Step2: Note. In skopt, functions $f$ are assumed to take as input a 1D vector $x$ represented as an array-like and to return a scalar $f(x)$.
Step3: Bayesian optimization based on gaussian process regression is implemented in skopt.gp_minimize and can be carried out as follows
Step4: Accordingly, the approximated minimum is found to be
Step5: For further inspection of the results, attributes of the res named tuple provide the following information
Step6: Together these attributes can be used to visually inspect the results of the minimization, such as the convergence trace or the acquisition function at the last iteration
Step7: Let us visually examine
Step8: Finally, as we increase the number of points, the GP model approaches the actual function. The final few points are cluttered around the minimum because the GP does not gain anything more by further exploration.
|
5,041
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print accuracy_score(outcomes, predictions)
vs.survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
    """
    Model with one feature:
        - Predict a passenger survived if they are female.
    """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == "female":
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print accuracy_score(outcomes, predictions)
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
    """
    Model with two features:
        - Predict a passenger survived if they are female.
        - Predict a passenger survived if they are male and younger than 10.
    """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == "female":
predictions.append(1)
else:
if passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print accuracy_score(outcomes, predictions)
vs.survival_stats(data, outcomes, 'Sex', ["Sex == 'male'", "Pclass == 1"])
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == "female":
if passenger['Pclass'] == 1:
predictions.append(1)
elif passenger['SibSp'] <= 2:
predictions.append(1)
elif passenger['Parch'] == 0:
predictions.append(1)
else:
predictions.append(0)
else:
if passenger['Age'] < 10:
predictions.append(1)
elif passenger['Pclass'] == 1 and passenger['Parch'] >= 2 and passenger['SibSp'] == 1:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
|
5,042
|
<ASSISTANT_TASK:>
Python Code:
###### 0123456789012345678901234567890123456789012345678901234567890
record = '....................100 .......513.25 ..........'
cost = int(record[20:32]) * float(record[40:48])
print(cost)
SHARES = slice(20,32)
PRICE = slice(40,48)
cost = int(record[SHARES]) * float(record[PRICE])
print(cost)
a = slice(10, 50, 2)
print(a.start)
print(a.stop)
print(a.step)
s = 'HelloWorld'
a = slice(5, 10, 2)
a.indices(len(s))
for i in range(*a.indices(len(s))):
print(s[i])
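# indices() also clamps out-of-range bounds to the sequence length, mirroring
# how s[5:100] would behave:
b = slice(5, 100)
print(b.indices(len(s)))  # (5, 10, 1)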
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Why use slice()
Step2: In addition, you can map a slice onto a sequence of a specific size by using its indices(size) method.
|
5,043
|
<ASSISTANT_TASK:>
Python Code:
# suposing the datset is downloaded here
# workdir = '/media/samuel/dataspikesorting/DataSpikeSortingHD2/kampff/polytrode Impedance/'
workdir = '/home/samuel/Documents/projet/DataSpikeSorting/kampff/polytrode Impedance/'
# Input file
filename = workdir + 'amplifier2017-02-02T17_18_46/amplifier2017-02-02T17_18_46.bin'
# dirname is where tridesclous will put eveything
dirname = workdir + 'tdc_amplifier2017-02-02T17_18_46'
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import tridesclous as tdc
from tridesclous import DataIO, CatalogueConstructor, Peeler
import os, shutil
if os.path.exists(dirname):
# remove if it already exists to restart from scratch
shutil.rmtree(dirname)
dataio = DataIO(dirname=dirname)
# feed DataIO with one file
dataio.set_data_source(type='RawData', filenames=[filename],
sample_rate=20000., dtype='uint16', total_channel=32,
bit_to_microVolt=0.195)
print(dataio)
# set the probe file
dataio.set_probe_file('kampff_polytrode_impedance_32.prb')
cc = CatalogueConstructor(dataio=dataio, chan_grp=0)
cc.set_preprocessor_params(chunksize=1024, common_ref_removal=False,
highpass_freq=250., lowpass_freq=9500.,
peak_sign='-', relative_threshold=5., peak_span=0.0001)
cc.estimate_signals_noise(duration=30.)
cc.run_signalprocessor(duration=280.)
cc.extract_some_waveforms(n_left=-15, n_right=20, mode='rand', nb_max=20000)
cc.clean_waveforms(alien_value_threshold=100.)
cc.extract_some_features(method='peak_max')
cc.find_clusters(method='sawchaincut', kde_bandwith=1.0)
print(cc)
dataio = DataIO(dirname=dirname)
tdc.summary_noise(dataio=dataio, chan_grp=0)
tdc.summary_catalogue_clusters(dataio=dataio, chan_grp=0, label=0)
cc.make_catalogue_for_peeler()
initial_catalogue = dataio.load_catalogue(chan_grp=0)
peeler = Peeler(dataio)
peeler.change_params(catalogue=initial_catalogue)
peeler.run(duration=None, progressbar=True)
tdc.summary_after_peeler_clusters(dataio, chan_grp=0, label=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: create a DataIO (and remove if already exists)
Step2: CatalogueConstructor
Step3: Noise measurement
Step4: Inspect waveform quality at catalogue level
Step5: construct catalogue
Step6: apply peeler
Step7: final inspection of cells
|
5,044
|
<ASSISTANT_TASK:>
Python Code:
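# The checks below assume flip(p) and nflips(n, p) were defined in an earlier
# cell of the source notebook. A minimal sketch of what they are expected to
# do (an assumption, not taken from the source):
#
#     import numpy as np
#     def flip(p):
#         return int(np.random.random() < p)   # 1 with probability p, else 0
#     def nflips(n, p):
#         return [flip(p) for _ in range(n)]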
try:
flip
except:
assert False
else:
assert True
import numpy as np

np.testing.assert_allclose(flip(1.0), 1.0, rtol=0.01)
np.testing.assert_allclose(flip(0.0), 0.0, rtol=0.01)
results = np.zeros(10000, dtype=int)  # np.int is deprecated; plain int works
for i in range(10000):
results[i] = flip(0.5)
np.testing.assert_allclose(results.mean(), 0.5, rtol = 0.1)
try:
nflips
except:
assert False
else:
assert True
import numpy as np
assert (np.array(nflips(1000, 1.0)) == 1).all()
assert (np.array(nflips(1000, 0.0)) == 0).all()
results = np.array(nflips(10000, 0.5))
np.testing.assert_allclose(results.mean(), 0.5, rtol = 0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: B
|
5,045
|
<ASSISTANT_TASK:>
Python Code:
import csv
import urllib2
def pr_min_max(ip_addr):
mintemp = {'Value': 1000.0}
maxtemp = {'Value': 0.0}
cr = csv.DictReader(urllib2.urlopen("http://%s:7645/data.csv" % ip_addr))
for row in cr:
temp = float(row['Value'])
var = row['Variable']
if var == 'tasmax' and temp > float(maxtemp['Value']):
maxtemp = row
if var == 'tasmin' and temp < float(mintemp['Value']):
mintemp = row
print "The minimum temperature is %.2f degrees C on %s at (%.3fW, %.3fN)" % \
(float(mintemp['Value'])-273.15, mintemp['Date'][:7], \
-float(mintemp['Longitude']), float(mintemp['Latitude']))
print "The maximum temperature is %.2f degrees C on %s at (%.3fW, %.3fN)" % \
(float(maxtemp['Value'])-273.15, maxtemp['Date'][:7], \
-float(maxtemp['Longitude']), float(maxtemp['Latitude']))
# Note: replace with the IP address of your data server
pr_min_max("192.168.99.100")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: pr_min_max
Step2: Analyze the Data
|
5,046
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
5,047
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-np.pi, np.pi, 256,endpoint=True)
y,z = np.sin(x), np.cos(x)
plt.plot(x,y)
plt.plot(x,z)
plt.show()
# Plot sine in blue and cosine in red, both with a continuous line of width 2.5
plt.plot(x, y, color="blue", linewidth=2.5, linestyle="-")
plt.plot(x, z, color="red", linewidth=2.5, linestyle="-")
plt.xlim(x.min()*1.1, x.max()*1.1)
plt.ylim(y.min()*1.1, y.max()*1.1)
plt.xticks( [-np.pi, -np.pi/2, 0, np.pi/2, np.pi])
plt.yticks([-1, 0, +1])
plt.grid()
plt.show()
# Plot sine in blue and cosine in red, both with a continuous line of width 2.5
plt.plot(x, y, color="blue", linewidth=2.5, linestyle="-", label="sine")
plt.plot(x, z, color="red", linewidth=2.5, linestyle="-", label="cosine")
plt.xlim(x.min()*1.1, x.max()*1.1)
plt.ylim(y.min()*1.1, y.max()*1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.yticks([-1, 0, +1],
[r'$-1$', r'$0$', r'$+1$'])
plt.grid()
plt.legend(loc='upper left', frameon=False)
plt.show()
# Plot sine in blue and cosine in red, both with a continuous line of width 2.5
plt.plot(x, y, color="blue", linewidth=2.5, linestyle="-", label="sine")
plt.plot(x, z, color="red", linewidth=2.5, linestyle="-", label="cosine")
plt.xlim(x.min()*1.1, x.max()*1.1)
plt.ylim(y.min()*1.1, y.max()*1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.yticks([-1, 0, +1],
[r'$-1$', r'$0$', r'$+1$'])
ax = plt.gca()  # get current axis
# Hide the top and right spines and move the bottom/left spines so that the
# axes cross at the data origin (0, 0)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data', 0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
plt.legend(loc='upper left', frameon=False)
plt.show()
import pandas as pd
planetData = pd.DataFrame({'Distance from sun [m]' :
pd.Series([5.79e10, 1.08e11, 1.496e11, 2.28e11, 7.78e11, 1.43e12,
2.87e12, 4.5e12, 5.91e12],
index=['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter',
'Saturn', 'Uranus', 'Neptune', 'Pluto']),
'Orbital period [s]' :
pd.Series([7.6e6, 1.94e7, 3.156e7, 5.94e7, 3.74e8, 9.35e8, 2.64e9,
5.22e9, 7.82e9],
index=['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter',
'Saturn', 'Uranus', 'Neptune', 'Pluto'])
})
planetData
fig = plt.figure()
ax = fig.add_subplot(111)
planetData.plot(x="Distance from sun [m]", y="Orbital period [s]", ax=ax, logy=True, style='o')
ax.annotate('Earth', (1.e11+1.5e11, 31560000), xycoords='data')
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Improving the range of the plot
Step2: Intuitive Mapping from Data to Visualization
|
5,048
|
<ASSISTANT_TASK:>
Python Code:
!pip uninstall systemml --y
!pip install --user https://repository.apache.org/content/groups/snapshots/org/apache/systemml/systemml/1.0.0-SNAPSHOT/systemml-1.0.0-20171201.070207-23-python.tar.gz
!pip show systemml
from systemml import MLContext, dml, dmlFromResource
ml = MLContext(sc)
print "Spark Version:", sc.version
print "SystemML Version:", ml.version()
print "SystemML Built-Time:", ml.buildTime()
ml.execute(dml(s = 'Hello World!').output("s")).get("s")
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
plt.switch_backend('agg')
script = """
X = rand(rows=$nr, cols=1000, sparsity=0.5)
A = t(X) %*% X
s = sum(A)
"""
prog = dml(script).input('$nr', 1e5).output('s')
s = ml.execute(prog).get('s')
print (s)
%matplotlib inline
diabetes = datasets.load_diabetes()
diabetes_X = diabetes.data[:, np.newaxis, 2]
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
diabetes_y_train = diabetes.target[:-20].reshape(-1,1)
diabetes_y_test = diabetes.target[-20:].reshape(-1,1)
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
diabetes.data.shape
script = """
# add constant feature to X to model intercept
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
A = t(X) %*% X
b = t(X) %*% y
w = solve(A, b)
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
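# solve(A, b) above computes the ordinary-least-squares weights from the
# normal equations (X'X) w = X'y, which is exact whenever X'X is invertible.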
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
w, bias = ml.execute(prog).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='blue', linestyle ='dotted')
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
max_iter = 100
w = matrix(0, rows=ncol(X), cols=1)
for(i in 1:max_iter){
    XtX = t(X) %*% X
    dw = XtX %*% w - t(X) %*% y
    alpha = -(t(dw) %*% dw) / (t(dw) %*% XtX %*% dw)
    w = w + dw*alpha
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
w, bias = ml.execute(prog).get('w', 'bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
m = ncol(X); i = 1;
max_iter = 20;
w = matrix (0, rows = m, cols = 1); # initialize weights to 0
dw = - t(X) %*% y; p = - dw;        # dw = (X'X)w - (X'y)
norm_r2 = sum (dw ^ 2);
for(i in 1:max_iter) {
    q = t(X) %*% (X %*% p)
    alpha = norm_r2 / sum (p * q);  # Minimizes f(w - alpha*r)
    w = w + alpha * p;              # update weights
    dw = dw + alpha * q;
    old_norm_r2 = norm_r2; norm_r2 = sum (dw ^ 2);
    p = -dw + (norm_r2 / old_norm_r2) * p;  # next direction - conjugacy to previous direction
    i = i + 1;
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
w, bias = ml.execute(prog).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
prog = dmlFromResource('scripts/algorithms/LinearRegDS.dml').input(X=diabetes_X_train, y=diabetes_y_train).input('$icpt',1.0).output('beta_out')
w = ml.execute(prog).get('beta_out')
w = w.toNumPy()
bias=w[1]
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w[0]*diabetes_X_test)+bias, color='red', linestyle ='dashed')
from pyspark.sql import SQLContext
from systemml.mllearn import LinearRegression
sqlCtx = SQLContext(sc)
regr = LinearRegression(sqlCtx)
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# Use the trained model to perform prediction
predictions = regr.predict(diabetes_X_test)
%matplotlib inline
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, predictions, color='black')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Import SystemML API
Step3: Import numpy, sklearn, and define some helper functions
Step5: Example 1
Step6: Load diabetes dataset from scikit-learn
Step8: Example 2
Step10: Algorithm 2
Step12: Algorithm 3
Step13: Example 3
Step14: Example 4
|
5,049
|
<ASSISTANT_TASK:>
Python Code:
ls -1
! ls -1 | wc -l
! gunzip --help
! gunzip -f *gz
3+3
asdf = 'beyonce'
asdf
asdf + ' runs the world'
ls
! head GSM1657872_1772078217.C04.csv
import glob
import pandas as pd
pd.read_table('GSM1657872_1772078217.C04.csv')
pd.read_table('GSM1657872_1772078217.C04.csv', index_col=0)
dataframe = pd.read_table('GSM1657872_1772078217.C04.csv', index_col=0, header=None)
dataframe
series = pd.read_table('GSM1657872_1772078217.C04.csv', index_col=0, header=None, squeeze=True)
series
dataframe.shape
series.shape
series.name
filename = 'GSM1657872_1772078217.C04.csv'
filename
filename.split('.')
filename.split('.csv')
filename.split('.csv')[0]
cells = []
for filename in glob.iglob('*.csv'):
cell = pd.read_table(filename, index_col=0, squeeze=True, header=None)
name = filename.split('_')[0]
cell.name = name
cells.append(cell)
expression = pd.concat(cells, axis=1)
expression.index = expression.index.map(lambda x: x.strip(' '))
print(expression.shape)
expression.head()
! gunzip /Users/kirkreardon/Downloads/*_series_matrix.txt.gz
! head /Users/kirkreardon/Downloads/*_series_matrix.txt
! head -n 20 /Users/kirkreardon/Downloads/*_series_matrix.txt
"Whooo!!!!!!!!!".strip("!")
"Whooo!!!!!!!!!".strip("o")
metadata1 = pd.read_table('/Users/kirkreardon/Downloads/GSE67835-GPL15520_series_matrix.txt',
skiprows=37, header=None, index_col=0)
metadata1.index = metadata1.index.map(lambda x: x.strip('!'))
# Transpose so each row is a cell
metadata1 = metadata1.T
metadata1.head()
metadata2 = pd.read_table('/Users/kirkreardon/Downloads/GSE67835-GPL18573_series_matrix.txt',
skiprows=37, header=None, index_col=0)
metadata2.index = metadata2.index.map(lambda x: x.strip('!'))
# transpose
metadata2 = metadata2.T
metadata2.head()
dataframes = [metadata1, metadata2]
metadata = pd.concat(dataframes)
print(metadata.shape)
metadata.head()
metadata = metadata.set_index('Sample_geo_accession')
metadata.head()
mkdir -p ~/projects/darmanis2015/processed_data
expression.to_csv('~/projects/darmanis2015/processed_data/expression.csv')
metadata.to_csv('~/projects/darmanis2015/processed_data/metadata.csv')
expression.GSM1657884
bad_rows = ['no_feature', 'ambiguous', 'alignment_not_unique']
good_genes = expression.index[~expression.index.isin(bad_rows)]
good_genes
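# ~isin(...) keeps every row whose index is NOT one of the HTSeq summary rows
# listed in bad_rows, leaving only actual gene names.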
expression.shape
expression_actually_genes = expression.loc[good_genes]
expression_actually_genes.shape
expression_actually_genes.tail()
expression_actually_genes.to_csv("/Users/kirkreardon/projects/darmanis2015/processed_data/expression_actually_genes.csv")
expression_actually_genes.dtypes
expression_actually_genes.tail().index
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: oof, this is in pure bytes and I can't convert to multiples of 1024 easily in my head (1024 bytes = 1 kilobyte, 1024 kilobytes = 1 megabtye, etc - the 1000/byte is a lie that the hard drive companies use!). So let's use the -h flag, which tells the computer to do th conversion for us. We can combine multiple flags with the same dash, so
Step2: See, "GSE41265_allGenesTPM.txt.gz" is there!
Step3: Let's "ls" again to see what files have changed
Step4: Read metadata
|
5,050
|
<ASSISTANT_TASK:>
Python Code:
def check_length(n):
    # Count the number of bits in the binary representation of n
    ans = 0
    while n:
        n = n >> 1
        ans += 1
    return ans

def check_ith_bit(n, i):
    # True if the i-th bit of n (1-indexed from the least significant bit) is set
    if n & (1 << (i - 1)):
        return True
    else:
        return False

def no_of_flips(n):
    # Minimum number of bit flips so that the binary representation of n
    # reads the same forwards and backwards (i.e. is a palindrome)
    ln = check_length(n)
    ans = 0
    right = 1
    left = ln
    while right < left:
        if check_ith_bit(n, right) != check_ith_bit(n, left):
            ans += 1
        left -= 1
        right += 1
    return ans

n = 12
print(no_of_flips(n))
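# Example: n = 12 is 0b1100; bit 1 vs bit 4 and bit 2 vs bit 3 both differ,
# so no_of_flips(12) returns 2.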
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
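Step1: Count the minimum number of bit flips needed to make the binary representation of n a palindrome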
|
5,051
|
<ASSISTANT_TASK:>
Python Code:
wadiz_df_original = pd.read_csv('wadiz_df_0329_1.csv', index_col=0)
user_comment = pd.read_csv('user_data_all_0329.csv', index_col=0)
provider_comment = pd.read_csv('provider_data_all_0329.csv', index_col=0)
wadiz_df = pd.read_csv('wadiz_provider_analysis_0329.csv', index_col=0)
provider_comment_grammar = pd.read_csv('comment_analysis.csv', index_col=0)
# Compare sample counts across the DataFrames
print('Original DataFrame :', len(wadiz_df_original))
print('User comment :', len(user_comment['project_id'].value_counts()))
print('Provider comment :', len(provider_comment['project_id'].value_counts()))
print('Provider comment grammar check:', len(provider_comment_grammar['project_id'].value_counts()))
print('Revised DataFrame :', len(wadiz_df))
# Drop rows where the grammar level is null
wadiz_df = wadiz_df[wadiz_df['provider_grammar_level'].notnull()]
# Convert date_duration to an integer number of days
wadiz_df['date_duration'] = wadiz_df['date_duration'].apply(lambda x: int(x[:-24]))
figure = plt.figure(figsize=(10,8));
sns.kdeplot(wadiz_df['funding_rate']);
plt.xlim(-3, 10);
plt.xticks(fontsize=15);
plt.yticks(fontsize=15);
plt.legend(fontsize = 15);
plt.xlabel('funding_rate', fontsize=15);
plt.ylabel('distribution', fontsize = 15);
# Test whether each category's distribution differs from the overall funding-rate distribution
# K-S : Kolmogorov-Smirnov test
for i in wadiz_df['category'].unique()[:-1]:
all_data = wadiz_df['funding_rate']
category_data = wadiz_df.loc[wadiz_df['category'] == i]['funding_rate']
print('[all_sample vs {category_i}]'.format(category_i = i)),
print(' K-S statistic :', round(sp.stats.ks_2samp(all_data, category_data)[0], 4))
print(' p-value :', round(sp.stats.ks_2samp(all_data, category_data)[1], 4))
print(wadiz_df['area'].value_counts())
# Analyze only Seoul/Kyungki, which account for most of the samples
plt.figure(figsize=(10,8));
sns.kdeplot(wadiz_df.loc[wadiz_df['area'] == 'seoul']['funding_rate'], label = 'seoul');
sns.kdeplot(wadiz_df.loc[wadiz_df['area'] == 'kyungki']['funding_rate'], label = 'kyungki', linestyle = '--');
plt.xlim(-2, 6);
plt.legend(fontsize = 15);
plt.xlabel('funding_rate', fontsize=15);
plt.ylabel('distribution', fontsize = 15);
# Ks_2sampResult : Kolmogorov-Smirnov test
# Ttest_indResult : 2 sample T-test
seoul_dist = wadiz_df.loc[wadiz_df['area'] == 'seoul']['funding_rate']
kyungki_dist = wadiz_df.loc[wadiz_df['area'] == 'kyungki']['funding_rate']
print('seoul vs kyungki :'),
print(sp.stats.ks_2samp(seoul_dist, kyungki_dist))
print(sp.stats.ttest_ind(seoul_dist, kyungki_dist))
figure = plt.figure(figsize=(10,8))
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 1]['date_duration'], label = 'success')
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 0]['date_duration'], label = 'fail', c='r', linestyle='--')
plt.xlim(-50, 100);
plt.xlabel('date_distribution', fontsize=15);
plt.ylabel('distribution', fontsize = 15);
plt.legend(fontsize = 15);
# Ks_2sampResult : Kolmogorov-Smirnov test
# Ttest_indResult : 2 sample T-test
success_duration = wadiz_df.loc[wadiz_df['success'] == 1]['date_duration']
fail_duration = wadiz_df.loc[wadiz_df['success'] == 0]['date_duration']
print('[success_duration vs fail_duration]'),
print(sp.stats.ks_2samp(success_duration, fail_duration)),
print(sp.stats.ttest_ind(success_duration, fail_duration))
figure = plt.figure(figsize=(10,8))
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 1]['target'], label = 'success');
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 0]['target'], label = 'fail');
#plt.xlim(-3, 10)
plt.xticks(fontsize=15);
plt.yticks(fontsize=15);
plt.legend(fontsize = 15);
plt.xlabel('duration', fontsize=15);
plt.ylabel('distribution', fontsize = 15);
success_target = wadiz_df.loc[wadiz_df['success'] == 1]['target']
fail_target = wadiz_df.loc[wadiz_df['success'] == 0]['target']
# Normality test (target funding of successful projects)
print('Shapiro test statistics :', sp.stats.shapiro(success_target)[0]),
print('Shapiro test p-value :', sp.stats.shapiro(success_target)[1])
# Normality test (target funding of failed projects)
print('Shapiro test statistics :', sp.stats.shapiro(fail_target)[0]),
print('Shapiro test p-value :', sp.stats.shapiro(fail_target)[1])
# Test distributions and means
# Ks_2sampResult : Kolmogorov-Smirnov test
# MannwhitneyuResult : Mann-Whitney U test
print(sp.stats.ks_2samp(success_target, fail_target)),
print(sp.stats.mannwhitneyu(success_target, fail_target))
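# Mann-Whitney U is used in place of a t-test because the Shapiro tests above
# rejected normality for both samples.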
print('Mean target of successful projects :', np.mean(success_target),
      'Std :', np.std(success_target))
print('Mean target of failed projects :', np.mean(fail_target),
      'Std :', np.std(fail_target))
print('<Number of projects per month>')
print(wadiz_df['month'].value_counts())
plt.figure(figsize=(10,8))
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] ==1]['month'], label = 'success')
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] ==0]['month'], label = 'fail')
plt.xticks(range(1, 12), fontsize=15)
plt.yticks(fontsize=15)
plt.xlabel('month', fontsize=15)
plt.ylabel('distribution', fontsize = 15)
plt.legend(fontsize = 15)
print('<Success/failure distribution by month>')
# Ks_2sampResult : Kolmogorov-Smirnov test
# Ttest_indResult : 2 sample T-test
success_month = wadiz_df.loc[wadiz_df['success'] ==1]['month']
fail_month = wadiz_df.loc[wadiz_df['success'] ==0]['month']
print('[success_month vs fail_month]'),
print('')
print(sp.stats.ks_2samp(success_month, fail_month))
print(sp.stats.ttest_ind(success_month, fail_month))
plt.figure(figsize=(10,8))
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 1]['provider_grammar_level'], label = 'success')
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 0]['provider_grammar_level'], label = 'fail')
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlabel('grammar_level', fontsize=15)
plt.ylabel('distribution', fontsize = 15)
plt.legend(fontsize = 15)
wadiz_df['log_grammar_level'] = wadiz_df['provider_grammar_level'].apply(lambda x: np.log(x))
# log scailing
plt.figure(figsize=(10,8))
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 1]['log_grammar_level'], label = 'success')
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 0]['log_grammar_level'], label = 'fail')
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlabel('grammar_level', fontsize=15)
plt.ylabel('distribution', fontsize = 15)
plt.legend(fontsize = 15)
success_grammar = wadiz_df.loc[wadiz_df['success'] == 1]['provider_grammar_level']
fail_grammar = wadiz_df.loc[wadiz_df['success'] == 0]['provider_grammar_level']
success_log_grammar = wadiz_df.loc[wadiz_df['success'] == 1]['log_grammar_level']
fail_log_grammar = wadiz_df.loc[wadiz_df['success'] == 0]['log_grammar_level']
# Normality test (grammar level of successful projects)
print('Shapiro test statistics :', sp.stats.shapiro(success_grammar)[0]),
print('Shapiro test p-value :', sp.stats.shapiro(success_grammar)[1])
# Normality test (grammar level of failed projects)
print('Shapiro test statistics :', sp.stats.shapiro(fail_grammar)[0]),
print('Shapiro test p-value :', sp.stats.shapiro(fail_grammar)[1])
# Ks_2sampResult : Kolmogorov-Smirnov test
# MannwhitneyuResult : Mann-Whitney U test
print(sp.stats.ks_2samp(success_grammar, fail_grammar)),
print(sp.stats.mannwhitneyu(success_grammar, fail_grammar))
# Normality test (log grammar level of successful projects)
print('Shapiro test statistics :', sp.stats.shapiro(success_log_grammar)[0]),
print('Shapiro test p-value :', sp.stats.shapiro(success_log_grammar)[1])
# Normality test (log grammar level of failed projects)
print('Shapiro test statistics :', sp.stats.shapiro(fail_log_grammar)[0]),
print('Shapiro test p-value :', sp.stats.shapiro(fail_log_grammar)[1])
# Ks_2sampResult : Kolmogorov-Smirnov test
# MannwhitneyuResult : Mann-Whitney U test
print(sp.stats.ks_2samp(success_log_grammar, fail_log_grammar)),
print(sp.stats.mannwhitneyu(success_log_grammar, fail_log_grammar))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Final analysis sample
Step2: Kolmogorov-Smirnov test
Step3: Since every test-statistic p-value is above 0.05, the null hypothesis
Step4: Sample sizes per region are too small to test distributional differences
Step5: No mean difference between regions (t-test p-value > 0.05) and no distributional difference (K-S test p-value > 0.05)
Step6: No duration difference (mean or distribution) between successful and failed projects
Step7: Neither sample is normally distributed (p-value < 0.05, null hypothesis
Step8: The tests show the two distributions are identical (p-value > 0.05) while the means differ (p-value < 0.05)
Step9: 5. Month Distribution
Step10: The K-S test and t-test show no difference between the two distributions
Step11: Normality does not hold when grammar_level is analyzed without transformation.
|
5,052
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
from google.cloud import bigquery
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT = {PROJECT}
%env BUCKET = {BUCKET}
%env REGION = {REGION}
%%bash
gcloud config set project $PROJECT
gcloud config set ai_platform/region $REGION
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
regex = ".*://(.[^/]+)/"
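# The regex captures the host of the URL: everything between '://' and the
# first following '/', e.g. 'www.nytimes.com' from 'http://www.nytimes.com/...'.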
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(
regex
)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(
sub_query=sub_query
)
print(query)
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
print(f"The full dataset contains {len(title_dataset)} titles")
title_dataset.source.value_counts()
DATADIR = "./data/"
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = "titles_full.csv"
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
SAMPLE_DATASET_NAME = "titles_sample.csv"
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
sample_title_dataset.head()
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
from tensorflow_hub import KerasLayer
print(tf.__version__)
%matplotlib inline
MODEL_DIR = f"gs://{BUCKET}/text_models"
!ls $DATADIR
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATADIR, DATASET_NAME)
COLUMNS = ["title", "source"]
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
titles_df.source.value_counts()
CLASSES = {"github": 0, "nytimes": 1, "techcrunch": 2}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
N_TRAIN = int(len(titles_df) * 0.95)
titles_train, sources_train = (
titles_df.title[:N_TRAIN],
titles_df.source[:N_TRAIN],
)
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:],
titles_df.source[N_TRAIN:],
)
sources_train.value_counts()
sources_valid.value_counts()
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
# TODO 1
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"
nnlm_module = KerasLayer(
NNLM, output_shape=[50], input_shape=[], dtype=tf.string, trainable=True
)
# TODO 1
nnlm_module(tf.constant(["The dog is happy to see people in the street."]))
# TODO 1
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(
SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True
)
# TODO 1
swivel_module(tf.constant(["The dog is happy to see people in the street."]))
def build_model(hub_module, name):
model = Sequential(
[
hub_module, # TODO 2
Dense(16, activation="relu"),
Dense(N_CLASSES, activation="softmax"),
],
name=name,
)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
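# The same builder is reused for both TF-Hub embeddings below (NNLM and
# Swivel); only the pre-trained hub layer differs.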
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
X_train, Y_train = train_data
tf.random.set_seed(33)
model_dir = os.path.join(MODEL_DIR, model.name)
if tf.io.gfile.exists(model_dir):
tf.io.gfile.rmtree(model_dir)
history = model.fit(
X_train,
Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping(patience=1), TensorBoard(model_dir)],
)
return history
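# Training stops as soon as the validation loss worsens (EarlyStopping with
# patience=1) and logs go to MODEL_DIR/<model.name> for TensorBoard.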
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
nnlm_model = build_model(nnlm_module, "nnlm")
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)
history = nnlm_history
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
swivel_model = build_model(swivel_module, name="swivel")
swivel_history = train_and_evaluate(data, val_data, swivel_model)
history = swivel_history
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
!echo tensorboard --logdir $MODEL_DIR --port 6006
OUTPUT_DIR = "./savedmodels"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, "swivel")
os.environ["EXPORT_PATH"] = EXPORT_PATH
shutil.rmtree(EXPORT_PATH, ignore_errors=True)
swivel_model.save(EXPORT_PATH)  # equivalent to tf.keras.models.save_model(swivel_model, EXPORT_PATH)
%%bash
# TODO 5
MODEL_NAME=title_model
VERSION_NAME=swivel
if [[ $(gcloud ai-platform models list --format='value(name)' | grep ^$MODEL_NAME$) ]]; then
echo "$MODEL_NAME already exists"
else
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --region=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep ^$VERSION_NAME$) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create \
--model=$MODEL_NAME $VERSION_NAME \
--framework=tensorflow \
--python-version=3.7 \
--runtime-version=2.1 \
--origin=$EXPORT_PATH \
--staging-bucket=gs://$BUCKET \
--machine-type n1-standard-4 \
--region=$REGION
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
%%writefile input.json
{"keras_layer_1_input": "hello"}
!gcloud ai-platform predict \
--model title_model \
--json-instances input.json \
--version swivel \
--region=$REGION
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
Step9: Let's make sure we have roughly the same number of labels for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample dataset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
Step15: Let's look again at the number of examples per label to make sure we have a well-balanced dataset
Step16: Preparing the labels
Step17: Preparing the train/test splits
Step18: To be on the safe side, we verify that the train and test splits
Step19: Now let's create the features and labels we will feed our models with
Step20: NNLM Model
Step21: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence
Step22: Swivel Model
Step23: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence
Step24: Building the models
Step25: Let's also wrap the training code into a train_and_evaluate function that
Step26: Training NNLM
Step27: Training Swivel
Step28: Comparing the models
Step29: Deploying the model
Step30: Then we can deploy the model using the gcloud CLI as before
Step31: Before we try our deployed model, let's inspect its signature to know what to send to the deployed API
Step32: Let's go ahead and hit our model
|
5,053
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from pprint import pprint
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as mc
import spacepy.toolbox as tb
import spacepy.plot as spp
import tqdm
from scipy import stats
import seaborn as sns
sns.set(font_scale=1.5)
# matplotlib.pyplot.rc('figure', figsize=(10,10))
# matplotlib.pyplot.rc('lines', lw=3)
# matplotlib.pyplot.rc('font', size=20)
%matplotlib inline
np.random.seed(8675309)
nT = 400
cts = np.random.poisson(20, size=nT)
edata = []
for i in range(nT):
edata.extend(i + np.sort(np.random.uniform(low=0, high=1, size=cts[i])))
edata = np.asarray(edata)
edata.shape
plt.plot(edata, np.arange(len(edata)))
plt.xlabel('Time of event')
plt.ylabel('Event number')
plt.title("Modeled underlying data")
with mc.Model() as model:
lam = mc.Uniform('lambda', 0, 1000) # this is the exponential parameter
meas = mc.Exponential('meas', lam, observed=np.diff(edata))
lam2 = mc.Uniform('lam2', 0, 1000)
poi = mc.Poisson('Poisson', lam2, observed=cts)
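    # Two independent parameterizations of the same process: 'lambda' is the
    # rate of the exponential inter-event times, 'lam2' is the Poisson rate of
    # counts per unit time; both should recover the true rate of 20.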
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, lines={'lambda':20, 'lam2':20})
mc.summary(trace)
fig, ax = plt.subplots(ncols=1, nrows=2, sharex=True)
sns.distplot(trace['lambda'], ax=ax[0])
sns.distplot(trace['lam2'], ax=ax[1])
plt.xlabel('Lambda')
ax[0].set_ylabel('Exp')
ax[1].set_ylabel('Poisson')
ax[0].axvline(20, c='r', lw=1)
ax[1].axvline(20, c='r', lw=1)
plt.tight_layout()
ppc = mc.sample_ppc(trace, samples=500, model=model, size=100)
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['Poisson']], kde=False, ax=ax)
ax.axvline(cts.mean())
ax.set(title='Posterior predictive of the mean (Poisson)', xlabel='mean(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.var() for n in ppc['Poisson']], kde=False, ax=ax)
ax.axvline(cts.var())
ax.set(title='Posterior predictive of the variance (Poisson)', xlabel='var(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['meas']], kde=False, ax=ax)
ax.axvline(np.diff(edata).mean())
ax.set(title='Posterior predictive of the mean (Exponential)', xlabel='mean(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.var() for n in ppc['meas']], kde=False, ax=ax)
ax.axvline(np.diff(edata).var())
ax.set(title='Posterior predictive of the variance (Exponential)', xlabel='var(x)', ylabel='Frequency');
deadtime1 = 0.005 # small dead time
deadtime2 = 0.1 # large dead time
edata_td1 = []
edata_td1.append(edata[0])
edata_td2 = []
edata_td2.append(edata[0])
for ii, v in enumerate(edata[1:], 1): # stop one shy to not run over the end, start enumerate at 1
if v - edata_td1[-1] >= deadtime1:
edata_td1.append(v)
if v - edata_td2[-1] >= deadtime2:
edata_td2.append(v)
edata_td1 = np.asarray(edata_td1)
edata_td2 = np.asarray(edata_td2)
plt.figure(figsize=(8,6))
plt.plot(edata, np.arange(len(edata)), label='Real data')
plt.plot(edata_td1, np.arange(len(edata_td1)), label='Small dead time')
plt.plot(edata_td2, np.arange(len(edata_td2)), label='Large dead time')
plt.xlabel('Time of event')
plt.ylabel('Event number')
plt.title("Modeled underlying data")
plt.legend(bbox_to_anchor=(1, 1))
plt.figure(figsize=(8,6))
h1, b1 = np.histogram(edata, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b1), h1, label='Real data', c='k')
h2, b2 = np.histogram(edata_td1, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b2), h2, label='Small dead time', c='r')
h3, b3 = np.histogram(edata_td2, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b3), h3, label='Large dead time')
plt.legend(bbox_to_anchor=(1, 1))
plt.xlim((0,400))
plt.ylabel('Rate')
plt.xlabel('Time')
# assume R1 is Poisson
with mc.Model() as model:
tau = deadtime1
obsRate = mc.Uniform('obsRate', 0, 1000, shape=1)
obsData = mc.Poisson('obsData', obsRate, observed=h2[:400], shape=1)
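    # Dead-time correction from the text: n1 = R1 / (1 - R1 * tau) recovers
    # the true rate from the observed, dead-time-suppressed rate.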
realRate = mc.Deterministic('realRate', obsData/(1-obsData*tau))
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, varnames=('obsRate', ))
mc.summary(trace, varnames=('obsRate', ))
sns.distplot(trace['realRate'].mean(axis=0), bins=10)
plt.xlabel('realRate')
plt.ylabel('Density')
dt1_bounds = np.percentile(trace['realRate'], (2.5, 50, 97.5))
print('The estimate of the real rate given that we know the dead time is:', dt1_bounds,
(dt1_bounds[2]-dt1_bounds[0])/dt1_bounds[1])
dat_bounds = np.percentile(h1[:400], (2.5, 50, 97.5))
print("This compares with if we measured without dead time as:", dat_bounds,
(dat_bounds[2]-dat_bounds[0])/dat_bounds[1])
# assume R1 is Poisson
with mc.Model() as model:
tau = deadtime2
obsRate = mc.Uniform('obsRate', 0, 1000)
obsData = mc.Poisson('obsData', obsRate, observed=h3[:400])
realRate = mc.Deterministic('realRate', obsData/(1-obsData*tau))
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, varnames=('obsRate', ))
mc.summary(trace, varnames=('obsRate', ))
sns.distplot(trace['realRate'].mean(axis=0))
plt.xlabel('realRate')
plt.ylabel('Density')
dt2_bounds = np.percentile(trace['realRate'], (2.5, 50, 97.5))
print('The estimate of the real rate given that we know the dead time is:', dt1_bounds,
(dt2_bounds[2]-dt2_bounds[0])/dt2_bounds[1])
dat_bounds = np.percentile(h1[:400], (2.5, 50, 97.5))
print("This compares with if we measured without dead time as:", dat_bounds,
(dat_bounds[2]-dat_bounds[0])/dat_bounds[1])
real = pd.Series(edata)
td1 = pd.Series(edata_td1)
td2 = pd.Series(edata_td2)
real.to_csv('no_deadtime_times.csv')
td1.to_csv('small_deadtime_times.csv')
td2.to_csv('large_deadtime_times.csv')
real = pd.Series(h1[h1>0])
td1 = pd.Series(h2[h2>0])
td2 = pd.Series(h3[h3>0])
real.to_csv('no_deadtime_rates.csv')
td1.to_csv('small_deadtime_rates.csv')
td2.to_csv('large_deadtime_rates.csv')
with mc.Model() as model:
BoundedExp = mc.Bound(mc.Exponential, lower=deadtime2, upper=None)
# we observe the following time between counts
lam = mc.Uniform('lam', 0, 1000)
time_between = BoundedExp('tb_ob', lam, observed=np.diff(edata_td2))
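    # The dead time censors every inter-event gap shorter than deadtime2, so
    # the observed gaps follow an exponential truncated below at deadtime2.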
start = mc.find_MAP()
trace = mc.sample(10000, njobs=8, start=start)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate Poisson process data and generate exponential
Step2: This is consistent with a Poisson of parameter 20! But there seems to be an under-prediction going on; wonder why?
Step3: We are reproducing the data well.
Step4: And plot the rates per unit time
Step5: Can we use $n_1 = \frac{R_1}{1-R_1 \tau}$ to derive the relation and spread in the dist of R?
Step6: Use the large dead time
Step7: But this is totally broken!!!
Step8: Work on the random thoughts
|
5,054
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-2', 'aerosol')
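# Each cell below documents one model property: DOC.set_id(...) selects the
# property and DOC.set_value(...) records its value, restricted to the valid
# choices listed in that cell where applicable.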
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
5,055
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib
matplotlib.use('TkAgg')
from utils import *
# Had to run 'jupyter nbextension enable --py --sys-prefix widgetsnbextension'
fig, ax = plt.subplots()
environment1 = ArmBall()
def movement(m1=0., m2=0., m3=0., m4=0., m5=0., m6=0., m7=0., m8=0., m9=0.):
environment1.update(array([m1, m2, m3, m4, m5, m6, m7, m8, m9]))
display_movement(fig, ax, environment1)
interact_manual(movement, m1=(-1., 1., 0.01), m2=(-1., 1., 0.01), m3=(-1., 1., 0.01),
m4=(-1., 1., 0.01), m5=(-1., 1., 0.01), m6=(-1., 1., 0.01),
m7=(-1., 1., 0.01), m8=(-1., 1., 0.01), m9=(-1., 1., 0.01))
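# interact_manual renders one slider per motor parameter plus a button that
# runs the movement with the chosen values.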
# Parameters to change:
n_joints = 7 # Number of joints
n_dmp_basis = 3 # Number of basis per joint
# Definition of the environment with number of joints and DMP basis per joint
environment2 = ArmBall(n_joints, n_dmp_basis)
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-1., 1.))
ax.set_ylim((-1., 1.))
# Plot a random position of the arm
environment2.env.update(environment2.env.random_motors()[0])
environment2.env.plot(ax, 0)
# Parameters to change:
iterations = 2000 # Number of iterations
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-1., 1.))
ax.set_ylim((-1., 1.))
ax.plot(0., 0., 'sk', ms=8)
explored_s_mb = []
for _ in range(iterations):
m = environment2.random_motor() # sample a random motor command m
s = environment2.update(m) # observe the sensory effect s=(x, y): the last position of the ball
if abs(s[-1] - 0.6) > 0.001: # if the ball has been moved by the arm, we plot its trajectory and last position
explored_s_mb += [s] # store s for later evaluation
ax.plot(environment2.s_traj[:,0], environment2.s_traj[:,1], lw=2, alpha=0.3)
ax.add_patch(Circle(tuple(environment2.s_traj[-1,:]), 0.1, fc="b", alpha=0.2))
plt.xticks(linspace(-1., 1., grid_size + 1))
plt.yticks(linspace(-1., 1., grid_size + 1))
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.grid()
print "Number of ball catch:", len(explored_s_mb)
print "Measure of exploration:", int(compute_explo(array(explored_s_mb), array([-1., -1.]), array([1., 1.]), gs=grid_size))
# Parameters to change:
iterations = 2000 # Number of iterations
n_joints = 7 # Number of joints
n_dmp_basis = 3 # Number of basis per joint
goal_size = 10. # Size of the 2D goal space
sigma_explo_ratio = 0.05 # Exploration noise (standard deviation)
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-2., 2.))
ax.set_ylim((-2., 2.))
ax.plot(0., 0., 'sk', ms=8)
goals_s_gb = []
explored_s_gb = []
# Definition of the environment
environment2 = ArmBall(n_joints, n_dmp_basis, goal_size)
# Initialization of the sensorimotor model
sm_model = SensorimotorModel.from_configuration(environment2.conf, 'nearest_neighbor', 'default')
# Bootstrap sensorimotor model: 1 random motor babbling
m = environment2.random_motor()
s = environment2.update(m)
sm_model.update(m, s)
for _ in range(iterations):
if (not sm_model.bootstrapped_s) or (random() < 0.2):
# Do random motor babbling while the ball has not been grasped, and then in 20% of the trials
m = environment2.random_motor()
else:
# Sample a random goal in the sensory space:
s_goal = rand_bounds(environment2.conf.s_bounds)[0]
# Infer a motor command to reach that goal using the Nearest Neighbor algorithm:
m = sm_model.model.infer_order(tuple(s_goal))
# Add exploration noise (of variance sigma) to experiment new motor parameters:
m = normal(m, sigma_explo_ratio)
# Plot the goal in red:
ax.plot([s_goal[0]], [s_goal[1]], 'or', alpha=0.1)
goals_s_gb += [s_goal]
s = environment2.update(m) # observe the sensory effect s=(x, y): the last position of the ball
sm_model.update(m, s) # update sensorimotor model
if abs(s[-1] - 0.6) > 0.001: # if the ball has been moved by the arm, we plot its trajectory and last position
explored_s_gb += [s] # store s for later evaluation
ax.plot(environment2.s_traj[:,0], environment2.s_traj[:,1], lw=2, alpha=0.1)
ax.add_patch(Circle(tuple(environment2.s_traj[-1,:]), 0.1, fc="b", alpha=0.2))
print "Number of ball catch:", len(explored_s_gb)
print "Measure of exploration:", int(compute_explo(array(explored_s_gb), array([-1., -1.]), array([1., 1.]), gs=grid_size))
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-goal_size, goal_size))
ax.set_ylim((-goal_size, goal_size))
ax.plot(0., 0., 'sk', ms=8)
for s_goal, s in zip(goals_s_gb, explored_s_gb):
ax.add_patch(Circle(tuple(s), 0.1, fc="b", alpha=0.2))
ax.plot([s_goal[0]], [s_goal[1]], 'or', alpha=0.1)
# Parameters to change:
iterations = 2000 # Number of iterations
n_joints = 7 # Number of joints
n_dmp_basis = 3 # Number of basis per joint
goal_size = 10. # Size of the 2D goal space
sigma_explo_ratio = 0.05 # Exploration noise (standard deviation)
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-2, 2.))
ax.set_ylim((-2., 2.))
ax.plot(0., 0., 'sk', ms=8)
explored_s_agb = []
# Definition of the environment
environment2 = ArmBall(n_joints, n_dmp_basis, goal_size)
# Initialization of the sensorimotor model
sm_model = SensorimotorModel.from_configuration(environment2.conf, 'nearest_neighbor', 'default')
# Initialization of the interest model
im_model = DiscretizedProgress(environment2.conf, environment2.conf.s_dims,
**{'x_card': 20*20, # 20 is the number of cells on each dimension
'win_size': 20, # window size parameter (ws)
'eps_random': 0.2, # proportion of random choice of cell
'measure': competence_dist})
# Bootstrap model: 1 random motor babbling
m = environment2.random_motor()
s = environment2.update(m)
sm_model.update(m, s)
for _ in range(iterations):
if (not sm_model.bootstrapped_s) or random() < 0.2:
# Do random motor babbling while the ball has not been grasped, and then in 20% of the trials
m = environment2.random_motor()
s = environment2.update(m)
sm_model.update(m, s)
else:
# Sample a sensory goal maximizing learning progress using the interest model:
s_goal = im_model.sample()
# Infer a motor command to reach that goal using the Nearest Neighbor algorithm:
m = sm_model.model.infer_order(tuple(s_goal))
# Add exploration noise (of variance sigma) to experiment new motor parameters:
m = normal(m, sigma_explo_ratio)
# Execute this command and observe the corresponding sensory effect:
s = environment2.update(m)
# Update the sensorimotor model:
sm_model.update(m, s)
# Update the interest model:
im_model.update(hstack((m, s_goal)), hstack((m, s)))
# Plot the goals in red:
ax.plot([s_goal[0]], [s_goal[1]], 'or', alpha=0.1)
if abs(s[-1] - 0.6) > 0.001: # if the ball has been moved by the arm, we plot its trajectory and last position
explored_s_agb += [s] # store s for later evaluation
ax.plot(environment2.s_traj[:,0], environment2.s_traj[:,1], lw=2, alpha=0.1)
ax.add_patch(Circle(tuple(environment2.s_traj[-1,:]), 0.1, fc="b", alpha=0.2))
plt.xticks(linspace(-2., 2., 5))
plt.yticks(linspace(-2., 2., 5))
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.grid()
print "Number of ball catch:", len(explored_s_agb)
print "Measure of exploration:", int(compute_explo(array(explored_s_agb), array([-1., -1.]), array([1., 1.]), gs=grid_size))
progress_flat = abs(im_model.discrete_progress.progress())
progress_array = zeros(im_model.space.cardinalities)
for idx in range(len(progress_flat)):
progress_array[im_model.space.index2multi(idx)] = progress_flat[idx]
fig, ax = plt.subplots()
ax.set_xticklabels([])
ax.set_yticklabels([])
cax = ax.imshow(transpose(progress_array), origin='lower', cmap=cm.jet, interpolation="none")
fig.colorbar(cax)
plt.xticks(linspace(0., 19., 21))
plt.yticks(linspace(0., 19., 21))
ax.grid()
# !! BEFORE RUNNING THIS CELL, RESTART KERNEL NOW to recover TkAgg backend
import matplotlib
matplotlib.use('TkAgg')
from utils import *
# Pop a separate window
fig, ax = plt.subplots()
# Definition of the environment
environment3 = ArmStickBalls(n_joints=7, # Number of joints
n_dmp_basis=3, # Number of basis per joint
goal_size=2., # Size of goal space
stick_handle_tol=0.05, # Maximal distance to grab the stick with the hand
stick_length=0.30, # Stick length,
ball_size=0.10, # Maximal distance to grab the ball with the stick
random_ball_noise=0.20 # Random balls Gaussian noise amplitude
)
# 1 iteration of Random Motor Babbling
m = environment3.random_motor()
s = environment3.update(m)
display_movement(fig, ax, environment3, time_step=0.1)
%matplotlib inline
fig, axes = plt.subplots(1,2)
m = environment3.random_motor()
s = environment3.update(m)
axes[0].bar(range(environment3.conf.m_ndims), m)
axes[0].set_xlabel('Motor Dimensions')
axes[1].bar(range(environment3.conf.s_ndims), s)
axes[1].set_xlabel('Sensory Dimensions')
#m = [ 0.54984069, 0.02455233, -0.59603529, -0.6655813, 0.33923315, 0.22605509, 0.81641894, -0.26901228, 0.14363993, -0.71148838, 0.10504532, 0.58099297, -0.80227669, 0.48208766, -0.56107818, -0.68042166, 0.9648434, -0.8760082, -0.19469749, -0.6141873, 0.11389673]
#s = environment3.update(m)
#display(HTML(environment3.plot(fig, ax)))
HTML('<video width="600" height="400" controls> <source src="./environment3.mp4" type="video/mp4"> </video>')
# Parameters to change:
iterations = 20000 # Number of iterations
sigma_explo_ratio = 0.05 # Exploration noise (standard deviation)
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-2, 2.))
ax.set_ylim((-2., 2.))
ax.plot(0., 0., 'sk', ms=8)
explored_s_agb_t = []
explored_s_agb_o = []
# Initialization of the sensorimotor model
sm_model = SensorimotorModel.from_configuration(environment3.conf, 'nearest_neighbor', 'default')
# Bootstrap sensorimotor model: 1 random motor babbling
m = environment3.random_motor()
s = environment3.update(m)
sm_model.update(m, s)
for iteration in range(iterations):
# Print number of iterations up to now:
if (iteration+1) % 100 == 0:
clear_output(wait=True)
print "Iteration:", iteration+1
# Compute the interest of modules
if (not sm_model.bootstrapped_s) or random() < 0.2:
# Do random motor babbling while the ball has not been grasped, and then in 20% of the trials
m = environment3.random_motor()
else:
# Sample a random goal in the sensory space:
s_goal = rand_bounds(environment3.conf.s_bounds)[0]
# Infer a motor command to reach that goal using the Nearest Neighbor algorithm:
m = sm_model.model.infer_order(tuple(s_goal))
# Add exploration noise (of variance sigma) to experiment new motor parameters:
m = normal(m, sigma_explo_ratio)
s = environment3.update(m) # observe the sensory effect s (36D): the trajectory of all objects
sm_model.update(m, s) # update sensorimotor model
if abs(s[17] - 0.6) > 0.001:
explored_s_agb_o += [[s[14], s[17]]]
ax.add_patch(Circle(tuple(environment3.s_traj[-1,4:6]), 0.1, fc="b", alpha=0.3))
if abs(s[11] - 0.46213203) > 0.001:
explored_s_agb_t += [[s[8], s[11]]]
ax.add_patch(Circle(tuple(environment3.s_traj[-1,2:4]), 0.05, fc="r", alpha=0.1))
#plt.savefig('explored_RGB_100000it')
print "Number of tool catch:", len(explored_s_agb_t)
print "Measure of exploration of tool:", int(compute_explo(array(explored_s_agb_t), array([-2., -2.]), array([2., 2.]), gs=grid_size))
print "Number of ball catch:", len(explored_s_agb_o)
print "Measure of exploration of interesting ball:", int(compute_explo(array(explored_s_agb_o), array([-2., -2.]), array([2., 2.]), gs=grid_size))
Image('./explored_RGB_100000it.png')
# Parameters to change:
iterations = 20000 # Number of iterations
%matplotlib inline
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.set_xlim((-2, 2.))
ax.set_ylim((-2., 2.))
ax.plot(0., 0., 'sk', ms=8)
explored_s_agb_t = []
explored_s_agb_o = []
interests_evolution = []
n_explore = 4
def plot_s(s):
global explored_s_agb_t, explored_s_agb_o
if abs(s[17] - 0.6) > 0.001:
explored_s_agb_o += [[s[14], s[17]]]
ax.add_patch(Circle(tuple(environment3.s_traj[-1,4:6]), 0.1, fc="b", alpha=0.3))
if abs(s[11] - 0.46213203) > 0.001:
explored_s_agb_t += [[s[8], s[11]]]
ax.add_patch(Circle(tuple(environment3.s_traj[-1,2:4]), 0.05, fc="r", alpha=0.1))
# Define motor and sensory spaces:
m_ndims = environment3.conf.m_ndims # number of motor parameters
m_space = range(m_ndims)
s_hand = range(m_ndims, m_ndims+6)
s_tool = range(m_ndims+6, m_ndims+12)
s_ball1 = range(m_ndims+12, m_ndims+18)
s_ball2 = range(m_ndims+18, m_ndims+24)
s_ball3 = range(m_ndims+24, m_ndims+30)
s_ball4 = range(m_ndims+30, m_ndims+36)
# Create the 6 learning modules:
learning_modules = {}
learning_modules['mod1'] = LearningModule("mod1", m_space, s_hand, environment3.conf)
learning_modules['mod2'] = LearningModule("mod2", m_space, s_tool, environment3.conf)
learning_modules['mod3'] = LearningModule("mod3", m_space, s_ball1, environment3.conf)
learning_modules['mod4'] = LearningModule("mod4", m_space, s_ball2, environment3.conf)
learning_modules['mod5'] = LearningModule("mod5", m_space, s_ball3, environment3.conf)
learning_modules['mod6'] = LearningModule("mod6", m_space, s_ball4, environment3.conf)
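# Each module learns a sensorimotor model from the full motor space to one
# object's 6D trajectory space; its interest is the learning progress it
# measures on its own self-generated goals.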
# Steps of (4 exploring and 1 exploiting iterations):
for step in range(iterations / (n_explore + 1)):
# Print number of iterations up to now:
if (step+1) % 20 == 0:
clear_output(wait=True)
print "Iteration:", (step+1) * (n_explore + 1)
# Compute the interest of modules
interests = [learning_modules[mid].interest() for mid in learning_modules.keys()]
interests_evolution.append(interests)
# Choose the babbling module (probabilities proportional to interests, with epsilon of random choice):
babbling_module = learning_modules.values()[prop_choice(interests, eps=0.2)]
# The babbling module picks a random goal in its sensory space and returns 4 noisy motor commands:
m_list = babbling_module.produce(n=n_explore)
for m in m_list:
s = environment3.update(m) # execute this command and observe the corresponding sensory effect
plot_s(s) # plot the observed sensory effect
# Update each sensorimotor models:
for mid in learning_modules.keys():
learning_modules[mid].update_sm(m, learning_modules[mid].get_s(array(list(m) + list(s))))
# Choose the best motor command to reach current goal (with no noise):
m = babbling_module.infer(babbling_module.expl_dims, babbling_module.inf_dims, babbling_module.x, n=1, explore=False)
s = environment3.update(m) # execute this command and observe the corresponding sensory effect
# Update the interest of the babbling module:
babbling_module.update_im(m, babbling_module.get_s(array(list(m)+list(s))))
# Update each sensorimotor models:
for mid in learning_modules.keys():
learning_modules[mid].update_sm(m, learning_modules[mid].get_s(array(list(m) + list(s))))
#plt.savefig('explored_AMB_100000it')
for mid in learning_modules.keys():
print "Number of babbling steps for module", mid, ":", learning_modules[mid].im.n_points(), "{}%".format(int(100. * (n_explore + 1)*learning_modules[mid].im.n_points() / float(learning_modules[mid].sm.t)))
print
print "Number of tool catch:", len(explored_s_agb_t)
print "Measure of exploration of tool:", int(compute_explo(array(explored_s_agb_t), array([-2., -2.]), array([2., 2.]), gs=grid_size))
print "Number of ball catch:", len(explored_s_agb_o)
print "Measure of exploration of interesting ball:", int(compute_explo(array(explored_s_agb_o), array([-2., -2.]), array([2., 2.]), gs=grid_size))
fig, ax = plt.subplots()
ax.plot(array(interests_evolution), lw=2)
ax.legend(["s_hand", "s_tool", "s_ball1", "s_ball2", "s_ball3", "s_ball4"], ncol=3)
ax.set_xlabel('Time steps', fontsize=20)
ax.set_ylabel('Learning progress', fontsize=20)
#plt.savefig('learning_progress_AMB_100000it')
display(HTML("<img style='width: 400px; margin: 0px; float: left; border: 1px solid black;' src='./learning_progress_AMB_100000it.png' /> <img style='width: 400px; margin: 0px; float: left; border: 1px solid black;' src='./explored_AMB_100000it.png' />"))
Image('./exploration_stats.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I. Exploring by hand the movements of a robotic arm
Step2: II. Random Motor Babbling
Step3: We first implement the Random Motor Babbling strategy, where the robot explores the dynamics of its arm by just trying random motor configurations.
Step4: Exploration measure
Step5: Using 7 joints and 3 DMP basis per joint, we can see that the ball has been caught only a few times and that a small number of cells have been reached with the ball.
Step6: The previous figure is zoomed in to better see the reachable part of the sensory space.
Step7: IV. Active Goal Babbling
Step8: The previous figure is zoomed in and shows only 16 of the 400 interest cells.
Step9: The progress is 0 in unreachable cells, and can be high in reached cells, in the middle.
Step10: In the following, we display 1 random movement of the arm (but you can run it again and again), and 1 movement tuned to reach the tool and move the blue ball with the tool (as a video).
Step11: In the next block, we plot random motor parameters and the associated sensory feedback in a bar plot to visualize all the dimensions.
Step12: We remind the reader that the sensory space is composed of the trajectories of each item
Step13: Let's try the Random Goal Babbling exploration strategy in this environment
Step14: The red points are the points reached with the tool at the end of each movement that grabbed the tool, and the blue points are reached with the ball when it was caught.
Step15: Here there are 36 sensory dimensions, and Goal Babbling chooses random goals in this space, where most of the space is unreachable (or uncontrollable
Step16: Active Model Babbling has better explored the tool and object spaces than Random Goal Babbling.
Step17: The interest of the static ball (s_ball2) is always 0.
Step18: We finally show exploration results with more trials of each algorithm.
|
5,056
|
<ASSISTANT_TASK:>
Python Code:
! pip3 install -U google-cloud-automl --user
! pip3 install google-cloud-storage
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = 'us-central1' #@param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AutoML, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION gs://$BUCKET_NAME
! gsutil ls -al gs://$BUCKET_NAME
import json
import os
import sys
import time
from google.cloud import automl_v1beta1 as automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.json_format import ParseDict
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
def automl_client():
return automl.AutoMlClient()
def prediction_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
IMPORT_FILE = 'gs://automl-video-demo-data/traffic_videos/traffic_videos.csv'
! gsutil cat $IMPORT_FILE | head -n 10
dataset = {
"display_name": "traffic_" + TIMESTAMP,
"video_object_tracking_dataset_metadata": {}
}
print(MessageToJson(
automl.CreateDatasetRequest(
parent=PARENT,
dataset=dataset
).__dict__["_pb"])
)
request = clients["automl"].create_dataset(
parent=PARENT,
dataset=dataset
)
result = request
print(MessageToJson(result.__dict__["_pb"]))
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split('/')[-1]
print(dataset_id)
input_config = {
"gcs_source": {
"input_uris": [IMPORT_FILE]
}
}
print(MessageToJson(
automl.ImportDataRequest(
        name=dataset_id,
input_config=input_config
).__dict__["_pb"])
)
request = clients["automl"].import_data(
name=dataset_id,
input_config=input_config
)
result = request.result()
print(MessageToJson(result))
model = {
"display_name": "traffic_" + TIMESTAMP,
"dataset_id": dataset_short_id,
"video_object_tracking_model_metadata": {}
}
print(MessageToJson(
automl.CreateModelRequest(
parent=PARENT,
model=model
).__dict__["_pb"])
)
request = clients["automl"].create_model(
parent=PARENT,
model=model
)
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split('/')[-1]
print(model_short_id)
request = clients["automl"].list_model_evaluations(
parent=model_id,
)
for evaluation in request:
print(MessageToJson(evaluation.__dict__["_pb"]))
# The last evaluation slice
last_evaluation_slice = evaluation.name
request = clients["automl"].get_model_evaluation(
name=last_evaluation_slice
)
print(MessageToJson(request.__dict__["_pb"]))
TRAIN_FILES = "gs://automl-video-demo-data/traffic_videos/traffic_videos_labels.csv"
test_items = ! gsutil cat $TRAIN_FILES | head -n2
cols = str(test_items[0]).split(',')
test_item_1 = str(cols[0])
test_label_1 = str(cols[1])
test_start_time_1 = str(0)
test_end_time_1 = "inf"
print(test_item_1, test_label_1)
cols = str(test_items[1]).split(',')
test_item_2 = str(cols[0])
test_label_2 = str(cols[1])
test_start_time_2 = str(0)
test_end_time_2 = "inf"
print(test_item_2, test_label_2)
import tensorflow as tf
import json
gcs_input_uri = "gs://" + BUCKET_NAME + '/test.csv'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
data = f"{test_item_1}, {test_start_time_1}, {test_end_time_1}"
f.write(data + '\n')
data = f"{test_item_2}, {test_start_time_2}, {test_end_time_2}"
f.write(data + '\n')
print(gcs_input_uri)
!gsutil cat $gcs_input_uri
input_config = {
"gcs_source": {
"input_uris": [gcs_input_uri]
}
}
output_config = {
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
}
}
batch_prediction = automl.BatchPredictRequest(
name=model_id,
input_config=input_config,
output_config=output_config,
)
print(MessageToJson(batch_prediction.__dict__["_pb"]))
request = clients["prediction"].batch_predict(
request=batch_prediction
)
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
destination_uri = batch_prediction.output_config.gcs_destination.output_uri_prefix[:-1]
! gsutil ls $destination_uri/prediction-**
! gsutil cat $destination_uri/prediction-**
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients['automl'].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients['automl'].delete_model(name=model_id)
except Exception as e:
print(e)
if delete_bucket and 'BUCKET_NAME' in globals():
! gsutil rm -r gs://$BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your GCP account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: AutoML constants
Step12: Clients
Step13: Example output
Step14: Example output
Step15: Response
Step16: Example output
Step17: projects.locations.datasets.importData
Step18: Example output
Step19: Response
Step20: Example output
Step21: Example output
Step22: Response
Step23: Example output
Step24: Evaluate the model
Step25: Response
Step26: Example output
Step27: Response
Step28: Example output
Step29: Example output
Step30: Example output
Step31: Example output
Step32: Response
Step33: Example output
Step34: Example output
|
5,057
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import os
import glob
import numpy as np
from statsmodels.tsa.tsatools import detrend
def make_gen_index(data_folder, time='Monthly'):
    """Read and combine the state-level generation and index files.

    inputs:
        data_folder: (str or path) path to the state-level data files
        time: (str) "Monthly", "Quarterly", or "Annual"
    outputs:
        gen_index: (df) a dataframe with combined generation and index data
    """
index_path = os.path.join(data_folder, '{} index*'.format(time))
gen_path = os.path.join(data_folder, '{} gen*'.format(time))
# Get filenames and list of state abbreviations
index_fns = glob.glob(index_path)
gen_fns = glob.glob(gen_path)
# Not sure why, but DC shows up in index but not gen
index_states = [f.split()[-1][:-4] for f in index_fns]
gen_states = [f.split()[-1][:-4] for f in gen_fns]
# Read and combine files into single dataframes
def read_combine(paths, states):
df_list = []
for f, state in zip(paths, states):
df = pd.read_csv(f)
df['region'] = state
df_list.append(df)
full_df = pd.concat(df_list)
full_df.reset_index(inplace=True, drop=True)
full_df.rename(columns={'year': 'Year',
'fuel category 1': 'fuel category'},
inplace=True)
if 'datetime' in full_df.columns:
full_df['datetime'] = pd.to_datetime(full_df['datetime'])
return full_df
index_df = read_combine(index_fns, index_states)
gen_df = read_combine(gen_fns, gen_states)
# return index_df, gen_df
# Different timescales will pivot on different columns
gen_pivot_col = {'Monthly': ['Year', 'datetime'],
'Quarterly': ['Year', 'year_quarter'],
'Annual': ['Year']}
gen_pivot = pd.pivot_table(gen_df, index=['region'] + gen_pivot_col[time],
values='generation (MWh)', columns='fuel category')
gen_pivot.reset_index(inplace=True, drop=False)
# gen_pivot['Year'] = gen_pivot['datetime'].dt.year
gen_pivot.replace(np.nan, 0, inplace=True)
# Merge gen and index dataframes
merge_col = {'Monthly': ['datetime'],
'Quarterly': ['year_quarter'],
'Annual': ['Year']}
gen_index = pd.merge(gen_pivot, index_df[merge_col[time] + ['region', 'index (g/kWh)']],
on=merge_col[time] + ['region'])
return gen_index
gen_index = make_gen_index(os.path.join('Data storage', 'final NERC data from states'))
sns.set_style('white', {'axes.linewidth': 1.5,
'axes.grid': True})
sns.set_context('notebook', font_scale=1.2)
def region_facet_grid(df, plot_function, x_axis, y_axis, col_order=None,
suptitle='', add_legend=False, ax_labels=None, plotly=False,
FG_kwargs={}, plot_kwargs={}):
# fig, ax =
g = sns.FacetGrid(df, col_order=col_order, **FG_kwargs)
g.map(plot_function, x_axis, y_axis, **plot_kwargs)
g.set_xticklabels(rotation=35)
if add_legend:
g.add_legend()
if suptitle:
plt.suptitle(suptitle, y=1.02, size=15)
if col_order and 'col' in FG_kwargs:
axes = g.axes.flatten()
        for ax, title in zip(axes, col_order):
ax.set_title(title)
if ax_labels:
g.set_axis_labels(ax_labels)
return g
# if plotly:
# plotly_fig = tls.mpl_to_plotly(g)
order = ['MRO', 'SPP', 'RFC', 'ERCOT', 'FRCC', 'SERC',
'WECC', 'NPCC']
FG_kwargs = dict(col='region',
hue='region',
col_wrap=3,
aspect=1.2,
hue_order=order,
ylim=(0, 1050))
region_facet_grid(df=gen_index, plot_function=plt.plot, x_axis='datetime',
y_axis='index (g/kWh)', col_order=order,
suptitle='Monthly Index', FG_kwargs=FG_kwargs)
import plotly
import plotly.tools as tls
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
plotly.offline.init_notebook_mode(connected=True)
order = ['MRO', 'SPP', 'RFC', 'ERCOT', 'FRCC', 'SERC',
'WECC', 'NPCC']
FG_kwargs = dict(hue='region',
aspect=1.5,
size=5,
hue_order=order,
ylim=(0, 1050))
region_facet_grid(df=gen_index, plot_function=plt.plot, x_axis='datetime',
y_axis='index (g/kWh)', add_legend=True, plotly=True,
suptitle='Monthly Index', FG_kwargs=FG_kwargs)
order = ['MRO', 'SPP', 'RFC', 'ERCOT', 'FRCC', 'SERC',
'WECC', 'NPCC']
trace = []
for region in order:
df = gen_index.loc[gen_index['region'] == region]
trace.append(go.Scatter(x=df['datetime'],
y=df['index (g/kWh)'],
name=region,
mode='lines'))
iplot(trace)
order = ['FRCC', 'SPP']
trace = []
for region in order:
df = gen_index.loc[gen_index['region'] == region]
trace.append(go.Scatter(x=df['datetime'],
y=df['index (g/kWh)'],
name=region,
mode='lines'))
iplot(trace)
order = ['FRCC', 'SPP']
trace = []
for region in order:
df = gen_index.loc[gen_index['region'] == region]
trace.append(go.Scatter(x=df['datetime'],
y=detrend(df['index (g/kWh)']),
name=region,
mode='lines'))
iplot(trace)
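# Correlation sketch (illustrative; assumes both regions cover the same months
# in the same order, as the merged monthly data above should guarantee).
_a = gen_index.loc[gen_index['region'] == 'FRCC', 'index (g/kWh)'].values
_b = gen_index.loc[gen_index['region'] == 'SPP', 'index (g/kWh)'].values
print('raw r = %.2f, detrended r = %.2f'
      % (np.corrcoef(_a, _b)[0, 1],
         np.corrcoef(detrend(_a), detrend(_b))[0, 1]))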
order = ['ERCOT', 'SPP']
trace = []
for region in order:
df = gen_index.loc[gen_index['region'] == region]
trace.append(go.Scatter(x=df['datetime'],
y=df['index (g/kWh)'],
name=region,
mode='lines'))
iplot(trace)
order = ['ERCOT', 'SPP']
trace = []
for region in order:
df = gen_index.loc[gen_index['region'] == region]
trace.append(go.Scatter(x=df['datetime'],
y=detrend(df['index (g/kWh)']),
name=region,
mode='lines'))
iplot(trace)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Seasonal correlation of CO<sub>2</sub> intensity across NERC regions
Step2: All index values over time for reference
Step3: Viewing all of the lines on a single plot shows that all regions follow the same general trend down
Step4: An interactive version, in case it's helpful
Step5: Look at FRCC and SPP
Step6: With detrending the correlation is 0.03
Step7: Look at ERCOT and SPP
Step8: With detrending the correlation is 0.65
|
5,058
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import pandas as pd
# read CSV file in pandas
mydf = pd.read_csv('.data/Julie_R1_Bef_S4_cell123_Position.csv', skiprows=2)
mydf.head()
# get basic information
print('Number of samples %d'%len(mydf))
print('Number of particles = %d'%len(mydf['TrackID'].unique()))
print('Distance units = %s'%mydf['Unit'][0])
# get TrackIDs
TrackID = mydf['TrackID'].unique()
# select only locations, sampling points and TrackIDs
df = mydf[['Position X','Position Y', 'Position Z', 'Time','TrackID']]
df0 = df.loc[df['TrackID'] == TrackID[0]]
df1 = df.loc[df['TrackID'] == TrackID[1]]
df2 = df.loc[df['TrackID'] == TrackID[2]]
counter = 0
for i in TrackID:
mysize = len( df.loc[df['TrackID'] == i] )
counter +=mysize
print('Number of samples in TrackID = %d is %d'%(i,mysize))
print('Total number of samples %d'%counter)
df0.head() # show first values of first particle
# collect a list of 3d coordinates
P0 = list(zip(df0['Position X'], df0['Position Y'], df0['Position Z']))
P1 = list(zip(df1['Position X'], df1['Position Y'], df1['Position Z']))
P2 = list(zip(df2['Position X'], df2['Position Y'], df2['Position Z']))
P0[0] # test the values are correct
def distance(myarray):
    """Calculate the distance between consecutive 3D coordinates along the
    first axis of the numpy array."""
# slice() method is useful for large arrays
# see diff in ./local/lib/python2.7/site-packages/numpy/lib/function_base.py
a = np.asanyarray(myarray)
slice1 = [slice(None)] # create a slice type object
slice2 = [slice(None)]
slice1[-1] = slice(1, None) # like array[1:]
slice2[-1] = slice(None, -1) # like array[:-1]
slice1 = tuple(slice1)
slice2 = tuple(slice2)
# calculate sqrt( dx^2 + dy^2 + dz^2)
sum_squared = np.sum( np.power(a[slice2]-a[slice1],2), axis=1)
return np.sqrt( sum_squared)
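# Equivalence check (illustrative): for a (n_points, 3) input the slice
# machinery above reduces to a one-liner built on np.diff.
_pts = np.array([[0., 0., 0.], [3., 4., 0.], [3., 4., 12.]])
assert np.allclose(distance(_pts),
                   np.sqrt(np.sum(np.diff(_pts, axis=0) ** 2, axis=1)))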
# retrieve time vector
#dt = 10 # sampling interval in minutes
dt = 0.1666 # sampling interval in hours
t0 = df0['Time'].values*dt
print(len(t0))
D0 = distance(P0) # in um
S0 = D0/10. # speed in um/min
t0 = t0[:-1] # when ploting speeds we do not need the last sampling point
plt.plot(t0, S0, color = '#006400')
plt.ylabel('Speed (um/min)'),
plt.xlabel('Time (hours)')
plt.title('Particle %d'%TrackID[0]);
print('Track duration %2.4f min'%(len(t0)*10.))
print('total traveled distances = %2.4f um'%np.sum(D0))
print('total average speed = %2.4f um/min'%S0.mean())
# retrieve time vector and calculate speed
dt = 0.1666 # sampling interval in hours
t1 = df1['Time'].values*dt
D1 = distance(P1) # in um
S1 = D1/10. #um/min
t1 = t1[:-1]
plt.plot(t1, S1, color = '#4169E1')
plt.ylabel('Speed (um/min)'),
plt.xlabel('Time (hours)')
plt.title('Particle %d'%TrackID[1]);
print('Track duration %2.4f min'%(len(t1)*10.))
print('total traveled distances = %2.4f um'%np.sum(D1))
print('total average speed = %2.4f um/min'%S1.mean())
# retrieve time vector and calculate speed
dt = 0.1666 # sampling interval in hours
t2 = df2['Time'].values*dt
D2 = distance(P2) # in um
S2 = D2/10. #um/min
t2 = t2[:-1]
plt.plot(t2, S2, color = '#800080')
plt.xlabel('Time (hours)')
plt.ylabel('Speed (um/min)'), plt.title('Particle %d'%TrackID[2]);
print('Track duration %2.4f min'%(len(t2)*10.))
print('total traveled distances = %2.4f um'%np.sum(D2))
print('total average speed = %2.4f um/min'%S2.mean())
#Overlap
plt.plot(t0, S0, color = '#006400');
plt.plot(t1, S1, color = '#4169E1');
plt.plot(t2, S2, color = '#800080');
plt.xlabel('Time (hours)');
plt.ylabel('Speed (um/min)'), plt.title('All Particles');
S0_norm = S0/np.max(S0)
S1_norm = S1/np.max(S1)
S2_norm = S2/np.max(S2)
#Overlap
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
ax1.plot(t0, S0_norm, color = 'darkgreen', alpha=0.5)
ax2.plot(t1, S1_norm, color = 'royalblue')
ax3.plot(t2, S2_norm, color = 'purple')
#ax3.plot(np.arange(1500), mysin, color= 'cyan')
ax3.set_xlabel('Time (hours)');
for ax in fig.axes:
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
ax.get_yaxis().set_visible(False)
ax.get_xaxis().set_visible(False)
#ax.axis('Off')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax3.get_xaxis().set_visible(True)
ax.get_xaxis().set_ticks(np.arange(0,25,5))
ax3.spines['bottom'].set_visible(True)
ax3.spines['left'].set_visible(True)
n = len(S0) # length of the signal
k = np.arange(n)
T = n*dt
frq = k/T # two sides frequency range
frq = frq[range(n//2)] # one side frequency range
Y0 = np.fft.fft(S0)/n # fft computing and normalization
Y0 = Y0[range(n//2)]
plt.plot(frq, abs(Y0),color = 'darkgreen') # plotting the spectrum
plt.xlabel('Freq (hours)')
plt.ylabel('|Y(freq)|')
#plt.ylim(ymax=0.02)
n = len(S1) # length of the signal
k = np.arange(n)
T = n*dt
frq = k/T # two sides frequency range
frq = frq[range(n//2)] # one side frequency range
Y1 = np.fft.fft(S1)/n # fft computing and normalization
Y1 = Y1[range(n//2)]
plt.plot(frq, abs(Y0),color = 'darkgreen') # plotting the spectrum
plt.plot(frq, abs(Y1),color = 'royalblue') # plotting the spectrum
plt.xlabel('Freq (hours)')
plt.ylabel('|Y(freq)|')
plt.ylim(ymax = 0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <H2>Show basic file information</H2>
Step3: <H2>Compute Euclidean distances</H2>
Step4: <H2>Velocities</H2>
Step5: <H2>Particle information</H2>
Step6: <H2>Show normalized speeds</H2>
Step7: <H2>Fourier transform</H2>
|
5,059
|
<ASSISTANT_TASK:>
Python Code:
no_elves = 5
elves = [elf for elf in range(1, no_elves + 1)]
print(elves)
def play_round(elves):
_elves = []
elf = 0
while elf < len(elves):
_elves.append(elves[elf])
elf += 2
if len(elves) % 2 == 1:
_elves.pop(0)
return _elves
while len(elves) > 1:
elves = play_round(elves)
print(elves[0])
with open('../inputs/day19.txt', 'r') as f:
input_data = int(f.readline())
elves = [elf for elf in range(1, input_data + 1)]
while len(elves) > 1:
elves = play_round(elves)
print('answer', elves[0])
# Josephus recurrence for k = 2, 1-indexed: J(1) = 1 and
# J(n) = ((J(n-1) + k - 1) % n) + 1
n = 2
k = 2
j = [1]
while n <= input_data:
    j.append((j[n - 2] + k - 1) % n + 1)
    n += 1
print('answer', j[-1])
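# Cross-check (sketch): for k = 2 the Josephus problem also has the classic
# closed form W(n) = 2*L + 1, where n = 2**m + L and 2**m is the largest
# power of two not exceeding n.
def josephus_closed_form(n):
    L = n - 2 ** (n.bit_length() - 1)
    return 2 * L + 1
assert josephus_closed_form(5) == 3  # matches the 5-elf simulation above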
# DO NOT RUN
def do_not_run():
elves = [elf for elf in range(1, input_data + 1)]
while len(elves) > 1:
elves.pop(len(elves)//2)
elves = elves[1:] + elves[:1]
print(elves[0])
# Split the circle into two deques; the elf directly across the circle sits
# at the boundary between them (right holds the far half in reverse order).
from collections import deque
left = deque(i for i in range(1, input_data//2 + 1))
right = deque(i for i in range(input_data, input_data//2, -1))
print('tree left->', len(left), '1:', left[0], 'last:', left[-1])
print('tree right->', len(right), '1:', right[0], 'last:', right[-1])
while left and right:
    # Remove the elf directly across the circle ...
    if len(left) > len(right):
        left.pop()
    else:
        right.pop()
    # ... then rotate by one so the next elf takes the current position.
    right.appendleft(left.popleft())
    left.append(right.pop())
if left:
print('answer', left[0])
else:
print('answer', right[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulate one round of stealing presents
Step2: Continue simulating rounds until only one elf is remaining
Step3: Run on the given input
Step4: Optimized version - Josephus problem
Step5: Part Two
Step6: Alternative logic
Step7: Since we do not know which of the left or right branches might contain the final element, we need to check which of the queues are not empty and then retrieve the answer from that.
|
5,060
|
<ASSISTANT_TASK:>
Python Code:
with open("input/day7.txt", "r") as f:
inputLines = tuple(line.strip() for line in f)
import re
def isABBA(text):
# Use a negative lookahead assertion to avoid matching four equal characters.
return re.search(r"(.)(?!\1)(.)\2\1", text) is not None
assert isABBA("abba")
assert isABBA("xabba")
assert not isABBA("aaaa")
assert isABBA("abcoxxoxyz")
assert isABBA("aabba")
assert isABBA("aaabba")
assert isABBA("aaaabba")
def ipAddressSequences(ipAddress):
# We use a pattern for the hypernet sequences for splitting.
# Moreover, we capture the letters in the hypernet sequences, such that
# normal and hypernet sequences will be alternating in the result.
sequences = re.split(r"\[([^\]]+)\]", ipAddress)
normalSequences = tuple(sequences[::2])
hypernetSequences = tuple(sequences[1::2])
return normalSequences, hypernetSequences
assert ipAddressSequences("abba[mnop]qrst") == (("abba", "qrst"), ("mnop",))
assert ipAddressSequences("abcd[bddb]xyyx") == (("abcd", "xyyx"), ("bddb",))
assert ipAddressSequences("aaaa[qwer]tyui") == (("aaaa", "tyui"), ("qwer",))
assert ipAddressSequences("ioxxoj[asdfgh]zxcvbn") == (("ioxxoj", "zxcvbn"), ("asdfgh",))
assert ipAddressSequences("a[b]") == (("a", ""), ("b",))
assert ipAddressSequences("[b]a") == (("", "a"), ("b",))
assert ipAddressSequences("[b]") == (("", ""), ("b",))
def supportsTLS(ipAddress):
normal, hypernet = ipAddressSequences(ipAddress)
return any(isABBA(s) for s in normal) and not any(isABBA(s) for s in hypernet)
assert supportsTLS("abba[mnop]qrst")
assert not supportsTLS("abcd[bddb]xyyx")
assert not supportsTLS("aaaa[qwer]tyui")
assert supportsTLS("ioxxoj[asdfgh]zxcvbn")
sum(1 for ipAddress in inputLines if supportsTLS(ipAddress))
def supportsSSL(ipAddress):
# The idea is that the ABA and the BAB patterns are separated by an odd number of brackets.
return re.search(# first the ABA pattern
r"([a-z])(?!\1)([a-z])\1"
# then an arbitrary number of letters
+ r"[a-z]*"
# then an opening or closing bracket
+ r"[\[\]]"
# then any number of blocks which contain letters, a bracket, more letters, and another bracket
+ r"([a-z]*[\[\]][a-z]*[\[\]]]*)*"
# then an arbitrary number of letters
+ r"[^\[\]]*"
# finally, the BAB pattern
+ r"\2\1\2",
ipAddress) is not None
assert supportsSSL("aba[bab]xyz")
assert not supportsSSL("xyx[xyx]xyx")
assert supportsSSL("aaa[kek]eke")
assert supportsSSL("zazbz[bzb]cdb")
sum(1 for ipAddress in inputLines if supportsSSL(ipAddress))
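# Alternative (illustrative, regex-free): collect ABA triplets from the normal
# sequences and look for the mirrored BAB in any hypernet sequence.
def supports_ssl_simple(ip_address):
    normal, hypernet = ipAddressSequences(ip_address)
    abas = {s[i:i + 3] for s in normal
            for i in range(len(s) - 2)
            if s[i] == s[i + 2] and s[i] != s[i + 1]}
    return any(a[1] + a[0] + a[1] in h for a in abas for h in hypernet)
assert all(supports_ssl_simple(ip) == supportsSSL(ip)
           for ip in ("aba[bab]xyz", "xyx[xyx]xyx", "aaa[kek]eke", "zazbz[bzb]cdb"))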
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Part 2
|
5,061
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import matplotlib.pyplot as plt
import healpy as hp
from astropy.io import fits
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS
import cygrid
imkw = dict(origin='lower', interpolation='nearest')
NSIDE = 256
NPIX = hp.nside2npix(NSIDE)
LMAX = 256
power = (lambda k: 12 / (k + 1) ** 2.5)
cl = power(np.arange(LMAX))
plt.loglog(cl)
plt.xlabel(r'$l$')
plt.ylabel(r'$C_l$')
random_field = hp.synfast(cl, NSIDE, verbose=False)
hp.mollview(random_field, xsize=300)
header = fits.Header()
pixsize = 14 / 60.
header['SIMPLE'] = 'T'
header['BITPIX'] = -32
header['NAXIS'] = 2
header['NAXIS1'] = 101
header['NAXIS2'] = 101
header['CDELT1'] = -pixsize
header['CDELT2'] = pixsize
header['CRPIX1'] = 51
header['CRPIX2'] = 51
header['CRVAL1'] = 180.
header['CRVAL2'] = 10.
header['LATPOLE'] = 90.
header['CTYPE1'] = 'GLON-SIN'
header['CTYPE2'] = 'GLAT-SIN'
wcs = WCS(header)
gridder = cygrid.WcsGrid(header)
theta, phi = hp.pix2ang(NSIDE, np.arange(NPIX))
lons = np.rad2deg(phi).astype(np.float64)
lats = (90. - np.rad2deg(theta)).astype(np.float64)
kernelsize_fwhm = 12. / 60. # 12 arcminutes
# see https://en.wikipedia.org/wiki/Full_width_at_half_maximum
kernelsize_sigma = kernelsize_fwhm / np.sqrt(8 * np.log(2))
sphere_radius = 3. * kernelsize_sigma
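# Quick check (illustrative): FWHM = 2*sqrt(2*ln 2)*sigma, i.e. about 2.3548*sigma.
assert np.isclose(kernelsize_fwhm / kernelsize_sigma, 2 * np.sqrt(2 * np.log(2)))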
gridder.set_kernel(
'gauss1d',
(kernelsize_sigma,),
sphere_radius,
kernelsize_sigma / 2.
)
gridder.grid(lons, lats, random_field)
datacube = gridder.get_datacube()
fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8], projection=wcs.celestial)
lon, lat = ax.coords
lon.set_axislabel('Galactic Longitude')
lat.set_axislabel('Galactic Latitude')
ax.imshow(datacube, **imkw)
ax.coords.grid(color='white', alpha=0.5, linestyle='solid')
cel_header = wcs.celestial.to_header()
# fits.writeto('fullsky_regridded.fits', datacube, cel_header)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We attempt to limit our dependencies as much as possible, but astropy, healpy, and wcsaxes needs to be available on your machine if you want to re-run the calculations. We can highly recommend anaconda as a scientific python platform.
Step2: Let's define some plotting kwargs for the images.
Step3: We start off by setting the basic healpix parameters which will define our coordinate system. It's given entirely by the nside. For more details, check the paper by Gorski et al. (2005). We stick to ring-ordering throughout this notebook.
Step4: To generate a simulated fullsky map, we set up some power law and use healpy to create an image based on this.
Step5: The random_field is easily generated by using hp.synfast.
Step6: We manually create the header onto which we will grid the data. The target field is centered about $(l, b)=(180^\circ, 10^\circ)$. You need to make sure that all the celestial information are available.
Step7: We start the gridding by initating the gridder with the header.
Step8: Similar to the examples in previous notebooks, we need to define the coordinates for the input image. For healpix data, this is really straightforward and can easily be done with the healpy package.
Step9: The gridding kernel is of key importance for the entire gridding process. cygrid allows you to specify the shape of the kernel (e.g. elliptical Gaussian or tapered sinc) and its size.
Step10: After the kernel has been set, we perform the actual gridding by calling grid() with the coordinates and the data.
Step11: To get the gridded data, we simply call get_datacube().
Step12: And here is how our gridded field looks like.
Step13: Finally, we can write the fits-image to disk. To make it two-dimensional, we have to create a celestial header. The WCS module can take care of this.
|
5,062
|
<ASSISTANT_TASK:>
Python Code:
!pip install -U numpy matplotlib Ipython ipywidgets pycroscopy
# Ensure python 3 compatibility
from __future__ import division, print_function, absolute_import
# Import necessary libraries:
# General utilities:
import sys
import os
# Computation:
import numpy as np
import h5py
# Visualization:
import matplotlib.pyplot as plt
from IPython.display import display
import ipywidgets as widgets
# Finally, pycroscopy itself
import pycroscopy as px
# set up notebook to show plots within the notebook
%matplotlib inline
max_mem = 1024*8 # Maximum memory to use, in Mbs. Default = 1024
max_cores = None # Number of logical cores to use in fitting. None uses all but 2 available cores.
input_file_path = px.io_utils.uiGetFile(caption='Select translated .h5 file or raw experiment data',
filter='Parameters for raw BE data (*.txt *.mat *xls *.xlsx);; \
Translated file (*.h5)')
(data_dir, data_name) = os.path.split(input_file_path)
if input_file_path.endswith('.h5'):
# No translation here
h5_path = input_file_path
force = False # Set this to true to force patching of the datafile.
tl = px.LabViewH5Patcher()
hdf = tl.translate(h5_path, force_patch=force)
else:
# Set the data to be translated
data_path = input_file_path
(junk, base_name) = os.path.split(data_dir)
# Check if the data is in the new or old format. Initialize the correct translator for the format.
if base_name == 'newdataformat':
(junk, base_name) = os.path.split(junk)
translator = px.BEPSndfTranslator(max_mem_mb=max_mem)
else:
translator = px.BEodfTranslator(max_mem_mb=max_mem)
if base_name.endswith('_d'):
base_name = base_name[:-2]
# Translate the data
h5_path = translator.translate(data_path, show_plots=True, save_plots=False)
hdf = px.ioHDF5(h5_path)
print('Working on:\n' + h5_path)
h5_main = px.hdf_utils.getDataSet(hdf.file, 'Raw_Data')[0]
print('Datasets and datagroups within the file:\n------------------------------------')
px.io.hdf_utils.print_tree(hdf.file)
print('\nThe main dataset:\n------------------------------------')
print(h5_main)
print('\nThe ancillary datasets:\n------------------------------------')
print(hdf.file['/Measurement_000/Channel_000/Position_Indices'])
print(hdf.file['/Measurement_000/Channel_000/Position_Values'])
print(hdf.file['/Measurement_000/Channel_000/Spectroscopic_Indices'])
print(hdf.file['/Measurement_000/Channel_000/Spectroscopic_Values'])
print('\nMetadata or attributes in a datagroup\n------------------------------------')
for key in hdf.file['/Measurement_000'].attrs:
print('{} : {}'.format(key, hdf.file['/Measurement_000'].attrs[key]))
h5_pos_inds = px.hdf_utils.getAuxData(h5_main, auxDataName='Position_Indices')[-1]
pos_sort = px.hdf_utils.get_sort_order(np.transpose(h5_pos_inds))
pos_dims = px.hdf_utils.get_dimensionality(np.transpose(h5_pos_inds), pos_sort)
pos_labels = np.array(px.hdf_utils.get_attr(h5_pos_inds, 'labels'))[pos_sort]
print(pos_labels, pos_dims)
parm_dict = hdf.file['/Measurement_000'].attrs
is_ckpfm = hdf.file.attrs['data_type'] == 'cKPFMData'
if is_ckpfm:
num_write_steps = parm_dict['VS_num_DC_write_steps']
num_read_steps = parm_dict['VS_num_read_steps']
num_fields = 2
px.be_viz_utils.jupyter_visualize_be_spectrograms(h5_main)
sho_fit_points = 5 # The number of data points at each step to use when fitting
h5_sho_group = px.hdf_utils.findH5group(h5_main, 'SHO_Fit')
sho_fitter = px.BESHOmodel(h5_main, parallel=True)
if len(h5_sho_group) == 0:
print('No SHO fit found. Doing SHO Fitting now')
h5_sho_guess = sho_fitter.do_guess(strategy='complex_gaussian', processors=max_cores, options={'num_points':sho_fit_points})
h5_sho_fit = sho_fitter.do_fit(processors=max_cores)
else:
print('Taking previous SHO results already present in file')
h5_sho_guess = h5_sho_group[-1]['Guess']
try:
h5_sho_fit = h5_sho_group[-1]['Fit']
except KeyError:
print('Previously computed guess found. Now computing fit')
h5_sho_fit = sho_fitter.do_fit(processors=max_cores, h5_guess=h5_sho_guess)
h5_sho_spec_inds = px.hdf_utils.getAuxData(h5_sho_fit, auxDataName='Spectroscopic_Indices')[0]
sho_spec_labels = px.io.hdf_utils.get_attr(h5_sho_spec_inds,'labels')
if is_ckpfm:
# It turns out that the read voltage index starts from 1 instead of 0
# Also the VDC indices are NOT repeating. They are just rising monotonically
write_volt_index = np.argwhere(sho_spec_labels == 'write_bias')[0][0]
read_volt_index = np.argwhere(sho_spec_labels == 'read_bias')[0][0]
h5_sho_spec_inds[read_volt_index, :] -= 1
h5_sho_spec_inds[write_volt_index, :] = np.tile(np.repeat(np.arange(num_write_steps), num_fields), num_read_steps)
(Nd_mat, success, nd_labels) = px.io.hdf_utils.reshape_to_Ndims(h5_sho_fit, get_labels=True)
print('Reshape Success: ' + str(success))
print(nd_labels)
print(Nd_mat.shape)
use_sho_guess = False
use_static_viz_func = False
if use_sho_guess:
sho_dset = h5_sho_guess
else:
sho_dset = h5_sho_fit
data_type = px.io.hdf_utils.get_attr(hdf.file, 'data_type')
if data_type == 'BELineData' or len(pos_dims) != 2:
use_static_viz_func = True
step_chan = None
else:
vs_mode = px.io.hdf_utils.get_attr(h5_main.parent.parent, 'VS_mode')
if vs_mode not in ['AC modulation mode with time reversal',
'DC modulation mode']:
use_static_viz_func = True
else:
if vs_mode == 'DC modulation mode':
step_chan = 'DC_Offset'
else:
step_chan = 'AC_Amplitude'
if not use_static_viz_func:
try:
# use interactive visualization
px.be_viz_utils.jupyter_visualize_beps_sho(sho_dset, step_chan)
    except:
        print('There was a problem with the interactive visualizer')
        use_static_viz_func = True
if use_static_viz_func:
# show plots of SHO results vs. applied bias
px.be_viz_utils.visualize_sho_results(sho_dset, show_plots=True,
save_plots=False)
# Do the Loop Fitting on the SHO Fit dataset
loop_success = False
h5_loop_group = px.hdf_utils.findH5group(h5_sho_fit, 'Loop_Fit')
if len(h5_loop_group) == 0:
try:
loop_fitter = px.BELoopModel(h5_sho_fit, parallel=True)
print('No loop fits found. Fitting now....')
h5_loop_guess = loop_fitter.do_guess(processors=max_cores, max_mem=max_mem)
h5_loop_fit = loop_fitter.do_fit(processors=max_cores, max_mem=max_mem)
loop_success = True
except ValueError:
print('Loop fitting is applicable only to DC spectroscopy datasets!')
else:
loop_success = True
print('Taking previously computed loop fits')
h5_loop_guess = h5_loop_group[-1]['Guess']
h5_loop_fit = h5_loop_group[-1]['Fit']
# Prepare some variables for plotting loops fits and guesses
# Plot the Loop Guess and Fit Results
if loop_success:
h5_projected_loops = h5_loop_guess.parent['Projected_Loops']
h5_proj_spec_inds = px.hdf_utils.getAuxData(h5_projected_loops,
auxDataName='Spectroscopic_Indices')[-1]
h5_proj_spec_vals = px.hdf_utils.getAuxData(h5_projected_loops,
auxDataName='Spectroscopic_Values')[-1]
# reshape the vdc_vec into DC_step by Loop
sort_order = px.hdf_utils.get_sort_order(h5_proj_spec_inds)
dims = px.hdf_utils.get_dimensionality(h5_proj_spec_inds[()],
sort_order[::-1])
vdc_vec = np.reshape(h5_proj_spec_vals[h5_proj_spec_vals.attrs['DC_Offset']], dims).T
    # Also reshape the projected loops to Positions-DC_Step-Loop
proj_nd, _ = px.hdf_utils.reshape_to_Ndims(h5_projected_loops)
proj_3d = np.reshape(proj_nd, [h5_projected_loops.shape[0],
proj_nd.shape[2], -1])
use_static_plots = False
if loop_success:
if not use_static_plots:
try:
px.be_viz_utils.jupyter_visualize_beps_loops(h5_projected_loops, h5_loop_guess, h5_loop_fit)
except:
print('There was a problem with the interactive visualizer')
use_static_plots = True
if use_static_plots:
for iloop in range(h5_loop_guess.shape[1]):
fig, ax = px.be_viz_utils.plot_loop_guess_fit(vdc_vec[:, iloop], proj_3d[:, :, iloop],
h5_loop_guess[:, iloop], h5_loop_fit[:, iloop],
title='Loop {} - All Positions'.format(iloop))
# hdf.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set some basic parameters for computation
Step2: Make the data pycroscopy compatible
Step3: Inspect the contents of this h5 data file
Step4: Get some basic parameters from the H5 file
Step5: Visualize the raw data
Step6: Fit the Band Excitation (BE) spectra
Step7: Visualize the SHO results
Step8: Fit loops to a function
Step9: Prepare datasets for visualization
Step10: Visualize Loop fits
Step11: Save and close
|
5,063
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Data generation obtained from http://cs231n.github.io/neural-networks-case-study/
def generate_data(N, K):
D = 2 # Dimensionality
X = np.zeros((N * K, D)) # Data matrix (each row = single example)
y = np.zeros(N * K, dtype='uint8') # Class labels
    for j in range(K):
ix = range(N * j, N * (j + 1))
r = np.linspace(0.0, 1, N) # radius
t = np.linspace(j * 8, (j + 1) * 8, N) + np.random.randn(N) * 0.2 # theta
X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
y[ix] = j
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral, edgecolor='black') # Visualize
plt.xlim([-1,1])
plt.ylim([-1,1])
return X, y
# Example:
generate_data(300, 3);
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils import to_categorical
from keras import regularizers
from keras import optimizers
reg = 0.002
step_size = 0.01
data_per_class = 300 # Number of points per class
num_classes = 4 # Number of classes
X, y = generate_data(data_per_class, num_classes)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral) # Visualize
y_cat = to_categorical(y, num_classes)
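# Quick check (sketch): to_categorical one-hot encodes the integer labels, so
# argmax along each row recovers the original class ids.
assert y_cat.shape == (data_per_class * num_classes, num_classes)
assert np.array_equal(np.argmax(y_cat, axis=1), y)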
X_train, X_test, y_train, y_test = train_test_split(X, y_cat, test_size=0.33)
model = Sequential()
model.add(Dense(units=20, input_dim=2, kernel_regularizer=regularizers.l2(reg)))
model.add(Activation('relu'))
model.add(Dense(units=10, input_dim=2, kernel_regularizer=regularizers.l2(reg)))
model.add(Activation('relu'))
model.add(Dense(units=num_classes, kernel_regularizer=regularizers.l2(reg)))
model.add(Activation('softmax'))
opt = optimizers.Adam(lr=step_size)
model.compile(optimizer=opt,
loss='categorical_crossentropy',
metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=5000, batch_size=X_train.shape[0], verbose=0)
for i in range(3001):
    model.train_on_batch(X_train, y_train)
    if i % 500 == 0:
        print("Step %4d. Loss=%.3f, train accuracy=%.5f"
              % tuple([i] + model.test_on_batch(X_train, y_train)))
# Plot the resulting classifier on the test data.
h = 0.02
x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.argmax(model.predict(np.c_[xx.ravel(), yy.ravel()]), axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral, edgecolor='black')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max());
from IPython.display import SVG
# https://github.com/Theano/Theano/issues/1801#issuecomment-267989843
# sudo pip install pydot
# sudo apt-get install graphviz
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training
Step2: Visualization
|
5,064
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
data_set_size = 15
low_mu, low_sigma = 50, 4.3
low_data_set = low_mu + low_sigma * np.random.randn(data_set_size)
high_mu, high_sigma = 57, 5.2
high_data_set = high_mu + high_sigma * np.random.randn(data_set_size)
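# Sanity check (illustrative): mu + sigma * randn(n) samples N(mu, sigma**2),
# so even with 15 points the sample means should land near the targets.
print('low mean  %.1f (target %.1f)' % (low_data_set.mean(), low_mu))
print('high mean %.1f (target %.1f)' % (high_data_set.mean(), high_mu))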
days = list(range(1, data_set_size + 1))
plt.plot(days, low_data_set)
plt.show()
plt.plot(days, low_data_set,
days, high_data_set)
plt.show()
plt.plot(days, low_data_set,
days, low_data_set, "vm",
days, high_data_set,
days, high_data_set, "^k")
plt.show()
plt.plot(
days, high_data_set, "^k")
plt.show()
plt.plot(days, low_data_set,
days, low_data_set, "vm",
days, high_data_set,
days, high_data_set, "^k")
plt.xlabel('Day')
plt.ylabel('Temperature: degrees Farenheit')
plt.title('Randomized temperature data')
plt.show()
plt.plot(days, low_data_set,
days, high_data_set
)
plt.xlabel('Day')
plt.ylabel('Temperature: degrees Farenheit')
plt.title('Randomized temperature data')
plt.show()
plt.plot(
days, high_data_set, "^k")
plt.xlabel('Day')
plt.ylabel('Temperature: degrees Farenheit')
plt.title('Randomized temperature data')
plt.show()
t1 = np.arange(0.0, 2.0, 0.1)
t2 = np.arange(0.0, 2.0, 0.01)
# note that plot returns a list of lines. The "l1, = plot" usage
# extracts the first element of the list into l1 using tuple
# unpacking. So l1 is a Line2D instance, not a sequence of lines
l1, = plt.plot(t2, np.exp(-t2))
l2, l3 = plt.plot(t2, np.sin(2 * np.pi * t2), '--go', t1, np.log(1 + t1), '.')
l4, = plt.plot(t2, np.exp(-t2) * np.sin(2 * np.pi * t2), 'rs-.')
plt.legend((l2, l4), ('oscillatory', 'damped'), loc='upper right', shadow=True)
plt.xlabel('time')
plt.ylabel('volts')
plt.title('Damped oscillation')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2>random low and high temperature data</h2>
Step2: Next example from
|
5,065
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import sys
from scipy.signal import medfilt
# Add a new path with needed .py files.
sys.path.insert(0, r'C:\Users\Dowa\Desktop\Hiwi\kt-2015-DSPHandsOn\MedianFilter\Python')
import functions
import gitInformation
% matplotlib inline
gitInformation.printInformation()
samples = 16*128
waveNumber = 16
time = np.linspace(0, 2, samples)
data = np.sin(np.pi*time*waveNumber)
from pylab import *
def parameters():
params = {
'axes.labelsize': 14,
        'font.size': 8,
'legend.fontsize': 15,
'xtick.labelsize': 12,
'ytick.labelsize': 12,
'text.usetex': False,
'figure.figsize': [10, 5]
}
rcParams.update(params)
ax = plt.subplot()
xticks = np.arange(0, len(data)+1, samples/waveNumber)
ax.set_xticks(xticks)
x_label = [r"${%s\pi}$" % (2*w) for w in range(0, len(xticks))]
ax.set_xticklabels(x_label)
plt.figure(figsize=(10,7))
parameters()
plt.xlabel("Window length")
plt.ylabel("RMS")
for y in range (0, 150):
if y % 2 == 1:
functions.ErrorPlotWindow(16, y*3, 16*128)
plt.savefig('Basic behavior.jpg',dpi=600, format = "jpg")
samples = 16*128
waveNumber = 16
time = np.linspace(0, 2, samples)
data = np.sin(np.pi*time*waveNumber)
data = data[128*5+1: -128*5-1]
data_filtered = medfilt(data,111)
plt.figure(1)
parameters()
plt.plot(data, color = "cornflowerblue")
axes = plt.gca()
axes.set_ylim([-1.5,1.5])
plt.savefig('1.jpg', dpi = 600, format = 'jpeg')
plt.figure(2)
parameters()
axes = plt.gca()
axes.set_ylim([-1.5,1.5])
plt.plot(data_filtered, color = 'g')
plt.savefig('2.jpg', dpi = 600, format = 'jpeg')
plt.figure(3)
parameters()
axes = plt.gca()
axes.set_ylim([-1.5,1.5])
plt.plot(data, color = "cornflowerblue")
plt.plot(data_filtered, color = 'g')
plt.plot(data-data_filtered, color = 'r', lw = 1.3)
plt.savefig('3.jpg', dpi = 600, format = 'jpeg')
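# Residual RMS (illustrative): quantifies how much of the sine survives the
# length-111 median filter in the trimmed window above.
print('residual RMS = %.4f' % np.sqrt(np.mean((data - data_filtered) ** 2)))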
parameters()
plt.figure(1)
functions.medianSinPlot(16, 101, 16*128, 128*5+1, -128*5-1)
plt.savefig('Behavior with window length 101.jpg', dpi=600)
parameters()
plt.figure(2)
functions.medianSinPlot(16, 155, 16*128, 128*5+1, -128*5-1)
plt.savefig('Behavior with window length 155.jpg', dpi=600)
parameters()
plt.figure(3)
functions.medianSinPlot(16, 301, 16*128, 128*5+1, -128*5-1)
plt.savefig('Behavior with window length 301.jpg', dpi=600)
parameters()
plt.figure(4)
functions.medianSinPlot(16, 129, 16*129, 128*5+1, -128*5-1)
plt.savefig('Behavior with window length 129.jpg', dpi=600)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see, the resolution gets higher with a higher window length until the window legth is multiple of the sample rate.
|
5,066
|
<ASSISTANT_TASK:>
Python Code:
!wget http://mlr.cs.umass.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data
!wget http://mlr.cs.umass.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.names
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
!head -40 auto-mpg.data
def missingIsNan(s):
if s == b'?': # single character read as b'?' != '?'
return np.nan
else:
return float(s)
print(missingIsNan('12.32'))
print(missingIsNan(b'?'))
def missingIsNan(s):
return np.nan if s == b'?' else float(s)
print(missingIsNan('12.32'))
print(missingIsNan(b'?'))
d = {1: 'a', 2: 'yo', 3: 42, 4: missingIsNan}
d
d[1]
d[4]
d[4](b'?')
data = np.loadtxt('auto-mpg.data', usecols=range(8), converters={3: missingIsNan})
data.shape
data[:3,:]
%precision 3
data[:3,:]
np.isnan(data)
np.sum(np.isnan(data))
np.sum(np.isnan(data), axis=0)
nans = np.isnan(data)
nans.shape
nans.any(axis=0).shape
nans.any(axis=1).shape
goodRowsMask = nans.any(axis=1)
goodRowsMask
dataNew = data[goodRowsMask,:]
dataNew.shape
dataNew
goodRowsMask = np.logical_not(goodRowsMask)
dataNew = data[goodRowsMask,:]
dataNew.shape
dataNew[:3,:]
np.sum(np.isnan(dataNew),axis=0)
names = ['mpg','cylinders','displacement','horsepower','weight',
'acceleration','year','origin']
plt.figure(figsize=(10,10))
nrow,ncol = dataNew.shape
for c in range(ncol):
plt.subplot(3,3, c+1)
plt.plot(dataNew[:,c])
plt.ylabel(names[c])
names
T = dataNew[:, 0:1]  # dataNew[:,0] yields a one-dimensional array; dataNew[:,0:1] keeps the result two-dimensional.
X = dataNew[:, 1:]
X.shape, T.shape
Xnames = names[1:]
Tname = names[0]
Xnames,Tname
plt.figure(figsize=(10,10))
for c in range(X.shape[1]):
plt.subplot(3,3, c+1)
plt.plot(X[:,c], T, 'o', alpha=0.5)
plt.ylabel(Tname)
plt.xlabel(Xnames[c])
X1 = np.hstack((np.ones((X.shape[0],1)), X))
X.shape, X1.shape
X1[:3,:]
Xnames.insert(0, 'bias')
Xnames
nrows = X1.shape[0]
nTrain = int(round(nrows*0.8))
nTest = nrows - nTrain
nTrain,nTest,nTrain+nTest
rows = np.arange(nrows)
np.random.shuffle(rows)
rows
trainIndices = rows[:nTrain]
testIndices = rows[nTrain:]
trainIndices,testIndices
np.intersect1d(trainIndices, testIndices)
Xtrain = X1[trainIndices,:]
Ttrain = T[trainIndices,:]
Xtest = X1[testIndices,:]
Ttest = T[testIndices,:]
Xtrain.shape,Ttrain.shape, Xtest.shape,Ttest.shape
w = np.linalg.lstsq(np.dot(Xtrain.T,Xtrain), np.dot(Xtrain.T, Ttrain))
w = w[0] # to only keep the weights, and discard other information returned by lstsq
w,len(w)
for wi,name in zip(w.flat,Xnames):
print('{:8.3f} {:s}'.format(wi,name))
!locate linalg.py
X1[:4,:]
predict = np.dot(X1[:4,:],w)
predict
np.hstack(( predict, Ttrain[:4,:]))
print('{:^5} {:^5}'.format('P','T'))
for (p,t) in zip(predict,Ttrain[0:4,:]):
# print(p,t)
print('{:5.2f} {:5.2f}'.format(p[0],t[0]))
predict = np.dot(Xtest, w)
predict.shape, Ttest.shape
plt.plot(predict,Ttest,'o')
plt.xlabel('Predicted MPG')
plt.ylabel('Actual MPG')
# add a 45 degree line
a = max(min(predict),min(Ttest))
b = min(max(predict),max(Ttest))
plt.plot([a,b],[a,b], 'r', linewidth=3,alpha=0.7);
np.sqrt( np.mean( (np.dot(Xtest,w) - Ttest)**2))
for wi,name in zip(w.flat,Xnames):
print('{:8.3f} {:s}'.format(wi,name))
for n,mn,mx in zip(Xnames,np.min(X1,axis=0),np.max(X1,axis=0)):
print('{:>20} {:8.2f} {:8.2f}'.format(n,mn,mx))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Xs.shape
Xs.mean(0)
Xs.std(0)
means = np.mean(X,axis=0)
stds = np.std(X,axis=0)
Xs = (X - means) / stds
Xs1 = np.hstack((np.ones((Xs.shape[0],1)), Xs))
w = np.linalg.lstsq( np.dot(Xs1.T,Xs1), np.dot(Xs1.T, T) )[0]
w
def makeStandardize(X):
means = X.mean(axis=0)
stds = X.std(axis=0)
def standardize(origX):
return (origX - means) / stds
def unStandardize(stdX):
return stds * stdX + means
return (standardize, unStandardize)
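# Round-trip check (illustrative): unStandardize should invert standardize on
# the data the closures were built from (assumes no zero-variance columns).
_std, _unstd = makeStandardize(X)
assert np.allclose(_unstd(_std(X)), X)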
Xtrain = X[trainIndices,:]
Ttrain = T[trainIndices,:]
Xtest = X[testIndices,:]
Ttest = T[testIndices,:]
(standardize, unStandardize) = makeStandardize(Xtrain)
XtrainS = standardize(Xtrain)
XtestS = standardize(Xtest)
np.mean(XtrainS,axis=0), np.std(XtrainS,axis=0), np.mean(XtestS,axis=0), np.std(XtestS,axis=0)
XtrainS1 = np.hstack((np.ones((XtrainS.shape[0],1)), XtrainS))
XtestS1 = np.hstack((np.ones((XtestS.shape[0],1)), XtestS))
w = np.linalg.lstsq( np.dot(XtrainS1.T,XtrainS1), np.dot(XtrainS1.T, Ttrain))[0] # see this [0]?
for wi,name in zip(w.flat,Xnames):
print('{:8.3f} {:s}'.format(wi,name))
np.abs(w)
np.argsort(np.abs(w.flat))
np.argsort(np.abs(w.flat))[::-1]
sortedOrder = np.argsort(np.abs(w.flat))[::-1]
Xnames = np.array(Xnames)
for wi,name in zip(w.flat[sortedOrder],Xnames[sortedOrder]):
print('{:8.3f} {:s}'.format(wi,name))
X = dataNew[:,[1,2,4,5,6,7]]
T = dataNew[:,[0,3]]
Tnames = [names[0], names[3]]
X.shape,Xnames,T.shape,Tnames
Xtrain = X[trainIndices,:]
Ttrain = T[trainIndices,:]
Xtest = X[testIndices,:]
Ttest = T[testIndices,:]
Xtrain.shape, Ttrain.shape, Xtest.shape, Ttest.shape
standardize,_ = makeStandardize(Xtrain)
XtrainS = standardize(Xtrain)
XtestS = standardize(Xtest)
XtrainS1 = np.hstack((np.ones((XtrainS.shape[0],1)), XtrainS))
Xnames = np.array(['bias']+names)[[0,2,3,5,6,7,8]]
Xnames
XtrainS1.shape,Ttrain.shape
w = np.linalg.lstsq( np.dot(XtrainS1.T, XtrainS1), np.dot(XtrainS1.T, Ttrain))[0]
w
Xnames = np.array(Xnames)
for targeti in range(2):
print('\nTarget {}\n'.format(Tnames[targeti]))
thisw = w[:,targeti]
sortedOrder = np.argsort(np.abs(thisw))[::-1]
for wi,name in zip(thisw[sortedOrder],Xnames[sortedOrder]):
print('{:8.3f} {:s}'.format(wi,name))
XtestS1 = np.hstack((np.ones((XtestS.shape[0],1)), XtestS))
prediction = np.dot(XtestS1,w)
prediction.shape
plt.figure(figsize=(10,10))
for p in range(2):
plt.subplot(2,1,p+1)
plt.plot(prediction[:,p],Ttest[:,p],'o')
plt.xlabel("Predicted " + Tnames[p])
plt.ylabel("Actual " + Tnames[p])
a = max(min(prediction[:,p]),min(Ttest[:,p]))
b = min(max(prediction[:,p]),max(Ttest[:,p]))
plt.plot([a,b],[a,b],'r',linewidth=3)
rmseTrain = np.sqrt(np.mean((np.dot(XtrainS1,w) - Ttrain)**2,axis=0))
rmseTrain
rmseTest = np.sqrt(np.mean((np.dot(XtestS1,w) - Ttest)**2,axis=0))
rmseTest
print('Training RMSE: MPG {:4.2f} HP {:4.2f}'.format(*rmseTrain))  # the * unpacks the two RMSE values into separate format arguments
print(' Testing RMSE: MPG {:4.2f} HP {:4.2f}'.format(*rmseTest))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, take a look at auto-mpg.names. There you will learn that there are 398 samples, each with 8 numerical attributes and one string attribute. Their names are
Step2: Take a look at a few lines of the data file to figure out how to read it in.
Step3: Instead of relying on the pandas package to read this data, let's try using the simpler numpy.loadtxt function.
Step4: Hey, why not get fancy and write a one-liner?
Step5: The converters argument to np.loadtxt accepts a python dictionary, which is python's associative array structure.
Step6: Let's also restrict np.loadtxt to reading just the first 8 columns to avoid dealing with the string in the 9th column.
Step7: We can change the precision that ipython uses to display floating point values.
Step8: As we have done before, we must find the missing values. Let's just remove the samples with missing values.
Step9: What does this result tell us?
Step10: That's the one we want, a boolean value for each row, or sample.
Step11: Wait a minute! This gives us only 6 samples. We wanted all samples but these 6, right?
Step12: So, let's change all False values to True and True values to False.
Step13: Remember, the next step after reading data into python is to visualize
Step14: What is interesting to you in these graphs?
Step15: Now, let's see if a linear model makes some sense by plotting the target values versus each of the input variables.
Step16: What do you think? Are there any linear relationships between the individual input variables and the target variable? Do they make sense, given your knowledge of automobiles?
Step17: And, let's add a name to Xnames.
Step18: We could try to fit a linear model to all of the data and check to see
Step19: Check that the training and testing sets are disjoint.
Step20: We want to solve for $\wv$ in the equation $X^T X \wv = X^T T$.
Step21: How can you figure out what np.linalg.lstsq is doing? Try finding the source code!
Step22: In my version I see documentation for lstsq that states
Step23: How do these predictions compare with the actual mpg values? We can either make a two column matrix, or use a for loop to print them.
Step24: Let's try all of the test data and plot the results.
Step25: Not too shabby! But, how about a numerical measure of accuracy?
Step26: This means we are about 2.8 mpg off in our predictions, on average.
Step27: Perhaps year and origin are the most significant. Does this make sense?
Step28: The weight for weight is the smallest magnitude, but the range of its values are the largest.
Step29: To do this correctly when partitioning data into training and testing sets, we must always calculate means and standard deviations using only the training set, and use the same means and standard deviations when standardizing the testing set. Remember, you must not use any information about the testing set when building a model. If you do, your test error will be lower than it will be when you truly see new data.
Step30: Another way is to construct functions for standardizing that include the calculated means and standard deviations as local variables, by using function closures.
Step31: Let's start with X again, and tack on the column of 1's after dividing data into training and testing partitions.
Step32: Notice that the means and standard deviations for the testing set are not as close to 0 and 1 as they are for the training set. Why?
Step33: Now what do you observe about the relative magnitudes? If you had a ton of input variables, it would be easier to see if we sorted them by their magnitudes.
Step34: Multiple Target Components
Step35: Now try predicting both mpg and horsepower.
Step36: How well did we do in terms of RMSE?
|
5,067
|
<ASSISTANT_TASK:>
Python Code:
from footballdataorg.fd import FD
import json
fd = FD()
pl = fd.get_competition(league_code='PL')
print(json.dumps(pl, indent=2))
teams = fd.get_teams(competition=pl)
teams = fd.search_teams('madrid')
print(json.dumps(teams, indent=2))
manchester_united = fd.get_team('66')
print(json.dumps(manchester_united, indent=2))
fixtures = fd.get_fixtures(competition=pl, team=manchester_united, timeFrame='n14')['fixtures']
print(json.dumps(fixtures, indent=2))
head2head = fd.get_fixture('159293', head2head=5)
print(json.dumps(head2head, indent=2))
players = fd.get_players(manchester_united)
print(json.dumps(players, indent=2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create FD object
Step2: Get Premier League competition object
Step3: Get the teams of the competition
Step4: Search teams by name
Step5: Get a single team
Step6: Get fixtures in the upcoming two weeks
Step7: Get head to head information about an upcoming fixture
Step8: Get the players of a team
|
5,068
|
<ASSISTANT_TASK:>
Python Code:
import random
import numpy as np
from skynet.utils.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
from skynet.feature.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = '../skynet/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
from skynet.feature.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
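# Sanity check (sketch; assumes no zero-variance feature columns): training
# features should now be approximately zero-mean and unit-variance.
assert np.allclose(X_train_feats.mean(axis=0), 0, atol=1e-3)
assert np.allclose(X_train_feats.std(axis=0), 1, atol=1e-3)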
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
# Use the validation set to tune the learning rate and regularization strength
from skynet.linear.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for learning_rate in learning_rates:
for reg in regularization_strengths:
classifier = LinearSVM()
classifier.train(X_train_feats, y_train,
learning_rate=learning_rate, reg=reg,
num_iters=2000, batch_size=200, verbose=False)
y_train_predict = classifier.predict(X_train_feats)
y_val_predict = classifier.predict(X_val_feats)
train_accuracy = np.mean(y_train==y_train_predict)
val_accuracy = np.mean(y_val==y_val_predict)
results[(learning_rate, reg,)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = classifier
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('train accuracy: %f, val accuracy: %f, lr %e reg %e ' % (
train_accuracy, val_accuracy, lr, reg))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, min(idxs.shape[0], examples_per_class), replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
print(X_train_feats.shape)
from skynet.neural_network.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# grid search
# TODO: try random search for hyperparameters
from itertools import product
best_val_acc = 0
best_net = None
net_results = {}
regs = [1e-3, 3e-3, 5e-3, 1e-2, 1e-1][1:4]
learning_rates = [1e-2, 5e-2, 1e-1, 5e-1, 1][-2:]
num_iters_list = [2000][:]
batch_sizes = [50, 100, 200, 400]
for (
reg,
lr,
num_iters,
batch_size,
) in product(regs,
learning_rates,
num_iters_list,
batch_sizes):
    net = TwoLayerNet(input_dim, hidden_dim, num_classes)  # fresh network for each configuration, so best_net keeps its own weights
    stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                      learning_rate=lr, reg=reg,
                      num_iters=num_iters, batch_size=batch_size, verbose=False)
    net_results[lr, reg] = (stats['train_acc_history'][-1], stats['val_acc_history'][-1], net)  # key order (lr, reg) matches the scatter axes below
if stats['val_acc_history'][-1] > best_val_acc:
best_stats = stats
best_net = net
best_val_acc = stats['val_acc_history'][-1]
best_hyperparameters = (hidden_dim, reg, lr, num_iters)
print(('Model val_acc: %f, hidden_size: %d, num_iters: %d, lr: %f, reg: %f'
%(stats['val_acc_history'][-1], hidden_dim, num_iters, lr, reg)))
print(('Best model val_acc: %f, hidden_size: %d, reg: %f, lr: %f, num_iters: %d'
% ((best_val_acc,) + best_hyperparameters)))
pass
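# A random-search sketch for the TODO above (assumption: sampling learning
# rate and regularization log-uniformly usually beats a fixed grid):
# for _ in range(20):
#     lr = 10 ** np.random.uniform(-2, 0)
#     reg = 10 ** np.random.uniform(-3, -1)
#     net = TwoLayerNet(input_dim, hidden_dim, num_classes)
#     stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
#                       learning_rate=lr, reg=reg, num_iters=2000,
#                       batch_size=200, verbose=False)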
################################################################################
# END OF YOUR CODE #
################################################################################
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in net_results]
y_scatter = [math.log10(x[1]) for x in net_results]
# plot training accuracy
marker_size = 100
colors = [net_results[x][0] for x in net_results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [net_results[x][1] for x in net_results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(best_stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(best_stats['train_acc_history'], label='train')
plt.plot(best_stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Extract Features
Step3: Train SVM on features
Step4: Inline question 1
|
5,069
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
import matplotlib as mpl
# read data in pandas frame
dataframe = pd.read_csv('datasets/house_dataset2.csv', encoding='utf-8')
# check data by printing first few rows
dataframe.head()
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
fig.set_size_inches(12.5, 7.5)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=dataframe['size'], ys=dataframe['bedrooms'], zs=dataframe['price'])
ax.set_ylabel('bedrooms'); ax.set_xlabel('size'); ax.set_zlabel('price')
# ax.view_init(10, -45)
plt.show()
dataframe.describe()
#Quick visualize data
plt.grid(True)
plt.xlim([-1,5000])
dummy = plt.hist(dataframe["size"],label = 'Size')
dummy = plt.hist(dataframe["bedrooms"],label = 'Bedrooms')
plt.title('Clearly we need feature normalization.')
plt.xlabel('Column Value')
plt.ylabel('Counts')
dummy = plt.legend()
mean_size = dataframe["size"].mean()
std_size = dataframe["size"].std()
mean_bed = dataframe["bedrooms"].mean()
std_bed = dataframe["bedrooms"].std()
dataframe["size"] = (dataframe["size"] - mean_size)/std_size
dataframe["bedrooms"] = (dataframe["bedrooms"] - mean_bed)/std_bed
dataframe.describe()
# reassign X
# assign X
X = np.array(dataframe[['size','bedrooms']])
X = np.insert(X,0,1,axis=1)
#Quick visualize data
plt.grid(True)
plt.xlim([-5,5])
dummy = plt.hist(dataframe["size"],label = 'Size')
dummy = plt.hist(dataframe["bedrooms"],label = 'Bedrooms')
plt.title('Feature scaled and normalization.')
plt.xlabel('Column Value')
plt.ylabel('Counts')
dummy = plt.legend()
# assign X and y
X = np.array(dataframe[['size','bedrooms']])
y = np.array(dataframe[['price']])
m = y.size # number of training examples
# insert all 1's column for theta_0
X = np.insert(X,0,1,axis=1)
# initialize theta
# initial_theta = np.zeros((X.shape[1],1))
initial_theta = np.random.rand(X.shape[1],1)
initial_theta
X.shape
initial_theta.shape
iterations = 1500
alpha = 0.1
def h(X, theta): #Linear hypothesis function
hx = np.dot(X,theta)
return hx
def computeCost(theta,X,y): #Cost function
    """
    theta is an n-dimensional vector of theta parameters
    X is a matrix with n columns and m rows
    y is a matrix with m rows and 1 column
    """
#note to self: *.shape is (rows, columns)
return float((1./(2*m)) * np.dot((h(X,theta)-y).T,(h(X,theta)-y)))
#Test that running computeCost with 0's as theta returns 65591548106.45744:
initial_theta = np.zeros((X.shape[1],1)) # theta is a vector with n rows and 1 column (if X has n features)
print (computeCost(initial_theta,X,y))
#Actual gradient descent minimizing routine
def gradientDescent(X, theta_start = np.zeros(2)):
    """
    theta_start is an n-dimensional vector of the initial theta guess
    X is a matrix with n columns and m rows
    """
theta = theta_start
j_history = [] #Used to plot cost as function of iteration
theta_history = [] #Used to visualize the minimization path later on
for meaninglessvariable in range(iterations):
tmptheta = theta
# append for plotting
j_history.append(computeCost(theta,X,y))
theta_history.append(list(theta[:,0]))
#Simultaneously updating theta values
for j in range(len(tmptheta)):
tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(X,theta) - y)*np.array(X[:,j]).reshape(m,1))
theta = tmptheta
return theta, theta_history, j_history
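# Note: the inner parameter loop above has an equivalent vectorized update
# (a sketch; this is the gradient of the squared-error cost for a linear hypothesis):
#     theta = theta - (alpha / m) * np.dot(X.T, h(X, theta) - y)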
#Actually run gradient descent to get the best-fit theta values
theta, thetahistory, j_history = gradientDescent(X,initial_theta)
theta
plt.plot(j_history)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
dataframe.head()
x_test = np.array([1,0.130010,-0.22367])
print("$%0.2f" % float(h(x_test,theta)))
hx = h(X, theta)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
fig.set_size_inches(12.5, 7.5)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=dataframe['size'], ys=dataframe['bedrooms'], zs=dataframe['price'])
ax.set_ylabel('bedrooms'); ax.set_xlabel('size'); ax.set_zlabel('price')
# ax.plot(xs=np.array(X[:,0],dtype=object).reshape(-1,1), ys=np.array(X[:,1],dtype=object).reshape(-1,1), zs=hx, color='green')
ax.plot(X[:,0], X[:,1], np.array(hx[:,0]), label='fitted line', color='green')
# ax.view_init(20, -165)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Feature Scaling and Mean Normalization
Step2: Initialize Hyper Parameters
Step3: Model/Hypothesis Function
Step5: Cost Function
Step7: Gradient Descent Function
Step8: Run Gradient Descent
Step9: Plot Convergence
Step10: Predict output using trained model
|
5,070
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from preamble import *
plt.rcParams['savefig.dpi'] = 100 # This controls the size of your figures
# Comment out and restart notebook if you only want the last output of each cell.
InteractiveShell.ast_node_interactivity = "all"
# This is a temporary read-only OpenML key. Replace with your own key later.
oml.config.apikey = '11e82c8d91c5abece86f424369c71590'
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
X, y = make_blobs(centers=2, n_samples=1000, random_state=0)
svm_lin = SVC(kernel='linear')
svm_pol = SVC(kernel='poly')
svm_rbf = SVC(kernel='rbf')
lin_score = cross_val_score(svm_lin, X, y, cv=10, scoring='roc_auc', n_jobs=-1)
pol_score = cross_val_score(svm_pol, X, y, cv=10, scoring='roc_auc', n_jobs=-1)
rbf_score = cross_val_score(svm_rbf, X, y, cv=10, scoring='roc_auc', n_jobs=-1)
print("Mean 10-CV score of linear kernel: " + str(lin_score.mean()))
print("Mean 10-CV score of polynomial kernel: " + str(pol_score.mean()))
print("Mean 10-CV score of radial basis function kernel: " + str(rbf_score.mean()))
# Using a slightly adapted version of the plot_svm_kernels function from mglearn.
def plot_svm_kernels(X, y):
# figure number
fignum = 1
# fit the model
for kernel in ('linear', 'poly', 'rbf'):
clf = SVC(kernel=kernel, gamma=2)
clf.fit(X, y)
# plot the line, the points, and the nearest vectors to the plane
plt.figure(fignum, figsize=(4, 3))
plt.suptitle('kernel = %s' % kernel)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=85, edgecolors='k', c='w', zorder=10)
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10, cmap=plt.cm.bwr)
# for i, coef in enumerate(clf.dual_coef_[0]):
# plt.annotate("%0.2f" % (coef), (clf.support_vectors_[i, 0]+0.15,clf.support_vectors_[i, 1]), fontsize=8, zorder=11)
plt.axis('tight')
x_min = np.min(X, axis=0)[0] - 1
x_max = np.max(X, axis=0)[0] + 1
y_min = np.min(X, axis=0)[1] - 1
y_max = np.max(X, axis=0)[1] + 1
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.figure(fignum, figsize=(4, 3))
#plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.bwr, alpha=0.1)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
fignum = fignum + 1
plt.show()
plot_svm_kernels(X, y)
# First performing a 10CV grid search to obtain the ROC_AUC values.
from sklearn.model_selection import GridSearchCV
param_grid = {
"C" : [2e-15, 2, 2e15],
"gamma" : [2e-15, 2, 2e15]
}
svm_clf = SVC(kernel="rbf")
grid_search = GridSearchCV(svm_clf, param_grid, n_jobs=-1, cv=3, scoring="roc_auc")
_ = grid_search.fit(X, y)
results = pd.DataFrame(grid_search.cv_results_)
scores = np.array(results.mean_test_score)
# Using a slightly adapted version of the plot_svm_kernels function from mglearn.
def plot_svm_rbf_kernel(X, y, clf, C, gamma):
# figure number
fignum = 1
# plot the line, the points, and the nearest vectors to the plane
plt.figure(fignum, figsize=(4, 3))
plt.suptitle('C = ' + str(C) + ', gamma = ' + str(gamma))
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=85, edgecolors='k', c='w', zorder=10)
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10, cmap=plt.cm.bwr)
# for i, coef in enumerate(clf.dual_coef_[0]):
# plt.annotate("%0.2f" % (coef), (clf.support_vectors_[i, 0]+0.15,clf.support_vectors_[i, 1]), fontsize=8, zorder=11)
plt.axis('tight')
x_min = np.min(X, axis=0)[0] - 1
x_max = np.max(X, axis=0)[0] + 1
y_min = np.min(X, axis=0)[1] - 1
y_max = np.max(X, axis=0)[1] + 1
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.figure(fignum, figsize=(4, 3))
#plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.bwr, alpha=0.1)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
# Question: should we score manually on a 75-25 split, or should we use grid_search?
# If so, when using grid_search, how do we get all the estimators back for the support vectors?
idx = 0
for C in [2e-15, 2, 2e15]:
for gamma in [2e-15, 2, 2e15]:
svm_clf = SVC(kernel='rbf', C=C, gamma=gamma)
svm_clf.fit(X, y)
print("C value of " + str(C) + ", gamma value of " + str(gamma))
print("Mean test score (10-CV AUC): " + str(results.mean_test_score[idx]))
print("Number of support vectors: " + str(np.size(svm_clf.support_vectors_, axis=0)))
plot_svm_rbf_kernel(X, y, svm_clf, C, gamma)
idx += 1
from sklearn.model_selection import GridSearchCV
param_grid = {
"C" : [2*10**(i) for i in range(-12, 13, 1)],
"gamma" : [2*10**(i) for i in range(-12, 13, 1)]
}
svm_clf = SVC(kernel="rbf")
grid_search = GridSearchCV(svm_clf, param_grid, n_jobs=3, cv=10, scoring="roc_auc")
_ = grid_search.fit(X, y)
# For each of the 9 combinations, create the same RBF plot as before, report the number of support vectors, and the AUC performance.
results = pd.DataFrame(grid_search.cv_results_)
scores = np.array(results.mean_test_score).reshape(25, 25)  # 25 C values x 25 gamma values
plt.figure(figsize=[8, 8])
# Plots the mean cross-validation scores
mglearn.tools.heatmap(scores, xlabel='Gamma', xticklabels=param_grid["gamma"],
ylabel='C', yticklabels=param_grid["C"], cmap="viridis");
robot_data = oml.datasets.get_dataset(1497) # Download Robot data
# Get the predictors X and the labels y
X, y = robot_data.get_data(target=robot_data.default_target_attribute);
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
# Use stratify=y so class proportions are preserved in the train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
param_grid = [
{
'kernel':['linear'],
'C' : [2*10**(i) for i in range(-12, 13, 1)],
},
{
'kernel':['poly'],
'degree': [i for i in range(2, 11)],
'gamma' : [2*10**(i) for i in range(-12, 13, 1)],
'C' : [2*10**(i) for i in range(-12, 13, 1)],
},
{
'kernel':['rbf', 'sigmoid'],
'C' : [2*10**(i) for i in range(-12, 13, 1)],
'gamma' : [2*10**(i) for i in range(-12, 13, 1)]
}
]
random_search = RandomizedSearchCV(SVC(), param_distributions=param_grid, n_iter=30, n_jobs=-1, cv=3)
random_search.fit(X_train, y_train)
print("Best Score (3CV accuracy): " + str(best_score_))
print("Best parameters: ")
print(best_params_)
param_grid
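# A sketch of scoring the refit best estimator on the held-out test set
# (RandomizedSearchCV refits the best configuration on the full training data by default):
# print("Test accuracy: " + str(random_search.score(X_test, y_test)))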
ram_prices = pd.read_csv('data/ram_price.csv')
plt.semilogy(ram_prices.date, ram_prices.price)
plt.xlabel("Year")
plt.ylabel("Price in $/Mbyte");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kernel selection (4 points (1+2+1))
Step2: Results
Step3: Robots and SVMs (4 points (2+1+1))
Step4: A benchmark study (3 points (2+1))
|
5,071
|
<ASSISTANT_TASK:>
Python Code:
import pandas ## data file loading
import numpy
import sklearn.covariance ## for covariance matrix calculation
import matplotlib.pyplot
import matplotlib
import pylab
import scipy.stats ## for calculating the CDF of normal distribution
import igraph ## for network visualization and finding components
import math
gene_matrix_for_network_df = pandas.read_csv("shared/bladder_cancer_genes_tcga.txt", sep="\t")
gene_matrix_for_network = gene_matrix_for_network_df.as_matrix()
print(gene_matrix_for_network.shape)
genes_median_expression = numpy.median(gene_matrix_for_network, axis=1)
gene_matrix_np = numpy.array(gene_matrix_for_network)
genes_keep = numpy.where(genes_median_expression > 12)
matrix_filt = gene_matrix_np[genes_keep, ][0]
matrix_filt.shape
N = matrix_filt.shape[0]
matrix_filt.shape
matrix_cor = numpy.corrcoef(matrix_filt)
matrix_cov = sklearn.covariance.empirical_covariance(numpy.matrix.transpose(matrix_filt))
matrix_cov_inv = numpy.linalg.inv(matrix_cov)
matrix_pcor = -matrix_cov_inv
for i in range(N):
for j in range(N):
matrix_pcor[i,j] /= numpy.sqrt(matrix_cov_inv[i,i]*matrix_cov_inv[j,j])
print(matrix_pcor.shape)
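# Note: the double loop above has an equivalent vectorized form (a sketch):
#     d = numpy.sqrt(numpy.diag(matrix_cov_inv))
#     matrix_pcor = -matrix_cov_inv / numpy.outer(d, d)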
cor_values = matrix_cor[numpy.where(numpy.tri(*matrix_cor.shape, k=-1))]
pcor_values = matrix_pcor[numpy.where(numpy.tri(*matrix_pcor.shape, k=-1))]
print(len(cor_values))
print(len(pcor_values))
matplotlib.pyplot.hist(cor_values, normed=1, alpha=0.5, label="cor")
matplotlib.pyplot.hist(pcor_values, normed=1, alpha=0.5, label="pcor")
matplotlib.pyplot.legend(loc="upper left")
matplotlib.pyplot.xlabel("R")
matplotlib.pyplot.ylabel("frequency")
matplotlib.pyplot.show()
z_scores = 0.5*numpy.log((1+pcor_values)/
(1-pcor_values))
M = gene_matrix_for_network_df.shape[1]
P_values = 2*scipy.stats.norm.cdf(-numpy.abs(z_scores)*(math.sqrt((M-N-5))))
len(numpy.where(P_values < 0.01)[0])
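# Note: these p-values are unadjusted for the 13,366 tests performed; a
# conservative Bonferroni correction would be (a sketch):
# P_adj = numpy.minimum(P_values * len(P_values), 1.0)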
inds_tri = numpy.where(numpy.tri(*matrix_pcor.shape, k=-1))
inds_sig = numpy.where(P_values < 0.01)
graph_edge_list = list(zip(inds_tri[1][inds_sig].tolist(),
inds_tri[0][inds_sig].tolist()))
final_network = igraph.Graph.TupleList(graph_edge_list, directed=False)
final_network.summary()
degree_dist = final_network.degree_distribution()
xs, ys = zip(*[(left, count) for left, _, count in degree_dist.bins()])
matplotlib.pyplot.loglog(xs, ys, marker="o")
pylab.xlabel("k")
pylab.ylabel("N(k)")
pylab.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the tab-delimited text file of gene expression measurements (rows correspond to genes, columns correspond to bladder tumor samples). (use Pandas, pandas.read_csv, and as_matrix). As always, sanity check that the file that you loaded has the expected dimensions (4,473 x 414) using shape.
Step2: Convert your data frame to a numpy matrix, using the pandas.DataFrame.as_matrix method.
Step3: As always, sanity check that the file that you loaded has the expected dimensions (4,473 x 414) using shape.
Step4: Compute the median expression level for each row of your matrix
Step5: Filter the matrix to include only rows for which the gene's median expression is > 12; matrix should now be 164 x 414; this will enable us to easily compute the partial correlation matrix using the inverse of the covariance matrix. Print the size of the filtered matrix, as a sanity check.
Step6: Print the shape of your filtered matrix, as a sanity check. It should be 164x414.
Step7: Compute the 164 x 164 matrix of gene-gene Pearson correlation coefficients, using numpy.corrcoef (this function treats each row as a random variable, so you don't have to do any transposing of the matrix, unlike the situation in R).
Step8: Compute the covariance matrix using sklearn.covariance.empirical_covariance (from the sklearn.covariance package, . Make sure you take the transpose of the matrix_filt matrix before passing it to the empirical_covariance function!
Step9: Use numpy.linalg.inv to get the inverse matrix.
Step10: Use a double for loop to "scale" the negative of the precision matrix, which will give you the partial correlation. Print the dimensions of the matrix you get back, as a sanity check.
Step11: Get the correlation coefficients and the partial correlation coefficients of the lower triangle of the matrix (not including the diagonal), as two vectors cor_values and pcor_values; your resulting vectors should each have length 13,366. You will want to use numpy.tri and numpy.where (see class session 19 exercise)
Step12: plot the histograms of the correlation coefficients (upper triangle only) and the partial correlation coefficients, on the same plot using alpha blending (refer to class session 17 exercise)
Step13: Fisher transform the partial correlation values, using numpy.log
Step14: Compute a p-value for each gene pair (upper triangle only), using the fact that sqrt(M-N-5) times the Fisher Z score should be approximately univariate normal (with zero mean) under the null hypothesis that a given gene pair's measurements (conditioned on the measurements for all the other 162 genes) are independent. You will want to use scipy.stats.norm.cdf, numpy.abs, and math.sqrt function (see class session 19 exercise).
Step15: How many gene pairs have a P value less than 0.01? (use which and length)
Step16: What are the sizes of the components in the undirected graph whose edges have P < 0.01 in the statistical test that you did? You will need to use zip, tolist, list, and igraph.Graph.TupleList (see class session 19 exercise)
Step17: Plot the graph degree distribution on log-log scale
|
5,072
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set(palette = sns.dark_palette("skyblue", 8, reverse=True))
!wget 'https://docs.google.com/spreadsheets/d/1N_Hc-xKr7DQc8bZAvLROGWr5Cr-A6MfGnH91fFW3ZwA/export?format=xlsx&id=1N_Hc-xKr7DQc8bZAvLROGWr5Cr-A6MfGnH91fFW3ZwA' -O wallstreet.xlsx
df = pd.read_excel('wallstreet.xlsx', skiprows=1, index_col = 'Country')
df.describe()
df['Year'].hist(bins=22) # 22 bins so I get every year as a distinct sum
gini_df = df[(df.Year==2009)|(df.index=='United States')]['Gini'] # Only 2009, and choose only the gini columns (and the index, country)
gini_df
current_ax = gini_df.plot(kind='barh', color=sns.color_palette()[0])
current_ax.set_title('Gini index (%) in 2009')
current_ax.vlines(39.99, 0, len(gini_df), color=sns.color_palette()[2])
ax = df['Gini'].plot(kind='kde')
gini_df.plot(kind='kde', ax=ax) #overlay 2009 vs all years/countries
current_ax = gini_df.plot(kind='barh', color=sns.color_palette()[0])
current_ax.patches[list(gini_df.index).index("United States")].set_facecolor('#cc5555')
current_ax.set_title('Gini index (%) in 2009')
current_ax.vlines(39.99, 0, len(gini_df), color=sns.color_palette()[2])
current_ax.annotate('Average for\n1989-2010',
(40, 2),
xytext=(20, 10),
textcoords='offset points',
arrowprops=dict(arrowstyle='-|>'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: First issue with the data, right away we can see the wide range of dates. Let's look at the date distribution. We probably would want to use only 2010 if it represents enough data. We will make a note of <b>39.99</b> as the average Gini coefficient over all those years.
Step3: We will get just the data for 2009. Not only is it recent, it also provides plenty of data points to represent at once. This will also address the other issue with the data
Step4: This is already way easier to compare than the original infographic. Perhaps not as snazzy, but at least it gives us a start in trying to understand the data. But it is just that, a start. One angle would be to investigate how much above average is the Gini for the US. But I would also want to have the measures, including the average from the same year. A quick comparison of the two distributions (2009 vs all the data) shows how sampling on 2009 skews toward a higher Gini.
Step5: Comparing with GDP, population, gender inequality, even subjective "satisfaction indexes" and the like would be much more interesting. To tell a real story, we need to show some correlation, and provide some narrative and/or visualization to explain Gini. At the end of the day, perhaps the real story is that Gini is not a great universal indicator.
|
5,073
|
<ASSISTANT_TASK:>
Python Code:
import os
class Params:
pass
# Set to run on GCP
Params.GCP_PROJECT_ID = 'ksalama-gcp-playground'
Params.REGION = 'europe-west1'
Params.BUCKET = 'ksalama-gcs-cloudml'
Params.PLATFORM = 'local' # local | GCP
Params.DATA_DIR = 'data/news' if Params.PLATFORM == 'local' else 'gs://{}/data/news'.format(Params.BUCKET)
Params.TRANSFORMED_DATA_DIR = os.path.join(Params.DATA_DIR, 'transformed')
Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX = os.path.join(Params.TRANSFORMED_DATA_DIR, 'train')
Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX = os.path.join(Params.TRANSFORMED_DATA_DIR, 'eval')
Params.TEMP_DIR = os.path.join(Params.DATA_DIR, 'tmp')
Params.MODELS_DIR = 'models/news' if Params.PLATFORM == 'local' else 'gs://{}/models/news'.format(Params.BUCKET)
Params.TRANSFORM_ARTEFACTS_DIR = os.path.join(Params.MODELS_DIR,'transform')
Params.TRAIN = True
Params.RESUME_TRAINING = False
Params.EAGER = False
if Params.EAGER:
tf.enable_eager_execution()
import tensorflow as tf
from tensorflow import data
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.tf_metadata import metadata_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.saved import saved_transform_io
print tf.__version__
RAW_HEADER = 'key,title,source'.split(',')
RAW_DEFAULTS = [['NA'],['NA'],['NA']]
TARGET_FEATURE_NAME = 'source'
TARGET_LABELS = ['github', 'nytimes', 'techcrunch']
TEXT_FEATURE_NAME = 'title'
KEY_COLUMN = 'key'
VOCAB_SIZE = 20000
TRAIN_SIZE = 73124
EVAL_SIZE = 23079
DELIMITERS = '.,!?() '
raw_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema({
KEY_COLUMN: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
TEXT_FEATURE_NAME: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
TARGET_FEATURE_NAME: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
}))
transformed_metadata = metadata_io.read_metadata(
os.path.join(Params.TRANSFORM_ARTEFACTS_DIR,"transformed_metadata"))
raw_feature_spec = raw_metadata.schema.as_feature_spec()
transformed_feature_spec = transformed_metadata.schema.as_feature_spec()
print transformed_feature_spec
def parse_tf_example(tf_example):
parsed_features = tf.parse_single_example(serialized=tf_example, features=transformed_feature_spec)
target = parsed_features.pop(TARGET_FEATURE_NAME)
return parsed_features, target
def generate_tfrecords_input_fn(files_pattern,
mode=tf.estimator.ModeKeys.EVAL,
num_epochs=1,
batch_size=200):
def _input_fn():
file_names = data.Dataset.list_files(files_pattern)
if Params.EAGER:
print file_names
dataset = data.TFRecordDataset(file_names )
dataset = dataset.apply(
tf.contrib.data.shuffle_and_repeat(count=num_epochs,
buffer_size=batch_size*2)
)
dataset = dataset.apply(
tf.contrib.data.map_and_batch(parse_tf_example,
batch_size=batch_size,
num_parallel_batches=2)
)
        dataset = dataset.prefetch(batch_size)
if Params.EAGER:
return dataset
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
return _input_fn
import tensorflow_hub as hub
print hub.__version__
def create_feature_columns(hparams):
    title_embedding_column = hub.text_embedding_column(
        "title", "https://tfhub.dev/google/universal-sentence-encoder/1",
        trainable=hparams.trainable_embedding)
    feature_columns = [title_embedding_column]
print "feature columns: \n {}".format(feature_columns)
print ""
return feature_columns
def create_estimator(hparams, run_config):
feature_columns = create_feature_columns(hparams)
optimizer = tf.train.AdamOptimizer(learning_rate=hparams.learning_rate)
estimator = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
n_classes =len(TARGET_LABELS),
label_vocabulary=TARGET_LABELS,
hidden_units=hparams.hidden_units,
optimizer=optimizer,
config=run_config
)
return estimator
NUM_EPOCHS = 10
BATCH_SIZE = 1000
TOTAL_STEPS = (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS
EVAL_EVERY_SEC = 60
hparams = tf.contrib.training.HParams(
num_epochs = NUM_EPOCHS,
batch_size = BATCH_SIZE,
trainable_embedding = False,
learning_rate = 0.01,
hidden_units=[128, 64],
max_steps = TOTAL_STEPS,
)
MODEL_NAME = 'dnn_estimator_hub'
model_dir = os.path.join(Params.MODELS_DIR, MODEL_NAME)
run_config = tf.estimator.RunConfig(
tf_random_seed=19830610,
log_step_count_steps=1000,
save_checkpoints_secs=EVAL_EVERY_SEC,
keep_checkpoint_max=1,
model_dir=model_dir
)
print(hparams)
print("")
print("Model Directory:", run_config.model_dir)
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", TOTAL_STEPS)
def generate_serving_input_fn():
def _serving_fn():
receiver_tensor = {
'title': tf.placeholder(dtype=tf.string, shape=[None])
}
return tf.estimator.export.ServingInputReceiver(
receiver_tensor, receiver_tensor)
return _serving_fn
train_spec = tf.estimator.TrainSpec(
input_fn = generate_tfrecords_input_fn(
Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX+"*",
mode = tf.estimator.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
),
max_steps=hparams.max_steps,
hooks=None
)
eval_spec = tf.estimator.EvalSpec(
input_fn = generate_tfrecords_input_fn(
Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX+"*",
mode=tf.estimator.ModeKeys.EVAL,
num_epochs=1,
batch_size=hparams.batch_size
),
exporters=[tf.estimator.LatestExporter(
name="estimate", # the name of the folder in which the model will be exported to under export
serving_input_receiver_fn=generate_serving_input_fn(),
exports_to_keep=1,
as_text=False)],
steps=None,
throttle_secs=EVAL_EVERY_SEC
)
from datetime import datetime
import shutil
if Params.TRAIN:
if not Params.RESUME_TRAINING:
print("Removing previous training artefacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
estimator = create_estimator(hparams, run_config)
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
else:
print "Training was skipped!"
tf.logging.set_verbosity(tf.logging.ERROR)
estimator = create_estimator(hparams, run_config)
train_metrics = estimator.evaluate(
input_fn = generate_tfrecords_input_fn(
files_pattern= Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX+"*",
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TRAIN_SIZE),
steps=1
)
print("############################################################################################")
print("# Train Measures: {}".format(train_metrics))
print("############################################################################################")
eval_metrics = estimator.evaluate(
input_fn=generate_tfrecords_input_fn(
files_pattern= Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX+"*",
mode= tf.estimator.ModeKeys.EVAL,
batch_size= EVAL_SIZE),
steps=1
)
print("")
print("############################################################################################")
print("# Eval Measures: {}".format(eval_metrics))
print("############################################################################################")
import os
export_dir = model_dir +"/export/estimate/"
saved_model_dir = os.path.join(export_dir, os.listdir(export_dir)[0])
print(saved_model_dir)
print("")
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="predict"
)
output = predictor_fn(
{
'title':[
'Microsoft and Google are joining forces for a new AI framework',
'A new version of Python is mind blowing',
'EU is investigating new data privacy policies'
]
}
)
print(output)
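# A sketch of reading the result back out (assumption: these are the
# 'classes' and 'probabilities' keys that a DNNClassifier 'predict'
# signature typically returns):
# for cls, probs in zip(output['classes'], output['probabilities']):
#     print(cls, probs.max())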
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing libraries
Step2: 1. Define Metadata
Step3: 2. Define Input Function
Step4: 3. Create feature columns
Step5: 4. Create a model using a premade DNNClassifer
Step6: 5. Setup Experiment
Step7: 5.2 Serving function
Step8: 5.3 TrainSpec & EvalSpec
Step9: 6. Run experiment
Step10: 7. Evaluate the model
Step11: 8. Use Saved Model for Predictions
|
5,074
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import scipy.stats
%pylab inline
csv = pd.read_csv("single_family_home_values.csv", parse_dates=["last_sale_date"])
print csv.shape
csv.head()
#scale the data
from sklearn import preprocessing
from scipy import stats
# fill missing values (0's) w/ the median for the column
cols = ["square_footage", "lot_size", "num_rooms", "num_bedrooms", "num_baths", "year_built",
"last_sale_amount", "estimated_value"]
for col in cols:
csv.loc[csv[col]==0, col] = csv[col].median()
csv.head(4)
model_cols = ["square_footage", "lot_size", "num_rooms", "num_bedrooms", "num_baths", "year_built",
"last_sale_amount", "last_sale_date", "estimated_value"]
housing=csv[model_cols]
housing.head()
housing["day"] = (housing.last_sale_date - pd.datetime(1900,1,1))/np.timedelta64(1,'D')
housing.drop("last_sale_date", 1, inplace=True)
housing.head(5)
housing.iloc[3393]
# outlier_check_cols = ["square_footage", "lot_size", "num_rooms", "num_bedrooms", "num_baths", "year_built",
# "last_sale_amount", "estimated_value", "day"]
# for col in outlier_check_cols:
# housing[col] = np.abs(stats.zscore(housing[col])) < 3
housing_sans_outliers = housing[(np.abs(stats.zscore(housing)) < 3).all(axis=1)]
print housing.shape
print housing[(np.abs(stats.zscore(housing)) < 3).all(axis=1)].shape
housing_sans_outliers.head(5)
#check correlations
from pandas.tools.plotting import scatter_matrix
_=scatter_matrix(housing_sans_outliers, alpha=0.2, figsize=(17, 17), diagonal='kde')
#log transform skewed numeric features:
numeric_feats = housing_sans_outliers.dtypes[housing_sans_outliers.dtypes != "object"].index
skewed_feats = housing_sans_outliers[numeric_feats].apply(lambda x: stats.skew(x.dropna())) #compute skewness
skewed_feats = skewed_feats[skewed_feats > 0.75]
skewed_feats = skewed_feats.index
print skewed_feats
housing_sans_outliers[skewed_feats] = np.log1p(housing_sans_outliers[skewed_feats])
housing_sans_outliers.head()
# scaled_sq_footage = preprocessing.scale(csv[["square_footage"]])
_ = housing_sans_outliers[["square_footage"]].boxplot(return_type='dict')
_ = housing_sans_outliers[["square_footage"]].hist(bins=50)
final = housing_sans_outliers[["square_footage", "lot_size", "num_rooms", "num_bedrooms", "num_baths", "year_built",
"last_sale_amount", "day", "estimated_value"]]
# through experimentation scaling doesn't help much
# scaled = preprocessing.scale(final.values)
scaled = final.values
features = scaled[:,0:-1]
labels = scaled[:,-1]
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn import ensemble
from sklearn.metrics import r2_score
#test train split
X_train, X_test, Y_train, Y_test = train_test_split(features, labels, test_size=0.25, random_state=4)
lassocv = linear_model.LassoCV(alphas=[0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 50, 75])
lassocv.fit(features, labels)
lassocv.alpha_
lasso = linear_model.Lasso(alpha=lassocv.alpha_)
lasso.fit(X_train, Y_train)
lasso.score(X_test, Y_test)
training_cols = ["square_footage", "lot_size", "num_rooms", "num_bedrooms", "num_baths", "year_built",
"last_sale_amount", "day"]
coef = pd.Series(lasso.coef_, index = training_cols)
print("Lasso picked " + str(sum(coef != 0)) + " variables and eliminated the other " + str(sum(coef == 0)) + " variables")
imp_coef = coef.sort_values()
imp_coef.plot(kind = "barh")
plt.title("Coefficients in the Lasso Model")
regcv = linear_model.RidgeCV(alphas=[0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 50, 75])
regcv.fit(features, labels)
regcv.alpha_
reg = linear_model.Ridge(alpha=regcv.alpha_)
reg.fit(X_train, Y_train)
ridge_preds = reg.predict(X_test)
reg.score(X_test, Y_test)
from sklearn.model_selection import GridSearchCV
parameters = {'n_estimators':[100,300,500],'max_depth':[2, 5, 10], 'min_samples_split': [2,4,8],
              'learning_rate': [0.1, .5]}
gbr = ensemble.GradientBoostingRegressor()
gscv = GridSearchCV(gbr, parameters)
gscv.fit(features, labels)
gscv.best_estimator_
params = {'n_estimators': 100, 'max_depth': 5, 'min_samples_split': 2,
'learning_rate': 0.1, 'loss': 'ls'}
gbr = gscv.best_estimator_
gbr.fit(X_train, Y_train)
gbr.score(X_test, Y_test)
gb_preds = gbr.predict(X_test)
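# A sketch of inspecting which features the boosted trees rely on most
# (feature_importances_ is exposed by GradientBoostingRegressor):
# pd.Series(gbr.feature_importances_, index=training_cols).sort_values().plot(kind='barh')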
from mlxtend.regressor import StackingRegressor
meta = linear_model.LinearRegression()
blender = StackingRegressor(regressors=[reg, gbr], meta_regressor=meta)
_=blender.fit(X_train, Y_train)
y_pred = blender.predict(X_test)
blender.score(X_test, Y_test)
from sklearn.model_selection import cross_val_score
scores = cross_val_score(blender, features, labels, cv=10)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
mean_diff = np.mean(np.abs(np.exp(Y_test)-np.exp(y_pred)))
p_mean_diff = np.mean(mean_diff/np.exp(Y_test))
print "Mean Error:\t %.0f/%0.3f%%" % (mean_diff, p_mean_diff*100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Preprocessing
Step2: Location/Address Information
Step3: Transform Dates
Step4: Number of rooms
Step5: Outliers
Step6: Skewedness
Step7: Scaling
Step8: Training
Step9: Lasso
Step10: As you can see from the above output, Lasso scores an R^2 of .576. Let's check out what features it found important and then compare w/ Ridge
Step11: Some of these make sense
Step12: Nice! We have a marked improvement over Lasso!
Step13: Blended Model
Step14: Evaluation
Step15: As you can see the R^2 is .78 which is very respectful.
|
5,075
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Load the TensorBoard notebook extension
%load_ext tensorboard
import tensorflow as tf
import datetime, os
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
def train_model():
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
train_model()
%tensorboard --logdir logs
%tensorboard --logdir logs
train_model()
from tensorboard import notebook
notebook.list() # View open TensorBoard instances
# Control TensorBoard display. If no port is provided,
# the most recently launched TensorBoard is used
notebook.display(port=6006, height=1000)
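# Alternatively (a sketch), TensorBoard can also be launched programmatically
# rather than via the %tensorboard magic:
# notebook.start("--logdir logs")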
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using TensorBoard in Notebooks
Step2: Import TensorFlow, datetime, and os
Step3: TensorBoard in notebooks
Step4: Create a very simple model
Step5: Train the model using Keras and the TensorBoard callback
Step6: Start TensorBoard within the notebook using magics
Step7: <!-- <img class="tfo-display-only-on-site" src="https
Step8: <!-- <img class="tfo-display-only-on-site" src="https
Step9: You can use the tensorboard.notebook APIs for a bit more control
|
5,076
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat
from shogun import features, MulticlassLabels, Math
# load the dataset
dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = dataset['data']
# the usps dataset has the digits labeled from 1 to 10
# we'll subtract 1 to make them in the 0-9 range instead
Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1
# 1000 examples for training
Xtrain = features(Xall[:,0:1000])
Ytrain = MulticlassLabels(Yall[0:1000])
# 4000 examples for validation
Xval = features(Xall[:,1001:5001])
Yval = MulticlassLabels(Yall[1001:5001])
# the rest for testing
Xtest = features(Xall[:,5002:-1])
Ytest = MulticlassLabels(Yall[5002:-1])
# initialize the random number generator with a fixed seed, for repeatability
Math.init_random(10)
from shogun import NeuralNetwork, NeuralInputLayer, NeuralLogisticLayer, NeuralSoftmaxLayer
from shogun import DynamicObjectArray
# setup the layers
layers = DynamicObjectArray()
layers.append_element(NeuralInputLayer(256)) # input layer, 256 neurons
layers.append_element(NeuralLogisticLayer(256)) # first hidden layer, 256 neurons
layers.append_element(NeuralLogisticLayer(128)) # second hidden layer, 128 neurons
layers.append_element(NeuralSoftmaxLayer(10)) # output layer, 10 neurons
# create the networks
net_no_reg = NeuralNetwork(layers)
net_no_reg.quick_connect()
net_no_reg.initialize_neural_network()
net_l2 = NeuralNetwork(layers)
net_l2.quick_connect()
net_l2.initialize_neural_network()
net_l1 = NeuralNetwork(layers)
net_l1.quick_connect()
net_l1.initialize_neural_network()
net_dropout = NeuralNetwork(layers)
net_dropout.quick_connect()
net_dropout.initialize_neural_network()
# import networkx, install if necessary
try:
import networkx as nx
except ImportError:
import pip
pip.main(['install', '--user', 'networkx'])
import networkx as nx
G = nx.DiGraph()
pos = {}
for i in range(8):
pos['X'+str(i)] = (i,0) # 8 neurons in the input layer
pos['H'+str(i)] = (i,1) # 8 neurons in the first hidden layer
for j in range(8): G.add_edge('X'+str(j),'H'+str(i))
if i<4:
pos['U'+str(i)] = (i+2,2) # 4 neurons in the second hidden layer
for j in range(8): G.add_edge('H'+str(j),'U'+str(i))
if i<6:
pos['Y'+str(i)] = (i+1,3) # 6 neurons in the output layer
for j in range(4): G.add_edge('U'+str(j),'Y'+str(i))
nx.draw(G, pos, node_color='y', node_size=750)
from shogun import MulticlassAccuracy
def compute_accuracy(net, X, Y):
predictions = net.apply_multiclass(X)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(predictions, Y)
return accuracy*100
net_no_reg.put('epsilon', 1e-6)
net_no_reg.put('max_num_epochs', 600)
# uncomment this line to allow the training progress to be printed on the console
#from shogun import MSG_INFO; net_no_reg.io.put('loglevel', MSG_INFO)
net_no_reg.put('labels', Ytrain)
net_no_reg.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("Without regularization, accuracy on the validation set =", compute_accuracy(net_no_reg, Xval, Yval), "%")
# turn on L2 regularization
net_l2.put('l2_coefficient', 3e-4)
net_l2.put('epsilon', 1e-6)
net_l2.put('max_num_epochs', 600)
net_l2.put('labels', Ytrain)
net_l2.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With L2 regularization, accuracy on the validation set =", compute_accuracy(net_l2, Xval, Yval), "%")
# turn on L1 regularization
net_l1.put('l1_coefficient', 3e-5)
net_l1.put('epsilon', 1e-6)
net_l1.put('max_num_epochs', 600)
net_l1.put('labels', Ytrain)
net_l1.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With L1 regularization, accuracy on the validation set =", compute_accuracy(net_l1, Xval, Yval), "%")
from shogun import NNOM_GRADIENT_DESCENT
# set the dropout probabilty for neurons in the hidden layers
net_dropout.put('dropout_hidden', 0.5)
# set the dropout probabilty for the inputs
net_dropout.put('dropout_input', 0.2)
# limit the maximum incoming weight vector length for neurons
net_dropout.put('max_norm', 15)
net_dropout.put('epsilon', 1e-6)
net_dropout.put('max_num_epochs', 600)
# use gradient descent for optimization
net_dropout.put('optimization_method', NNOM_GRADIENT_DESCENT)
net_dropout.put('gd_learning_rate', 0.5)
net_dropout.put('gd_mini_batch_size', 100)
net_dropout.put('labels', Ytrain)
net_dropout.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With dropout, accuracy on the validation set =", compute_accuracy(net_dropout, Xval, Yval), "%")
from shogun import NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR
# prepere the layers
layers_conv = DynamicObjectArray()
# input layer, a 16x16 image single channel image
layers_conv.append_element(NeuralInputLayer(16,16,1))
# the first convolutional layer: 10 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps
layers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 10, 2, 2, 2, 2))
# the second convolutional layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps
layers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2))
# output layer
layers_conv.append_element(NeuralSoftmaxLayer(10))
# create and initialize the network
net_conv = NeuralNetwork(layers_conv)
net_conv.quick_connect()
net_conv.initialize_neural_network()
# 50% dropout in the input layer
net_conv.put('dropout_input', 0.5)
# max-norm regularization
net_conv.put('max_norm', 1.0)
# set gradient descent parameters
net_conv.put('optimization_method', NNOM_GRADIENT_DESCENT)
net_conv.put('gd_learning_rate', 0.01)
net_conv.put('gd_mini_batch_size', 100)
net_conv.put('epsilon', 0.0)
net_conv.put('max_num_epochs', 100)
# start training
net_conv.put('labels', Ytrain)
net_conv.train(Xtrain)
# compute accuracy on the validation set
print("With a convolutional network, accuracy on the validation set =", compute_accuracy(net_conv, Xval, Yval), "%")
print("Accuracy on the test set using the convolutional network =", compute_accuracy(net_conv, Xtest, Ytest), "%")
predictions = net_conv.apply_multiclass(Xtest)
_=figure(figsize=(10,12))
# plot some images, with the predicted label as the title of each image
# this code is borrowed from the KNN notebook by Chiyuan Zhang and Sören Sonnenburg
for i in range(100):
ax=subplot(10,10,i+1)
title(int(predictions[i]))
ax.imshow(Xtest[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax.set_xticks([])
ax.set_yticks([])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating the network
Step2: We can also visualize what the network would look like. To do that we'll draw a smaller network using networkx. The network we'll draw will have 8 inputs (labeled X), 8 neurons in the first hidden layer (labeled H), 4 neurons in the second hidden layer (labeled U), and 6 neurons in the output layer (labeled Y). Each neuron will be connected to all neurons in the layer that precedes it.
Step3: Training
Step4: Training without regularization
Step5: Training with L2 regularization
Step6: Training with L1 regularization
Step7: Training with dropout
Step8: Convolutional Neural Networks
Step9: Now we can train the network. Like in the previous section, we'll use gradient descent with dropout and max-norm regularization
Step10: Evaluation
Step11: We can also look at some of the images and the network's response to each of them
|
5,077
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
%matplotlib inline
def ErrorPlot( waveNumber,windowLength ):
    data = np.fromfunction( lambda x: np.sin((x - windowLength // 2)/128 * 2 * np.pi * waveNumber), (128 + windowLength // 2, ) ) # creating an array with a sine wave (// keeps the shape an integer)
    datafiltered = medianFilter(data, windowLength) # calculate the filtered wave with the medianFilter function
    data = data[ windowLength // 2 : len(data) - windowLength ] # slice the data array to synchronize both waves
datafiltered = datafiltered[ : len(data) ] # cut the filtered wave to the same length as the data wave
error = ErrorRate(data,datafiltered,windowLength,waveNumber) #calculate the error with the ErrorRate function
    plt.axis([0, windowLength + 1, 0, 1.2]) # scale the x-axis with the window length
plt.xlabel('Window Length', fontsize = 20)
plt.ylabel('Error rate', fontsize = 20)
plt.scatter(*error)
def ErrorRate(data,datafiltered,windowLength, waveNumber):
errorrate = data-datafiltered #calculate the difference between the sine wave and the filtered wave
error = [] #creating a list and save the error rate with the matching wavenumber in it
errorrate = np.abs(errorrate)
error.append([windowLength ,np.mean(errorrate)])# fill the list with the errorrate and corresponding window length
error = zip(*error) #zip the error ([1,1],[2,2],[3,3]) = ([1,2,3],[1,2,3])
return error
def medianFilter( data, windowLength ):
if (windowLength < len(data)and data.ndim == 1):
tempret = np.zeros(len(data)-windowLength+1) # creating an array where the filtered values will be saved in
        if windowLength % 2 == 0: # check whether the window length is odd or even, because an even window length yields an unsynchronized filtered wave
for c in range(0, len(tempret)):
tempret[c] = np.median( data[ c : c + windowLength +1 ] ) # write the values of the median filtered wave in tempret, calculate the median of all values in the window
return tempret
else:
for c in range(0, len(tempret)):
tempret[c] = np.median( data[ c : c + windowLength ] )
return tempret
else:
raise ValueError("windowLength must be smaller than len(data) and data must be a 1D array")
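# Quick sanity check (a sketch): a 3-point median filter should suppress a lone spike
# spike = np.array([0., 0., 9., 0., 0.])
# assert medianFilter(spike, 3)[1] == 0.0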
fig = plt.figure()
for y in range(2, 40, 2): # start at 2: a window length of 0 is degenerate for the filter
ErrorPlot(5,y)
pp = PdfPages( 'Error of the median filtered sine waves with different window lengths.pdf')
pp.savefig(fig)
pp.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I plot the error of the filtered wave. I use the absolute values of the difference between the sine wave and the median-filtered wave and calculate the mean to get the error. I use a wave number of 5 and different window lengths
Step2: Plotting Figures
|
5,078
|
<ASSISTANT_TASK:>
Python Code:
import o2sclpy
import matplotlib.pyplot as plot
import numpy
import sys
plots=True
if 'pytest' in sys.modules:
plots=False
link=o2sclpy.linker()
link.link_o2scl()
fc=o2sclpy.find_constants(link)
ħc=fc.find_unique('ħc','MeV*fm')
print('ħc = %7.6e\n' % (ħc))
cu=link.o2scl_settings.get_convert_units()
sfho=o2sclpy.eos_had_rmf(link)
o2sclpy.rmf_load(link,sfho,'SFHo')
sfhx=o2sclpy.eos_had_rmf(link)
o2sclpy.rmf_load(link,sfhx,'SFHx')
sfho.saturation()
print(('SFHo: n0=%7.6e 1/fm^3, E/A=%7.6e MeV, K=%7.6e MeV, '+
'M*/M=%7.6e, S=%7.6e MeV, L=%7.6e MeV\n') %
(sfho.n0,sfho.eoa*ħc,sfho.comp*ħc,sfho.msom,sfho.esym*ħc,
sfho.fesym_slope(sfho.n0)*ħc))
sfhx.saturation()
print(('SFHx: n0=%7.6e 1/fm^3, E/A=%7.6e MeV, K=%7.6e MeV, '+
'M*/M=%7.6e, S=%7.6e MeV, L=%7.6e MeV\n') %
(sfhx.n0,sfhx.eoa*ħc,sfhx.comp*ħc,sfhx.msom,sfhx.esym*ħc,
sfhx.fesym_slope(sfhx.n0)*ħc))
ug_nb=o2sclpy.uniform_grid_end_width.init(link,0.01,0.32,0.01)
ug_T=o2sclpy.uniform_grid_end_width.init(link,0.1,10.0,0.1)
t3d=o2sclpy.table3d(link)
t3d.set_xy_grid('nB',ug_nb,'T',ug_T)
t3d.new_slice('EoA')
n=sfho.get_def_neutron()
p=sfho.get_def_proton()
th=sfho.get_def_thermo()
print('Neutron mass is %7.6e MeV.' % (n.m*ħc))
print('Proton mass is %7.6e MeV.\n' % (p.m*ħc))
sigma=0.0
omega=0.0
rho=0.0
for i in range(0,t3d.get_nx()):
print(i+1,'of',t3d.get_nx())
# At the lowest temperature point we always need a new initial
# guess.
first_point=True
for j in range(0,t3d.get_ny()):
n.n=ug_nb[i]/2.0
p.n=ug_nb[i]/2.0
# If we're not at the lowest temperature point, use the
# previous solution to the field equations to generate
# the next solution.
if first_point==False:
sfho.set_fields(sigma,omega,rho)
sfho.calc_temp_e(n,p,ug_T[j]/ħc,th)
if first_point==True:
first_point=False
ret,sigma,omega,rho=sfho.get_fields()
# Divide the energy density by the baryon density to
# get the energy per baryon, and then subtract out the
# rest mass contribution from both the neutrons and
# the protons.
t3d.set(i,j,'EoA',th.ed/ug_nb[i]*ħc-n.m*n.n/ug_nb[i]*ħc-
p.m*p.n*ħc/ug_nb[i])
if plots:
pl=o2sclpy.plotter()
pl.colbar=True
pl.xtitle(r'$ n_B~(\mathrm{fm}^{-3}) $')
pl.ytitle(r'$ T~(\mathrm{MeV}) $')
pl.ttext(1.25,0.5,r'$ E/A~(\mathrm{MeV}) $',rotation=90)
pl.den_plot_direct(t3d,'EoA')
plot.show()
def test_fun():
assert numpy.allclose(sfho.n0,0.1582415,rtol=1.0e-4)
assert numpy.allclose(sfhx.n0,0.1600292,rtol=1.0e-4)
return
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Link the O$_2$scl library
Step2: Get the value of $\hbar c$ from an O$_2$scl find_constants object
Step3: Get a copy (a pointer to) the O$_2$scl unit conversion object
Step4: Compute nuclear saturation and output the saturation density
Step5: Baryon density grid in $1/\mathrm{fm}^3$. The O$_2$scl object
Step6: Temperature grid in MeV
Step7: Store the EOS in a table3d object
Step8: Create a new slice for the energy per baryon
Step9: Instead of creating new fermion objects, just get the default
Step10: By default, the O2scl EOS objects work in units of
Step11: The solver works much better with an initial guess, so
Step12: The EOS at finite temperature for isospin-symmetric matter, with
Step13: Now plot the results. Raw matplotlib works, but o2sclpy has
Step14: For testing purposes
|
5,079
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# hyperparameters
input_size = 4
num_classes = 3
num_epochs = 10000
learning_rate = 0.01
iris = load_iris()
X = iris.data
y = iris.target
# print(X.shape)
# print(y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=5)
# print(X_train.shape)
# print(X_test.shape)
# print(y_train.shape)
# print(y_test.shape)
# Standardize the data (zero mean, unit variance)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# print(np.mean(X_train, axis=0))
# print(np.std(X_train, axis=0))
class LogisticRegression(nn.Module):
def __init__(self, input_size, num_classes):
super(LogisticRegression, self).__init__()
self.linear = nn.Linear(input_size, num_classes)
def forward(self, x):
out = self.linear(x)
return out
model = LogisticRegression(input_size, num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
def train(X_train, y_train):
inputs = Variable(torch.from_numpy(X_train).float())
targets = Variable(torch.from_numpy(y_train).long())
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
return loss.data[0]
def valid(X_test, y_test):
inputs = Variable(torch.from_numpy(X_test).float())
targets = Variable(torch.from_numpy(y_test).long())
outputs = model(inputs)
val_loss = criterion(outputs, targets)
    # compute the accuracy
_, predicted = torch.max(outputs.data, 1)
correct = (predicted == targets.data).sum()
val_acc = correct / targets.size(0)
return val_loss.data[0], val_acc
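# Note (a sketch): the model emits raw logits and nn.CrossEntropyLoss applies
# log-softmax internally, so for class probabilities at inference time use e.g.
# torch.nn.functional.softmax(model(inputs), dim=1)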
loss_list = []
val_loss_list = []
val_acc_list = []
for epoch in range(num_epochs):
perm = np.arange(X_train.shape[0])
np.random.shuffle(perm)
X_train = X_train[perm]
y_train = y_train[perm]
loss = train(X_train, y_train)
val_loss, val_acc = valid(X_test, y_test)
if epoch % 1000 == 0:
print('epoch %d, loss: %.4f val_loss: %.4f val_acc: %.4f'
% (epoch, loss, val_loss, val_acc))
# logging
loss_list.append(loss)
val_loss_list.append(val_loss)
val_acc_list.append(val_acc)
# plot learning curve
plt.figure()
plt.plot(range(num_epochs), loss_list, 'r-', label='train_loss')
plt.plot(range(num_epochs), val_loss_list, 'b-', label='val_loss')
plt.legend()
plt.figure()
plt.plot(range(num_epochs), val_acc_list, 'g-', label='val_acc')
plt.legend()
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable
# Hyper Parameters
input_size = 784
num_classes = 10
num_epochs = 20
batch_size = 100
learning_rate = 0.001
# MNIST Dataset (Images and Labels)
train_dataset = dsets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
# Dataset Loader (Input Pipline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
print(len(train_dataset))
print(len(test_dataset))
# Fetch just one batch
image, label = iter(train_loader).next()
print(type(image), type(label))
print(image.size(), label.size())
class LogisticRegression(nn.Module):
def __init__(self, input_size, num_classes):
super(LogisticRegression, self).__init__()
self.linear = nn.Linear(input_size, num_classes)
def forward(self, x):
out = self.linear(x)
return out
model = LogisticRegression(input_size, num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
def train(train_loader):
model.train()
running_loss = 0
for batch_idx, (images, labels) in enumerate(train_loader):
images = Variable(images.view(-1, 28 * 28))
labels = Variable(labels)
optimizer.zero_grad()
outputs = model(images)
loss = criterion(outputs, labels)
running_loss += loss
loss.backward()
optimizer.step()
train_loss = running_loss / len(train_loader)
return train_loss.data[0]
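# Note: accumulating the loss tensor directly keeps each batch's autograd
# graph alive until the division; summing loss.item() (PyTorch >= 0.4) or
# loss.detach() instead avoids that memory overhead.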
def valid(test_loader):
model.eval()
running_loss = 0
correct = 0
total = 0
for batch_idx, (images, labels) in enumerate(test_loader):
images = Variable(images.view(-1, 28 * 28))
labels = Variable(labels)
outputs = model(images)
loss = criterion(outputs, labels)
running_loss += loss
_, predicted = torch.max(outputs.data, 1)
correct += (predicted == labels.data).sum()
total += labels.size(0)
val_loss = running_loss / len(test_loader)
val_acc = correct / total
return val_loss.data[0], val_acc
loss_list = []
val_loss_list = []
val_acc_list = []
for epoch in range(num_epochs):
loss = train(train_loader)
val_loss, val_acc = valid(test_loader)
print('epoch %d, loss: %.4f val_loss: %.4f val_acc: %.4f'
% (epoch, loss, val_loss, val_acc))
# logging
loss_list.append(loss)
val_loss_list.append(val_loss)
val_acc_list.append(val_acc)
import matplotlib.pyplot as plt
%matplotlib inline
# plot learning curve
plt.figure()
plt.plot(range(num_epochs), loss_list, 'r-', label='train_loss')
plt.plot(range(num_epochs), val_loss_list, 'b-', label='val_loss')
plt.legend()
plt.grid()
plt.figure()
plt.plot(range(num_epochs), val_acc_list, 'g-', label='val_acc')
plt.legend()
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Iris Dataset
Step2: Note that LogisticRegression returns raw logits and does not apply softmax
Step3: MNIST
|
5,080
|
<ASSISTANT_TASK:>
Python Code:
!pip install thinc syntok "ml_datasets>=0.2.0a0" tqdm
from syntok.tokenizer import Tokenizer
def tokenize_texts(texts):
tok = Tokenizer()
return [[token.value for token in tok.tokenize(text)] for text in texts]
import ml_datasets
import numpy
def load_data():
train_data, dev_data = ml_datasets.dbpedia(train_limit=2000, dev_limit=2000)
train_texts, train_cats = zip(*train_data)
dev_texts, dev_cats = zip(*dev_data)
unique_cats = list(numpy.unique(numpy.concatenate((train_cats, dev_cats))))
nr_class = len(unique_cats)
print(f"{len(train_data)} training / {len(dev_data)} dev\n{nr_class} classes")
train_y = numpy.zeros((len(train_cats), nr_class), dtype="f")
for i, cat in enumerate(train_cats):
train_y[i][unique_cats.index(cat)] = 1
dev_y = numpy.zeros((len(dev_cats), nr_class), dtype="f")
for i, cat in enumerate(dev_cats):
dev_y[i][unique_cats.index(cat)] = 1
train_tokenized = tokenize_texts(train_texts)
dev_tokenized = tokenize_texts(dev_texts)
# Generate simple vocab mapping, <unk> is 0
vocab = {}
count_id = 1
for text in train_tokenized:
for token in text:
if token not in vocab:
vocab[token] = count_id
count_id += 1
# Map texts using vocab
train_X = []
for text in train_tokenized:
train_X.append(numpy.array([vocab.get(t, 0) for t in text]))
dev_X = []
for text in dev_tokenized:
dev_X.append(numpy.array([vocab.get(t, 0) for t in text]))
return (train_X, train_y), (dev_X, dev_y), vocab
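# A quick illustration of the mapping above (hypothetical tokens): with
# vocab = {"the": 1, "cat": 2}, the text ["the", "dog"] maps to
# array([1, 0]), since unseen tokens fall back to the <unk> id 0.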
from typing import List
import thinc
from thinc.api import Model, chain, list2ragged, with_array, reduce_mean, Softmax
from thinc.types import Array2d
@thinc.registry.layers("EmbedPoolTextcat.v1")
def EmbedPoolTextcat(embed: Model[Array2d, Array2d]) -> Model[List[Array2d], Array2d]:
with Model.define_operators({">>": chain}):
model = with_array(embed) >> list2ragged() >> reduce_mean() >> Softmax()
model.set_ref("embed", embed)
return model
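# Shape flow of the chained model above: a list of per-text token-id
# arrays -> with_array(embed): per-token vectors -> list2ragged +
# reduce_mean: one mean vector per text -> Softmax: class probabilities.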
CONFIG = """
[hyper_params]
width = 64
[model]
@layers = "EmbedPoolTextcat.v1"
[model.embed]
@layers = "Embed.v1"
nO = ${hyper_params:width}
[optimizer]
@optimizers = "Adam.v1"
learn_rate = 0.001
[training]
batch_size = 8
n_iter = 10
"""
from thinc.api import registry, Config
C = registry.resolve(Config().from_str(CONFIG))
C
(train_X, train_y), (dev_X, dev_y), vocab = load_data()
batch_size = C["training"]["batch_size"]
optimizer = C["optimizer"]
model = C["model"]
model.get_ref("embed").set_dim("nV", len(vocab) + 1)
model.initialize(X=train_X, Y=train_y)
def evaluate_model(model, dev_X, dev_Y, batch_size):
correct = 0.0
total = 0.0
for X, Y in model.ops.multibatch(batch_size, dev_X, dev_Y):
Yh = model.predict(X)
for j in range(len(Yh)):
correct += Yh[j].argmax(axis=0) == Y[j].argmax(axis=0)
total += len(Y)
return float(correct / total)
from thinc.api import fix_random_seed
from tqdm.notebook import tqdm
fix_random_seed(0)
for n in range(C["training"]["n_iter"]):
loss = 0.0
batches = model.ops.multibatch(batch_size, train_X, train_y, shuffle=True)
for X, Y in tqdm(batches, leave=False):
Yh, backprop = model.begin_update(X)
d_loss = []
for i in range(len(Yh)):
d_loss.append(Yh[i] - Y[i])
loss += ((Yh[i] - Y[i]) ** 2).sum()
backprop(numpy.array(d_loss))
model.finish_update(optimizer)
score = evaluate_model(model, dev_X, dev_y, batch_size)
print(f"{n}\t{loss:.2f}\t{score:.3f}")
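# Note: Yh - Y is the gradient of the squared-error objective
# 0.5 * sum((Yh - Y)**2) with respect to Yh, so the loop above trains
# with a simple squared-error loss rather than cross-entropy.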
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For simple and standalone tokenization, we'll use the syntok package and the following function
Step2: Setting up the data
Step3: Defining the model and config
Step5: The config defines the top-level model using the registered EmbedPoolTextcat function, with the embed argument referencing the Embed layer.
Step6: Training setup
Step7: Once the data is loaded, we'll know the vocabulary size and can set the dimension on the embedding layer. model.get_ref("embed") returns the layer defined as the ref "embed" and the set_dim method lets you set a value for a dimension. To fill in the other missing shapes, we can call model.initialize with some input and output data.
Step8: Training the model
|
5,081
|
<ASSISTANT_TASK:>
Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
from scipy import spatial
import pylab
pylab.rcParams['figure.figsize'] = 8, 5
np.random.seed(3)
# Load data from matlab file DatosLabReg.mat
# matvar = <FILL IN>
matvar = scipy.io.loadmat('DatosLabReg.mat')
# Take main variables, Xtrain, Xtest, Ytrain, Ytest from the corresponding dictionary entries in matvar:
# <SOL>
Xtrain = matvar['Xtrain']
Xtest = matvar['Xtest']
Ytrain = matvar['Ytrain']
Ytest = matvar['Ytest']
# </SOL>
# Data normalization
# <SOL>
mean_x = np.mean(Xtrain,axis=0)
std_x = np.std(Xtrain,axis=0)
Xtrain = (Xtrain - mean_x) / std_x
Xtest = (Xtest - mean_x) / std_x
# </SOL>
sigma_0 = np.std(Ytrain)
sigma_eps = sigma_0 / np.sqrt(10)
l = 8
print('sigma_0 = {0}'.format(sigma_0))
print('sigma_eps = {0}'.format(sigma_eps))
# Compute Kernel matrices.
# You may find spatial.distance.cdist() useful to compute the euclidean distances required by Gaussian kernels.
# <SOL>
# Compute appropriate distances
dist = spatial.distance.cdist(Xtrain, Xtrain, 'euclidean')
dist_ss = spatial.distance.cdist(Xtest, Xtest, 'euclidean')
dist_s = spatial.distance.cdist(Xtest, Xtrain, 'euclidean')
# Compute Kernel matrices
K = (sigma_0**2)*np.exp(-dist/l)
K_ss = (sigma_0**2)*np.exp(-dist_ss/l)
K_s = (sigma_0**2)*np.exp(-dist_s/l)
# </SOL>
# Compute predictive mean
# m_y = <FILL IN>
m_y = K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot((Ytrain))
# Compute predictive variance
# v_y = <FILL IN>
v_y = np.diagonal(K_ss - K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot(K_s.T)) + sigma_eps**2
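# Note: the predictive variance of the observations is the diagonal of
# the latent-function covariance plus the noise variance sigma_eps**2.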
# Compute MSE
# MSE = <FILL IN>
MSE = np.mean((m_y - Ytest)**2)
# Compute NLPD
# NLPD = <FILL IN>
NLPD = 0.5 * np.mean(((Ytest - m_y)**2)/(np.matrix(v_y).T) + 0.5*np.log(2*np.pi*np.matrix(v_y).T))
print(m_y.T)
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
# <SOL>
X_1d = np.matrix(Xtrain[:,0]).T
Xt_1d = np.matrix(Xtest[:,0]).T
Xt_1d = np.sort(Xt_1d,axis=0) #We sort the vector for representational purposes
dist = spatial.distance.cdist(X_1d,X_1d,'euclidean')
dist_ss = spatial.distance.cdist(Xt_1d,Xt_1d,'euclidean')
dist_s = spatial.distance.cdist(Xt_1d,X_1d,'euclidean')
K = (sigma_0**2)*np.exp(-dist/l)
K_ss = (sigma_0**2)*np.exp(-dist_ss/l)
K_s = (sigma_0**2)*np.exp(-dist_s/l)
m_y = K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot((Ytrain))
v_f = K_ss - K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot(K_s.T)
L = np.linalg.cholesky(v_f+1e-10*np.eye(v_f.shape[0]))
for iter in range(50):
f_ast = L.dot(np.random.randn(len(Xt_1d),1)) + m_y
plt.plot(np.array(Xt_1d)[:,0],f_ast[:,0],'c:');
# Plot as well the test points
plt.plot(np.array(Xtest[:,0]),Ytest[:,0],'r.',markersize=12);
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0],'b-',linewidth=3,label='Predictive mean');
plt.legend(loc='best')
plt.xlabel('x',fontsize=18);
plt.ylabel('s',fontsize=18);
# </SOL>
# <SOL>
X_1d = np.matrix(Xtrain[:,0]).T
Xt_1d = np.matrix(Xtest[:,0]).T
idx = np.argsort(Xt_1d,axis=0) # We sort the vector for representational purposes
Xt_1d = np.sort(Xt_1d,axis=0)
idx = np.array(idx).flatten().T
Ytest = Ytest[idx]
dist = spatial.distance.cdist(X_1d,X_1d,'euclidean')
dist_ss = spatial.distance.cdist(Xt_1d,Xt_1d,'euclidean')
dist_s = spatial.distance.cdist(Xt_1d,X_1d,'euclidean')
K = (sigma_0**2)*np.exp(-dist/l)
K_ss = (sigma_0**2)*np.exp(-dist_ss/l)
K_s = (sigma_0**2)*np.exp(-dist_s/l)
m_y = K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot((Ytrain))
v_f = K_ss - K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot(K_s.T)
v_f_diag = np.diagonal(v_f)
L = np.linalg.cholesky(v_f+1e-10*np.eye(v_f.shape[0]))
for iter in range(50):
f_ast = L.dot(np.random.randn(len(Xt_1d),1)) + m_y
plt.plot(np.array(Xt_1d)[:,0],f_ast[:,0],'c:');
# Plot as well the test points
plt.plot(np.array(Xtest[:,0]),Ytest[:,0],'r.',markersize=12);
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0],'b-',linewidth=3,label='Predictive mean');
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0]+2*v_f_diag,'m--',label='Predictive mean of f $\pm$ 2std',linewidth=3);
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0]-2*v_f_diag,'m--',linewidth=3);
#Plot now the posterior mean and posterior mean \pm 2 std for s (i.e., adding the noise variance)
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0]+2*v_f_diag+2*sigma_eps,'m:',label='Predictive mean of s $\pm$ 2std',linewidth=3);
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0]-2*v_f_diag-2*sigma_eps,'m:',linewidth=3);
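# Caveat: v_f_diag holds variances, so a strict +/- 2 standard deviation
# band would use 2*np.sqrt(v_f_diag) (and np.sqrt(v_f_diag + sigma_eps**2)
# for s); the bands above use the variances directly, as in the original lab.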
plt.legend(loc='best')
plt.xlabel('x',fontsize=18);
plt.ylabel('s',fontsize=18);
# </SOL>
# <SOL>
MSE = np.mean((m_y - Ytest)**2)
v_y = np.diagonal(v_f) + sigma_eps**2
NLPD = 0.5 * np.mean(((Ytest - m_y)**2)/(np.matrix(v_y).T) + 0.5*np.log(2*np.pi*np.matrix(v_y).T))
# </SOL>
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Introduction
Step2: 2. The stocks dataset.
Step3: After running this code, you will have inside matrix Xtrain the evolution of (normalized) price for 9 airlines, whereas vector Ytrain will contain a single column with the price evolution of the tenth airline. The objective of the regression task is to estimate the price of the tenth airline from the prices of the other nine.
Step4: As we studied in a previous session, the joint distribution of the target values in the training set, ${\mathbf s}$, and the latent values corresponding to the test points, ${\mathbf f}^\ast$, is given by
Step5: You should obtain the following results
Step6: 3.2. Unidimensional regression
Step7: Plot again the previous figure, this time including in your plot the confidence interval delimited by two standard deviations of the prediction. You can observe how $95.45\%$ of observed data fall within the designated area.
Step8: Compute now the MSE and NLPD of the model. The correct results are given below
|
5,082
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np # numpy namespace
from timeit import default_timer as timer # for timing
from matplotlib import pyplot # for plotting
import math
def step_numpy(dt, prices, c0, c1, noises):
return prices * np.exp(c0 * dt + c1 * noises)
def mc_numpy(paths, dt, interest, volatility):
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
for j in range(1, paths.shape[1]): # for each time step
prices = paths[:, j - 1] # last prices
# gaussian noises for simulation
noises = np.random.normal(0., 1., prices.size)
# simulate
paths[:, j] = step_numpy(dt, prices, c0, c1, noises)
# stock parameter
StockPrice = 20.83
StrikePrice = 21.50
Volatility = 0.021
InterestRate = 0.20
Maturity = 5. / 12.
# monte-carlo parameter
NumPath = 3000000
NumStep = 100
# plotting
MAX_PATH_IN_PLOT = 50
def driver(pricer, do_plot=False):
paths = np.zeros((NumPath, NumStep + 1), order='F')
paths[:, 0] = StockPrice
DT = Maturity / NumStep
ts = timer()
pricer(paths, DT, InterestRate, Volatility)
te = timer()
elapsed = te - ts
ST = paths[:, -1]
PaidOff = np.maximum(paths[:, -1] - StrikePrice, 0)
print('Result')
fmt = '%20s: %s'
print(fmt % ('stock price', np.mean(ST)))
print(fmt % ('standard error', np.std(ST) / np.sqrt(NumPath)))
print(fmt % ('paid off', np.mean(PaidOff)))
optionprice = np.mean(PaidOff) * np.exp(-InterestRate * Maturity)
print(fmt % ('option price', optionprice))
print('Performance')
NumCompute = NumPath * NumStep
print(fmt % ('Mstep/second', '%.2f' % (NumCompute / elapsed / 1e6)))
print(fmt % ('time elapsed', '%.3fs' % (te - ts)))
if do_plot:
pathct = min(NumPath, MAX_PATH_IN_PLOT)
for i in range(pathct):
pyplot.plot(paths[i])
print('Plotting %d/%d paths' % (pathct, NumPath))
pyplot.show()
return elapsed
numpy_time = driver(mc_numpy, do_plot=True)
from numbapro import vectorize
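# Note: numbapro has since been folded into numba; on current versions the
# equivalent import would be `from numba import vectorize`.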
@vectorize(['f8(f8, f8, f8, f8, f8)'])
def step_cpuvec(last, dt, c0, c1, noise):
return last * math.exp(c0 * dt + c1 * noise)
def mc_cpuvec(paths, dt, interest, volatility):
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
for j in range(1, paths.shape[1]):
prices = paths[:, j - 1]
noises = np.random.normal(0., 1., prices.size)
paths[:, j] = step_cpuvec(prices, dt, c0, c1, noises)
cpuvec_time = driver(mc_cpuvec, do_plot=True)
@vectorize(['f8(f8, f8, f8, f8, f8)'], target='parallel')
def step_parallel(last, dt, c0, c1, noise):
return last * math.exp(c0 * dt + c1 * noise)
def mc_parallel(paths, dt, interest, volatility):
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
for j in range(1, paths.shape[1]):
prices = paths[:, j - 1]
noises = np.random.normal(0., 1., prices.size)
paths[:, j] = step_parallel(prices, dt, c0, c1, noises)
parallel_time = driver(mc_parallel, do_plot=True)
@vectorize(['f8(f8, f8, f8, f8, f8)'], target='gpu')
def step_gpuvec(last, dt, c0, c1, noise):
return last * math.exp(c0 * dt + c1 * noise)
def mc_gpuvec(paths, dt, interest, volatility):
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
for j in range(1, paths.shape[1]):
prices = paths[:, j - 1]
noises = np.random.normal(0., 1., prices.size)
paths[:, j] = step_gpuvec(prices, dt, c0, c1, noises)
gpuvec_time = driver(mc_gpuvec, do_plot=True)
from numbapro import cuda, jit
from numbapro.cudalib import curand
@jit('void(double[:], double[:], double, double, double, double[:])', target='gpu')
def step_cuda(last, paths, dt, c0, c1, normdist):
i = cuda.grid(1)
if i >= paths.shape[0]:
return
noise = normdist[i]
paths[i] = last[i] * math.exp(c0 * dt + c1 * noise)
def mc_cuda(paths, dt, interest, volatility):
n = paths.shape[0]
blksz = cuda.get_current_device().MAX_THREADS_PER_BLOCK
gridsz = int(math.ceil(float(n) / blksz))
# instantiate a CUDA stream for queueing async CUDA cmds
stream = cuda.stream()
# instantiate a cuRAND PRNG
prng = curand.PRNG(curand.PRNG.MRG32K3A, stream=stream)
# Allocate device side array
d_normdist = cuda.device_array(n, dtype=np.double, stream=stream)
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
# configure the kernel
# similar to CUDA-C: step_cuda<<<gridsz, blksz, 0, stream>>>
step_cfg = step_cuda[gridsz, blksz, stream]
# transfer the initial prices
d_last = cuda.to_device(paths[:, 0], stream=stream)
for j in range(1, paths.shape[1]):
# call cuRAND to populate d_normdist with gaussian noises
prng.normal(d_normdist, mean=0, sigma=1)
# setup memory for new prices
# device_array_like is like empty_like for GPU
d_paths = cuda.device_array_like(paths[:, j], stream=stream)
# invoke step kernel asynchronously
step_cfg(d_last, d_paths, dt, c0, c1, d_normdist)
# transfer memory back to the host
d_paths.copy_to_host(paths[:, j], stream=stream)
d_last = d_paths
# wait for all GPU work to complete
stream.synchronize()
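# Per-step pattern above: cuRAND fills the noise buffer on-device, the
# kernel launches asynchronously on the stream, and only the new prices
# are copied back each step; d_last stays resident on the GPU.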
cuda_time = driver(mc_cuda, do_plot=True)
def perf_plot(rawdata, xlabels):
data = [numpy_time / x for x in rawdata]
idx = np.arange(len(data))
fig = pyplot.figure()
width = 0.5
ax = fig.add_subplot(111)
ax.bar(idx, data, width)
ax.set_ylabel('normalized speedup')
ax.set_xticks(idx + width / 2)
ax.set_xticklabels(xlabels)
ax.set_ylim(0.9)
pyplot.show()
perf_plot([numpy_time, cpuvec_time, parallel_time, gpuvec_time],
['numpy', 'cpu-vect', 'parallel-vect', 'gpu-vect'])
perf_plot([numpy_time, cpuvec_time, parallel_time, gpuvec_time, cuda_time],
['numpy', 'cpu-vect', 'parallel-vect', 'gpu-vect', 'cuda'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configurations
Step2: Driver
Step3: Result
Step4: Basic Vectorize
Step5: Parallel Vectorize
Step6: CUDA Vectorize
Step7: In the above simple CUDA vectorize example, the speedup is not significant due to the memory transfer overhead. Since the kernel has relatively low compute intensity, explicit management of memory transfer would give a significant speedup.
Step8: Performance Comparison
|
5,083
|
<ASSISTANT_TASK:>
Python Code:
import metaknowledge as mk
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
import metaknowledge.contour.plotting as mkv
RC = mk.RecordCollection('../savedrecs.txt')
CoCitation = RC.networkCoCitation()
print(mk.graphStats(CoCitation, makeString = True)) #makestring by default is True so it is not strictly necessary to include
CoCitation.nodes(data = True)[0]
CoCitation.edges(data = True)[0]
coCiteJournals = RC.networkCoCitation(nodeType = 'journal', dropNonJournals = True)
print(mk.graphStats(coCiteJournals))
nx.draw_spring(coCiteJournals)
citationsA = RC.networkCitation(nodeType = 'year', keyWords = ['A'])
print(mk.graphStats(citationsA))
nx.draw_spring(citationsA, with_labels = True)
coAuths = RC.networkCoAuthor()
print(mk.graphStats(coAuths))
wcCoOccurs = RC.oneModeNetwork('WC')
print(mk.graphStats(wcCoOccurs))
nx.draw_spring(wcCoOccurs, with_labels = True)
ti_wc = RC.twoModeNetwork('WC', 'title')
print(mk.graphStats(ti_wc))
mkv.quickVisual(ti_wc, showLabel = False) #default is False as there are usually lots of labels
tags = ['TI', 'UT', 'AU']
multiModeNet = RC.nModeNetwork(tags)
mk.graphStats(multiModeNet)
mkv.quickVisual(multiModeNet)
tags = mk.tagsAndNames #All the tags, twice
sillyMultiModeNet = RC.nModeNetwork(tags)
mk.graphStats(sillyMultiModeNet)
mkv.quickVisual(sillyMultiModeNet)
minWeight = 3
maxWeight = 10
proccessedCoCiteJournals = mk.dropEdges(coCiteJournals, minWeight, maxWeight, dropSelfLoops = True)
mk.graphStats(proccessedCoCiteJournals)
proccessedCoCiteJournals = mk.dropNodesByDegree(proccessedCoCiteJournals, 1)
mk.graphStats(proccessedCoCiteJournals)
nx.draw_spring(proccessedCoCiteJournals)
mk.writeGraph(proccessedCoCiteJournals, "FinalJournalCoCites")
FinalJournalCoCites = mk.readGraph("FinalJournalCoCites_edgeList.csv", "FinalJournalCoCites_nodeAttributes.csv")
mk.graphStats(FinalJournalCoCites)
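# writeGraph saves the graph as a pair of CSVs (an edge list and a node
# attribute file, as the filenames above show), and readGraph rebuilds the
# same networkx graph from them.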
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And so we can visualize the graphs
Step2: Before we start we should also get a RecordCollection to work with.
Step3: Now let's look at the different types of graphs.
Step4: graphStats() is a function to extract some of the statistics of a graph and make them into a nice string.
Step5: and an edge
Step6: All the graphs metaknowledge use are networkx graphs, a few functions to trim them are implemented in metaknowledge, here is the example section, but many useful functions are implemented by it. Read the documentation here for more information.
Step7: Lets take a look at the graph after a quick spring layout
Step8: A bit basic but gives a general idea. If you want to make a much better looking and more informative visualization you could try gephi or visone. Exporting to them is covered below in Exporting graphs.
Step9: Making a co-author network
Step10: Making a one-mode network
Step11: Making a two-mode network
Step12: The network is directed by default with the first tag going to the second.
Step13: quickVisual() makes a graph with the different types of nodes coloured differently and a couple other small visual tweaks from networkx's draw_spring.
Step14: Beware this can very easily produce hairballs
Step15: Post processing graphs
Step16: Then to remove all the isolates, i.e. nodes with degree less than 1, use dropNodesByDegree()
Step17: Now before the processing the graph can be seen here. After the processing it looks like
Step18: Hm, it looks a bit thinner. Using a visualizer will make the difference a bit more noticeable.
Step19: These files are simple CSVs and can be read easily by most systems. If you want to read them back into Python the readGraph() function will do that.
|
5,084
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
# TODO: Implement Function
source_id_text = []
for sentence in source_text.split('\n'):
source_id_list = []
for word in sentence.split():
source_id_list.append(source_vocab_to_int[word])
source_id_text.append(source_id_list)
target_id_text = []
for sentence in target_text.split('\n'):
target_id_list = []
for word in sentence.split():
target_id_list.append(target_vocab_to_int[word])
target_id_list.append(target_vocab_to_int['<EOS>'])
target_id_text.append(target_id_list)
return source_id_text, target_id_text
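# Toy illustration (hypothetical ids): with source_vocab_to_int =
# {'new': 5, 'jersey': 6}, "new jersey" maps to [5, 6]; target sentences
# additionally get the <EOS> id appended.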
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_text_to_ids(text_to_ids)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""DON'T MODIFY ANYTHING IN THIS CELL"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
    max target sequence length, source sequence length)
    """
# TODO: Implement Function
input_ = tf.placeholder(tf.int32, shape=[None,None], name='input')
targets = tf.placeholder(tf.int32, shape=[None,None], name='targets')
learning_rate = tf.placeholder(tf.float32, shape=None, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, shape=None, name='keep_prob')
target_seq_len = tf.placeholder(tf.int32, shape=[None], name='target_sequence_length')
max_target_seq_len = tf.reduce_max(input_tensor=target_seq_len, name='max_target_len')
source_seq_len = tf.placeholder(tf.int32, shape=[None], name='source_sequence_length')
return input_, targets, learning_rate, keep_prob, target_seq_len, max_target_seq_len, source_seq_len
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for encoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
# TODO: Implement Function
# remove the last column on the batch (last sequence), strides=[1,1] means one colum
target_data_removed_end = tf.strided_slice(target_data, begin=[0,0], end=[batch_size,-1], strides=[1,1])
# append the <GO> int ID to the input data of the decoder
# a. Create a tensor with dimensions similar to the batch. Then, GO ID can be appended to all rows of the sequence
GO_tensor_to_concat = tf.fill(dims=[batch_size,1], value=target_vocab_to_int['<GO>'])
# b. Append the GO tensor to the whole batch using axis=1 because appends in column shape
preprocess_target_data = tf.concat([GO_tensor_to_concat,target_data_removed_end],axis=1)
return preprocess_target_data
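# Toy example (hypothetical ids, batch_size = 2, <GO> = 1, <EOS> = 2):
# [[4, 5, 2], --strip last column--> [[4, 5], --prepend <GO>--> [[1, 4, 5],
#  [6, 7, 2]]                         [6, 7]]                    [1, 6, 7]]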
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def createLSTMcell(rnn_size, keep_prob=None):
    """
    Create an LSTM cell with the given Dropout probability 1 - keep_prob
    :param rnn_size: number of LSTM cells
    :param keep_prob: Dropout keep probability
    """
lstm = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
drop_lstm = tf.contrib.rnn.DropoutWrapper(lstm,input_keep_prob=keep_prob)#,output_keep_prob=keep_prob,state_keep_prob=keep_prob)
return drop_lstm
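# Note: the DropoutWrapper above applies dropout to the cell inputs only
# (input_keep_prob); feed keep_prob = 1.0 at inference time so nothing is dropped.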
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
# TODO: Implement Function
embed = tf.contrib.layers.embed_sequence(ids=rnn_inputs, vocab_size=source_vocab_size, embed_dim=encoding_embedding_size)
multi_lstm = tf.contrib.rnn.MultiRNNCell([createLSTMcell(rnn_size,keep_prob) for _ in range(num_layers)])
enc_output, enc_state = tf.nn.dynamic_rnn(multi_lstm, embed, sequence_length=source_sequence_length, dtype= tf.float32)
return enc_output, enc_state
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
# TODO: Implement Function
# :param Time_major: makes computation faster, returns Tensor in time major mode rather than Batch
# This code fails if set to time_major set to True, do not set it
#with tf.variable_scope("decoder"):
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
basic_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper,\
initial_state=encoder_state, output_layer=output_layer)
BasicDecoderOutput,_ = tf.contrib.seq2seq.dynamic_decode(basic_decoder, impute_finished=True,\
maximum_iterations=max_summary_length)
return BasicDecoderOutput
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
# TODO: Implement Function
#with tf.variable_scope("decoder", reuse=True):
#column vector of ID's used in each sentence feed on beginning of first sequence of each batch
start_id = tf.constant([start_of_sequence_id], dtype=tf.int32)
start_id_tensor = tf.tile(start_id, [batch_size],name='start_tokens')
# GreedyEmbeddingHelper - start_tokens = 1D Tensor of START IDs, end_token = scalar value of END ID
dec_helper_greedy = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_id_tensor, \
end_of_sequence_id)
dec_infer_basic = tf.contrib.seq2seq.BasicDecoder(dec_cell, dec_helper_greedy, encoder_state, output_layer=output_layer)
dec_infer_output,_ = tf.contrib.seq2seq.dynamic_decode(dec_infer_basic, impute_finished=True,\
maximum_iterations=max_target_sequence_length)
return dec_infer_output
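# GreedyEmbeddingHelper feeds the argmax of each step's output back in as
# the next input, i.e. greedy decoding; tf.contrib.seq2seq.BeamSearchDecoder
# would be the beam-search alternative.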
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :param batch_size: The size of the batch
    :param keep_prob: Dropout keep probability
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
# TODO: Implement Function
# 1. EMBEDDING
# inputs had been preprocessed in decoder_inputs, now we just need to do the embed look_up
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]), name='decoder_embed')
# dec_input - id's of the words to decode
dec_embedding_input = tf.nn.embedding_lookup(dec_embeddings, dec_input,name='decoder_embed_look_up')
# 2. BUILD DECODER
decoder_stack_lstm = tf.contrib.rnn.MultiRNNCell([createLSTMcell(rnn_size, keep_prob) for _ in range(num_layers)])
# 3. OUTPUT LAYER that maps output of decoder to elements of vocabulary
output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0,stddev=0.1), \
name='output_layer')
#--------------------------------------------------------------------------------------------------------------
# The dec_embeddings MUST BE PASSED TO dec_inference, while the EMBEDDED_LOOKUP result is PASSED to dec train
#--------------------------------------------------------------------------------------------------------------
with tf.variable_scope("decoder"):
train_logits = decoding_layer_train(encoder_state, decoder_stack_lstm, dec_embedding_input,\
target_sequence_length, max_target_sequence_length, \
output_layer, keep_prob)
with tf.variable_scope("decoder", reuse=True):
# dropout for inference ASSUMED = 1 since it is not training and all waits have been set in dec_train
# so the Dropout should be at training time
keep_prob_infer = tf.constant(1.0,dtype=tf.float32,name='dec_infer_Dropout')
inference_logits = decoding_layer_infer(encoder_state, decoder_stack_lstm, dec_embeddings, \
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], \
max_target_sequence_length, target_vocab_size, output_layer,\
batch_size, keep_prob_infer)
return train_logits, inference_logits
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param max_target_sentence_length: Length of the longest target sentence
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
# TODO: Implement Function
# 1. Apply embedding to the input data for the encoder (embedded done inside encoding_layer())
# 2. Create the encoding_layer
BasicEncoderOuput, encoder_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,\
source_sequence_length, source_vocab_size, enc_embedding_size)
# 3. Preprocess the target data that will be feed to the decoder
preprocess_dec_data = process_decoder_input(target_data, target_vocab_to_int, batch_size)
# 4. Apply embedding to the input target data for the decoder (embedded done inside decoding_layer())
# 5. Create the decoding_layer
train_BasicDecoderOutput, inference_BasicDecoderOutput = decoding_layer(preprocess_dec_data, encoder_state,\
target_sequence_length,\
max_target_sentence_length, rnn_size,\
num_layers, target_vocab_to_int,\
target_vocab_size, batch_size,\
keep_prob, dec_embedding_size)
return train_BasicDecoderOutput, inference_BasicDecoderOutput
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 1
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
display_step = 5
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""DON'T MODIFY ANYTHING IN THIS CELL"""
def get_accuracy(target, logits):
    """Calculate accuracy"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
# TODO: Implement Function
sentence_lowercase = sentence.lower()
words2int = []
for word in sentence_lowercase.split():
if word in vocab_to_int:
words2int.append(vocab_to_int[word])
else:
words2int.append(vocab_to_int['<UNK>'])
return words2int
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
"""DON'T MODIFY ANYTHING IN THIS CELL"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step22: Encoding
Step25: Decoding - Training
Step28: Decoding - Inference
Step31: Build the Decoding Layer
Step34: Build the Neural Network
Step35: Neural Network Training
Step37: Build the Graph
Step41: Batch and pad the source and target sequences
Step44: Train
Step46: Save Parameters
Step48: Checkpoint
Step51: Sentence to Sequence
Step53: Translate
|
5,085
|
<ASSISTANT_TASK:>
Python Code:
from niwidgets import NiWidget
from niwidgets import examplet1 # this is an example T1 dataset
my_widget = NiWidget(examplet1)
my_widget.nifti_plotter()
from niwidgets import examplezmap # this is an example statistical map from neurosynth
import nilearn.plotting as nip
my_widget = NiWidget(examplezmap)
my_widget.nifti_plotter(plotting_func=nip.plot_glass_brain, # custom plot function
threshold=(0.0, 6.0, 0.01), # custom slider
display_mode=['ortho','xz']) # custom drop-down menu
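# As the inline comments indicate, a keyword passed as a (min, max, step)
# tuple becomes an interactive slider and a list of strings becomes a
# drop-down menu in the widget.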
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Default plotting function
Step2: Custom plotting functions
|
5,086
|
<ASSISTANT_TASK:>
Python Code:
import functools
import matplotlib.pyplot as plt
import numpy as np
import operator
import seaborn as sns
np.random.seed(sum(map(ord, 'hm2')))
# list available fonts: [f.name for f in matplotlib.font_manager.fontManager.ttflist]
plt.rc('font', family='DejaVu Sans')
dataset_6_to_4 = [1] * 6 + [0] * 4
hypothesis = np.linspace(0, 1, 101)
def bernoulli_likelihoods(theta, dataset):
return functools.reduce(operator.mul, map(lambda value: theta if value else (1.0 - theta), dataset), 1)
experiments = bernoulli_likelihoods(hypothesis, dataset_6_to_4)
theta_hat_index = np.argmax(experiments)
theta_hat = hypothesis[theta_hat_index]
max_value = experiments[theta_hat_index]
print('theta hat = {}'.format(theta_hat))
plt.plot(hypothesis, experiments, label=r'$L(\theta)$')
plt.plot((theta_hat, theta_hat), (0.0, max_value), label=r'$\hat{\theta}$', ls='--')
plt.xlabel(r'$\theta$')
plt.ylabel('likelihood')
plt.legend();
hypothesis = np.linspace(0, 1, 101)
dataset_3_to_2 = [1] * 3 + [0] * 2
dataset_60_to_40 = [1] * 60 + [0] * 40
dataset_5_to_5 = [1] * 5 + [0] * 5
datasets = [{
'dataset': dataset_6_to_4,
'name': '6/4',
}, {
'dataset': dataset_3_to_2,
'name': '3/2',
}, {
'dataset': dataset_60_to_40,
'name': '60/40',
}, {
'dataset': dataset_5_to_5,
'name': '5/5'
},
]
for idx, ds in enumerate(datasets):
experiments = bernoulli_likelihoods(hypothesis, ds['dataset'])
plt.subplot(len(datasets), 1, idx + 1)
plt.plot(hypothesis, experiments, label=ds['name'])
plt.ylabel('likelihood')
plt.legend();
plt.xlabel(r'$\theta$');
def theta_prob(theta):
return theta * theta * (1 - theta) * (1 - theta) / 0.03333
def bernoulli_a_posteriori_probability(theta, dataset):
return theta_prob(theta) * functools.reduce(operator.mul, map(lambda value: theta if value else (1.0 - theta), dataset), 1)
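# Note: theta_prob is the Beta(3, 3) density: theta**2 * (1 - theta)**2
# normalized by B(3, 3) = 1/30 ~= 0.03333, which is the prior used for
# the MAP estimates below.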
for idx, ds in enumerate(datasets):
map_experiments = bernoulli_a_posteriori_probability(hypothesis, ds['dataset'])
plt.subplot(len(datasets), 1, idx + 1)
plt.plot(hypothesis, map_experiments, label='MAP of {}'.format(ds['name']))
mle_experiments = bernoulli_likelihoods(hypothesis, ds['dataset'])
plt.plot(hypothesis, mle_experiments, label='MLE of {}'.format(ds['name']))
plt.ylabel('likelihood')
plt.legend();
plt.xlabel(r'$\theta$');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: draw result
Step2: (d)
Step3: Maximum a Posteriori Probability Estimation
|
5,087
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%pylab inline
# We will use the Inside AirBnB dataset from here on
df = pd.read_csv('data/sf_listings.csv')
df.head()
df.room_type.value_counts().plot.bar()
# Since SF doesn't have many neighborhoods (comparatively) we can also see the raw # per neighborhood
df.groupby('neighbourhood').count()['id'].plot.bar(figsize=(14,6))
df.groupby('host_id').count()['id'].plot.hist(bins=50)
# let's zoom in to the tail
subselect = df.groupby('host_id').count()['id']
subselect[subselect > 1].plot.hist(bins=50)
def scale_free_plot(df, num):
subselect = df.groupby('host_id').count()['id']
return subselect[subselect > num].plot.hist(bins=75)
scale_free_plot(df, 2)
# the shape of the distribution stays relatively the same as we subselect
for i in range(5):
scale_free_plot(df, i)
plt.show()
from pandas.tools.plotting import scatter_matrix
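# Note: pandas.tools.plotting is the pre-0.20 location; on newer pandas the
# equivalent import is `from pandas.plotting import scatter_matrix`.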
# it only makes sense to plot the continuous columns
continuous_columns = ['price', 'minimum_nights', 'number_of_reviews', 'reviews_per_month', \
'calculated_host_listings_count','availability_365']
# semicolon prevents the axis objests from printing
scatter_matrix(df[continuous_columns], alpha=0.6, figsize=(16, 16), diagonal='kde');
sns.distplot(df[(df.calculated_host_listings_count > 2) & (df.room_type == 'Entire home/apt')].availability_365, bins=50)
sns.distplot(df[(df.calculated_host_listings_count <= 2) & (df.room_type == 'Entire home/apt')].availability_365, bins=50)
# Host with multiple listing for the entire home distribution is skewed to availability the entire year
# implying that these hosts are renting the AirBnB as short term sublets (or hotels)
entire_home = df[df.room_type == 'Entire home/apt']
plt.figure(figsize=(14,6))
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count > 1].availability_365, label='Multiple Listings')
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count == 1].availability_365, label = 'Single Listing')
plt.legend();
# Listings requiring a 30+ night minimum stay skew toward being available
# the entire year, implying these are short-term sublets rather than typical AirBnBs
plt.figure(figsize=(14,6))
sns.kdeplot(df[df.minimum_nights > 29].availability_365, label='Short term Sublet')
sns.kdeplot(df[df.minimum_nights <= 20].availability_365, label = 'Listing')
plt.legend();
# Among the 30+ night minimum listings, hosts with multiple listings again
# skew toward year-round availability
entire_home = df[df.minimum_nights > 29]
plt.figure(figsize=(14,6))
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count > 1].availability_365, label='Multiple Listings')
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count == 1].availability_365, label = 'Single Listing')
plt.legend();
# just a touch hard to interpret...
plt.figure(figsize=(16, 6))
sns.violinplot(data=df, x='neighbourhood', y='price')
# boxplots can sometimes handle outliers better, we can see here there are some listings that are high priced extrema
plt.figure(figsize=(16, 6))
sns.boxplot(data=df, x='neighbourhood', y='price')
top_neighborhoods = df.groupby('neighbourhood').count().sort_values('id', ascending = False).index[:10]
top_neighborhoods
neighborhood_subset = df[df.neighbourhood.isin(top_neighborhoods)]
plt.figure(figsize=(16, 6))
sns.boxplot(data=neighborhood_subset[neighborhood_subset.price < 2000], x='neighbourhood', y='price')
plt.figure(figsize=(16, 6))
sns.violinplot(data=neighborhood_subset[neighborhood_subset.price < 2000], x='neighbourhood', y='price')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scatterplot Matrix
Step2: Interesting insights from the scatter matrix
Step3: Extra!
Step4: Let's show only the 10 neighborhoods with the most listings and, to zoom in on the distribution of the lower prices (now that we can identify the outliers), remove listings priced at > $2000
|
5,088
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import json
from pandas.io.json import json_normalize
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
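# json_normalize explodes the record path ('counties') into one row per
# entry, while the meta list repeats parent-level fields on each row;
# nested meta fields are written as lists like ['info', 'governor'].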
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
#import packages
import pandas as pd
import json
from pandas.io.json import json_normalize
# Displays all DataFrame columns and rows
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# Load json file as a string
world_bank_file = json.load((open('world_bank_projects.json')))
# Load json file into a Pandas DataFrame
world_bank_df = pd.read_json('world_bank_projects.json')
#print(world_bank_df.head(5))
#print(world_bank_df.columns)
print(world_bank_df.info())
#print(world_bank_df[['mjthemecode','mjtheme']])
# 1. Find the 10 countries with the most projects
# Select the countryname and project_name columns from the Pandas DataFrame
most_projects = world_bank_df[['countryname', 'project_name']]
# Group by countryname, count and sort values from highest to lowest
most_projects = most_projects.groupby("countryname").size().sort_values(ascending = False)
# Print first 10 rows
print(most_projects.head(10))
# 2. Find the top 10 major project themes (using column ‘mjtheme_namecode’)
# Use normalization to create tables from nested element
major_themes = json_normalize(world_bank_file, 'mjtheme_namecode')
# Group by countryname, count and sort values from highest to lowest
major_themes = major_themes.groupby("name").size().sort_values(ascending = False)
# Print first 10 rows
print(major_themes)
# 3. In 2. above you will notice that some entries have only the code and the name is missing.
# Create a dataframe with the missing names filled in.
import numpy as np
# Use normalization to create table from 'mjtheme_namecode' column
missing_names = json_normalize(world_bank_file, 'mjtheme_namecode')
# Replacing empty entries with NaNs
missing_names = missing_names.apply(lambda x: x.str.strip()).replace('', np.nan)
#print(missing_names)
# Finding unique values
code_name_map = missing_names.loc[missing_names['name'] != '', :]
unique_names = code_name_map.drop_duplicates().dropna()
# Creating dictionary from unique_names
unique_names_dict_list = unique_names.set_index('code').T.to_dict(orient = 'records')
# Extracting dictionary from list
unique_names_dict = unique_names_dict_list[0]
# Fills NaNs with values from dictionary
filled_values = missing_names['name'].fillna(missing_names['code'].map(unique_names_dict))
print(filled_values.value_counts())
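# Minimal self-contained sketch of the same fill technique on toy data
# (illustrative column values only, not from the World Bank file):
toy = pd.DataFrame({'code': ['1', '2', '1'], 'name': ['Health', np.nan, np.nan]})
mapping = toy.dropna().drop_duplicates().set_index('code')['name'].to_dict()
toy['name'] = toy['name'].fillna(toy['code'].map(mapping))
print(toy)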
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: imports for Python, Pandas
Step2: JSON example, with string
Step3: JSON example, with file
Step4: JSON exercise
|
5,089
|
<ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = 'traffic-signs-data/train.p'
validating_file = 'traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validating_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
import numpy as np
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = len(X_train)
n_valid = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels are there in the dataset?
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of validating examples =", n_valid)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
samples = []
plt.figure(figsize = (18, 18))
for i in range(0, n_classes):
plt.subplot(10, 5, i + 1)
selected_image = X_train[y_train == i]
count = len(selected_image)
num = np.random.randint(0, count)
plt.imshow(selected_image[num, :, :, :]) # show a random image from this class
plt.title(i)
plt.axis('off')
samples.append(count)
plt.show()
# plot the number of images in each class
plt.figure(figsize = (18, 9))
plt.bar(range(0, n_classes), samples)
plt.title("Classes")
plt.xlabel("Class")
plt.ylabel("Quantity of images")
plt.show()
print("Min: {}, Max: {}".format(np.min(samples), np.max(samples)))
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
import numpy as np
from skimage import exposure
from skimage import color
from skimage import transform
from skimage import filters
import cv2
from numpy import random
def histogram_equalization_image(image):
    image = exposure.equalize_adapthist(image, clip_limit = 0.03)
return image
def histogram_equalization_data(data):
data = np.array([histogram_equalization_image(image) for image in data])
return data
def perspective_transform(image):
random_higher_bound = random.randint(27, 32)
random_lower_bound = random.randint(0, 5)
points_one = np.float32([[0,0],[32,0],[0,32],[32,32]])
points_two = np.float32([[0, 0], [random_higher_bound, random_lower_bound], [random_lower_bound, 32],[32, random_higher_bound]])
M = cv2.getPerspectiveTransform(points_one, points_two)
image = cv2.warpPerspective(image, M, dsize = (32, 32))
return image
def transform_image(image):
    # random integer pixel shifts in [-2, 2]
    dx, dy = np.random.randint(-2, 3, 2)
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    image = cv2.warpAffine(image, M, (image.shape[0], image.shape[1]))
    #image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    return image
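# Convert the images to grayscale by averaging the three color channels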
X_train = np.sum(X_train / 3, axis = 3, keepdims = True)
X_valid = np.sum(X_valid / 3, axis = 3, keepdims = True)
X_test = np.sum(X_test / 3, axis = 3, keepdims = True)
image_0 = np.sum(X_train[500] / 3, axis = 2, keepdims = True)
image_1 = transform_image(image_0)
image_1 = image_1[:,:,None]
image_2 = perspective_transform(image_0)
image_2 = image_2[:,:,None]
plt.figure(figsize = (6, 3))
plt.subplot(1, 2, 1)
plt.imshow(image_0.squeeze(), cmap = 'gray') # original grayscale image
plt.title(15)
plt.subplot(1, 2, 2)
plt.imshow(image_2.squeeze(), cmap='gray') # perspective-transformed image
plt.show()
from tqdm import tqdm
selected_images = []
X_added_images = []
y_added_labels = []
max_samples = np.max(samples)
for classes_count in tqdm(range(len(samples))):
class_count = samples[classes_count]
total_count = (max_samples - class_count) + max_samples
count = 0
selected_images = X_train[y_train == classes_count]
for i in tqdm(range(total_count)):
image = selected_images[count]
        # check the rarer multiple-of-10 case first (every multiple of 10
        # is also even, so testing i % 2 first would always win and the
        # perspective transform would never be applied)
        if i % 10 == 0:
            image = perspective_transform(image)
            image = image[:,:,None]
        elif i % 2 == 0:
            image = transform_image(image)
            image = image[:,:,None]
else:
image = np.copy(image)
X_added_images.append(image)
y_added_labels.append(classes_count)
count += 1
if count >= class_count:
count = 0
#for i in tqdm(range(len(X_train))):
# image = X_train[np.random.randint(0, len(X_train))]
# image_1 = transform_image(image)
# image_1 = image_1[:,:,None]
# X_added_images.append(image_1)
# y_added_labels.append((y_train[i]))
#
# image_2 = perspective_transform(image)
# image_2 = image_2[:,:,None]
# X_added_images.append(image_2)
# y_added_labels.append((y_train[i]))
X_added_images = np.array(X_added_images)
y_added_labels = np.array(y_added_labels)
X_train = np.concatenate((X_train, X_added_images), axis = 0)
y_train = np.concatenate((y_train, y_added_labels), axis = 0)
print('Done')
X_added_images.shape
X_added_images[0].shape
# plot the number of images in each class
samples_after = []
for i in range(43):
selected_image = X_train[y_train == i]
count = len(selected_image)
samples_after.append(count)
plt.figure(figsize = (18, 9))
plt.bar(range(43), samples_after)
plt.title("Classes")
plt.xlabel("Class")
plt.ylabel("Quantity of images")
plt.show()
print("Min: {}, Max: {}".format(np.min(samples_after), np.max(samples_after)))
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
X_added_images = shuffle(X_added_images)
plt.figure(figsize = (18, 9))
for i in range(9):
plt.subplot(3, 3, i + 1)
# plt.imshow(cv2.cvtColor(X_added_images[i], cv2.COLOR_BGR2RGB)) #we show each first image in the each classes
    plt.imshow(X_added_images[i].squeeze(), cmap='gray') # show sample augmented images
plt.title(i)
plt.axis('off')
plt.show()
### Define your architecture here.
### Feel free to use as many code cells as needed.
EPOCHS = 10
BATCH_SIZE = 128
from tensorflow.contrib.layers import flatten
def KsrNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 32x32x3.
conv0_W = tf.Variable(tf.truncated_normal(shape=(1, 1, 1, 3), mean = mu, stddev = sigma))
conv0_b = tf.Variable(tf.zeros(3))
conv0 = tf.nn.conv2d(x, conv0_W, strides=[1, 1, 1, 1], padding='VALID') + conv0_b
# SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 30x30x32.
conv1_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 3, 32), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(32))
conv1 = tf.nn.conv2d(conv0, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Layer 1: Convolutional. Input = 30x30x32. Output = 28x28x64.
conv1a_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 32, 64), mean = mu, stddev = sigma))
conv1a_b = tf.Variable(tf.zeros(64))
conv1a = tf.nn.conv2d(conv1, conv1a_W, strides=[1, 1, 1, 1], padding='VALID') + conv1a_b
# SOLUTION: Activation.
conv1a = tf.nn.relu(conv1a)
# SOLUTION: Pooling. Input = 28x28x64. Output = 14x14x64.
conv1a = tf.nn.max_pool(conv1a, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x128.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 64, 128), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(128))
conv2 = tf.nn.conv2d(conv1a, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x128. Output = 5x5x128.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x128. Output = 3200.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 3200. Output = 1024.
fc1_W = tf.Variable(tf.truncated_normal(shape=(3200, 1024), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(1024))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
    # SOLUTION: Layer 3a: Fully Connected. Input = 1024. Output = 120.
fc1a_W = tf.Variable(tf.truncated_normal(shape=(1024, 120), mean = mu, stddev = sigma))
fc1a_b = tf.Variable(tf.zeros(120))
fc1a = tf.matmul(fc1, fc1a_W) + fc1a_b
# SOLUTION: Activation.
fc1a = tf.nn.relu(fc1a)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1a, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
    # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
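# Rough sanity check (a sketch, not part of the original notebook): weight
# counts of the fully connected layers defined above, biases excluded.
print(3200*1024 + 1024*120 + 120*84 + 84*43)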
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
import tensorflow as tf
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
rate = 0.001
logits = KsrNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
import time
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
start_total_time = time.time()
print()
for i in range(EPOCHS):
start_epoch_time = time.time()
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_valid, y_valid)
end_epoch_time = time.time()
print("EPOCH {}: {}".format(i + 1, end_epoch_time - start_epoch_time))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
end_total_time = time.time()
print("Total time: {}".format(end_total_time - start_total_time))
saver.save(sess, './lenet')
print("Model saved")
print(X_test.shape)
start = time.time()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
end = time.time()
print(end - start)
import glob
my_images = []
#plt.figure(figsize = (18, 9))
for i, img in enumerate(glob.glob('./my_images/*.png')):
image = cv2.imread(img)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# plt.subplot(2, 4, i + 1)
# plt.imshow(image) #we show each first image in the each classes
# plt.title(i)
# plt.axis('off')
my_images.append(image)
#plt.show()
my_images = np.asarray(my_images)
print(my_images.shape)
#X_my_test = histogram_equalization_data(my_images)
X_my_test = my_images
X_my_test.shape
plt.figure(figsize = (18, 9))
for i in range(len(X_my_test)):
plt.subplot(2, 4, i + 1)
    plt.imshow(X_my_test[i].squeeze()) # show each downloaded test image
plt.title(i)
plt.axis('off')
plt.show()
X_my_test = np.sum(X_my_test / 3, axis = 3, keepdims = True)
X_my_test.shape
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
y_my_test = [11, 1, 12, 38, 34, 18, 25, 3]
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver2 = tf.train.import_meta_graph('./lenet.meta')
saver2.restore(sess, "./lenet")
my_test_accuracy = evaluate(X_my_test, y_my_test)
print("Accuracy = {:.3f}".format(my_test_accuracy))
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
softmax_logits = tf.nn.softmax(logits)
top_k = tf.nn.top_k(softmax_logits, k=3)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.import_meta_graph('./lenet.meta')
saver.restore(sess, "./lenet")
my_softmax_logits = sess.run(softmax_logits, feed_dict={x: X_my_test})
my_top_k = sess.run(top_k, feed_dict={x: X_my_test})
fig, axs = plt.subplots(len(X_my_test),4, figsize=(12, 14))
fig.subplots_adjust(hspace = .4, wspace=.2)
axs = axs.ravel()
for i, image in enumerate(X_my_test):
axs[4*i].axis('off')
# axs[4*i].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
axs[4*i].imshow(image.squeeze(), cmap='gray')
axs[4*i].set_title('input')
guess1 = my_top_k[1][i][0]
index1 = np.argwhere(y_valid == guess1)[0]
axs[4*i+1].axis('off')
axs[4*i+1].imshow(X_valid[index1].squeeze(), cmap='gray')
axs[4*i+1].set_title('top guess: {} ({:.0f}%)'.format(guess1, 100*my_top_k[0][i][0]))
guess2 = my_top_k[1][i][1]
index2 = np.argwhere(y_valid == guess2)[0]
axs[4*i+2].axis('off')
axs[4*i+2].imshow(X_valid[index2].squeeze(), cmap='gray')
axs[4*i+2].set_title('2nd guess: {} ({:.0f}%)'.format(guess2, 100*my_top_k[0][i][1]))
guess3 = my_top_k[1][i][2]
index3 = np.argwhere(y_valid == guess3)[0]
axs[4*i+3].axis('off')
axs[4*i+3].imshow(X_valid[index3].squeeze(), cmap='gray')
axs[4*i+3].set_title('3rd guess: {} ({:.0f}%)'.format(guess3, 100*my_top_k[0][i][2]))
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
fig, axs = plt.subplots(8,2, figsize=(9, 19))
axs = axs.ravel()
for i in range(len(my_softmax_logits)*2):
if i%2 == 0:
axs[i].axis('off')
axs[i].imshow(my_images[i//2])
else:
axs[i].bar(np.arange(n_classes), my_softmax_logits[(i-1)//2])
axs[i].set_ylabel('Softmax probability')
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session = sess,feed_dict = {x: image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize = (15, 15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap + 1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
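# Hypothetical usage (a sketch, not from the original notebook; it assumes
# KsrNet is modified to also expose a conv tensor such as conv1):
# with tf.Session() as sess:
#     saver.restore(sess, './lenet')
#     outputFeatureMap(X_my_test[0:1], conv1)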
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Step3: Train data
Step4: Step 2
Step5: Model Architecture
Step6: Train, Validate and Test the Model
Step7: Train the Model
Step8: Evaluate the Model
Step9: Step 3
Step10: Predict the Sign Type for Each Image
Step11: Analyze Performance
Step12: Output Top 5 Softmax Probabilities For Each Image Found on the Web
Step13: Project Writeup
|
5,090
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.metrics import confusion_matrix, f1_score
from utils import accuracy, accuracy_adjacent, display_cm, facies_labels
PRED = pd.read_csv('prediction_depths.csv')
PRED.set_index(["Well Name", "Depth"], inplace=True)
PRED.head()
TRUE = pd.read_csv('blind_stuart_crawford_core_facies.csv')
TRUE.rename(columns={'Depth.ft': 'Depth'}, inplace=True)
TRUE.rename(columns={'WellName': 'Well Name'}, inplace=True)
TRUE.set_index(["Well Name", "Depth"], inplace=True)
TRUE.head()
def get_accuracies(y_preds):
    """Get the accuracy scores for all the y_preds.
    y_blind is a 1D array. y_preds is a 2D array."""
accs = []
for y_pred in y_preds:
PRED['Facies'] = y_pred
all_data = PRED.join(TRUE, how='inner')
y_blind = all_data['LithCode'].values
y_pred = all_data['Facies'].values
y_pred = y_pred[y_blind!=11]
y_blind = y_blind[y_blind!=11]
cv_conf = confusion_matrix(y_blind, y_pred)
accs.append(accuracy(cv_conf))
return np.array(accs)
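# For reference, a plausible definition of accuracy from a confusion
# matrix (an assumption -- utils.accuracy may be defined differently):
def accuracy_from_cm(cm):
    return np.trace(cm) / np.sum(cm)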
from glob import glob
from os import path
import operator
scores, medians = {}, {}
for f in glob('./*/*_100_realizations.npy'):
team = path.basename(f).split('_')[0]
y_preds = np.load(f)
scores[team] = get_accuracies(y_preds)
medians[team] = np.median(scores[team])
plt.hist(pd.Series(scores[team]), alpha=0.5)
for t, m in sorted(medians.items(), key=operator.itemgetter(1), reverse=True):
print("{:20s}{:.4f}".format(t, m))
s = pd.Series(scores['LA-Team'])
plt.hist(s)
s.describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Using globals. I am a miserable person.
Step3: Look more closely at LA Team
|
5,091
|
<ASSISTANT_TASK:>
Python Code:
import catboost as cb
#Create datasets
train_dataset = cb.Pool(X_train,y_train, cat_features=categorical_indicies)
eval_dataset = cb.Pool(X_val,y_val, cat_features=categorical_indicies)
model = cb.CatBoostClassifier(iterations=1000,
loss_function='Logloss',
eval_metric='Accuracy')
#Declare parameters to tune and values to try
grid = {'learning_rate': [0.03, 0.1],
'depth': [4, 6, 10],
'l2_leaf_reg': [1, 3, 5,]}
#Find optimum parameters
model.grid_search(grid,train_dataset,plot=True)
#Fit model with early stopping if improvement hasn't been made within 50 iterations
model.fit(train_dataset,
eval_set=eval_dataset,
early_stopping_rounds=50,
plot=True,
silent=False)
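#Evaluate the tuned model on the validation set (a sketch; assumes the
#X_val/y_val arrays used to build eval_dataset above)
from sklearn.metrics import accuracy_score
preds = model.predict(X_val)
print('Validation accuracy:', accuracy_score(y_val, preds))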
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
5,092
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-2', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
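# e.g. DOC.set_author("Jane Doe", "jane.doe@example.org")  # illustrative values only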
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
5,093
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
#Matplotlib is needed for the histogram example below
import matplotlib.pyplot as plt
#This next line tells jupyter to plot it in the same space
%matplotlib inline
table = pd.read_excel("GASISData.xls")
table.head()
table['PLAYNAME']
table['NEW COLUMN'] = 5
table['NEW COLUMN']
texasTable = table.loc[table['STATE'] == "TEXAS"]
print(texasTable)
sizeTable = table.shape
print(sizeTable)
num_rows = sizeTable[0]
num_cols = sizeTable[1]
print(num_rows)
print(num_cols)
table.loc[table['LATITUDE'] > 10]
pd.DataFrame.hist(table,"AVPERM")
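# Equivalent, more idiomatic spelling of the same histogram (a sketch):
table['AVPERM'].hist()
plt.xlabel('AVPERM')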
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use pd.read_excel to open the file. If it says the file was not found, make sure your working directory is correct
Step2: Let's say we want to preview the first rows of the data to make sure it is the correct file (Google "pandas data preview"; table.tail shows the end of the data)
Step3: What if I want to look at just one column of data
Step4: What if I want to create a new column
Step5: What if I want to find data in a certain set, such as only in Texas (Google) (pandas find rows where value is)
Step6: Run the following to get shape of table
Step7: This is done to find the count of number of rows and number of cols
Step8: Rows where you have some preset parameter, such as where latitude is greater than a threshold (Google the same thing as above)
Step9: Exercise
|
5,094
|
<ASSISTANT_TASK:>
Python Code:
# Install necessary Python libraries and restart your kernel after.
!python -m pip install -r ../requirements.txt
# TODO(developer): Fill these variables with your values before running the sample
PROJECT_ID = "YOUR_PROJECT_ID_HERE"
LOCATION = "us" # Format is 'us' or 'eu'
PROCESSOR_ID = "PROCESSOR_ID" # Create processor in Cloud Console
FILE_PATH = '../resources/general/multi-document.pdf'
# Import necessary Python modules
from google.cloud import documentai_v1 as documentai
def process_document_sample(
project_id: str, location: str, processor_id: str, file_path: str
):
# Instantiates a client
opts = {"api_endpoint": f"{location}-documentai.googleapis.com"}
client = documentai.DocumentProcessorServiceClient(client_options=opts)
# The full resource name of the processor, e.g.:
# projects/project-id/locations/location/processor/processor-id
# You must create new processors in the Cloud Console first
name = f"projects/{project_id}/locations/{location}/processors/{processor_id}"
with open(file_path, "rb") as image:
image_content = image.read()
# Read the file into memory
document = {"content": image_content, "mime_type": "application/pdf"}
# Configure the process request
request = {"name": name, "raw_document": document}
# Recognizes text entities in the PDF document
result = client.process_document(request=request)
document = result.document
print("Document processing complete.")
# For a full list of Document object attributes, please reference this page: https://googleapis.dev/python/documentai/latest/_modules/google/cloud/documentai_v1beta3/types/document.html#Document
document_pages = document.pages
# Read the text recognition output from the processor
text = document.text
print("The document contains the following text (first 100 charactes):")
print(text[:100])
# Read the detected page split from the processor
print("\nThe processor detected the following page split entities:")
print_pages_split(text, document)
def print_pages_split(text: str, document: dict):
    """Document AI identifies possible page splits
    in a document. This function converts page splits
    to text snippets and prints them."""
for i, entity in enumerate(document.entities):
confidence = entity.confidence
text_entity = ''
for segment in entity.text_anchor.text_segments:
start = segment.start_index
end = segment.end_index
text_entity += text[start:end]
pages = [p.page for p in entity.page_anchor.page_refs]
print(f"*** Entity number: {i}, Split Confidence: {confidence} ***")
print(f"*** Pages numbers: {[p for p in pages]} ***\nText snippet: {text_entity[:100]}")
process_document_sample(PROJECT_ID, LOCATION, PROCESSOR_ID, FILE_PATH)
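# Note: for large files or many documents, the same client also exposes an
# asynchronous batch_process_documents method that works against files in
# Cloud Storage (not shown here).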
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set your Processor Variables
Step3: Now let's define the function to process the document with Document AI Python client
Step4: We can now run the processor on the sample multi-document pdf.
|
5,095
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.stats as stat
import matplotlib.pyplot as plt
%matplotlib inline
n = 20 #number of coupons
mu = 1/n #this is the mean coupon probability
sigma = mu/2 #this is the std dev parameter we will play around with - it seems to make sense to express it in terms of the mean
x = np.arange(n)+0.5 #arange goes from 0 to n-1, and I want it to go from 1 to n
p_x = stat.norm.ppf(x/(n), mu, sigma)
print('mean prob = %1.16f' % np.mean(p_x), ', sum of all probs = %1.16f' % np.sum(p_x) )
plt.bar(x, p_x, align='center')
plt.axis([0,n,0,2/n])
n = 20 #number of coupons
mu = 1/n #this is the mean coupon probability
sigma = mu/10 #this is the std dev parameter we will play around with - it seems to make sense to express it in terms of the mean
x = np.arange(n)+0.5 #arange goes from 0 to n-1, and I want it to go from 1 to n
p_x = stat.norm.ppf(x/n, mu, sigma)
print('mean prob = %1.16f' % np.mean(p_x), ', sum of all probs = %1.16f' % np.sum(p_x) )
plt.bar(x, p_x, align='center')
plt.axis([0,n,0,2/n])
n = 20 #number of coupons
mu = 1/n #this is the mean coupon probability
sigma = mu #this is the std dev parameter we will play around with - it seems to make sense to express it in terms of the mean
x = np.arange(n)+0.5 #arange goes from 0 to n-1, and I want it to go from 1 to n
p_x = stat.norm.ppf(x/n, mu, sigma)
print('mean prob = %1.16f' % np.mean(p_x), ', sum of all probs = %1.16f' % np.sum(p_x) )
plt.bar(x, p_x, align='center')
plt.axis([0,n,-0.1,3/n])
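# Note (a sketch, not part of the original notebook): with sigma as large
# as mu, the normal quantiles can dip below zero, producing negative
# "probabilities" -- worth checking before using p_x:
print('number of negative probabilities: %d' % np.sum(p_x < 0))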
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice that as we decreased the standard deviation (from half of the mean to a tenth of the mean) our spread of probabilities got a lot smaller. Also notice that this method doesn't require any scaling of the probabilities to get them to sum to 1. This gives me a warm and fuzzy feeling that this is a good way to generate coupon probabilities.
|
5,096
|
<ASSISTANT_TASK:>
Python Code:
from dipy.reconst.dti import fractional_anisotropy, color_fa
from argparse import ArgumentParser
from scipy import ndimage
import os
import re
import numpy as np
import nibabel as nb
import sys
import matplotlib
matplotlib.use('Agg') # very important above pyplot import
import matplotlib.pyplot as plt
import vtk
from dipy.reconst.dti import from_lower_triangular
img = nb.load('../../../../../Desktop/result/dogsig1_gausig2.3/v100_ch0_tensorfsl_dogsig1_gausig2.3.nii')
data = img.get_data()
# from_lower_triangular rebuilds the full 3x3 structure tensor from the lower-triangular form in which the data is stored
output = from_lower_triangular(data)
output_ds = output[4250:4300, 250:300, :, :, :]
print output.shape
print output_ds.shape
FA = fractional_anisotropy(output_ds)
FA = np.clip(FA, 0, 1)
FA[np.isnan(FA)] = 0
print FA.shape
from dipy.reconst.dti import decompose_tensor
evalues, evectors = decompose_tensor(output_ds)
print evectors[..., 0, 0].shape
print evectors.shape[-2:]
print FA[:, :, :, 0].shape
## To satisfy requirements for RGB
RGB = color_fa(FA[:, :, :, 0], evectors)
nb.save(nb.Nifti1Image(np.array(255 * RGB, 'uint8'), img.get_affine()), 'tensor_rgb_upper.nii.gz')
print('Computing tensor ellipsoids in a random part')
from dipy.data import get_sphere
sphere = get_sphere('symmetric724')
from dipy.viz import fvtk
ren = fvtk.ren()
evals = evalues[:, :, :]
evecs = evectors[:, :, :]
print "printing evals:"
print evals
print "printing evecs"
print evecs
cfa = RGB[:, :, :]
cfa = cfa / cfa.max()
print "printing cfa"
print cfa
fvtk.add(ren, fvtk.tensor(evals, evecs, cfa, sphere))
from IPython.display import Image
def vtk_show(renderer, width=400, height=300):
    """Takes a vtkRenderer instance and returns an IPython Image with the rendering."""
renderWindow = vtk.vtkRenderWindow()
renderWindow.SetOffScreenRendering(1)
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(width, height)
renderWindow.Render()
windowToImageFilter = vtk.vtkWindowToImageFilter()
windowToImageFilter.SetInput(renderWindow)
windowToImageFilter.Update()
writer = vtk.vtkPNGWriter()
writer.SetWriteToMemory(1)
writer.SetInputConnection(windowToImageFilter.GetOutputPort())
writer.Write()
data = str(buffer(writer.GetResult()))
return Image(data)
# x = 4250:4300, y = 250:300, z = : on Tony's computer (doesn't show anything)
# Thus, all results were displayed after running on Albert's computer
vtk_show(ren)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Subsampling
Step3: Results
|
5,097
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
# import functions from previous notebook
from chap22 import params
from chap22 import make_system
from chap22 import slope_func
from chap22 import event_func
def range_func(angle, params):
params = params.set(angle=angle)
system = make_system(params)
results, details = run_solve_ivp(system, slope_func,
events=event_func)
x_dist = results.iloc[-1].x
print(angle, x_dist)
return x_dist
range_func(45, params)
angles = linspace(20, 80, 21)
sweep = SweepSeries()
for angle in angles:
x_dist = range_func(angle, params)
sweep[angle] = x_dist
sweep.plot()
decorate(xlabel='Launch angle (degree)',
ylabel='Range (meter)')
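# Quick check of the coarse sweep (a sketch; SweepSeries behaves like a
# pandas Series, so idxmax/max report the best angle found so far):
print(sweep.idxmax(), sweep.max())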
res = maximize_scalar(range_func, params, bounds=[0, 90])
res
res.x, res.fun
# Solution
def event_func(t, state, system):
x, y, vx, vy = state
return x - 94.5
# Solution
system = make_system(params)
event_func(0, system.init, system)
# Solution
def height_func(angle, params):
params = params.set(angle=angle)
system = make_system(params)
results, details = run_solve_ivp(system, slope_func,
events=event_func)
height = results.iloc[-1].y
return height
# Solution
height_func(40, params)
# Solution
res = maximize_scalar(height_func, params, bounds=[0, 90])
res
# Solution
res.x, res.fun
# Solution
def error_func(velocity, params):
print(velocity)
params = params.set(velocity=velocity)
res = maximize_scalar(height_func, params, bounds=[0, 90])
return res.fun - 11.3
# Solution
error_func(40, params)
# Solution
bracket = [30, 50]
res = root_scalar(error_func, params, bracket=bracket)
# Solution
res
# Solution
min_velocity = res.root
min_velocity
# Solution
error_func(min_velocity, params)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the previous chapter we developed a model of the flight of a
Step2: range_func makes a new System object with the given value of
Step3: And we can sweep a sequence of angles like this
Step4: Here's what the results look like.
Step5: It looks like the optimal angle is near 40°.
Step6: The first parameter is the function we want to maximize. The second is
Step7: For these parameters, the optimal angle is about 41°, which yields a
Step8: Next, write a function called height_func that takes a launch angle, simulates the flight of a baseball, and returns the height of the baseball when it reaches the wall.
Step9: Now use maximize_scalar to find the optimal angle. Is it higher or lower than the angle that maximizes range?
Step10: Even though we are finding the "minimum" velocity, we are not really solving a minimization problem. Rather, we want to find the velocity that makes the height at the wall exactly 37 feet (11.3 m), given that it's launched at the optimal angle. And that's a job for root_scalar (a generic sketch of this optimize-inside-root pattern follows this list).
Step11: Test your error function before you call root_scalar.
Step12: Then use root_scalar to find the answer to the problem, the minimum velocity that gets the ball out of the park.
Step13: And just to check, run error_func with the value you found.
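The optimize-inside-root pattern above is worth isolating. Below is a minimal, self-contained sketch using plain scipy.optimize; the quadratic height model and the 11.3 m target are illustrative stand-ins, not the book's physics.
from scipy.optimize import minimize_scalar, root_scalar
def best_height(velocity):
    # Toy stand-in for the flight simulation: height at the wall as a
    # function of launch angle, maximized over angle at a fixed velocity.
    res = minimize_scalar(lambda a: -velocity * a * (90 - a) / 2000,
                          bounds=(0, 90), method='bounded')
    return -res.fun
def error(velocity):
    # Positive once the best-angle height clears the 11.3 m target.
    return best_height(velocity) - 11.3
sol = root_scalar(error, bracket=[1, 100])
print(sol.root)  # the velocity whose best-angle height exactly hits the target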
|
5,098
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.image import imread
import tensorflow as tf
import numpy as np
import sys
import os
tf.__version__
import knifey
from knifey import img_size, img_size_flat, img_shape, num_classes, num_channels
# knifey.data_dir = "data/knifey-spoony/"
knifey.maybe_download_and_extract()
dataset = knifey.load()
class_names = dataset.class_names
class_names
image_paths_train, cls_train, labels_train = dataset.get_training_set()
image_paths_train[0]
image_paths_test, cls_test, labels_test = dataset.get_test_set()
image_paths_test[0]
print("Size of:")
print("- Training-set:\t\t{}".format(len(image_paths_train)))
print("- Test-set:\t\t{}".format(len(image_paths_test)))
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true)
# Create figure with sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
for i, ax in enumerate(axes.flat):
# There may be fewer than 9 images; ensure it doesn't crash.
if i < len(images):
# Plot image.
ax.imshow(images[i],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name,
cls_pred_name)
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
def load_images(image_paths):
# Load the images from disk.
images = [imread(path) for path in image_paths]
# Convert to a numpy array and return it.
return np.asarray(images)
# Load the first images from the test-set.
images = load_images(image_paths=image_paths_test[0:9])
# Get the true classes for those images.
cls_true = cls_test[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=True)
path_tfrecords_train = os.path.join(knifey.data_dir, "train.tfrecords")
path_tfrecords_train
path_tfrecords_test = os.path.join(knifey.data_dir, "test.tfrecords")
path_tfrecords_test
def print_progress(count, total):
# Percentage completion.
pct_complete = float(count) / total
# Status-message.
# Note the \r which means the line should overwrite itself.
msg = "\r- Progress: {0:.1%}".format(pct_complete)
# Print it.
sys.stdout.write(msg)
sys.stdout.flush()
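# Quick check of the helper (added, not in the original notebook):
# 5 of 9 steps complete should print "- Progress: 55.6%".
print_progress(count=5, total=9)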
def wrap_int64(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def wrap_bytes(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
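# Illustration (added): printing a wrapped value shows the protobuf
# Feature structure that ends up inside each serialized TFRecord.
print(wrap_int64(7))
print(wrap_bytes(b'abc'))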
def convert(image_paths, labels, out_path):
# Args:
# image_paths List of file-paths for the images.
# labels Class-labels for the images.
# out_path File-path for the TFRecords output file.
print("Converting: " + out_path)
# Number of images. Used when printing the progress.
num_images = len(image_paths)
# Open a TFRecordWriter for the output-file.
with tf.python_io.TFRecordWriter(out_path) as writer:
# Iterate over all the image-paths and class-labels.
for i, (path, label) in enumerate(zip(image_paths, labels)):
# Print the percentage-progress.
print_progress(count=i, total=num_images-1)
# Load the image-file using matplotlib's imread function.
img = imread(path)
# Convert the image to raw bytes.
img_bytes = img.tobytes()  # tostring() is deprecated in NumPy; tobytes() is the current name
# Create a dict with the data we want to save in the
# TFRecords file. You can add more relevant data here.
data = \
{
'image': wrap_bytes(img_bytes),
'label': wrap_int64(label)
}
# Wrap the data as TensorFlow Features.
feature = tf.train.Features(feature=data)
# Wrap again as a TensorFlow Example.
example = tf.train.Example(features=feature)
# Serialize the data.
serialized = example.SerializeToString()
# Write the serialized data to the TFRecords file.
writer.write(serialized)
convert(image_paths=image_paths_train,
labels=cls_train,
out_path=path_tfrecords_train)
convert(image_paths=image_paths_test,
labels=cls_test,
out_path=path_tfrecords_test)
def parse(serialized):
# Define a dict with the data-names and types we expect to
# find in the TFRecords file.
# It is a bit awkward that this needs to be specified again,
# because it could have been written in the header of the
# TFRecords file instead.
features = \
{
'image': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64)
}
# Parse the serialized data so we get a dict with our data.
parsed_example = tf.parse_single_example(serialized=serialized,
features=features)
# Get the image as raw bytes.
image_raw = parsed_example['image']
# Decode the raw bytes so it becomes a tensor with type uint8.
image = tf.decode_raw(image_raw, tf.uint8)
# The type is now uint8 but we need it to be float.
image = tf.cast(image, tf.float32)
# Get the label associated with the image.
label = parsed_example['label']
# The image and label are now correct TensorFlow types.
return image, label
def input_fn(filenames, train, batch_size=32, buffer_size=2048):
# Args:
# filenames: Filenames for the TFRecords files.
# train: Boolean whether training (True) or testing (False).
# batch_size: Return batches of this size.
# buffer_size: Read buffers of this size. The random shuffling
# is done on the buffer, so it must be big enough.
# Create a TensorFlow Dataset-object which has functionality
# for reading and shuffling data from TFRecords files.
dataset = tf.data.TFRecordDataset(filenames=filenames)
# Parse the serialized data in the TFRecords files.
# This returns TensorFlow tensors for the image and labels.
dataset = dataset.map(parse)
if train:
# If training then read a buffer of the given size and
# randomly shuffle it.
dataset = dataset.shuffle(buffer_size=buffer_size)
# Allow infinite reading of the data.
num_repeat = None
else:
# If testing then don't shuffle the data.
# Only go through the data once.
num_repeat = 1
# Repeat the dataset the given number of times.
dataset = dataset.repeat(num_repeat)
# Get a batch of data with the given size.
dataset = dataset.batch(batch_size)
# Create an iterator for the dataset and the above modifications.
iterator = dataset.make_one_shot_iterator()
# Get the next batch of images and labels.
images_batch, labels_batch = iterator.get_next()
# The input-function must return a dict wrapping the images.
x = {'image': images_batch}
y = labels_batch
return x, y
def train_input_fn():
return input_fn(filenames=path_tfrecords_train, train=True)
def test_input_fn():
return input_fn(filenames=path_tfrecords_test, train=False)
some_images = load_images(image_paths=image_paths_test[0:9])
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"image": some_images.astype(np.float32)},
num_epochs=1,
shuffle=False)
some_images_cls = cls_test[0:9]
feature_image = tf.feature_column.numeric_column("image",
shape=img_shape)
feature_columns = [feature_image]
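# Illustration only (hypothetical "extra" feature, not used by the model
# below): combining several input features is just a longer column list.
example_columns = [feature_image,
                   tf.feature_column.numeric_column("extra", shape=(1,))]
len(example_columns)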
num_hidden_units = [512, 256, 128]
model = tf.estimator.DNNClassifier(feature_columns=feature_columns,
hidden_units=num_hidden_units,
activation_fn=tf.nn.relu,
n_classes=num_classes,
model_dir="./checkpoints_tutorial18-1/")
model.train(input_fn=train_input_fn, steps=200)
result = model.evaluate(input_fn=test_input_fn)
result
print("Classification accuracy: {0:.2%}".format(result["accuracy"]))
predictions = model.predict(input_fn=predict_input_fn)
cls = [p['classes'] for p in predictions]
cls_pred = np.array(cls, dtype='int').squeeze()
cls_pred
plot_images(images=some_images,
cls_true=some_images_cls,
cls_pred=cls_pred)
predictions = model.predict(input_fn=test_input_fn)
cls = [p['classes'] for p in predictions]
cls_pred = np.array(cls, dtype='int').squeeze()
np.sum(cls_pred == 2)
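# Added: a per-class count of the predictions makes the collapse onto a
# single class obvious at a glance.
np.bincount(cls_pred, minlength=num_classes)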
def model_fn(features, labels, mode, params):
# Args:
#
# features: This is the x-arg from the input_fn.
# labels: This is the y-arg from the input_fn.
# mode: Either TRAIN, EVAL, or PREDICT
# params: User-defined hyper-parameters, e.g. learning-rate.
# Reference to the tensor named "image" in the input-function.
x = features["image"]
# The convolutional layers expect 4-rank tensors
# but x is a 2-rank tensor, so reshape it.
net = tf.reshape(x, [-1, img_size, img_size, num_channels])
# First convolutional layer.
net = tf.layers.conv2d(inputs=net, name='layer_conv1',
filters=32, kernel_size=3,
padding='same', activation=tf.nn.relu)
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
# Second convolutional layer.
net = tf.layers.conv2d(inputs=net, name='layer_conv2',
filters=32, kernel_size=3,
padding='same', activation=tf.nn.relu)
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
# Flatten to a 2-rank tensor.
net = tf.contrib.layers.flatten(net)
# Eventually this should be replaced with:
# net = tf.layers.flatten(net)
# First fully-connected / dense layer.
# This uses the ReLU activation function.
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
# Second fully-connected / dense layer.
# This is the last layer so it does not use an activation function.
net = tf.layers.dense(inputs=net, name='layer_fc_2',
units=num_classes)
# Logits output of the neural network.
logits = net
# Softmax output of the neural network.
y_pred = tf.nn.softmax(logits=logits)
# Classification output of the neural network.
y_pred_cls = tf.argmax(y_pred, axis=1)
if mode == tf.estimator.ModeKeys.PREDICT:
# If the estimator is supposed to be in prediction-mode
# then use the predicted class-number that is output by
# the neural network. Optimization etc. is not needed.
spec = tf.estimator.EstimatorSpec(mode=mode,
predictions=y_pred_cls)
else:
# Otherwise the estimator is supposed to be in either
# training or evaluation-mode. Note that the loss-function
# is also required in Evaluation mode.
# Define the loss-function to be optimized, by first
# calculating the cross-entropy between the output of
# the neural network and the true labels for the input data.
# This gives the cross-entropy for each image in the batch.
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
logits=logits)
# Reduce the cross-entropy batch-tensor to a single number
# which can be used in optimization of the neural network.
loss = tf.reduce_mean(cross_entropy)
# Define the optimizer for improving the neural network.
optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
# Get the TensorFlow op for doing a single optimization step.
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
# Define the evaluation metrics,
# in this case the classification accuracy.
metrics = \
{
"accuracy": tf.metrics.accuracy(labels, y_pred_cls)
}
# Wrap all of this in an EstimatorSpec.
spec = tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=metrics)
return spec
params = {"learning_rate": 1e-4}
model = tf.estimator.Estimator(model_fn=model_fn,
params=params,
model_dir="./checkpoints_tutorial18-2/")
model.train(input_fn=train_input_fn, steps=200)
result = model.evaluate(input_fn=test_input_fn)
result
print("Classification accuracy: {0:.2%}".format(result["accuracy"]))
predictions = model.predict(input_fn=predict_input_fn)
cls_pred = np.array(list(predictions))
cls_pred
plot_images(images=some_images,
cls_true=some_images_cls,
cls_pred=cls_pred)
predictions = model.predict(input_fn=test_input_fn)
cls_pred = np.array(list(predictions))
cls_pred
np.sum(cls_pred == 0)
np.sum(cls_pred == 1)
np.sum(cls_pred == 2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This was developed using Python 3.6 (Anaconda) and TensorFlow version 1.x; the code relies on 1.x-era APIs such as tf.contrib and tf.python_io.
Step2: Load Data
Step3: The data dimensions have already been defined in the knifey module, so we just need to import the ones we need.
Step4: Set the directory for storing the data-set on your computer.
Step5: The Knifey-Spoony data-set is about 22 MB and will be downloaded automatically if it is not located in the given path.
Step6: Now load the data-set. This scans the sub-directories for all *.jpg images and puts the filenames into two lists for the training-set and test-set. This does not actually load the images.
Step7: Get the class-names.
Step8: Training and Test-Sets
Step9: Print the first image-path to see if it looks OK.
Step10: Get the test-set.
Step11: Print the first image-path to see if it looks OK.
Step12: The Knifey-Spoony data-set has now been loaded and consists of 4700 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
Step13: Helper-function for plotting images
Step14: Helper-function for loading images
Step15: Plot a few images to see if data is correct
Step16: Create TFRecords
Step17: File-path for the TFRecords file holding the test-set.
Step18: Helper-function for printing the conversion progress.
Step19: Helper-function for wrapping an integer so it can be saved to the TFRecords file.
Step20: Helper-function for wrapping raw bytes so they can be saved to the TFRecords file.
Step21: This is the function for reading images from disk and writing them along with the class-labels to a TFRecords file. This loads and decodes the images to numpy-arrays and then stores the raw bytes in the TFRecords file. If the original image-files are compressed e.g. as jpeg-files, then the TFRecords file may be many times larger than the original image-files.
Step22: Note the 4 function calls required to write the data-dict to the TFRecords file. In the original code-example from the Google Developers, these 4 function calls were actually nested. The design-philosophy for TensorFlow generally seems to favor several nested wrapper calls where one might do. (A compact TF 2.x round-trip sketch of the same write/parse cycle follows this list.)
Step23: Convert the test-set to a TFRecords-file
Step24: Input Functions for the Estimator
Step25: Helper-function for creating an input-function that reads from TFRecords files for use with the Estimator API.
Step26: This is the input-function for the training-set for use with the Estimator API
Step27: This is the input-function for the test-set for use with the Estimator API
Step28: Input Function for Predicting on New Images
Step29: These images are now stored as numpy arrays in memory, so we can use the standard input-function for the Estimator API. Note that the images are loaded as uint8 data but it must be input to the TensorFlow graph as floats so we do a type-cast.
Step30: The class-numbers are actually not used in the input-function as it is not needed for prediction. However, the true class-number is needed when we plot the images further below.
Step31: Pre-Made / Canned Estimator
Step32: You can have several input features which would then be combined in a list
Step33: In this example we want to use a 3-layer DNN with 512, 256 and 128 units respectively.
Step34: The DNNClassifier then constructs the neural network for us. We can also specify the activation function and various other parameters (see the docs). Here we just specify the number of classes and the directory where the checkpoints will be saved.
Step35: Training
Step36: Evaluation
Step37: Predictions
Step38: Predictions for the Entire Test-Set
Step39: The test-set contains 530 images in total and they have all been predicted as class 2 (spoony). So this model does not work at all for classifying the Knifey-Spoony dataset.
Step40: New Estimator
Step41: Create an Instance of the Estimator
Step42: We can then create an instance of the new Estimator.
Step43: Training
Step44: Evaluation
Step45: Predictions
Step46: Predictions for the Entire Test-Set
Step47: The Convolutional Neural Network predicts different classes for the images, although most have just been classified as 0 (forky), so the accuracy is horrible.
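As flagged at Step22, the write/parse cycle above compresses considerably on current TensorFlow. A minimal TF 2.x round-trip sketch follows (an addition, not part of the original 1.x tutorial; the 12-byte image buffer is a toy stand-in):
import tensorflow as tf
def serialize(img_bytes, label):
    # Same Feature/Features/Example nesting as the tutorial, one helper.
    feature = {
        'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_bytes])),
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    return example.SerializeToString()
with tf.io.TFRecordWriter('toy.tfrecords') as writer:
    writer.write(serialize(b'\x00' * 12, 1))
# Read it back with tf.data in eager mode.
features = {
    'image': tf.io.FixedLenFeature([], tf.string),
    'label': tf.io.FixedLenFeature([], tf.int64),
}
ds = tf.data.TFRecordDataset('toy.tfrecords').map(
    lambda s: tf.io.parse_single_example(s, features))
for record in ds:
    print(record['label'].numpy())  # -> 1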
|
5,099
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import timeit
help('numpy.random.randint')
data = pd.DataFrame(data=np.random.randint(1,10,1000),columns=['value'])
data.describe()
np.median(a=data['value'])
setup = '''
import pandas as pd
import numpy as np
data = pd.DataFrame(data=np.random.randint(1,10,1000),columns=['value'])'''
median_statement = '''np.median(a=data['value'])'''
help('timeit')
timeit.Timer(setup=setup, stmt=median_statement).timeit(number=100000)
setup = '''
import numpy as np
data = np.random.randint(1,10,1000)
'''
median_statement = '''np.median(a=data)'''
mean_statement = '''np.mean(a=data)'''
timeTestSetup = {'median': timeit.Timer(setup=setup,stmt=median_statement),
'mean': timeit.Timer(setup=setup, stmt=mean_statement)}
# Note: Timer.timeit() defaults to number=1000000 runs per statement,
# so this loop can take a while.
for (k, v) in timeTestSetup.items():
    print("Method:\t{}: Time:\t{}".format(k, v.timeit()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a panda data frame with 1000 values randomly 1 <= x < 10. Uniform random?
Step2: Could also use np.random.normal for some statistical fun
Step3: define setup string which will be run in test case, but before timing begins.
Step4: calculate the median of the column 'value' time how long it takes.
Step5: run timeit with a timer containing the setup and the median statement. The median statement will be called 100000 times.
Step6: Maybe a better test with less overhead.
Step7: Build a little dictionary with the median and mean setup and statement.
Step8: Run them both and report the results.
|