| text_prompt (string, lengths 168 to 30.3k) | code_prompt (string, lengths 67 to 124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h4>Training and Evaluation Set</h4>
Step2: <h4>Test Set</h4>
Step3: <h4>Read the target predicted by AWS ML</h4>
Step4: <h4>AWS ML Estimated/Predicted</h4>
Step5: <h4>Plot actual value and predicted value</h4>
Step6: <h4>Predicted value has a lot more noise</h4>
|
<ASSISTANT_TASK:>
Python Code:
def straight_line(x):
return 5 * x + 8
straight_line(25)
straight_line(1.254)
np.random.seed(5)
samples = 150
x_vals = pd.Series(np.random.rand(samples) * 20)
y_vals = x_vals.map(straight_line)
# Add random noise
y_noisy_vals = y_vals + np.random.randn(samples) * 3
df = pd.DataFrame({'x': x_vals,
'y':y_vals,
'y_noisy': y_noisy_vals})
df.head()
# Correlation will indicate how strongly features are related to the output
df.corr()
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df.x,
y = df.y,
label = 'ideal fit')
plt.scatter(x = df.x,
y = df.y_noisy,
color = 'r',
marker = '+',
label = 'Target')
plt.grid(True)
plt.xlabel('Input Feature')
plt.ylabel('Target')
plt.legend()
data_path = r'..\Data\RegressionExamples\straight_line'
df.to_csv(os.path.join(data_path, 'straight_line_example_all.csv'),
index = True,
index_label = 'Row')
# 130 rows for Training + Eval Set
df[df.index < 130].to_csv(os.path.join(data_path,'straight_line_noisy_example_train.csv'),
index = True,
index_label = 'Row'
,columns = ['x','y_noisy'])
# run all the samples for prediction
df.to_csv(os.path.join(data_path, 'straight_line_example_test_all.csv'),
index = True,
index_label = 'Row',
columns = ['x'])
df_predicted = pd.read_csv('./output/bp-oDvVSUKSpPe-straight_line_example_test_all.csv.gz')
df_predicted.head()
df_predicted.columns = ["Row", "y_predicted"]
df_predicted.index = df_predicted.Row
df_predicted.head()
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df.x,
y = df.y_noisy,
color = 'r',
label = 'actual',)
plt.scatter(x = df.x,
y = df_predicted.y_predicted,
color = 'b',
label = 'predicted')
plt.grid(True)
plt.legend()
# Training Data Residuals
residuals = (df_predicted.y_predicted - df.y_noisy)
fig = plt.figure(figsize = (12, 8))
plt.hist(residuals)
plt.grid(True)
plt.xlabel('(Predicted - Actual)')
plt.ylabel('Count')
plt.title('Residuals Distribution')
plt.axvline(color = 'g')
# left of 0 = prediction < actual
# right of 0 = prediction > actual
fig = plt.figure(figsize = (12, 8))
plt.boxplot([df.y_noisy, df_predicted.y_predicted],
labels = ['actual','predicted'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('Target')
plt.grid(True)
df_predicted_numeric = pd.read_csv('./output-numeric/bp-VNV4qb98Jmd-straight_line_example_test_all.csv.gz')
df_predicted_numeric.columns = ["Row", "y_predicted"]
df_predicted_numeric.head()
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df.x,
y = df.y_noisy,
color = 'r',
label = 'actual',)
plt.scatter(x = df.x,
y = df_predicted.y_predicted,
color = 'k',
label = 'predicted bin')
plt.scatter(x = df.x,
y = df_predicted_numeric.y_predicted,
color = 'b',
label = 'predicted num')
plt.legend()
fig = plt.figure(figsize = (12, 8))
plt.boxplot([df.y_noisy, df_predicted.y_predicted, df_predicted_numeric.y_predicted],
labels = ['actual','predicted-bin','predicted-numeric'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('Target')
plt.grid(True)
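# Optional check, not part of the original notebook: summarise how far each set of
# predictions is from the noisy target with a root-mean-squared error.
rmse_bin = ((df_predicted.y_predicted - df.y_noisy) ** 2).mean() ** 0.5
rmse_num = ((df_predicted_numeric.y_predicted - df.y_noisy) ** 2).mean() ** 0.5
print('RMSE (binned target model):  %.3f' % rmse_bin)
print('RMSE (numeric target model): %.3f' % rmse_num)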
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step3: 4) What's the difference between the high and low temperatures for today?
Step4: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
|
<ASSISTANT_TASK:>
Python Code:
import requests
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/17.3850,78.4867"
response=requests.get(url)
data=response.json()
data.keys()
data['currently'].keys()
print("The current wind speed is",data['currently']['windSpeed'],"miles per hour.")
apparentTemperature=data['currently']['apparentTemperature']
temperature=data['currently']['temperature']
if apparentTemperature-temperature > 0:
print("It feels", "%.2f" %(apparentTemperature-temperature),"degrees warmer.")
else:
print("It feels", "%.2f" %(temperature-apparentTemperature),"degrees cooler.")
data['daily'].keys()
mooncover=data['daily']['data'][0]['moonPhase']
if mooncover == 0:
print("Today is a new moon day. The mooncover is",mooncover)
if mooncover > 0 and mooncover <0.25:
print("The moon is in waxing crescent phase. The mooncover is",mooncover)
if mooncover == 0.25:
print("Today is a first quarter moon. The mooncover is",mooncover)
if mooncover > 0.25 and mooncover <0.5:
print("The moon is in waxing gibbous phase. The mooncover is",mooncover)
if mooncover == 0.5:
print("Today is a full moon day. The mooncover is",mooncover)
if mooncover > 0.5 and mooncover<0.75:
print("The moon is in waning gibbous phase. The mooncover is",mooncover)
if mooncover == 0.75:
print("Today is a last quarter moon. The mooncover is",mooncover)
if mooncover >0.75:
print("The moon is in waning crescent phase. The mooncover is",mooncover)
low=data['daily']['data'][0]['temperatureMin']
print("The low temperature is",low)
high=data['daily']['data'][0]['temperatureMax']
print("The high temperature is",high)
print("There is a difference of",high-low,"degrees between the high and low temperatures today.")
x=0
hotlimit=86
warmlimit=68
for date in data['daily']['data']:
if data['daily']['data'][x]['temperatureMax'] > hotlimit:
print("The high for day",x,"of the mext week is",data['daily']['data'][x]['temperatureMax'],"Fahrenheit")
print("It's a hot day. ")
if data['daily']['data'][x]['temperatureMax'] < hotlimit and data['daily']['data'][x]['temperatureMax'] > warmlimit :
print("The high for day",x,"of the mext week is",data['daily']['data'][x]['temperatureMax'],"Fahrenheit")
print("It's a warm day. ")
if data['daily']['data'][x]['temperatureMax'] < warmlimit:
print("The high for day",x,"of the mext week is",data['daily']['data'][x]['temperatureMax'],"Fahrenheit")
print("It's a cold day. ")
x=x+1
import requests
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/25.7617,-80.1918"
miamiresponse=requests.get(url)
miamidata=miamiresponse.json()
miamidata.keys()
noofhoursinaday=0
miamidata['hourly']['data']
for count in miamidata['hourly']['data']:
if miamidata['hourly']['data'][noofhoursinaday]['cloudCover'] > 0.5:
print("The temperature for hour",noofhoursinaday+1,"is",miamidata['hourly']['data'][noofhoursinaday]['temperature'],"degrees F and cloudy.")
else:
print("The temperature for hour",noofhoursinaday+1,"is",miamidata['hourly']['data'][noofhoursinaday]['temperature'],"degrees F.")
noofhoursinaday=noofhoursinaday+1
if noofhoursinaday>23:
break
import requests
timestamp='346550400'
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/40.7829,-73.9654,"+timestamp
cpresponse=requests.get(url)
cpdata=cpresponse.json()
print("The temperature at midnight of Christmas in 1980 was",cpdata['currently']['temperature'],"degrees F")
import requests
timestamp='662083200'
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/40.7829,-73.9654,"+timestamp
cpresponse=requests.get(url)
cpdata=cpresponse.json()
print("The temperature at midnight of Christmas in 1990 was",cpdata['currently']['temperature'],"degrees F")
import requests
timestamp='977702400'
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/40.7829,-73.9654,"+timestamp
cpresponse=requests.get(url)
cpdata=cpresponse.json()
print("The temperature at midnight of Christmas in 2000 was",cpdata['currently']['temperature'],"degrees F")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load capacity curves
Step2: Load ground motion records
Step3: Load damage state thresholds
Step4: Calculate fragility function
Step5: Fit lognormal CDF fragility curves
Step6: Plot fragility functions
Step7: Save fragility functions
|
<ASSISTANT_TASK:>
Python Code:
import numpy
from rmtk.vulnerability.common import utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import MSA_utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF.read_pinching_parameters import read_parameters
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import double_MSA_on_SDOF
%matplotlib inline
capacity_curves_file = '/Users/chiaracasotto/GitHub/rmtk_data/2MSA/capacity_curves.csv'
sdof_hysteresis = "/Users/chiaracasotto/GitHub/rmtk_data/pinching_parameters.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
gmrs_folder = '../../../../../rmtk_data/MSA_records'
number_models_in_DS = 1
no_bins = 2
no_rec_bin = 10
damping_ratio = 0.05
minT = 0.1
maxT = 2
filter_aftershocks = 'FALSE'
Mw_multiplier = 0.92
waveform_path = '../../../../../rmtk_data/2MSA/waveform.csv'
gmrs = utils.read_gmrs(gmrs_folder)
gmr_characteristics = MSA_utils.assign_Mw_Tg(waveform_path, gmrs, Mw_multiplier,
damping_ratio, filter_aftershocks)
#utils.plot_response_spectra(gmrs,minT,maxT)
damage_model_file = "/Users/chiaracasotto/GitHub/rmtk_data/2MSA/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
degradation = False
record_scaled_folder = "../../../../../rmtk_data/2MSA/Scaling_factors"
msa = MSA_utils.define_2MSA_parameters(no_bins,no_rec_bin,record_scaled_folder,filter_aftershocks)
PDM, Sds, gmr_info = double_MSA_on_SDOF.calculate_fragility(
capacity_curves, hysteresis, msa, gmrs, gmr_characteristics,
damage_model, damping_ratio,degradation, number_models_in_DS)
IMT = 'Sa'
T = 0.47
#T = numpy.arange(0.4,1.91,0.01)
regression_method = 'max likelihood'
fragility_model = MSA_utils.calculate_fragility_model_damaged( PDM,gmrs,gmr_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
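# Reference sketch, not part of the original notebook: a lognormal CDF fragility curve has
# the standard form P(DS >= ds | IM = x) = Phi(ln(x / theta) / beta), where theta is the
# median capacity and beta the logarithmic dispersion. The parameter values below are
# purely illustrative.
from scipy.stats import norm
theta_demo, beta_demo = 0.3, 0.6
iml_demo = numpy.linspace(0.01, 4, 5)
print(norm.cdf(numpy.log(iml_demo / theta_demo) / beta_demo))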
minIML, maxIML = 0.01, 4
MSA_utils.plot_fragility_model(fragility_model,damage_model,minIML, maxIML)
output_type = "csv"
output_path = "../../../../../rmtk_data/2MSA/"
minIML, maxIML = 0.01, 4
tax = 'RC'
MSA_utils.save_mean_fragility(fragility_model,damage_model,tax,output_type,output_path,minIML, maxIML)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 3.2 Hour Angle (HA) and Local Sidereal Time (LST)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import HTML
import ephem
import matplotlib
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
import matplotlib
HTML('../style/code_toggle.html')
#Setting up the observer
JB = ephem.Observer()
JB.lat = '53:14:10'
JB.lon = '-02:18:26'
JB.elevation = 0.0
months = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
h_number = np.zeros((14,),dtype=float)
#Calculating the LST at different times of the year
for k in range(1,14):
if k < 10:
JB.date = '2016/'+str(k)+'/22 23:59:59'
str_sid = str(JB.sidereal_time()).split(":")
h_number[k-1] = float(str_sid[0]) + float(str_sid[1])/60. + float(str_sid[2])/3600.
elif k > 10:
JB.date = '2016/'+str(k-1)+'/22 23:59:59'
str_sid = str(JB.sidereal_time()).split(":")
h_number[k-1] = float(str_sid[0]) + float(str_sid[1])/60. + float(str_sid[2])/3600
#Plot
matplotlib.rcParams.update({'font.size': 13.75})
fig, ax = plt.subplots()
h_number[-1] = h_number[0]
x = np.arange(14)
x[9:] = x[9:]-1
ax.plot(x,h_number)
ax.set_ylim([0,24])
ticks = np.array([0,2,4,6,8,10,12,14,16,18,20,22,24])
plt.yticks(ticks)
labels = [item.get_text() for item in ax.get_xticklabels()]
labels = np.array(["Jan","Mar","May","Jul","Sep","Nov","Jan"])
ax.set_xticklabels(labels)
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(['$0^h$','$2^h$','$4^h$','$6^h$','$8^h$','$10^h$','$12^h$','$14^h$','$16^h$','$18^h$','$20^h$','$22^h$','$24^h$'])
ax.set_yticklabels(labels)
plt.grid('on')
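# Sketch, not part of the original notebook: with the LST in hand, the hour angle of a
# source follows from HA = LST - RA. The coordinates below (roughly the Crab Nebula) are
# an illustrative choice only.
src = ephem.FixedBody()
src._ra = ephem.hours('05:34:32')
src._dec = ephem.degrees('22:00:52')
src._epoch = ephem.J2000
src.compute(JB)
ha = ephem.hours(JB.sidereal_time() - src.ra)  # hour angle = LST - RA
print('LST = %s  RA = %s  HA = %s' % (JB.sidereal_time(), src.ra, ha))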
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we are going to remove everything that is not considered a valid text string. To do this, we will define a function that removes whatever we do not want to count.
Step2: Now we are going to create the first RDD from the contents of the book.
|
<ASSISTANT_TASK:>
Python Code:
fileName='book.txt'
import re
def removePunctuation(text):
return re.sub('[^a-z 0-9]', '', text.strip().lower())
shakespeareRDD = (sc
.textFile(fileName, 8)
.map(removePunctuation))
shakespeareRDD.take(4)
print('\n'.join(shakespeareRDD
.zipWithIndex() # to (line, lineNum)
.map(lambda l_num: '{0}: {1}'.format(l_num[1], l_num[0])) # to 'lineNum: line'
.take(15)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in matched Gene expression data.
Step2: Run a simple screen for DX probes
Step3: Pathway and Gene Annotation Analysis
Step4: Overexpressed pathways
Step5: Underexpressed pathways
Step6: I am following up on Fatty Acid Metabolism as opposed to biological oxidations, because it has a larger effect size, although the smaller gene-set size gives it a less extreme p-value.
|
<ASSISTANT_TASK:>
Python Code:
import NotebookImport
from Imports import *
import seaborn as sns
sns.set_context('paper',font_scale=1.5)
sns.set_style('white')
matched_rna = pd.read_hdf('/data_ssd/RNASeq_2014_07_15.h5', 'matched_tn')
rna_microarray = pd.read_hdf('/data_ssd/GEO_microarray_dx.h5', 'data')
matched_rna = rna_microarray.join(matched_rna)
dx_rna = binomial_test_screen(matched_rna, fc=1.)
dx_rna = dx_rna[dx_rna.num_dx > 300]
dx_rna.frac.hist(bins=30)
dx_rna.ix[['ADH1A','ADH1B','ADH1C']]
dx_rna.shape
dx_rna.p.rank().ix[['ADH1A','ADH1B','ADH1C']]
dx_rna.sort('p').head(10)
paired_bp_tn_split(matched_rna.ix['ADH1B'], codes, data_type='mRNA')
gs2 = gene_sets.ix[dx_rna.index].fillna(0)
rr = screen_feature(dx_rna.frac, rev_kruskal, gs2.T,
align=False)
fp = (1.*gene_sets.T * dx_rna.frac).T.dropna().replace(0, np.nan).mean().order()
fp.name = 'mean frac'
rr.ix[ti(fp > .5)].join(fp).sort('p').head()
rr.ix[ti(fp < .5)].join(fp).sort('p').head()
def fig_1f(ax):
v = pd.concat([dx_rna.frac,
dx_rna.frac.ix[ti(gs2['REACTOME_CELL_CYCLE_MITOTIC']>0)],
dx_rna.frac.ix[ti(gs2['KEGG_FATTY_ACID_METABOLISM']>0)]])
v1 = pd.concat([pd.Series('All Genes', dx_rna.frac.index),
pd.Series('Cell Cycle\nMitotic',
ti(gs2['REACTOME_CELL_CYCLE_MITOTIC']>0)),
pd.Series('Fatty Acid\nMetabolism',
ti(gs2['KEGG_FATTY_ACID_METABOLISM']>0))])
v1.name = ''
v.name = 'Fraction Overexpressed'
violin_plot_pandas(v1, v, ann=None, ax=ax)
prettify_ax(ax)
return ax
#Do not import
fig, ax = subplots(1,1, figsize=(5,3))
fig_1f(ax);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate some data
Step2: Let's try adding Gaussian (normal) noise...
Step3: ... or some Cauchy (heavy-tailed) noise
Step4: Trying to recover the linear function using Ridge regression
Step5: Cauchy noise
Step6: That looks much less impressive. The problem is that the squared-$\ell^2$-norm minimized by Ridge regression is too sensitive to outliers.
Step7: $\ell^1$ cost
Step8: Japanese cost
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
from sklearn.linear_model import Ridge
from flexible_linear import FlexibleLinearRegression
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
%matplotlib inline
np.random.seed(1)
N = 500
A = 50
B = 3
x = np.linspace(0, 100, N, dtype=float)
X = x.reshape(-1, 1) # scikit-learn wants 2d arrays
y = A + B * x
plt.plot(x, y, '-')
y_gauss = y + 20*np.random.randn(N)
plt.plot(x, y, '-', x, y_gauss, '-')
plt.legend(['True', 'Noisy'], loc='upper left')
y_cauchy = y + 20*np.random.standard_cauchy(N)
plt.plot(x, y, '-', x, y_cauchy, '.')
plt.legend(['True', 'Noisy'], loc='lower right')
clf = Ridge()
clf.fit(X, y_gauss)
pred = clf.predict(X)
plt.plot(x, y, '-', x, pred, '-')
plt.legend(['True', 'Recovered'], loc='upper left')
print(" True: %.3f + %.3f * x" % (A, B))
print("Recovered: %.3f + %.3f * x" % (clf.intercept_, clf.coef_[0]))
clf = Ridge()
clf.fit(X, y_cauchy)
pred = clf.predict(X)
plt.plot(x, y, '-', x, pred, '-')
plt.legend(['True', 'Recovered'], loc='upper left')
print(" True: %.3f + %.3f * x" % (A, B))
print("Recovered: %.3f + %.3f * x" % (clf.intercept_, clf.coef_[0]))
clf = FlexibleLinearRegression(cost_func='l2', C=0.0)
clf.fit(X, y_cauchy)
pred = clf.predict(X)
plt.plot(x, y, '-', x, pred, '-')
plt.legend(['True', 'Recovered'], loc='upper left')
print(" True: %.3f + %.3f * x" % (A, B))
print("Recovered: %.3f + %.3f * x" % (clf.coef_[0], clf.coef_[1]))
clf = FlexibleLinearRegression(cost_func='l1', C=0.0)
clf.fit(X, y_cauchy)
pred = clf.predict(X)
plt.plot(x, y, '-', x, pred, '-')
plt.legend(['True', 'Recovered'], loc='upper left')
print(" True: %.3f + %.3f * x" % (A, B))
print("Recovered: %.3f + %.3f * x" % (clf.coef_[0], clf.coef_[1]))
clf = FlexibleLinearRegression(cost_func='japanese', C=0.0, cost_opts={'eta': 10.0})
clf.fit(X, y_cauchy)
pred = clf.predict(X)
plt.plot(x, y, '-', x, pred, '-')
plt.legend(['True', 'Recovered'], loc='upper left')
print(" True: %.3f + %.3f * x" % (A, B))
print("Recovered: %.3f + %.3f * x" % (clf.coef_[0], clf.coef_[1]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q1 - random sample
Step2: Q1 - draw the scatter plot - data
Step3: Q2 - sorting
Step4: Q3 - mean
Step5: Q4 - distance
Step6: Q5 - a function as a parameter
Step7: Write the previous function using the function fct.
Step8: Q6 - optimize
Step9: The first cut point found (the orange line) corresponds to one of the edges of one of the steps of the staircase.
Step10: Q8 - recursive function
Step11: Q9 - cost
Step12: The following instruction shows where the program spends most of its time.
Step13: The sum function hides a loop; combined with the for loop in the optimise function, this explains the $O(n^2)$ cost. At each iteration an observation moves from one side of the cut to the other and the means are then recomputed... There are two ways to optimize this computation, depending on whether or not we take advantage of the fact that the values of $Y$ are binary.
Step14: It is much faster and it works for any function fct.
Step15: If we do not assume that the $Y_i$ are binary and they can take arbitrary values, the histograms will contain more than two elements. In that case we need to keep two sorted arrays of the $Y_i$, one on each side of the cut. When the cut $k$ moves, this amounts to moving $Y_k$ from one array to the other, which can be done with a binary search, hence in $O(\ln n)$. Updating the mean of the absolute values is immediate when the function is fct=abs(x-y), but not necessarily in the general case. When it is an absolute value, a few results on quantile regression are needed.
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import random
X = [random.random() * 16 for i in range(0,1000)]
Y = [ int(x**0.5) % 2 for x in X]
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(X, Y, '.')
nuage = [(x,y) for x,y in zip(X,Y)]
nuage.sort()
nuage[:5]
def somme_diff(xy, i, j):
m = sum(e[1] for e in xy[i:j]) / (j-i)
return sum(abs(e[1]-m) for e in xy[i:j])
somme_diff(nuage, 0, 5), somme_diff(nuage, 0, len(nuage))
def difference(nuage, i, j, k):
m1 = somme_diff(nuage, i, k)
m2 = somme_diff(nuage, k, j)
m = somme_diff(nuage, i, j)
return abs(m1+m2-m)
difference(nuage, 0, len(nuage), 100)
def fct(x, y):
return abs(x-y)
def distance_list(list_x, list_y, f):
return sum(f(x,y) for x,y in zip(list_x, list_y))
distance_list([0, 1], [0, 2], fct)
def somme_diff(xy, i, j, f):
m = sum(e[1] for e in xy[i:j]) / (j-i)
# The previous functions were modified to compute
# a custom, user-defined error function.
return sum(f(e[1], m) for e in xy[i:j])
def difference(nuage, i, j, k, f):
m1 = somme_diff(nuage, i, k, f)
m2 = somme_diff(nuage, k, j, f)
m = somme_diff(nuage, i, j, f)
return abs(m - m1) + abs(m - m2)
difference(nuage, 0, len(nuage), 100, fct)
def optimise(nuage, i, j, f):
mx = -1
ib = None
for k in range(i+1,j-1):
d = difference(nuage, i,j,k, f)
if ib is None or d > mx:
mx = d
ib = k
if ib is None:
# If the interval is empty, return a cut
# equal to i.
ib = i
mx = 0
return ib, mx
optimise(nuage, 0, len(nuage), fct)
import matplotlib.pyplot as plt
x = nuage[552][0]
plt.plot(X,Y,'.')
plt.plot([x,x], [0,1])
optimise(nuage, 0, 68, fct), optimise(nuage, 68, len(nuage), fct)
import matplotlib.pyplot as plt
x = nuage[58][0]
x2 = nuage[552][0]
plt.plot(X,Y,'.')
plt.plot([x,x], [0,1])
plt.plot([x2,x2], [0,1])
def recursive(nuage, i, j, f, th=0.1):
k, mx = optimise(nuage, i, j, f)
if mx <= th:
return None
r1 = recursive(nuage, i, k, f, th=th)
r2 = recursive(nuage, k, j, f, th=th)
if r1 is None and r2 is None:
return [k]
elif r1 is None:
return [k] + r2
elif r2 is None:
return r1 + [k]
else:
return r1 + [k] + r2
r = recursive(nuage, 0, len(nuage), fct)
r
import matplotlib.pyplot as plt
plt.plot(X, Y, '.')
for i in r:
x = nuage[i][0]
plt.plot([x,x], [0,1])
def somme_diff_abs(xy, i, j):
m = sum(e[1] for e in xy[i:j]) / (j-i)
return sum(abs(e[1]-m) for e in xy[i:j])
def difference_abs(nuage, i, j, k):
m1 = somme_diff_abs(nuage, i, k)
m2 = somme_diff_abs(nuage, k, j)
m = somme_diff_abs(nuage, i, j)
return abs(m1+m2-m)
def optimise_abs(nuage, i, j):
mx = -1
ib = None
for k in range(i+1,j-1):
d = difference_abs(nuage, i,j,k)
if ib is None or d > mx:
mx = d
ib = k
if ib is None:
ib = i
mx = 0
return ib, mx
%timeit optimise_abs(nuage, 0, len(nuage))
# %prun optimise_abs(nuage, 0, len(nuage))
def histogramme_y(xy, i, j):
d = [0, 0]
for x, y in xy[i:j]:
d[y] += 1
return d
def somme_diff_histogramme(d):
m = d[1] * 1.0 / (d[0] + d[1])
return (1-m) * d[1] + m * d[0]
def optimise_rapide(nuage, i, j):
# Compute the histograms.
d1 = histogramme_y(nuage, i, i+1)
d2 = histogramme_y(nuage, i+1, j)
d = d1.copy()
d[0] += d2[0]
d[1] += d2[1]
m = somme_diff_histogramme(d)
m1 = somme_diff_histogramme(d1)
m2 = somme_diff_histogramme(d2)
mx = -1
ib = None
for k in range(i+1,j-1):
d = abs(m1+m2-m)
if ib is None or d > mx:
mx = d
ib = k
# Update the histograms: add on one side, remove from the other.
y = nuage[k][1]
d1[y] += 1
d2[y] -= 1
m1 = somme_diff_histogramme(d1)
m2 = somme_diff_histogramme(d2)
if ib is None:
ib = i
mx = 0
return ib, mx
# Check that both versions give the same results.
optimise_rapide(nuage, 0, len(nuage)), optimise_abs(nuage, 0, len(nuage))
%timeit optimise_rapide(nuage, 0, len(nuage))
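# Sketch, not part of the original notebook, of the idea described in the text for
# arbitrary (non-binary) Y values: keep one sorted list of Y on each side of the cut and
# move a single value per step with a binary search. The function name and the use of the
# lower median are illustrative assumptions.
import bisect
def medians_par_coupure(nuage, i, j):
    gauche = []                                    # sorted Y values of nuage[i:k]
    droite = sorted(y for _, y in nuage[i:j])      # sorted Y values of nuage[k:j]
    resultats = []
    for k in range(i + 1, j):
        y = nuage[k - 1][1]
        droite.pop(bisect.bisect_left(droite, y))  # O(log n) search (list shift aside)
        bisect.insort(gauche, y)                   # O(log n) search (list shift aside)
        med_g = gauche[(len(gauche) - 1) // 2]
        med_d = droite[(len(droite) - 1) // 2]
        resultats.append((k, med_g, med_d))
    return resultats
medians_par_coupure(nuage, 0, len(nuage))[:3]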
import random
X2 = list(range(10))
Y2 = X2
import matplotlib.pyplot as plt
plt.plot(X2,Y2,'.')
nuage2 = [(x,y) for x,y in zip(X2,Y2)]
nuage2.sort()
r = recursive(nuage2, 0, len(nuage2), fct)
len(r), r
import matplotlib.pyplot as plt
plt.plot(X2,Y2,'.')
for i in r:
x = nuage2[i][0]
plt.plot([x,x], [0,10])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: init the connection to the ftp server
Step2: What is in the current directory?
Step3: change to the directory of level 4 delayed-time global maps, and list the content.
Step4: get one file, and unzip it
Step5: close the connection to the ftp server
Step6: Now we are going to download several files.
|
<ASSISTANT_TASK:>
Python Code:
from ftplib import FTP
import os
import numpy as np
ftp = FTP('ftp.sltac.cls.fr')
ftp.login('pprandi','PierreCMEMS2017')
ftp.retrlines('LIST')
ftp.cwd('Core/SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047/dataset-duacs-rep-global-merged-allsat-phy-l4-v3/2016/')
ftp.retrlines('LIST')
filename="dt_global_allsat_phy_l4_20160925_20170209.nc.gz"
downloadDir = "/Users/pierre/Documents/Presentations/210706_CMEMSArctic/data_tmp"
fHandle = open("%s/%s" %(downloadDir, filename), "wb")
ftp.retrbinary("RETR " + filename , fHandle.write)
fHandle.close()
os.system("gzip -d %s/%s" %(downloadDir, filename))
ftp.quit()
# get back to parent directory on the ftp server
ftp = FTP('ftp.sltac.cls.fr')
ftp.login('pprandi','PierreCMEMS2017')
ftp.cwd('Core/SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047/dataset-duacs-rep-global-merged-allsat-phy-l4-v3')
# get two files a month for five years
for year in np.arange(2010, 2016, 1):
ftp.cwd("%4i" %year)
remoteFilesList = ftp.nlst()
for month in np.arange(1, 13, 1):
for day in [1, 15]:
print("downloading year %s, month %s, day %s" %(year, month, day))
pattern = "dt_global_allsat_phy_l4_%4i%02i%02i_" %(year, month, day)
index = [i for i, elem in enumerate(remoteFilesList) if pattern in elem][0]
remoteFilename = remoteFilesList[index]
localFilename = "%s/dt_global_allsat_phy_l4_%4i%02i%02i.nc.gz" %(downloadDir, year, month, day)
if not os.path.isfile(localFilename[0:-3]):
fHandle = open("%s" %localFilename, "wb")
ftp.retrbinary("RETR " + remoteFilename , fHandle.write)
fHandle.close()
os.system("gzip -d %s" %localFilename)
ftp.cwd("..")
ftp.quit()
# download one along track file
ftp = FTP('ftp.sltac.cls.fr')
ftp.login('pprandi','PierreCMEMS2017')
ftp.cwd('/Core/SEALEVEL_GLO_PHY_L3_REP_OBSERVATIONS_008_045/dataset-duacs-rep-global-alg-phy-l3-v3/2016/')
ftp.retrlines('LIST')
filename="dt_global_alg_phy_vfec_l3_20160925_20170209.nc.gz"
downloadDir = "/Users/pierre/Documents/Presentations/201706_CMEMSArctic/data_tmp"
fHandle = open("%s/%s" %(downloadDir, filename), "wb")
ftp.retrbinary("RETR " + filename , fHandle.write)
fHandle.close()
os.system("gzip -d %s/%s" %(downloadDir, filename))
ftp.quit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Passage Ranking using TFR-BERT
Step2: Import TensorFlow Ranking and the libraries used throughout the notebook.
Step3: Data preparation
Step4: Overview of TFR-BERT in Orbit
Step5: Note
Step6: Define Task
Step7: Train and evaluate the model
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q tensorflow-ranking tf-models-official
import os
import tensorflow as tf
import tensorflow_ranking as tfr
from official.nlp.configs import encoders
from tensorflow_ranking.extension.premade import tfrbert_task
!wget -O "/tmp/train.tfrecords" "https://ciir.cs.umass.edu/downloads/Antique/tf-ranking/antique_train_seq_64_elwc.tfrecords"
!wget -O "/tmp/test.tfrecords" "https://ciir.cs.umass.edu/downloads/Antique/tf-ranking/antique_test_seq_64_elwc.tfrecords"
!mkdir -p /tmp/tfrbert
!wget "https://storage.googleapis.com/cloud-tpu-checkpoints/bert/v3/uncased_L-12_H-768_A-12.tar.gz" -P "/tmp/tfrbert"
!mkdir -p /tmp/tfrbert/uncased_L-12_H-768_A-12
!tar -xvf /tmp/tfrbert/uncased_L-12_H-768_A-12.tar.gz --strip-components 3 -C "/tmp/tfrbert/uncased_L-12_H-768_A-12/"
SEQ_LENGTH = 64
context_feature_spec = {}
example_feature_spec = {
'input_word_ids': tf.io.FixedLenFeature(
shape=(SEQ_LENGTH,), dtype=tf.int64,
default_value=[0] * SEQ_LENGTH),
'input_mask': tf.io.FixedLenFeature(
shape=(SEQ_LENGTH,), dtype=tf.int64,
default_value=[0] * SEQ_LENGTH),
'input_type_ids': tf.io.FixedLenFeature(
shape=(SEQ_LENGTH,), dtype=tf.int64,
default_value=[0] * SEQ_LENGTH)}
label_spec = (
"relevance",
tf.io.FixedLenFeature(shape=(1,), dtype=tf.int64, default_value=-1)
)
# Set up data config
# We use a small list size here for demo purposes only. Users can use a larger
# list size on a machine with more memory to train TFR-BERT.
train_data_config = tfrbert_task.TFRBertDataConfig(
input_path="/tmp/train.tfrecords",
is_training=True,
global_batch_size=8,
list_size=2,
dataset_fn='tfrecord',
seq_length=64)
validation_data_config = tfrbert_task.TFRBertDataConfig(
input_path="/tmp/test.tfrecords",
is_training=False,
global_batch_size=8,
list_size=2,
dataset_fn='tfrecord',
seq_length=64)
# Set up task config
task_config = tfrbert_task.TFRBertConfig(
init_checkpoint='/tmp/tfrbert/uncased_L-12_H-768_A-12/bert_model.ckpt',
train_data=train_data_config,
validation_data=validation_data_config,
model=tfrbert_task.TFRBertModelConfig(
encoder=encoders.EncoderConfig(
bert=encoders.BertEncoderConfig(num_layers=12))))
# Set up TFRBertTask
task = tfrbert_task.TFRBertTask(
task_config,
label_spec=label_spec,
dataset_fn=tf.data.TFRecordDataset,
logging_dir='/tmp/model_dir')
metrics = task.build_metrics()
model = task.build_model()
task.initialize(model)
train_dataset = task.build_inputs(task_config.train_data)
vali_dataset = task.build_inputs(task_config.validation_data)
train_iterator = iter(train_dataset)
vali_iterator = iter(vali_dataset)
optimizer = tf.keras.optimizers.Adam(lr=1e-6)
NUM_TRAIN_STEPS = 100
EVAL_STEPS = 10
for train_step in range(NUM_TRAIN_STEPS):
task.train_step(next(train_iterator), model, optimizer, metrics=metrics)
train_metrics = {m.name: m.result().numpy() for m in metrics}
print("Training metrics for epoch: " + str(train_step) + " ", train_metrics)
if train_step % EVAL_STEPS == 0:
task.validation_step(next(train_iterator), model, metrics=metrics)
vali_metrics = {m.name: m.result().numpy() for m in metrics}
print("Validation metrics for epoch: " + str(train_step) + " ",
vali_metrics)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The oandapy library wraps the Oanda API in a format that makes our code cleaner and the task of extracting information from the API easier.
Step2: We instantiate (create an instance) to prepare for the API call we are about to make. We name this instance oanda; the name itself is flexible. In this case, we follow the documentation provided with oandapy.
Step3: Using the instance we created earlier with the name oanda, we make an API call using the get_prices method while passing the argument instruments="AUD_USD,NZD_USD,EUR_USD" to the API. The argument tells the Oanda API to return information only on these instruments. All the data is stored in the response variable.
Step4: The above value is a Python list object. Python lists can be identified by the square brackets [].
Step5: We can store individual values for future use
Step6: We can also use a pandas DataFrame to display the values in a familiar and easier-to-read table format. The pandas library is a powerful library created for the sole purpose of data analysis
Step7: Obtaining a List of Tradable Instruments
Step8: Getting Historical Prices
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import oandapy
import configparser
config = configparser.ConfigParser()
config.read('../config/config_v1.ini')
account_id = config['oanda']['account_id']
api_key = config['oanda']['api_key']
oanda = oandapy.API(environment="practice",
access_token=api_key)
response = oanda.get_prices(instruments=
"AUD_USD,NZD_USD,EUR_USD")
print(response)
response['prices']
response["prices"][0]['ask']
data = response['prices']
time_stamp = data[0]['time']
instrument = data[0]['instrument']
bid_price = data[0]['bid']
ask_price = data[0]['ask']
print("[{}] {} bid={} ask={}".format(time_stamp,
instrument,
bid_price,
ask_price))
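# Illustrative extension, not in the original: the stored bid/ask also give the spread and
# the mid price for this instrument.
spread = ask_price - bid_price
mid_price = (bid_price + ask_price) / 2.0
print("{} spread={:.5f} mid={:.5f}".format(instrument, spread, mid_price))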
pd.DataFrame(response["prices"])
response = oanda.get_instruments(account_id)
pd.DataFrame(response['instruments']).head()
pd.DataFrame(response['instruments']).tail()
response = oanda.get_history(instrument="EUR_USD")
print(response)
res = pd.DataFrame(response["candles"])
res.head()
res.columns = ['Close_Ask', 'Close_Bid', 'Complete',
'High_Ask', 'High_Bid', 'Low_Ask', 'Low_Bid',
'Open_Ask', 'Open_Bid', 'Time', 'Volume']
res = res.reindex_axis(['Time', 'Open_Bid', 'Open_Ask',
'High_Bid', 'High_Ask', 'Low_Bid',
'Low_Ask', 'Close_Bid', 'Close_Ask',
'Complete', 'Volume'],
axis=1)
res.tail()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Automatic generation of coupled cluster equations
Step2: Define the orbital spaces
Step3: Define the Hamiltonian operator
Step4: Define a function to compute the cluster operator truncated to level n
Step5: Set up the similarity-transformed Hamiltonian and compute its expectation value
Step6: Running the code
Step7: We can inspect all the singles equations
Step8: And generate nice-looking equations
Step9: Counting the number of diagrams
|
<ASSISTANT_TASK:>
Python Code:
import wicked as w
import time
from IPython.display import display, Math, Latex
def latex(expr):
Function to render any object that has a member latex() function
display(Math(expr.latex()))
w.reset_space()
w.add_space('o','fermion','occupied',['i','j','k','l','m','n'])
w.add_space('v','fermion','unoccupied',['a','b','c','d','e','f'])
E0 = w.op("E_0",[''])
F = w.utils.gen_op('f',1,'ov','ov')
V = w.utils.gen_op('v',2,'ov','ov')
H = E0 + F + V
def make_T(n):
components = [f"{'v+' * k} {'o' * k}" for k in range(1,n + 1)]
return w.op("t",components)
make_T(3)
def cc_equations(n):
start = time.perf_counter()
wt = w.WickTheorem()
T = make_T(n)
Hbar = w.bch_series(H,T,4)
expr = wt.contract(w.rational(1), Hbar, 0, 2 * n)
mbeq = expr.to_manybody_equation("r")
end = time.perf_counter()
t = end - start
equations = {}
for r in range(0,n + 1):
s = f"{'o' * r}|{'v' * r}"
equations[r] = (mbeq[s])
return equations, t
equations, t = cc_equations(5)
# count the number of terms for each rank
s = 0
for rank,eq in equations.items():
print(f'Rank {rank}: {len(eq)} equations')
s += len(eq)
print(f'\nGenerated {s} equations in {t:.3f} seconds')
equations[1]
for eq in equations[1]:
latex(eq)
rows = []
for n in range(1,6):
equations, t = cc_equations(n)
count = " ".join([f'{len(eqs):4d}' for k, eqs in equations.items()])
rows.append(f' {n} {t:4.1f} {count}')
width = 39
print(f'{"=" * width}')
print(f' n time Excitation level ')
print(f' (s) 0 1 2 3 4 5 ')
print(f'{"-" * width}')
print("\n".join(rows))
print(f'{"=" * width}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Step12: End of Project 1.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
Step39: Project 4
Step40: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step41: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step42: End of Project 4.
Step46: Project 5
Step47: Run the following cell to recreate the network and train it once again.
Step48: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step49: End of Project 5.
Step53: Project 6
Step54: Run the following cell to train your network with a small polarity cutoff.
Step55: And run the following cell to test its performance. It should be
Step56: Run the following cell to train your network with a much larger polarity cutoff.
Step57: And run the following cell to test its performance.
Step58: End of Project 6.
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
import re
for index, review in enumerate(reviews):
label = labels[index]
    review = re.sub(r'\s+', ' ', review) # condense whitespace chars
words = review.split(' ')
total_counts += Counter(words)
if label == 'POSITIVE':
positive_counts += Counter(words)
if label == 'NEGATIVE':
negative_counts += Counter(words)
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()[:10]
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()[:10]
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
common_words = [x for x in total_counts.most_common() if x[1] > 100]
for word, count in common_words:
pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word]+1)
for word, ratio in pos_neg_ratios.most_common():
if ratio > 1:
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log(1 / (ratio + 0.01))
print(pos_neg_ratios.most_common()[:15])
print(list(reversed(pos_neg_ratios.most_common()[:15])))
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# Note: the ratios were already converted to logs above; examine the same words again
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()[:10]
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:10]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
from IPython.display import Image
Image(filename='sentiment_network_2.png')
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1,vocab_size))
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i, word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
def update_input_layer(review):
Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
    review = re.sub(r'\s+', ' ', review) # condense whitespace
words = review.split(' ')
for word in words:
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
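# Quick sanity check (uses the layer_0 and review set up above): how many
# distinct vocabulary words did the first review activate?
print("non-zero entries in layer_0:", np.count_nonzero(layer_0))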
def get_target_for_label(label):
Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
# TODO: Your code here
if label == 'POSITIVE':
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0 # clear out previous state
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
        # loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = layer_2 - self.get_target_for_label(label)
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 -= np.dot(layer_1.T, layer_2_delta) * self.learning_rate
self.weights_0_1 -= np.dot(self.layer_0.T, layer_1_delta) * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
                if(np.abs(layer_2_error) < 0.5):
                    correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
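# Sketch of the noise-reduction idea used in Project 4 (a standalone
# illustration, not the class implementation): record word presence (1)
# instead of raw counts so very frequent filler words stop dominating layer_0.
def update_input_layer_binary(review):
    global layer_0
    layer_0 *= 0
    for word in review.split(" "):
        layer_0[0][word2index[word]] = 1
update_input_layer_binary(reviews[0])
print("max value with binary inputs:", layer_0.max())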
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0 # clear out previous state
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
        # loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = layer_2 - self.get_target_for_label(label)
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 -= np.dot(layer_1.T, layer_2_delta) * self.learning_rate
self.weights_0_1 -= np.dot(self.layer_0.T, layer_1_delta) * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
                if(np.abs(layer_2_error) < 0.5):
                    correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
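# Consistency check (illustrative): the index-sum shortcut above should match
# the full vector-matrix product for this small example.
print(np.allclose(layer_1, layer_0.dot(weights_0_1)))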
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
    def train(self, training_reviews_raw, training_labels):
        # Pre-process the raw reviews into lists of unique word indices so the
        # forward and backward passes only touch weights for words that appear
        training_reviews = list()
        for review in training_reviews_raw:
            indices = set()
            for word in review.split(" "):
                if(word in self.word2index.keys()):
                    indices.add(self.word2index[word])
            training_reviews.append(list(indices))
        # make sure we have a matching number of reviews and labels
        assert(len(training_reviews) == len(training_labels))
        # Keep track of correct predictions to display accuracy during training
        correct_so_far = 0
        # Remember when we started for printing time statistics
        start = time.time()
        # loop through all the given reviews and run a forward and backward pass,
        # updating weights for every item
        for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = layer_2 - self.get_target_for_label(label)
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
                self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate
                # only the rows of weights_0_1 for words in this review need updating
                for index in review:
                    self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
                if(np.abs(layer_2_error) < 0.5):
                    correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
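# Numeric companion to the plot above (uses the counter already built): how
# many vocabulary words appear exactly once in the corpus?
print(sum(1 for word, cnt in total_counts.items() if cnt == 1))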
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, min_count = 10, polarity_cutoff = 0.1, hidden_nodes = 10, learning_rate = 0.1):
        Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels, min_count, polarity_cutoff)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
        # Notes on pre_process_data (from the project instructions):
        # - Calculate the positive-to-negative ratios of words used in the reviews
        #   (reusing code written elsewhere in the notebook, moved into the class).
        # - Only calculate a positive-to-negative ratio for words that occur at
        #   least 50 times, so rare words are not given too much sentiment.
        # - Only add words to the vocabulary if they occur more than min_count times.
        # - Only add words to the vocabulary if the absolute value of their
        #   positive-to-negative ratio is at least polarity_cutoff.
def pre_process_data(self, reviews, labels, min_count, polarity_cutoff):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
    def train(self, training_reviews_raw, training_labels):
        # Pre-process the raw reviews into lists of unique word indices so the
        # forward and backward passes only touch weights for words that appear
        training_reviews = list()
        for review in training_reviews_raw:
            indices = set()
            for word in review.split(" "):
                if(word in self.word2index.keys()):
                    indices.add(self.word2index[word])
            training_reviews.append(list(indices))
        # make sure we have a matching number of reviews and labels
        assert(len(training_reviews) == len(training_labels))
        # Keep track of correct predictions to display accuracy during training
        correct_so_far = 0
        # Remember when we started for printing time statistics
        start = time.time()
        # loop through all the given reviews and run a forward and backward pass,
        # updating weights for every item
        for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = layer_2 - self.get_target_for_label(label)
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
                self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate
                # only the rows of weights_0_1 for words in this review need updating
                for index in review:
                    self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
                if(np.abs(layer_2_error) < 0.5):
                    correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-1', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
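# For illustration only (hypothetical author/contributor values, not real entries for this document):
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Roe", "john.roe@example.org")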
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <font size="1.5em">More information about the watermark magic command extension.</font>
Step3: <br>
Step4: <br>
Step5: As we can see, the resulting principal components do not yield a subspace where the data is linearly well separated. Note that PCA is an unsupervised method and, in contrast to Linear Discriminant Analysis, does not "consider" class labels when maximizing the variance. Here, the colors blue and red are only added for visualization purposes to indicate the degree of separation.
Step6: We can clearly see that the projection via RBF kernel PCA yielded a subspace where the classes are separated well. Such a subspace can then be used as input for linear classification models, such as Support Vector Machines or naive Bayes classifiers, which will be covered in future articles.
Step7: <br>
Step8: <br>
Step9: Again, the results obtained via the linear PCA approach do not produce a subspace where the 2 classes are linearly well separated.
Step10: And again, this 1-dimensional subspace obtained via Gaussian RBF kernel PCA looks much better in terms of linear class separation.
Step11: <br>
Step12: <br>
Step13: <br>
Step15: <br>
Step16: Now, let's make a new half-moon dataset and project it onto a 1-dimensional subspace using the RBF kernel PCA
Step17: <br>
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -v -u -d -p scipy,scikit-learn,numpy,matplotlib
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def stepwise_kpca(X, gamma, n_components):
    """
    Implementation of a RBF kernel PCA.

    Arguments:
        X: A MxN dataset as NumPy array where the samples are stored as rows (M),
           and the attributes defined as columns (N).
        gamma: A free parameter (coefficient) for the RBF kernel.
        n_components: The number of components to be returned.

    """
# Calculating the squared Euclidean distances for every pair of points
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Converting the pairwise distances into a symmetric MxM matrix.
mat_sq_dists = squareform(sq_dists)
# Computing the MxM kernel matrix.
K = exp(-gamma * mat_sq_dists)
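    # Note: this is the RBF kernel, k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2),
    # evaluated for every pair of samples in X at once.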
    # Centering the symmetric MxM kernel matrix.
N = K.shape[0]
one_n = np.ones((N,N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenvalues in descending order with corresponding
# eigenvectors from the symmetric matrix.
eigvals, eigvecs = eigh(K)
# Obtaining the i eigenvectors that corresponds to the i highest eigenvalues.
X_pc = np.column_stack((eigvecs[:,-i] for i in range(1,n_components+1)))
return X_pc
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.figure(figsize=(8,6))
plt.scatter(X[y==0, 0], X[y==0, 1], color='red', alpha=0.5)
plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', alpha=0.5)
plt.title('A nonlinear 2D dataset')
plt.ylabel('y coordinate')
plt.xlabel('x coordinate')
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
plt.figure(figsize=(8,6))
plt.scatter(X_spca[y==0, 0], X_spca[y==0, 1], color='red', alpha=0.5)
plt.scatter(X_spca[y==1, 0], X_spca[y==1, 1], color='blue', alpha=0.5)
plt.title('First 2 principal components after Linear PCA')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
import numpy as np
scikit_pca = PCA(n_components=1)
X_spca = scikit_pca.fit_transform(X)
plt.figure(figsize=(8,6))
plt.scatter(X_spca[y==0, 0], np.zeros((50,1)), color='red', alpha=0.5)
plt.scatter(X_spca[y==1, 0], np.zeros((50,1)), color='blue', alpha=0.5)
plt.title('First principal component after Linear PCA')
plt.xlabel('PC1')
plt.show()
X_pc = stepwise_kpca(X, gamma=15, n_components=2)
plt.figure(figsize=(8,6))
plt.scatter(X_pc[y==0, 0], X_pc[y==0, 1], color='red', alpha=0.5)
plt.scatter(X_pc[y==1, 0], X_pc[y==1, 1], color='blue', alpha=0.5)
plt.title('First 2 principal components after RBF Kernel PCA')
plt.text(-0.18, 0.18, 'gamma = 15', fontsize=12)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
plt.figure(figsize=(8,6))
plt.scatter(X_pc[y==0, 0], np.zeros((50)), color='red', alpha=0.5)
plt.scatter(X_pc[y==1, 0], np.zeros((50)), color='blue', alpha=0.5)
plt.title('First principal component after RBF Kernel PCA')
plt.text(-0.17, 0.007, 'gamma = 15', fontsize=12)
plt.xlabel('PC1')
plt.show()
from sklearn.decomposition import KernelPCA
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.figure(figsize=(8,6))
plt.scatter(X_skernpca[y==0, 0], X_skernpca[y==0, 1], color='red', alpha=0.5)
plt.scatter(X_skernpca[y==1, 0], X_skernpca[y==1, 1], color='blue', alpha=0.5)
plt.text(-0.48, 0.35, 'gamma = 15', fontsize=12)
plt.title('First 2 principal components after RBF Kernel PCA via scikit-learn')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
scikit_kpca = KernelPCA(n_components=1, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.figure(figsize=(8,6))
plt.scatter(X_skernpca[y==0, 0], np.zeros((50,1)), color='red', alpha=0.5)
plt.scatter(X_skernpca[y==1, 0], np.zeros((50,1)), color='blue', alpha=0.5)
plt.text(-0.48, 0.007, 'gamma = 15', fontsize=12)
plt.title('First principal component after RBF Kernel PCA')
plt.xlabel('PC1')
plt.show()
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.figure(figsize=(8,6))
plt.scatter(X[y==0, 0], X[y==0, 1], color='red', alpha=0.5)
plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', alpha=0.5)
plt.title('Concentric circles')
plt.ylabel('y coordinate')
plt.xlabel('x coordinate')
plt.savefig('/Users/Sebastian/Desktop/circles1.pdf')
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
plt.figure(figsize=(8,6))
plt.scatter(X[y==0, 0], np.zeros((500,1))+0.1, color='red', alpha=0.5)
plt.scatter(X[y==1, 0], np.zeros((500,1))-0.1, color='blue', alpha=0.5)
plt.ylim([-15,15])
plt.text(-0.125, 12.5, 'gamma = 15', fontsize=12)
plt.title('First principal component after Linear PCA')
plt.xlabel('PC1')
plt.savefig('/Users/Sebastian/Desktop/circles2.pdf')
X_pc = stepwise_kpca(X, gamma=15, n_components=1)
plt.figure(figsize=(8,6))
plt.scatter(X_pc[y==0, 0], np.zeros((500,1)), color='red', alpha=0.5)
plt.scatter(X_pc[y==1, 0], np.zeros((500,1)), color='blue', alpha=0.5)
plt.text(-0.05, 0.007, 'gamma = 15', fontsize=12)
plt.title('First principal component after RBF Kernel PCA')
plt.xlabel('PC1')
plt.savefig('/Users/Sebastian/Desktop/circles3.pdf')
from sklearn.datasets.samples_generator import make_swiss_roll
from mpl_toolkits.mplot3d import Axes3D
X, color = make_swiss_roll(n_samples=800, random_state=123)
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=color, cmap=plt.cm.rainbow)
plt.title('Swiss Roll in 3D')
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
plt.figure(figsize=(8,6))
plt.scatter(X_spca[:, 0], X_spca[:, 1], c=color, cmap=plt.cm.rainbow)
plt.title('First 2 principal components after Linear PCA')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
scikit_pca = PCA(n_components=1)
X_spca = scikit_pca.fit_transform(X)
plt.figure(figsize=(8,6))
plt.scatter(X_spca, np.zeros((800,1)), c=color, cmap=plt.cm.rainbow)
plt.title('First principal component after Linear PCA')
plt.xlabel('PC1')
plt.show()
X_pc = stepwise_kpca(X, gamma=0.1, n_components=2)
plt.figure(figsize=(8,6))
plt.scatter(X_pc[:, 0], X_pc[:, 1], c=color, cmap=plt.cm.rainbow)
plt.title('First 2 principal components after RBF Kernel PCA')
plt.text(-0.14, 0.14, 'gamma = 0.1', fontsize=12)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
plt.figure(figsize=(8,6))
plt.scatter(X_pc[:,0], np.zeros((800,1)), c=color, cmap=plt.cm.rainbow)
plt.text(-0.125, 0.007, 'gamma = 0.1', fontsize=12)
plt.title('First principal component after RBF Kernel PCA')
plt.xlabel('PC1')
plt.show()
from sklearn.manifold import locally_linear_embedding
X_lle, err = locally_linear_embedding(X, n_neighbors=12, n_components=2)
plt.figure(figsize=(8,6))
plt.scatter(X_lle[:, 0], X_lle[:, 1], c=color, cmap=plt.cm.rainbow)
plt.title('First 2 principal components after Locally Linear Embedding')
plt.show()
from sklearn.manifold import locally_linear_embedding
X_lle, err = locally_linear_embedding(X, n_neighbors=12, n_components=1)
plt.figure(figsize=(8,6))
plt.scatter(X_lle, np.zeros((800,1)), c=color, cmap=plt.cm.rainbow)
plt.title('First principal component after Locally Linear Embedding')
plt.show()
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def stepwise_kpca(X, gamma, n_components):
    """
    Implementation of a RBF kernel PCA.

    Arguments:
        X: A MxN dataset as NumPy array where the samples are stored as rows (M),
           and the attributes defined as columns (N).
        gamma: A free parameter (coefficient) for the RBF kernel.
        n_components: The number of components to be returned.

    Returns the k eigenvectors (alphas) that correspond to the k largest
    eigenvalues (lambdas).

    """
# Calculating the squared Euclidean distances for every pair of points
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Converting the pairwise distances into a symmetric MxM matrix.
mat_sq_dists = squareform(sq_dists)
# Computing the MxM kernel matrix.
K = exp(-gamma * mat_sq_dists)
    # Centering the symmetric MxM kernel matrix.
N = K.shape[0]
one_n = np.ones((N,N)) / N
K_norm = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenvalues in descending order with corresponding
# eigenvectors from the symmetric matrix.
eigvals, eigvecs = eigh(K_norm)
# Obtaining the i eigenvectors (alphas) that corresponds to the i highest eigenvalues (lambdas).
alphas = np.column_stack((eigvecs[:,-i] for i in range(1,n_components+1)))
lambdas = [eigvals[-i] for i in range(1,n_components+1)]
return alphas, lambdas
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = stepwise_kpca(X, gamma=15, n_components=1)
x_new = X[25]
X_proj = alphas[25] # original projection
x_new
X_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new-row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
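    # For each component j, the projection below is sum_i k(x_new, x_i) * alphas[i, j] / lambdas[j],
    # i.e. the kernel values against all training points weighted by the scaled eigenvectors.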
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(8,6))
plt.scatter(alphas[y==0, 0], np.zeros((50)), color='red', alpha=0.5)
plt.scatter(alphas[y==1, 0], np.zeros((50)), color='blue', alpha=0.5)
plt.scatter(X_proj, 0, color='black', label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green', label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download model checkpoint
Step2: Import the Model Architecture
Step3: Exercise
Step4: Build the Servable from the Estimator API
Step6: Helper Functions for Building a TensorFlow Graph
Step7: Unit test the helper function
Step8: Run the Test Graph
Step9: Remarks
Step10: Unit Test the Preprocessing Helper Function
Step11: Helper Function
Step12: Unit Test the Output Postprocessing Helper Function
Step13: End-to-End Helper Function
Step14: Servable Model API Definition
Step15: Build the Estimator
Step16: Serving input receiver function
Step17: Export the servable model to disk
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import tensorflow as tf
import urllib.request
# Define a constant indicating the number of layers in our loaded model. We're loading a
# resnet-50 model.
RESNET_SIZE = 50
# Model and serving directories
MODEL_DIR="resnet_model_checkpoints"
SERVING_DIR="estimator_servable"
SAMPLE_DIR="../client"
urllib.request.urlretrieve("http://download.tensorflow.org/models/official/resnet50_2017_11_30.tar.gz ", "resnet.tar.gz")
#unzip the file into a directory called resnet
from subprocess import call
call(["mkdir", MODEL_DIR])
call(["tar", "-zxvf", "resnet.tar.gz", "-C", MODEL_DIR])
# Make sure you see model checkpoint files in this directory
os.listdir(MODEL_DIR)
%run ../../models/official/resnet/resnet_model.py
# TODO: Copy constants from imagenet_main.py.
def serving_model_fn(features, labels, mode):
'''The main model function used by the estimator to define the TensorFlow model server API.
Args:
features: The client request, which is a dictionary: {'image': 1D tensor of jpeg strings}
labels: None or not used since we are predicting only
mode: TRAIN, EVAL, or PREDICT. Serving only uses PREDICT mode.
Returns:
If training or evaluating (should not happen), return a blank EstimatorSpec that does
nothing.
If predicting (always), return an EstimatorSpec that produces a response with top k classes
and probabilities to send back to the client.
'''
# TODO: Remove tf.summary.image(). This is used for monitoring during training.
tf.summary.image('images', features, max_outputs=6)
# Move preprocessing, network, and postprocessing into a helper function.
# serving_input_to_output() will be defined below.
predictions = serving_input_to_output(features, mode)
# Create the PREDICT EstimatorSpec that will send a proper response back to the client.
if mode == tf.estimator.ModeKeys.PREDICT:
return create_servable_estimator_spec(predictions, mode)
# TODO: You already returned the EstimatorSpec for predictions.
# Training and evaluation are not needed.
# Shortcut every graph element below here by returning a minimal EstimatorSpec.
return ???
# Calculate loss, which includes softmax cross entropy and L2 regularization.
cross_entropy = tf.losses.softmax_cross_entropy(
logits=logits, onehot_labels=labels)
# Create a tensor named cross_entropy for logging purposes.
tf.identity(cross_entropy, name='cross_entropy')
tf.summary.scalar('cross_entropy', cross_entropy)
# Add weight decay to the loss. We exclude the batch norm variables because
# doing so leads to a small improvement in accuracy.
loss = cross_entropy + _WEIGHT_DECAY * tf.add_n(
[tf.nn.l2_loss(v) for v in tf.trainable_variables()
if 'batch_normalization' not in v.name])
if mode == tf.estimator.ModeKeys.TRAIN:
# Scale the learning rate linearly with the batch size. When the batch size
# is 256, the learning rate should be 0.1.
initial_learning_rate = 0.1 * params['batch_size'] / 256
batches_per_epoch = _NUM_IMAGES['train'] / params['batch_size']
global_step = tf.train.get_or_create_global_step()
# Multiply the learning rate by 0.1 at 30, 60, 80, and 90 epochs.
boundaries = [
int(batches_per_epoch * epoch) for epoch in [30, 60, 80, 90]]
values = [
initial_learning_rate * decay for decay in [1, 0.1, 0.01, 1e-3, 1e-4]]
learning_rate = tf.train.piecewise_constant(
tf.cast(global_step, tf.int32), boundaries, values)
# Create a tensor named learning_rate for logging purposes.
tf.identity(learning_rate, name='learning_rate')
tf.summary.scalar('learning_rate', learning_rate)
optimizer = tf.train.MomentumOptimizer(
learning_rate=learning_rate,
momentum=_MOMENTUM)
# Batch norm requires update_ops to be added as a train_op dependency.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
train_op = optimizer.minimize(loss, global_step)
else:
train_op = None
accuracy = tf.metrics.accuracy(
tf.argmax(labels, axis=1), predictions['classes'])
metrics = {'accuracy': accuracy}
# Create a tensor named train_accuracy for logging purposes.
tf.identity(accuracy[1], name='train_accuracy')
tf.summary.scalar('train_accuracy', accuracy[1])
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions,
loss=loss,
train_op=train_op,
eval_metric_ops=metrics)
def convert_jpeg_to_image(encoded_image):
    """Preprocesses a jpeg-encoded image into a normalized pixel tensor.

    Args:
      encoded_image: A jpeg-formatted byte stream represented as a string.

    Returns:
      A 3d tensor of image pixels normalized to be between -0.5 and 0.5, resized to
      height x width x 3.

    The normalization approximates the preprocess_for_train and preprocess_for_eval functions
    in https://github.com/tensorflow/models/blob/v1.4.0/official/resnet/vgg_preprocessing.py.
    """
image = ??? # TODO: Use a tf function to decode the jpeg into a 3d tensor.
image = tf.to_float(image) / 255.0 - 0.5 # Normalize values to be between -0.5 and 0.5.
return image
# Defining input test graph nodes: only needs to be run once!
test_jpeg_ph = tf.placeholder(dtype=tf.string, shape=[], name='test_jpeg_placeholder') # A placeholder for a single string, which is a dimensionless (0D) tensor.
test_decoded_tensor = convert_jpeg_to_image(test_jpeg_ph) # Output node, which returns a 3D tensor after processing.
# Print the graph elements to check shapes. ? indicates that TensorFlow does not know the length.
# of those dimensions.
print(test_jpeg_ph)
print(test_decoded_tensor)
# Validate the result of the function using a sample image SAMPLE_DIR/cat_sample.jpg
with open(os.path.join(SAMPLE_DIR, "cat_sample.jpg"), "rb") as imageFile:
jpeg_str = imageFile.read()
with tf.Session() as sess:
result = sess.run(test_decoded_tensor, feed_dict={test_jpeg_ph: jpeg_str})
assert result.shape == (224, 224, 3)
# TODO: Replace with assert statements to check max and min normalized pixel values
assert False
print('Hooray! JPEG decoding test passed!')
def preprocess_input(features):
'''Function to preprocess client request before feeding into the network.
Use tf.map_fn and the convert_jpeg_to_image() helper function to convert the
1D input tensor of jpeg strings into a list of single-precision floating
point 3D tensors, which are normalized pixel values for the images.
Then stack and reshape this list of tensors into a 4D tensor with
appropriate dimensions.
Args:
features: request received from our client,
a dictionary with a single element containing a tensor of multiple jpeg images
{'images' : 1D_tensor_of_jpeg_byte_strings}
Returns:
a 4D tensor of normalized pixel values for the input images.
'''
images = features['images'] # A tensor of tf.strings
processed_images = ??? # TODO: fill in the ???
processed_images = tf.stack(processed_images) # Convert list of 3D tensors to a 4D tensor
processed_images = tf.reshape(tensor=processed_images, # Reshaping informs Tensorflow of the final dimensions of the 4D tensor
shape=[-1, _DEFAULT_IMAGE_SIZE, _DEFAULT_IMAGE_SIZE, 3])
return processed_images
# Build a Test Input Preprocessing Network: only needs to be run once!
test_jpeg_tensor = tf.placeholder(dtype=tf.string, shape=???, name='test_jpeg_tensor') # A placeholder for a 1D tensor of jpeg byte strings (fill in the shape).
test_processed_images = preprocess_input({'images': test_jpeg_tensor}) # Output node, which returns a 4D tensor after processing.
# Print the graph elements to check shapes. ? indicates that Tensorflow does not know the length of those dimensions.
print(test_jpeg_tensor)
print(test_processed_images)
# Run test network using a sample image SAMPLE_DIR/cat_sample.jpg
with open(os.path.join(SAMPLE_DIR, "cat_sample.jpg"), "rb") as imageFile:
jpeg_str = imageFile.read()
with tf.Session() as sess:
result = sess.run(test_processed_images, feed_dict={test_jpeg_tensor: np.array([jpeg_str, jpeg_str])}) # Duplicate for length 2 array
assert result.shape == (2, 224, 224, 3) # 4D tensor with first dimension length 2, since we have 2 images
# TODO: add a test for min and max normalized pixel values
assert False
# TODO: add a test to verify that the resulting tensor for image 0 and image 1 are identical.
assert False
print('Hooray! Input unit test succeeded!')
TOP_K = 5
def postprocess_output(logits, k=TOP_K):
'''Return top k classes and probabilities from class logits.'''
probs = tf.nn.softmax(logits) # Converts logits to probabilities.
top_k_probs, top_k_classes = ???
return {'classes': top_k_classes, 'probabilities': top_k_probs}
# Build Test Output Postprocessing Network: only needs to be run once!
test_logits_ph = tf.placeholder(dtype=tf.float32, shape=???, name='test_logits_placeholder')
test_prediction_output = postprocess_output(test_logits_ph)
# Print the graph elements to check shapes.
print(test_logits_ph)
print(test_prediction_output)
# Run test network
with tf.Session() as sess:
logits = np.ones(???) # TODO: number of classes
result = sess.run(test_prediction_output, {test_logits_ph: logits})
classes = result['classes']
probs = result['probabilities']
# Inefficient but simple element-wise check
assert probs[1:].all() == probs[:-1].all()
expected_probs = np.array(len(probs) * [1.0/???]) # Number of classes
assert probs.all() == expected_probs.all()
print('Hooray! Output unit test succeeded!')
def serving_input_to_output(jpeg_tensor, mode, k=TOP_K):
# TODO: Preprocess jpeg tensors before sending tensors to the network.
preprocessed_images = ???
# TODO: Use 'channels_first' or 'channels_last'
network = imagenet_resnet_v2(RESNET_SIZE, _LABEL_CLASSES, data_format='channels_last')
# TODO: Connect the preprocessed images to the network
logits = ???
# TODO: Postprocess outputs of network (logits) and send top k predictions back to client.
predictions = ???
return predictions
def create_servable_estimator_spec(predictions, mode):
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions, # Note: This is not be used in serving, but must be provided for the Estimator API.
??? # TODO: assign an appropriate dictionary to the export_outputs parameter here.
)
estimator = tf.estimator.Estimator(
model_fn=serving_model_fn,
model_dir=MODEL_DIR,
)
def serving_input_receiver_fn():
return tf.estimator.export.build_raw_serving_input_receiver_fn(???)() ## TODO: Add dictionary
estimator.export_savedmodel(export_dir_base=SERVING_DIR,
serving_input_receiver_fn=serving_input_receiver_fn)
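# Optional sanity check (assumes the TensorFlow CLI is available on the shell PATH):
# inspect the exported SavedModel signatures under the timestamped subdirectory, e.g.
# !saved_model_cli show --dir estimator_servable/<timestamp> --all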
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Internally it computes a dict with
Step2: DECODE ONE VALUE
Step3: MATCH ONE VALUE
Step4: ENCODE A VALUE (EXCLUSIVELLY)
Step5: ENCODE A VALUE (ALL)
Step6: ENCODE AND
Step7: DECODE AN IMAGE
Step8: BANDS
Step9: APPLY THE BitReader TO THE BAND THAT HOLDS THE BIT INFORMATION
Step10: BitReader INFORMATION FOR KNOW COLLECTIONS AVAILABLE IN geetools.cloud_mask MODULE
|
<ASSISTANT_TASK:>
Python Code:
import ee
ee.Initialize()
from geetools import bitreader, cloud_mask
options = {
'0-1': {0:'clear', 1:'cloud', 2:'mix'}, # cloud state
'2-2': {0: 'no_shadow', 1:'shadow'}, # cloud shadow (bit 0 is not needed)
    '6-7': {0:'climatology', 1:'low', 2:'average', 3:'high'} # aerosol quantity (bits 6-7 of MOD09GA state_1km)
}
reader = bitreader.BitReader(options, 16)
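# The reader treats each value as a 16-bit QA word: keys such as '0-1' select bit ranges
# (with bit 0 being the least-significant bit) and map the decoded integer to a readable class.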
reader.info
print('bit length', reader.bit_length)
value = 204
bits = reader.getBin(value)
print('204:', bits)
reader.decode(204)
reader.match(204, 'cloud')
reader.match(204, 'shadow')
reader.encode('shadow')
reader.encode('clear')
reader.encode('no_shadow')
print(reader.encodeOne('shadow')[0:100])
print(reader.encodeOne('cloud')[0:100])
print(reader.encodeAnd('cloud', 'shadow')[0:100])
import ee
import ipygee as ui
Map = ui.Map()
Map.show()
modcol = ee.ImageCollection('MODIS/006/MOD09GA').sort('system:time_start', False)
mod = ee.Image(modcol.first())
red = 'sur_refl_b01'
green = 'sur_refl_b04'
blue = 'sur_refl_b03'
qa = 'state_1km'
qa_mask = mod.select(qa)
Map.addLayer(mod, {'bands':[red, green, blue], 'min':0, 'max':5000}, 'Original')
Map.addLayer(qa_mask, {'min':0, 'max':reader.max}, 'QA')
mask = reader.decodeImage(mod, qa)
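# decodeImage() appears to expand the packed QA band into one 0/1 band per decoded
# category (e.g. 'cloud', 'shadow'), so individual flags can be selected directly below.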
Map.addLayer(mask.select(['cloud']), {'min':0, 'max':1}, 'Clouds')
from geetools import cloud_mask
state1km = cloud_mask.BITS_MODIS09GA
state1km
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step3: Example
|
<ASSISTANT_TASK:>
Python Code:
from zipline.pipeline import Pipeline
from zipline.pipeline.data import USEquityPricing as USEP
from zipline.pipeline.factors import SimpleMovingAverage
# sma30 and sma90 are Factors.
# Factors represent computations producing numerical-valued outputs.
sma30 = SimpleMovingAverage(inputs=[USEP.close], window_length=30)
sma90 = SimpleMovingAverage(inputs=[USEP.close], window_length=90)
# Comparisons between Factors produce Filters.
# Filters represent computations producing boolean-valued outputs.
screen = (sma30 > sma90)
example0 = Pipeline(
columns={"sma30": sma30, "sma90": sma90},
screen=screen,
)
example0
example0.show_graph('svg')
results0 = engine.run_pipeline(example0, start_date, end_date)
results0
from zipline.pipeline.factors import VWAP
vwap30 = VWAP(window_length=30)
vwap90 = VWAP(window_length=90)
# Arithmetic operations between Factors produce new Factors.
vwap_pct_change = ((vwap30 - vwap90) / vwap30)
# rank() is a method available on any Factor instance.
# It produces a new Factor containing the numerical rank of each
# asset after sorting the underyling Factor values.
vwap_pct_change_rank = vwap_pct_change.rank(ascending=False)
# top() is another method available on Factors. It produces a Filter
# representing the top N assets sorted by the underlying Factor values.
top200 = vwap_pct_change.top(200)
example1 = Pipeline(
columns={
"rank": vwap_pct_change_rank,
"pct_change": vwap_pct_change,
},
screen=top200,
)
example1.show_graph()
engine.run_pipeline(example1, start_date, end_date)
import numpy as np
from zipline.pipeline import CustomFactor
class MaxDrawdown(CustomFactor):
    """
    Factor computing the maximum drawdown
    an asset has taken in the last N days.
    """
inputs = [USEP.close]
def compute(self, today, assets, out, closes):
# The difference between each day and the max of all
# earlier days in the period.
        drawdowns = np.fmax.accumulate(closes, axis=0) - closes
        drawdowns[np.isnan(drawdowns)] = np.NINF
        drawdown_ends = np.nanargmax(drawdowns, axis=0)
        # This is slow in pure Python.
        # Cython or Numba could accelerate this substantially.
        for i, end in enumerate(drawdown_ends):
            peak = np.nanmax(closes[:end + 1, i])
            out[i] = (peak - closes[end, i]) / closes[end, i]
maxdd_90 = MaxDrawdown(window_length=90)
# rank() takes an optional `mask` keyword, which can be passed a Filter
# to signify "Compute rank() only for assets for which the Filter
# returned True.
masked_rank = vwap_pct_change.rank(mask=maxdd_90.bottom(200))
example2 = Pipeline(
columns={
'masked_rank': masked_rank
},
screen=maxdd_90.bottom(200),
)
example2.show_graph('svg')
engine.run_pipeline(example2, start_date, end_date)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Key features
Step2: Built on pandas
Step3: Note how you can debug both pieces of code by running and inspecting df.a.mean().
Step4: Notice how much of this code is writing the word lambda.
Step5: Lazy expressions
Step6: No reset_index
Step7: Unified (un)grouped API
Step8: In pandas you have to change your code for grouped data.
Step9: Note that g_cyl does not have an assign method, and requires passing what operation you want to do ("mean") as a string to .transform().
Step10: Suppose that we want to get the courses each student scored lowest on.
Step11: In siuba it is simpler, and comparable in speed.
Step12: This is because siuba's lazy expressions let it optimize grouped operations.
Step13: SQL queries
Step14: Abstract syntax trees
Step15: Each black box in the printout above is a Call. Calls are the pieces that represent the underlying operations. They have methods to inspect and transform them.
Step16: Nested data
|
<ASSISTANT_TASK:>
Python Code:
# this is a hidden cell
print(
    """
    <div class="output_area rendered_html docutils container">
    {table}
    </div>
    """.format(table = table.replace('\n', "")))
import pandas as pd
from siuba import _, mutate
my_data = pd.DataFrame({
'g': ['a', 'a', 'b'],
'x': [1,2,3],
})
# pandas
my_data.assign(avg = lambda d: d.x.mean())
# siuba
mutate(my_data, avg = _.x.mean())
(my_data
.assign(avg = lambda d: d.x.mean()) # create new column
.loc[lambda d: d.x != 3] # filter out some rows
)
# actions can be imported individually
from siuba import mutate, arrange
# they can be combined using a pipe
my_data >> mutate(y = _.x + 1) >> arrange(_.g, -_.x)
# rather than repeat the name of your data, you can use lazy expressions ---
my_data_frame = pd.DataFrame({'a': [1,2,3]})
# bad
my_data_frame["b"] = my_data_frame["a"] + 1
my_data_frame["c"] = my_data_frame["b"] + 2
# good
my_data_frame >> mutate(b = _.a + 1, c = _.b + 2)
from siuba.data import mtcars
from siuba import summarize
g_cyl = mtcars.groupby("cyl")
agg_res = g_cyl[["hp", "mpg"]].agg("mean")
agg_res
# bad
agg_res.reset_index()
# good
summarize(g_cyl, hp = _.hp.mean(), mpg = _.mpg.mean())
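# Note: summarize() returns an ordinary DataFrame with 'cyl' as a regular column,
# so no reset_index() call is needed afterwards.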
g_cyl = mtcars.groupby("cyl")
mtcars >> mutate(demeaned = _.hp - _.hp.mean()) # uses ungrouped mean
g_cyl >> mutate(demeaned = _.hp - _.hp.mean()) # uses grouped mean
g_cyl = mtcars.groupby("cyl")
# ungrouped vs grouped mean
mtcars.assign(demeaned = lambda d: d.hp - d.hp.mean())
mtcars.assign(demeaned = g_cyl.obj.hp - g_cyl.hp.transform("mean"))
# fast grouped operations (pull from dev docs)
# PLOT of timing
import numpy as np
import pandas as pd
np.random.seed(123)
students = pd.DataFrame({
'student_id': np.repeat(np.arange(2000), 10),
'course_id': np.random.randint(1, 20, 20000),
'score': np.random.randint(1, 100, 20000)
})
g_students = students.groupby('student_id')
g_students
%%time
# pandas
is_student_min = g_students.obj.score == g_students.score.transform('min')
low_scores = students[is_student_min]
from siuba.experimental.pd_groups import fast_filter
%%time
# siuba
low_scores = fast_filter(g_students, _.score == _.score.min())
# set up code for timing
from dplython import X, DplyFrame, sift, group_by as dply_group_by
g_students2 = DplyFrame(students) >> dply_group_by(X.student_id)
%%time
g_students2 >> sift(X.score == X.score.min())
# generate SQL queries
from siuba.data import cars_sql
from siuba import group_by, mutate, show_query
q = (cars_sql
>> group_by("cyl")
>> mutate(demeaned = _.hp - _.hp.mean())
>> show_query()
)
# ASTs for transforming
from siuba.siu import Symbolic, Call, strip_symbolic
_ = Symbolic()
sym = _.a.mean() + _["b"]
sym
call = strip_symbolic(sym)
# get columns names used in lazy expression
call.op_vars(attr_calls = False)
from siuba import _, mutate, unnest
tagged = pd.DataFrame({
'id': [1,2,3],
'tags': ['a,b,c', 'd,e', 'f']
})
(tagged
>> mutate(split_tags = _.tags.str.split(','))
>> unnest("split_tags")
)
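# unnest() explodes the list-valued column so each tag gets its own row while keeping
# the id column aligned (conceptually similar to pandas' DataFrame.explode).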
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's get started with some basic imports.
Step2: And then we'll build a synthetic "dataset" and initialize a new bundle with those data
Step3: mask_enabled and mask_phases parameters
Step4: This is True by default, but mask_phases is set to an empty list, meaning that no masking will actually be applied. This mask_enabled parameter just allows for setting a mask via mask_phases and then toggling that mask on and off without having to reset mask_phases.
Step5: We can now plot the observations to see the unmasked (fake) data.
Step6: If we wanted to only include the eclipses, we could set mask_phases as a list of tuples, where each tuple contains the phase range that we want to keep.
Step7: Note that b.run_compute computes the forward model ignoring that mask, whether compute_phases is set or not.
Step8: However, when calling b.run_solver, the mask will be applied when determining what times to compute the forward-model (see Advanced
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
b = phoebe.default_binary()
b = phoebe.default_binary()
b.add_dataset('lc', times=phoebe.linspace(0,5,1001))
b.run_compute()
times = b.get_value('times@model')
fluxes = b.get_value('fluxes@model')
sigmas = np.ones_like(times) * 0.01
b = phoebe.default_binary()
b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101),
times=times, fluxes=fluxes, sigmas=sigmas,
dataset='lc01')
b.get_parameter(qualifier='mask_enabled', dataset='lc01', context='dataset')
print(b.get_parameter(qualifier='mask_enabled', dataset='lc01', context='dataset').description)
b.get_parameter(qualifier='mask_phases', dataset='lc01', context='dataset')
print(b.get_parameter(qualifier='mask_phases', dataset='lc01', context='dataset').description)
_ = b.plot(show=True)
_ = b.plot(x='phases', show=True)
b.set_value('mask_phases', [(-0.1, 0.1), (0.4,-0.4)])
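# Each tuple is a phase range to keep: (-0.1, 0.1) covers the primary eclipse near phase 0,
# and (0.4, -0.4) wraps through +/-0.5 to cover the secondary eclipse.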
_ = b.plot(show=True)
_ = b.plot(x='phases', show=True)
b.run_compute()
_ = b.plot(x='phases', show=True, marker={'model': '+'})
b.set_value('compute_phases', [])
b.run_compute()
_ = b.plot(x='phases', show=True, marker={'model': '+'})
print(b.get_parameter('phases_t0').description)
print(b.get_parameter('phases_t0').choices)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <br>
Step2: <br>
Step7: <br>
Step8: Select a simulation file to test
Step9: Load the parameters for the models
Step10: <br>
Step11: <br>
Step12: <br>
Step13: <br>
|
<ASSISTANT_TASK:>
Python Code:
# Uncomment and run this one time only
# !pip install http://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl
# !pip install torchvision==0.1.8
# !pip install tabulate
# !pip install --upgrade scikit-learn
# !pip install --upgrade numpy
# !pip install h5py
# !pip install ibmseti
# !pip install tqdm
# !pip install --upgrade pandas
# Uncomment and run this one time only!
# from __future__ import print_function
# import requests
# import shutil
# base_url = 'https://dal.objectstorage.open.softlayer.com/v1/AUTH_cdbef52bdf7a449c96936e1071f0a46b/code_challenge_models/effsubsee'
# for i in range(1,6):
# r = requests.get('{0}/fold{1}/FOLD{1}_BEST_wresnet34x2_batchsize96_checkpoint.pth.tar'.format(base_url, i), stream=True)
# filename = 'effsubsee_FOLD{}_BEST_wresnet34x2_batchsize96_checkpoint.pth.tar'.format(i)
# with open(filename, 'wb') as fout:
# shutil.copyfileobj(r.raw, fout)
# print('saved {}'.format(filename))
# Uncomment and run this once
# !wget -O mean_stddev_primary_full_v3__384t__512f__logmod2-ph.hdf5 https://github.com/sgrvinod/ml4seti-Effsubsee/blob/master/folds/mean_stddev_primary_full_v3__384t__512f__logmod2-ph.hdf5?raw=true
# Uncomment and run this one time only
# !wget https://dal.objectstorage.open.softlayer.com/v1/AUTH_cdbef52bdf7a449c96936e1071f0a46b/simsignals_v3_zipped/primary_testset_preview_v3.zip
# !unzip -q primary_testset_preview_v3.zip
# !ls
import math
from torch import nn
class BasicBlock(nn.Module):
    """
    Graph of the Basic Block, as defined in the paper.
    This block contains two 3x3 convolutional layers, each with prior Batch Norm and ReLU.
    There is an additive residual connection across the block.
    If the number of dimensions changes across the block, this residual is a convolutional projection of the input.

    Args:
        inplanes (int): number of dimensions in the input tensor.
        outplanes (int): number of dimensions in the output tensor.
        stride (int): stride length for the filter.
        dropout (float, fraction): the fraction of neurons to randomly drop/set to zero in-between conv. layers.
    """
def __init__(self, inplanes, outplanes, stride, dropout=0.0):
super(BasicBlock, self).__init__()
self.inplanes = inplanes
self.outplanes = outplanes
self.bn1 = nn.BatchNorm2d(inplanes)
self.relu1 = nn.ReLU(inplace=True)
self.conv1 = nn.Conv2d(inplanes, outplanes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(outplanes)
self.relu2 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(outplanes, outplanes, kernel_size=3, stride=1, padding=1, bias=False)
self.dropout = dropout
if self.inplanes != self.outplanes:
self.projection = nn.Conv2d(inplanes, outplanes, kernel_size=1, stride=stride, padding=0, bias=False)
else:
self.projection = None
def forward(self, x):
out = self.bn1(x)
out = self.relu1(out)
if self.inplanes != self.outplanes:
residual = self.projection(out)
else:
residual = x
out = self.conv1(out)
out = self.bn2(out)
out = self.relu2(out)
if self.dropout > 0.:
out = nn.functional.dropout(out, p=self.dropout, training=self.training)
out = self.conv2(out)
out += residual
return out
class WideResNet(nn.Module):
Graph of the Wide Residual Network, as defined in the paper.
This network contains 4 convolutional blocks, each increasing dimensions by a factor of 'k':
The first is a single 3x3 Convolution, increasing dimensions from 2 (log(amplitude^2), phase) to 16.
The second is a sequence of Basic Blocks, 16 dimensions -> 16*k
The third is a sequence of Basic Blocks, 16*k dimensions -> 16*k^2
The fourth is a sequence of Basic Blocks, 16*k dimensions -> 16*k^3
These convolutional layers are followed by Batch Norm, ReLU, Average Pool, and finally a Fully Connected Layer
to perform the classification.
Args:
n (int): number of single convolutional layers in the entire network, 'n' in the paper.
k (int): widening factor for each succeeding convolutional layer, 'k' in the paper.
block (nn.module): BasicBlock.
dropout (float, fraction): the fraction of neurons to randomly drop/set to zero inside the blocks.
def __init__(self, n, k, block=BasicBlock, dropout=0.0):
super(WideResNet, self).__init__()
if (n - 4) % 6 != 0:
raise ValueError("Invalid depth! Depth must be (6 * n_blocks + 4).")
        n_blocks = (n - 4) // 6  # integer division so range() below receives an int
self.conv_block1 = nn.Conv2d(2, 16, kernel_size=3, stride=1, padding=1, bias=False)
self.conv_block2 = self._make_layer(block, n_blocks, 16, 16 * k, 2, dropout)
self.conv_block3 = self._make_layer(block, n_blocks, 16 * k, 32 * k, 2, dropout)
self.conv_block4 = self._make_layer(block, n_blocks, 32 * k, 64 * k, 2, dropout)
self.bn1 = nn.BatchNorm2d(64 * k)
self.relu = nn.ReLU(inplace=True)
self.fc = nn.Linear(64 * k * 6 * 8, 7)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n_weights = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n_weights))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
m.bias.data.zero_()
def _make_layer(self, block, n_blocks, inplanes, outplanes, stride, dropout):
Graph of a Convolutional block layer (conv_block2/conv_block3/conv_block4), as defined in the paper.
This graph assembles a number of blocks (BasicBlock) in sequence.
Args:
block (nn.module): BasicBlock or ResidualBlock.
inplanes (int): number of dimensions in the input tensor.
outplanes (int): number of dimensions in the output tensor.
stride (int): stride length for the filter.
dropout (float, fraction): the fraction of neurons to randomly drop/set to zero in-between conv. layers.
layers = []
for i in range(n_blocks):
if i == 0:
layers.append(block(inplanes, outplanes, stride, dropout))
else:
layers.append(block(outplanes, outplanes, 1, dropout))
return nn.Sequential(*layers)
def forward(self, x):
out = self.conv_block1(x)
out = self.conv_block2(out)
out = self.conv_block3(out)
out = self.conv_block4(out)
out = self.bn1(out)
out = self.relu(out)
out = nn.functional.avg_pool2d(out, 8)
out = out.view(out.size(0), -1)
return self.fc(out)
def wresnet34x2():
model = WideResNet(n=34, k=2, block=BasicBlock, dropout=0.3)
return model
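# Quick sanity check (illustrative, not part of the original pipeline): the network should map a
# single 2-channel 384x512 spectrogram tensor to 7 class scores. The Variable wrapper matches the
# old PyTorch API used elsewhere in this notebook.
import torch
_model = wresnet34x2()
_dummy = torch.autograd.Variable(torch.randn(1, 2, 384, 512))
print(_model(_dummy).size())  # expected: (1, 7)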
from __future__ import print_function
import argparse
import os
import time
import torch
import torchvision.transforms as transforms
import pandas as pd
import ibmseti
import numpy as np
import ibmseti
import h5py
def normalizeSimFile(normalizeData, simfile):
# Load the Normalizer function
h = h5py.File(normalizeData, 'r')
mean = torch.FloatTensor(h['mean'][:])
mean = mean.permute(2, 0, 1)
std_dev = torch.FloatTensor(h['std_dev'][:])
std_dev = std_dev.permute(2, 0, 1)
h.close()
normalize = transforms.Normalize(mean=mean,
std=std_dev)
# Load simulation data
time_freq_resolution=(384, 512)
aca = ibmseti.compamp.SimCompamp(open(simfile, 'rb').read())
complex_data = aca.complex_data()
complex_data = complex_data.reshape(time_freq_resolution[0], time_freq_resolution[1])
complex_data = complex_data * np.hanning(complex_data.shape[1])
cpfft = np.fft.fftshift(np.fft.fft(complex_data), 1)
spectrogram = np.abs(cpfft)
features = np.stack((np.log(spectrogram ** 2),
np.arctan(cpfft.imag / cpfft.real)), -1)
# create FloatTensor, permute to proper dimensional order, and normalize
data = torch.FloatTensor(features)
data = data.permute(2, 0, 1)
data = normalize(data)
# The model expects a 4D tensor
s = data.size()
data = data.contiguous().view(1, s[0], s[1], s[2])
input_var = torch.autograd.Variable(data, volatile=True)
return input_var
def singleProbs(model, input_var):
model.eval()
softmax = torch.nn.Softmax()
softmax.zero_grad()
output = model(input_var)
probs = softmax(output).data.view(7).tolist()
return probs
#!ls primary_testset_preview_v3/*
simfile = 'primary_testset_preview_v3/00b3b8fdb14ce41f341dbe251f476093.dat'
allFolds = []
def loadFoldParams(modelcheckpoint):
model = wresnet34x2().cpu()
if os.path.isfile(modelcheckpoint):
print("=> Loading checkpoint '{}'".format(modelcheckpoint))
checkpoint = torch.load(modelcheckpoint, map_location=lambda storage, loc: storage)
best_acc = checkpoint['best_acc']
print("This model had an accuracy of %.2f on the validation set." % (best_acc,))
keys = checkpoint['state_dict'].keys()
for old_key in keys:
new_key = old_key.replace('module.', '')
checkpoint['state_dict'][new_key] = checkpoint['state_dict'].pop(old_key)
model.load_state_dict(checkpoint['state_dict'])
print("=> Loaded checkpoint '{}' (epoch {})"
.format(modelcheckpoint, checkpoint['epoch']))
else:
print("=> No model checkpoint found. Exiting")
return
allFolds.append(model)
def lf():
for i in range(1,6):
loadFoldParams('effsubsee_FOLD{}_BEST_wresnet34x2_batchsize96_checkpoint.pth.tar'.format(i))
%time lf()
assert len(allFolds) == 5
# normalize the simulation data file
normalizer = 'mean_stddev_primary_full_v3__384t__512f__logmod2-ph.hdf5'
%time input_var = normalizeSimFile(normalizer, simfile)
# calculate probabilities
def runAllModels(aSimFile):
probs = np.zeros(7)
for mf in allFolds:
probs += singleProbs(mf, input_var)
probs = probs/float(len(allFolds))
return probs
%time probs = runAllModels(simfile)
print('final class probabilities')
print(probs)
class_list = ['brightpixel', 'narrowband', 'narrowbanddrd', 'noise', 'squarepulsednarrowband', 'squiggle', 'squigglesquarepulsednarrowband']
print('signal classification')
predicted_signal_class = class_list[probs.argmax()]
print(predicted_signal_class)
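# Optional: print each class name next to its averaged probability for easier inspection.
for cls_name, cls_prob in zip(class_list, probs):
    print('{:40s} {:.4f}'.format(cls_name, cls_prob))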
%matplotlib inline
import matplotlib.pyplot as plt
aca = ibmseti.compamp.SimCompamp(open(simfile,'rb').read())
spectrogram = aca.get_spectrogram()
fig, ax = plt.subplots(figsize=(20, 10))
ax.imshow(np.log(spectrogram), aspect = 0.5*float(spectrogram.shape[1]) / spectrogram.shape[0], cmap='gray')
import pandas as pd
preview_test_set_pd = pd.read_csv('https://github.com/setiQuest/ML4SETI/raw/master/results/private_list_primary_v3_testset_preview_uuid_class_29june_2017.csv', index_col=None)
expected_signal_class = preview_test_set_pd[preview_test_set_pd.UUID == simfile.split('/')[-1].rstrip('.dat')].SIGNAL_CLASSIFICATION.values[0]
assert predicted_signal_class == expected_signal_class
print(expected_signal_class)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Data
Step2: Exploratory Data Analysis
Step3: Roughly 20 percent of the Age data is missing. The proportion of Age missing is likely small enough for reasonable replacement with some form of imputation. Looking at the Cabin column, it looks like we are just missing too much of that data to do something useful with at a basic level. We'll probably drop this later, or change it to another feature like "Cabin Known
Step4: Cufflinks for plots
Step5: Data Cleaning
Step6: We can see the wealthier passengers in the higher classes tend to be older, which makes sense. We'll use these average age values to impute based on Pclass for Age.
Step7: Now apply that function!
Step8: Now let's check that heat map again!
Step9: Great! Let's go ahead and drop the Cabin column and the row in Embarked that is NaN.
Step10: Converting Categorical Features
Step11: Great! Our data is ready for our model!
Step12: Training and Predicting
Step13: Let's move on to evaluate our model!
Step14: Not so bad! You might want to explore other feature engineering and the other titanic_text.csv file, some suggestions for feature engineering
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
train = pd.read_csv('titanic_train.csv')
train.head(25)
sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
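# A numeric complement to the heatmap: count the missing values per column.
train.isnull().sum()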
sns.set_style('whitegrid')
sns.countplot(x='Survived',data=train,palette='RdBu_r')
# sns.set_style('whitegrid')
sns.countplot(x='Survived',hue='Sex',data=train,palette='RdBu_r')
# sns.set_style('whitegrid')
sns.countplot(x='Survived',hue='Pclass',data=train,palette='rainbow')
sns.distplot(train['Age'].dropna(),kde=False,color='darkred',bins=30)
train['Age'].hist(bins=30,color='darkred',alpha=0.7)
sns.countplot(x='SibSp',data=train)
train['Fare'].hist(color='green',bins=40,figsize=(8,4))
import plotly_express as pex
pex.histogram(data_frame=train, x='Fare', nbins=30)
plt.figure(figsize=(12, 7))
sns.boxplot(x='Pclass',y='Age',data=train,palette='winter')
def impute_age(cols):
Age = cols[0]
Pclass = cols[1]
if pd.isnull(Age):
if Pclass == 1:
return 37
elif Pclass == 2:
return 29
else:
return 24
else:
return Age
train['Age'] = train[['Age','Pclass']].apply(impute_age,axis=1)
sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
train.drop('Cabin',axis=1,inplace=True)
train.head(50)
train.shape
train.dropna(inplace=True)
train.shape
train.info()
sex = pd.get_dummies(train['Sex'],drop_first=True)
embark = pd.get_dummies(train['Embarked'],drop_first=True)
embark.head()
sex.head()
train.drop(['Sex','Embarked','Name','Ticket'],axis=1,inplace=True)
train = pd.concat([train,sex,embark],axis=1)
train.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(train.drop('Survived',axis=1),
train['Survived'], test_size=0.30,
random_state=101)
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.verbose = 1
logmodel.fit(X_train,y_train)
logmodel.coef_
logmodel.intercept_
predictions = logmodel.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
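# An additional evaluation view (illustrative): the confusion matrix and overall accuracy.
from sklearn.metrics import confusion_matrix, accuracy_score
print(confusion_matrix(y_test, predictions))
print('Accuracy: {:.3f}'.format(accuracy_score(y_test, predictions)))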
test_df = pd.read_csv('titanic_test.csv')
test_df.head()
test_df.shape
test_df.iloc[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Auxiliary function to run the solver
Step4: Define your solver
Step5: Apply your custom solver
Step6: View in 2D and 3D ("glass" brain like 3D plot)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import linalg
import mne
from mne.datasets import sample
from mne.viz import plot_sparse_source_estimates
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
condition = 'Left Auditory'
# Read noise covariance matrix
noise_cov = mne.read_cov(cov_fname)
# Handling average file
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0.04, tmax=0.18)
evoked = evoked.pick_types(eeg=False, meg=True)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8):
Function to call a custom solver on evoked data
This function does all the necessary computation:
- to select the channels in the forward given the available ones in
the data
- to take into account the noise covariance and do the spatial whitening
- to apply loose orientation constraint as MNE solvers
    - to apply a weighting of the columns of the forward operator as in the
weighted Minimum Norm formulation in order to limit the problem
of depth bias.
Parameters
----------
solver : callable
The solver takes 3 parameters: data M, gain matrix G, number of
dipoles orientations per location (1 or 3). A solver shall return
2 variables: X which contains the time series of the active dipoles
and an active set which is a boolean mask to specify what dipoles are
present in X.
evoked : instance of mne.Evoked
The evoked data
forward : instance of Forward
The forward solution.
noise_cov : instance of Covariance
The noise covariance.
loose : float in [0, 1] | 'auto'
Value that weights the source variances of the dipole components
that are parallel (tangential) to the cortical surface. If loose
is 0 then the solution is computed with fixed orientation.
If loose is 1, it corresponds to free orientations.
The default value ('auto') is set to 0.2 for surface-oriented source
space and set to 1.0 for volumic or discrete source space.
depth : None | float in [0, 1]
Depth weighting coefficients. If None, no depth weighting is performed.
Returns
-------
stc : instance of SourceEstimate
The source estimates.
# Import the necessary private functions
from mne.inverse_sparse.mxne_inverse import \
(_prepare_gain, _check_loose_forward, is_fixed_orient,
_reapply_source_weighting, _make_sparse_stc)
all_ch_names = evoked.ch_names
loose, forward = _check_loose_forward(loose, forward)
# put the forward solution in fixed orientation if it's not already
if loose == 0. and not is_fixed_orient(forward):
forward = mne.convert_forward_solution(
forward, surf_ori=True, force_fixed=True, copy=True, use_cps=True)
# Handle depth weighting and whitening (here is no weights)
gain, gain_info, whitener, source_weighting, mask = _prepare_gain(
forward, evoked.info, noise_cov, pca=False, depth=depth,
loose=loose, weights=None, weights_min=None)
# Select channels of interest
sel = [all_ch_names.index(name) for name in gain_info['ch_names']]
M = evoked.data[sel]
# Whiten data
M = np.dot(whitener, M)
n_orient = 1 if is_fixed_orient(forward) else 3
X, active_set = solver(M, gain, n_orient)
X = _reapply_source_weighting(X, source_weighting, active_set, n_orient)
stc = _make_sparse_stc(X, active_set, forward, tmin=evoked.times[0],
tstep=1. / evoked.info['sfreq'])
return stc
def solver(M, G, n_orient):
Dummy solver
It just runs L2 penalized regression and keep the 10 strongest locations
Parameters
----------
M : array, shape (n_channels, n_times)
The whitened data.
G : array, shape (n_channels, n_dipoles)
The gain matrix a.k.a. the forward operator. The number of locations
is n_dipoles / n_orient. n_orient will be 1 for a fixed orientation
constraint or 3 when using a free orientation model.
n_orient : int
Can be 1 or 3 depending if one works with fixed or free orientations.
If n_orient is 3, then ``G[:, 2::3]`` corresponds to the dipoles that
are normal to the cortex.
Returns
-------
X : array, (n_active_dipoles, n_times)
The time series of the dipoles in the active set.
active_set : array (n_dipoles)
Array of bool. Entry j is True if dipole j is in the active set.
We have ``X_full[active_set] == X`` where X_full is the full X matrix
such that ``M = G X_full``.
K = linalg.solve(np.dot(G, G.T) + 1e15 * np.eye(G.shape[0]), G).T
K /= np.linalg.norm(K, axis=1)[:, None]
X = np.dot(K, M)
indices = np.argsort(np.sum(X ** 2, axis=1))[-10:]
active_set = np.zeros(G.shape[1], dtype=bool)
for idx in indices:
idx -= idx % n_orient
active_set[idx:idx + n_orient] = True
X = X[active_set]
return X, active_set
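# Illustrative sanity check of the dummy solver on random data (not part of the MNE pipeline).
# With n_orient=3, each of the 10 strongest rows is expanded to a whole orientation triplet,
# so at most 30 dipoles end up in the active set.
rng = np.random.RandomState(42)
M_demo = rng.randn(60, 25)   # 60 channels, 25 time points
G_demo = rng.randn(60, 300)  # 100 source locations x 3 orientations
X_demo, active_demo = solver(M_demo, G_demo, n_orient=3)
print(X_demo.shape, active_demo.sum())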
# loose, depth = 0.2, 0.8 # corresponds to loose orientation
loose, depth = 1., 0. # corresponds to free orientation
stc = apply_solver(solver, evoked, forward, noise_cov, loose, depth)
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: RANDOM FORESTS
Step2: The first avalanche problem dictates the danger level - that was expected
Step3: Looks like there is little gain when using a depth > 7.
Step4: Now we see slight improvement in $R^{2}$.
Step5: Gradient boosting
Step6: Again, very small difference between random forests and boosted trees.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import json
import graphviz
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.model_selection import train_test_split
pd.set_option("display.max_rows",6)
%matplotlib inline
df_data = pd.read_csv(r'varsom_ml_preproc_3y.csv', index_col=0)
target_ = 'danger_level'
X = df_data.drop([target_, 'date'], axis=1)
y = df_data.filter([target_], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 222, test_size = 0.3)
print(df_data.columns.values)
df_data.describe()
print(df_data[df_data['danger_level']>3]['region_id'].unique())
dec_tree = tree.DecisionTreeRegressor(random_state=222, max_depth = 3)
dec_tree.fit(X_train, y_train) # we're using the same data as in last linear model
predictions_dt = dec_tree.predict(X_test)
print(predictions_dt.shape, y_test.shape)
# Visualize the tree
dot_data = tree.export_graphviz(dec_tree, out_file=None,
feature_names=df_data.drop([target_, 'date'], axis=1).columns,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph.render('aval_danger_by_problem_dt_lev7')
graph
print('Decision tree R^2: %.4f' % dec_tree.score(X_test, y_test))
depths = range(1, 20)
tree_models = [tree.DecisionTreeRegressor(random_state=222, max_depth=d).fit(X_train, y_train) for d in depths]
tree_Rsquare = [f.score(X_test, y_test) for f in tree_models]
plt.plot(depths, tree_Rsquare, color = 'red')
plt.xlabel('Tree depth')
plt.ylabel('$R^2$')
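# Report the depth that maximises R^2 on the test set (small illustrative helper).
best_depth = depths[int(np.argmax(tree_Rsquare))]
print('Best depth: %d (R^2 = %.4f)' % (best_depth, max(tree_Rsquare)))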
# so let's create a tree with depth = 7
dec_tree = tree.DecisionTreeRegressor(random_state=222, max_depth = 7)
dec_tree.fit(X_train, y_train) # we're using the same data as in last linear model
predictions_dt = dec_tree.predict(X_test)
# Visualize the tree
dot_data = tree.export_graphviz(dec_tree, out_file=None,
feature_names=df_data.drop([target_, 'date'], axis=1).columns,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph.render('aval_danger_by_problem_dt2')
graph
print('Decision tree R^2: %.4f' % dec_tree.score(X_test, y_test))
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor(random_state = 422, max_depth = 7)
forest_reg.fit(X_train, y_train)
predictions_rf = forest_reg.predict(X_test)
print('Random forest R^2: %.4f' % forest_reg.score(X_test, y_test))
from sklearn.ensemble import GradientBoostingRegressor
grad_boost = GradientBoostingRegressor(random_state = 222, max_depth = 6)
grad_boost.fit(X_train, y_train)
predictions_gb = grad_boost.predict(X_test)
print('Gradient boosted tree R^2: %.4f' % grad_boost.score(X_test, y_test))
feature_labels = np.array(df_data.drop([target_, 'date'], axis=1).columns)
#feature_labels = np.array(['CRIM', 'ZN', 'INDUS', 'CHAS', 'RM', 'AGE', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV'])
importance = forest_reg.feature_importances_
feature_indexes_by_importance = importance.argsort()
for index in feature_indexes_by_importance:
print('{}-{:.2f}%'.format(feature_labels[index], (importance[index] *100.0)))
# excluding the variables where importance is less than 1%
X = df_data[['avalanche_problem_1_distribution_id',
'avalanche_problem_1_destructive_size_ext_id',
'avalanche_problem_1_trigger_simple_id',
'avalanche_problem_1_probability_id',
'avalanche_problem_2_destructive_size_ext_id',
'avalanche_problem_2_probability_id']]
y = df_data['danger_level']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 222)
grad_boost = GradientBoostingRegressor(random_state = 222, max_depth = 7)
grad_boost.fit(X_train, y_train)
predictions_gb = grad_boost.predict(X_test)
print('Gradient boosted tree R^2: %.4f' % grad_boost.score(X_test, y_test))
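# Since danger_level is an ordinal integer, it can also be interesting to round the regression
# output and look at exact-match accuracy (an extra check, not part of the original analysis).
from sklearn.metrics import accuracy_score
rounded_gb = np.round(predictions_gb).astype(int)
print('Exact-match accuracy: %.3f' % accuracy_score(y_test, rounded_gb))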
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: Alias Criminals Table As C, Then Select All Names From C
|
<ASSISTANT_TASK:>
Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
%%sql
-- Select all names from the table 'c'
SELECT c.name
-- From the criminals table, now called c
FROM criminals AS c
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def get_id_text(input, vocab_to_int):
return [[vocab_to_int[word] for word in sentence.split()] for sentence in input]
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences = [sentence for sentence in source_text.split('\n')]
target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')]
source_id_text = get_id_text(source_sentences, source_vocab_to_int)
target_id_text = get_id_text(target_sentences, target_vocab_to_int)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input = tf.placeholder(tf.int32, (None, None), name='input')
targets = tf.placeholder(tf.int32, (None, None), name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_probability = tf.placeholder(tf.float32, name='keep_prob')
return input, targets, learning_rate, keep_probability
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    Preprocess target data for decoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
go = target_vocab_to_int['<GO>']
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], go), ending], 1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
_, rnn_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
dec_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
output_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell,
dec_fn_train,
dec_embed_input,
sequence_length,
scope=decoding_scope
)
train_logits = output_fn(output_logits)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: Maximum length of
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn,
encoder_state,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
vocab_size
)
infer_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn=infer_decoder_fn, scope=decoding_scope)
return infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
with tf.variable_scope('decoding') as decoding_scope:
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, input_keep_prob=keep_prob, output_keep_prob=keep_prob)
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
with tf.variable_scope('decoding') as decoding_scope:
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length - 1, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
enc_state = encoding_layer(enc_inputs, rnn_size, num_layers, keep_prob)
dec_inputs = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.truncated_normal([target_vocab_size, dec_embedding_size], stddev=0.01))
dec_embed_inputs = tf.nn.embedding_lookup(dec_embeddings, dec_inputs)
train_logits, infer_logits = decoding_layer(
dec_embed_inputs,
dec_embeddings,
enc_state,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob
)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 7
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 10
decoding_embedding_size = 10
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.7
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
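# Quick illustrative check: a made-up word outside the vocabulary should map to the <UNK> id.
print(sentence_to_seq('he saw a gobbledygook truck .', source_vocab_to_int))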
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='1_data_cleaning'></a>
Step2: The construction "./" refers to the current folder of the script, while "../" refers to one level higher. In our case we stay in the folder where the script is located and go into its "data" subfolder.
Step3: Task 1
Step4: Dataset Descriptions
Step5: Note that the loc and iloc methods can return different rows, since loc selects by index label while iloc selects by position. For example, let's select the 3rd row.
Step6: The at method is used for quick selection of the specific element
Step7: If you need to select several columns, you could do it by feeding a list.
Step8: Or you can use a slice
Step9: If you need strings that match a condition (for example, only applications of the ART_AND_DESIGN category), the following query format is used (it also returns a DataFrame object)
Step10: Here we used the head() function again to show only the first 5 rows.
Step11: Let's take a look at the general information about the dataset using the info() method.
Step12: The dataset consists of 13 columns, only one of which is numeric (float64) and the rest are categorical (object). The dataset has 10841 rows. Note that there are also missing values, mostly in the Current Ver.
Step13: Most applications are in FAMILY, GAME and TOOLS categories.
Step14: One row is shifted one column to the left, so its values are misaligned. Let's delete this sample.
Step15: Task 2
Step16: <a id='rating'></a>
Step17: Checking for missing values
Step18: <a id='size'></a>
Step19: The size of applications is specified in megabytes and kilobytes. But the most common is the value 'Varies with device'.
Step20: Now let's apply our function to the Size column
Step21: Let's check that the column is in the float format now
Step22: All 'Varies with device' values are now missing (NaN).
Step23: As you can see, about 16% of the samples have NaN values (previously 'Varies with device'). That's quite a lot. To deal with this, we can create a new column with binary values that will store information about these NaN samples (perhaps this information will be useful) and then process them in the original column.
Step24: Now let's handle the missing values. There are several traditional methods to deal with them
Step25: Just in case, check that there are no empty values
Step26: <a id='type'></a>
Step27: 93% of apps are free. To say more, you need to look at the relationships with other variables. We will deal with this in the next section. Let's check for missing values
Step28: There is one missing, let's inspect this one.
Step29: Task 6 Handle the instance with the missing "Type" in the way you consider best
Step30: <a id='price'></a>
Step31: The variable is numeric, but it contains special characters. That's why it has 'object' Type.
Step32: Let's create a distribution plot of prices among the paid apps.
Step33: Cheap apps make up the largest share. Moreover, the remaining apps account for too small a fraction of all apps, so it is difficult to tell much from this plot.
Step34: Most apps cost around 1 and 3 dollars. The dispersion of values on the left part of the plot is due to the strong discreteness of small values of the original column. Let's have a look at the strange peak on the right side of the plot.
Step35: Strange apps for the rich. Perhaps they are an indicator of status in certain circles or something like that.
Step36: <a id='content_rating'></a>
Step37: Remove the "Adults only 18+" and "Unrated classes". They are too few.
Step38: Check for missing.
Step39: <a id='genres'></a>
Step40: There are quite a lot of genres - 119. Moreover, some of them are actually a result of combining two basic genres. Let's find out the number of unique ones.
Step41: Task 8 Calculate the amount of genres for each app and estimate the general distribution for them? Use series.str.findall()
Step42: There aren't a lot of applications with multiple genres. Let's look at some of them.
Step43: You may notice that the genre column often contains the same as the category column. We will explore this in more detail later. Now let's check for missing values and move on.
Step44: <a id='current_ver'></a>
Step45: Because of too many unique values, this isn't a particularly informative column. It seems that each developer uses its own notation, so we will just remove it.
Step46: <a id='android_ver'></a>
Step47: There are 2 missing values. Remove it.
Step48: Let's cluster them together in bigger classes.
Step49: Groups 1-3 and 5-8 still have a small number of examples. The biggest one is group 4. We should probably cluster the classes even further and combine 1-3 and 5-8 into two separate groups.
Step50: With such distribution of values, this feature can actually contribute to the model.
Step51: <a id='final_check'></a>
Step52: <a id='removing_duplicates'></a>
Step53: Some applications have duplicates with different sizes, so we’ll sort by size
Step54: Task 9 Remove duplicates with drop_duplicates() (keep applications with the largest size)
Step55: After filtering, we lost ~11% from the total volume of our dataset. Obviously this is an unpleasant measure, but it has to be done.
Step56: <a id='2_data_relations'></a>
Step57: There is no correlation between the numeric variables.
Step58: Pairplots compares the distributions of variables in pairs and allow you to make the most common assumptions, which can then be checked and clarified later.
Step59: Task 10
Step60: Apps with "0" rating are more likely to have a short title. At the same time, more popular applications are in the range of 20 to 50 characters.
Step61: The average price of a "Free" app is zero dollars, a paid one is 14 dollars. Looks fine.
Step62: Let's see which categories use the most subcategories.
Step63: Task 11 Explore the genre number statistics for each category. Use groupby и describe.
Step64: The maximum number of subcategories in one category is 2, the minimum is 1. Two subcategories are most often found in the categories PARENTING, FAMILY, EDUCATION. Only 11 categories have multiple genres.
Step65: Let's calculate how many apps have the same genres and categories.
Step66: The columns are completely the same for almost 70%. That is, 70% of the values don't contain new information. Most likely, there are more matches considering possible errors related to usage of regular expressions on raw data.
Step67: Task 12 Explore whether there is a difference between distributions of apps with the same and with different categories and genres (use sns.countplot).
Step68: The rating distributions for True and False are slightly different. You can see that applications with a rating 0 stand out a little bit. To numerically estimate this distribution, we will use the contingency table.
Step69: The difference in distribution for apps with same category and genre is negligible. On the other side, applications with different genre and category are 1.5 times more likely to have "2" rating.
Step70: <a id='rating_&_content_rating'></a>
Step71: Let's look at the table of both at the rating contingency with Content Rating column and the is_not_equal_genre column.
Step72: First thing you could notice are applications with the age rating "Everyone" and the same categories and genres. For those application the most common rating is "0". Apps with Everyone 10+ and is_not_equal_genre False mostly have rating "2".
Step73: There are not a lot of paid applications, but numerous categories, so the graphics are so sparse. You can see that some categories have outlier applications
Step74: In this graph, the column height shows the average value, and the bar shows the confidence interval. You can notice that the Finance and Lifestyle categories have a very wide confidence interval. This means that these categories are highly sparse and have some isolated groups in different parts of the distributions. It is not valid to use the average value for such categories. There is no sense to consider other categories with a wide confidence interval - there are too few examples to interpret the stats meaningfully.
Step75: The shift was due to a few expensive "freaky" apps we already saw earlier. Let's remove them and rearrange the graphic.
Step76: The Lifestyle and Finance categories became more realistic and stable. So, we are removing extra expensive samples.
Step77: <a id='Категории_и_размер'></a>
Step78: Task 14 Explore the distribution of app size in each category. Use the df_log table and the log_size column
Step79: <a id='3_feature_space'></a>
Step80: To measure processing time we will use a special context manager.
Step81: <a id='base_model'></a>
Step82: The so-called majority classifier can be the simplest type of a base model. The point is
Step83: Now let's determine which class is most common in the training data.
Step84: In the training set, samples with second class are more common. Then our test prediction will be an array with the same size as the size of out test set and it will be completely filled with 2.
Step85: To analyze the results, we will use a function that displays several different metrics.
Step86: Now let's calculate the accuracy of the majority classifier.
Step87: These will be our reference values. Let's write them in the table.
Step88: <a id='origin_features'></a>
Step89: Now we'll get our test and training sets. (notice that we use the same random_state and therefore the data is split the same way as before)
Step90: We will train a Logistic Regression model - one of the simplest ones among linear classifiers. To perform hyperparameter optimization we'll use cross-validation by applying Pipeline and GridSearchCV functions. This specific classifier was chosen in order to decrease the training time within the workshop. Results for more complex models will be provided in the bonus file.
Step91: Again, we shall save the results in the table.
Step92: As you can see, the F1-score of the model is slightly higher than the score of the majority classifier.
Step93: Next, it is necessary to delete the application names - they are unique for each sample. If we encode them with dummy encoding, it will lead to adding N, where N is the length of the whole dataset, and all these columns will have only one value 1 and all the others 0. Such features obviously do not work.
Step94: As was said before, sometimes it makes sense to encode columns with LabelEncoding. In our case the Content Rating column is suitable for this approach. We should encode it in a way, so with the growth of restrictions, the corresponding number will also grow.
Step95: Now let's encode the remaining categories with the dummy method.
Step96: Now we have 44 features instead of 4.
Step97: Task 15 Train the model, make a prediction on the test data and output the statistics. Don't forget to measure your training time.
Step98: As you can see, adding the categorical features has improved the accuracy of the model.
Step99: Let's add a price per megabyte column.
Step100: Also let's add logarithms for price and size.
Step101: Usually it is useful to introduce polynomial features for the numerical ones. Let's make them from the previously calculated logarithms of price and size. To do this, we can use the scikit-learn class PolynomialFeatures.
Step102: You can see that there are new columns low_price^2, log_price, log_size and log_size^2. Now you need to attach them to the main dataset, and discard unnecessary ones. To avoid errors during concatenation, we will replace the poly_df index with the df_new index.
Step103: Add the number of characters and the number of words in the title as features.
Step104: The names of applications contain a lot of garbage. It is better to clean it.
Step105: First we need to check what the text looks like after cleaning.
Step106: Now, after filtering, we can add the number of characters and the number of words as features.
Step107: Next step is to add the difference between the original number of words and symbols and the normalized ones.
Step108: For some reason there are normalized headers that have more words than original ones. We need to check it.
Step109: Everything seems fine.
Step110: Let's encode categorical features into numerical vectors.
Step111: Now it's time to visualize our data. We will use UMAP for this.
Step112: You can see that there are a lot of clusters and that they contain different proportions of applications with different ratings. Let's apply a clustering algorithm to separate these groups. For now we will assume that we have 30 clusters and use the k_means algorithm.
Step113: Now we visualize the results of our clustering.
Step114: Looks promising. Let's explore whether there is a difference in the distributions of the target variable in each cluster.
Step115: Obviously, the distribution of the target variable differs in many way throughout the clusters. We probably should add the cluster indexes to the main dataset. Note that the cluster index is a category. To perform one-hot encoding we will remove the target variable and then we can add new columns to the main dataset.
Step116: It also would be possible to use aggregation features
Step117: Adding the manually created features increased the effectiveness of the model.
Step118: Apparently, most of the information can be preserved with 60-70 new projections. Let's see how many components we need to keep 99% of the original dataset's information.
Step119: <a id='greedy_selection'></a>
Step120: Top 20 of the most significant features
Step121: Let's find at what stage was the best accuracy.
Step122: <a id='4_final_prediction'></a>
Step123: <a id='conclusions'></a>
Step124: The best result on the validation set was shown by a model trained on the features obtained by the greedy selection method. Because some features can decrease the accuracy, removing them allows you to get a better result than on the original dataset. However, on the test set, the greedy algorithm showed slightly worse results than the full dataset. This could happen due to the fact that the optimal set of features was selected for the validation set which in our case might have a slightly different distribution and the dataset isn't particularly large.
|
<ASSISTANT_TASK:>
Python Code:
# TODO add to container
!pip install umap-learn
import numpy as np
from numpy.random import seed
import scipy.stats as stats
from scipy.stats import uniform, truncnorm, randint
import random
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.metrics import f1_score, classification_report, confusion_matrix, accuracy_score, roc_auc_score
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error, make_scorer
from sklearn.linear_model import LogisticRegression
import sklearn.cluster as cluster
import umap
from joblib import load
from io import BytesIO
import requests
import matplotlib.pyplot as plt
from pylab import rcParams
import seaborn as sns
plt.style.use('seaborn-poster')
%matplotlib inline
import os
import pickle
import warnings
from time import time
warnings.filterwarnings('ignore')
DATA_F ='./data/googleplaystore_alter.csv'
FEATURE_IMPORTANCE_F= './data/feature_importances_logreg.csv'
BONUS_F = './data/bonus_df_alter.csv'
CLUSTER_F = './data/cluster.joblib?raw=true'
df = pd.read_csv(DATA_F)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# df.head(?)
# df.tail(?)
df.iloc[2]
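# For comparison: .loc selects by index label while .iloc selects by position. Right after
# loading, the index is still the default RangeIndex, so both return the same row; they can
# diverge once rows are dropped or the index changes.
df.loc[2]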
df['App'].head(5)
df.at[1, 'App']
df.loc[1:3, ['App', 'Category']]
df.iloc[2:4, [1, 2]]
df[df['Category']=='ART_AND_DESIGN'].head(5)
df[(df['Category']=='ART_AND_DESIGN') & (df.Type == 'Free')].head(5)
df.info()
df.Category.value_counts()
df[df.Category == '2']
df = df[df.Category != '2']
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# df.Category. ?
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# df.Rating. ?
plt.figure(figsize=(7,7))
plt.pie(df.Rating.value_counts(),
labels=df.Rating.value_counts().index,
autopct='%1.1f%%',
startangle=120,
explode=[0.02]*3)
plt.axis('equal')
plt.show()
# TODO
# df.Rating.astype(int)
# df.Rating.plot.pie()
df.Rating.describe()
df.Rating.isnull().sum()
df.Size.value_counts()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# def check_size(size_value):
# write your code here
df.Size = df.Size.apply(check_size)
assert df.Size.dtype == float
df.Size.dtype
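# One possible reference implementation (a sketch with a hypothetical name, assuming 'M' means
# megabytes, 'k' kilobytes, and 'Varies with device' should become NaN):
def check_size_example(size_value):
    if isinstance(size_value, str) and size_value.endswith('M'):
        return float(size_value[:-1])
    if isinstance(size_value, str) and size_value.endswith('k'):
        return float(size_value[:-1]) / 1024
    return np.nan

print(check_size_example('19M'), check_size_example('201k'), check_size_example('Varies with device'))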
df.Size.isnull().sum()/len(df)
df['unknown_size'] = df.Size.isnull()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# df.Size = df.Size.fillna(value=?)
assert df.Size.isnull().sum() == 0
df.Size.isnull().sum()
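# Among the traditional options (dropping rows, filling with a constant, or with the mean/median),
# a per-category median is another reasonable choice. Shown here only as an illustrative
# alternative, since the missing sizes have already been filled above.
size_median_by_category = df.groupby('Category')['Size'].transform('median')
size_median_by_category.head()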
df.Type.value_counts(normalize=True)
df.Type.isnull().sum()
df[df.Type.isnull()]
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
assert df.Type.isnull().sum() == 0
df.Type.isnull().sum()
df.Price.value_counts()[:10]
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# df['Price'] = df['Price'].str.replace(?)
# df['Price'] = df['Price']. ?
assert df.Price.dtype == float
plt.figure(figsize=[15, 7])
sns.distplot(df.Price[df.Type == 'Paid'])
plt.xlabel("Price, $")
plt.title('The distribution of apps by price' ,size = 24)
plt.show()
plt.figure(figsize=[15, 7])
sns.distplot(np.log(df.Price[df.Type == 'Paid']))
plt.xlabel("log(Price), $")
plt.title('The distribution of apps by price',size = 24)
plt.show()
df[df.Price > 200]
df.Price.isnull().sum()
df['Content Rating'].value_counts()
df = df[(df['Content Rating'] != 'Adults only 18+') & (df['Content Rating']!= 'Unrated')]
df['Content Rating'].value_counts()
df['Content Rating'].isnull().sum()
df.Genres.value_counts()
list_of_genres = []
for i in df.Genres.str.split(';').values:
list_of_genres.extend(i)
print('Amount of subcategory: {}'.format(len(set(list_of_genres))))
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# counts_of_genres = df.Genres.str.findall ?
assert sum(counts_of_genres) == 11333
counts_of_genres.describe()
(counts_of_genres>1).sum()/len(counts_of_genres)
df[counts_of_genres>1].head(10)
df.Genres.isnull().sum()
df['Current Ver'].value_counts()
df = df.drop(columns=['Current Ver'])
df['Android Ver'].value_counts(normalize=True)
df['Android Ver'].isnull().sum()
df = df.dropna(subset=['Android Ver'])
for i in range(1, 9):
df.loc[df['Android Ver'].str.contains('^{}..*'.format(i)), 'Android Ver'] = '{} and up'.format(i)
df['Android Ver'].value_counts(normalize=True).sort_index()
df.loc[df['Android Ver'].str.contains('^[123]..*'), 'Android Ver'] = '1 and up'
df.loc[df['Android Ver'].str.contains('^[5678]..*'), 'Android Ver'] = '5 and up'
df['Android Ver'].value_counts(normalize=True)
df['Android Ver'].isnull().sum()
df.isnull().sum()
df_dup = df[df.duplicated(subset='App')]
df_dup.head()
dup_apps = ['Box', 'Call Blocker', 'Bubble Shooter', 'Word Search']
df_tmp = pd.DataFrame()
for col in dup_apps:
df_tmp = pd.concat((df_tmp, df[df.App == col]))
df_tmp
df = df.sort_values('Size')
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# df = df.drop_duplicates(?)
assert df.shape == (9652, 10)
df.shape
df.head()
corr_df = df.corr()
cmap = sns.diverging_palette(220, 10, as_cmap=True)
mask = np.zeros_like(corr_df, dtype=bool)  # np.bool is deprecated in recent NumPy; the built-in bool works everywhere
mask[np.triu_indices_from(mask)] = True
plt.subplots(figsize=[15,10])
plt.title('Correlation matrix')
sns.heatmap(corr_df, mask=mask, cmap=cmap, linewidths=.5, annot=True)
plt.show()
df_log = df.copy()
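# np.log1p (i.e. log(1 + x)) is used below so that free apps (Price == 0) map to 0 instead of -inf.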
df_log['log_price'] = np.log1p(df_log.Price)
df_log.unknown_size = df_log.unknown_size.astype(int)
ax = sns.pairplot(df_log, hue='Rating',
vars = ['Size', 'log_price', 'unknown_size'],
plot_kws = {'alpha': 0.6, 's': 80, 'edgecolor': 'w'},
diag_kind='hist', diag_kws = {'edgecolor': 'w', 'alpha': 0.6, 'bins': 10},
size = 4)
ax.fig.suptitle('Pairplot with grouping by rating', y=1.02, size=18)
plt.show()
df_app = pd.DataFrame({'App': df.App,
'Rating': df.Rating})
df_app['App_len'] = df_app.App.apply(len)
plt.figure(figsize=(15, 7))
for i in sorted(df_app.Rating.unique()):
sns.kdeplot(df_app.App_len[df_app.Rating==i],
shade=True,
legend=False,)
plt.legend(labels=sorted(df_app.Rating.unique()))
plt.xlabel("Number of characters in App")
plt.title("The distribution of the Apps name",size = 18)
plt.show()
df.groupby('Type')['Price'].mean()
df.head(5)
df_cat_genrs = df[['Category', 'Genres', 'Rating']]
df_cat_genrs['Count_of_genres'] = df_cat_genrs.Genres.str.findall(';').apply(len)+1
df_cat_genrs.head()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# group_df = df_cat_genrs.groupby( ? )[ ? ].describe().sort_values('mean', ascending=False)
group_df.head(15)
df_cat_genrs.Category = df_cat_genrs.Category.str.lower()
df_cat_genrs.Genres = df_cat_genrs.Genres.str.lower()
df_cat_genrs.Genres = df_cat_genrs.Genres.str.replace('&', '')
df_cat_genrs.Genres = df_cat_genrs.Genres.str.replace(';', ' ')
df_cat_genrs.Category = df_cat_genrs.Category.str.replace('_', ' ')
df_cat_genrs.Category = df_cat_genrs.Category.str.replace('and', '')
df_cat_genrs.head(5)
(df_cat_genrs['Category'] == df_cat_genrs['Genres']).sum()/len(df_cat_genrs)
df_cat_genrs['is_cat_equal_genre'] = df_cat_genrs['Category'] == df_cat_genrs['Genres']
df_cat_genrs.head()
plt.figure(figsize=(10, 5))
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# sns.countplot(x= ?, hue='Rating', data=df_cat_genrs, hue_order=np.sort(df['Rating'].unique()))
plt.xlabel("Coincidence of Genre and Category")
plt.title('The distribution of apps by ratings',size = 18)
plt.show()
cr_tab = pd.crosstab(df_cat_genrs.Rating, df_cat_genrs.is_cat_equal_genre)
cr_tab
cr_tab / cr_tab.min()
df = df.drop(columns=['Genres'])
df['is_cat_equal_genre'] = df_cat_genrs['is_cat_equal_genre']
# for task version - hide next 3 cells
plt.figure(figsize=(10, 5))
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# sns.countplot(x= ?, hue= ?, data=df, hue_order=np.sort(df['Rating'].unique()))
plt.xlabel("Сontent Rating")
plt.title('The distribution of apps by ratings',size = 18)
plt.show()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# df_ct_rc = pd.crosstab(?, ?)
df_ct_rc /= df_ct_rc.sum()
df_ct_rc
plt.figure(figsize=(10,10))
for i, con_rating in enumerate(df['Content Rating'].unique()):
plt.subplot(2, 2, i+1)
vc = df.Rating[df['Content Rating']== con_rating].value_counts().sort_index()
patches = plt.pie(vc, autopct='%1.1f%%',
startangle=120, explode=[0.02]*3)
plt.title(con_rating)
plt.axis('equal')
plt.legend(labels=vc.index, loc=(1.02,1.75))
plt.show()
pd_ct_comp = pd.crosstab(df.Rating, [df['Content Rating'], df.is_cat_equal_genre])
pd_ct_comp /= pd_ct_comp.sum()
pd_ct_comp = pd_ct_comp.style.background_gradient(cmap='summer_r')
pd_ct_comp
g = sns.catplot(x="Category",y="log_price", data=df_log[df_log.Type=='Paid'])
g.fig.set_figheight(7)
g.fig.set_figwidth(20)
plt.title('The distribution of logarithmic price in different categories', size = 20)
plt.xticks(rotation=90)
plt.show()
g = sns.catplot(x="Category",y="log_price", data=df_log[df_log.Type=='Paid'], kind='bar')
g.fig.set_figheight(7)
g.fig.set_figwidth(20)
plt.title('The distribution of logarithmic price in different categories', size = 20)
plt.xticks(rotation=90)
plt.show()
df_log.log_price[(df_log.Category.isin(['LIFESTYLE', 'FINANCE'])) & (df_log.Type == 'Paid')].describe()
df_log[(df_log.log_price > 5.) & (df_log.Category.isin(['LIFESTYLE', 'FINANCE']))]
g = sns.catplot(x="Category",y="log_price", data=df_log[(df_log.Type=='Paid') & (df_log.log_price < 5)], kind='bar')
g.fig.set_figheight(7)
g.fig.set_figwidth(20)
plt.title('The distribution of logarithmic price in different categories', size = 20)
plt.xticks(rotation=90)
plt.show()
df = df[~((df_log.Category.isin(['LIFESTYLE', 'FINANCE'])) & (df_log.log_price > 5))]
df_log = df_log[~((df_log.Category.isin(['LIFESTYLE', 'FINANCE'])) & (df_log.log_price > 5))]
df_log['log_size'] = df_log.Size.apply(np.log1p)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# g = sns.catplot(x= ?, y= ?, data=df_log)
g.fig.set_figheight(7)
g.fig.set_figwidth(20)
plt.title('The distribution of logarithmic size in different categories', size = 20)
plt.xticks(rotation=90)
plt.show()
results = pd.DataFrame(columns=['method', 'model', 'val score', 'test score', 'learning time', 'predict time'])
class Timer(object):
def __init__(self):
self.elapsed_time = 0
def __enter__(self):
self.start = time()
def __exit__(self, type, value, traceback):
self.end = time()
self.elapsed_time = int((self.end - self.start)*1000)
timer = Timer()
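# Illustrative usage of the Timer context manager defined above (not part of the original flow):
# with timer:
#     some_expensive_call()
# timer.elapsed_time  # wall-clock time of the block, in milliseconds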
Y = df.Rating.astype(int)
df = df.drop(columns=['Rating'])
y_train, y_test = train_test_split(Y, test_size=0.3, random_state=42)
classes, counts = np.unique(y_train, return_counts=True)
major_class = classes[np.argmax(counts)]
major_class
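# The baseline model simply predicts this majority training class for every test sample.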
base_predict = np.full(y_test.shape, major_class)
def calculate_metrics(y_predict, y_test):
print('accuracy: {:.4f}'.format(accuracy_score(y_predict, y_test)))
print('F1 score: {:.4f}'.format(f1_score(y_predict, y_test, average='macro')))
print(classification_report(y_predict, y_test))
print(confusion_matrix(y_predict, y_test))
calculate_metrics(base_predict, y_test)
results = results.append({'method':'Baseline',
'model': 'Majority',
'val score': None,
'test score': f1_score(base_predict, y_test, average='macro'),
'learning time' : None,
'predict time': None},
ignore_index=True)
df.info()
numeric_df = df.select_dtypes(include=['int64', 'float64', 'bool'])
numeric_df.head()
X_train, X_test, y_train, y_test = train_test_split(numeric_df, Y,
test_size=0.3,
random_state=42)
pipe = Pipeline([('scale', StandardScaler()),
('clf', LogisticRegression(random_state=42))])
params = {
'clf__C': [0.01, 0.05, 0.1, 0.5, 0.9, 0.99],
'clf__penalty': ['l1', 'l2', 'elasticnet']
}
np.random.seed(123)
clf = GridSearchCV(pipe,
cv=3,
param_grid=params,
scoring='f1_macro',
verbose=1,
n_jobs=6)
with timer:
clf.fit(X_train, y_train)
learning_time = timer.elapsed_time
best_clf = clf.best_estimator_.steps[1][1]
best_clf
clf.best_score_
with timer:
predict = clf.predict(X_test)
predict_time = timer.elapsed_time
calculate_metrics(predict, y_test)
results = results.append({'method':'Numeric',
'model': 'LR',
'val score': clf.best_score_,
'test score': f1_score(clf.predict(X_test), y_test, average='macro'),
'learning time': learning_time,
'predict time': predict_time},
ignore_index=True)
df_dummies = df.copy()
df_dummies = df_dummies.drop(columns=['App'])
con_rat_dict = {'Everyone':0, 'Everyone 10+':1, 'Teen':2, 'Mature 17+':3}
df_dummies = df_dummies.replace({"Content Rating": con_rat_dict})
df_dummies = pd.get_dummies(df_dummies)
df_dummies.shape
X_train, X_test, y_train, y_test = train_test_split(df_dummies, Y,
test_size=0.3,
random_state=42)
np.random.seed(123)
with timer:
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# clf = GridSearchCV(pipe, cv=3,
# param_grid=params,
# scoring='f1_macro', verbose=1,
# n_jobs=6).fit(?, ?)
learning_time = timer.elapsed_time
print('Best score:', clf.best_score_)
with timer:
predict = clf.predict(X_test)
predict_time = timer.elapsed_time
calculate_metrics(predict, y_test)
results = results.append({'method': 'Categories',
'model': 'LR',
'val score': clf.best_score_,
'test score': f1_score(clf.predict(X_test), y_test, average='macro'),
'learning time': learning_time,
'predict time': predict_time},
ignore_index=True)
df_new = df.copy()
df_new['price_for_mb'] = df_new.Size/df_new.Price
df_new['price_for_mb'] = df_new['price_for_mb'].replace([np.inf, -np.inf], 0)
df_new['log_price'] = df_new.Price.apply(np.log1p)
df_new['log_size'] = df_new.Size.apply(np.log1p)
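# PolynomialFeatures(degree=2) below adds the squared terms and the log_price*log_size interaction;
# the constant column '1' and the original degree-1 columns it also produces are dropped in the next cells.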
p = PolynomialFeatures(degree=2).fit(df_new[['log_price', 'log_size']])
poly_df = pd.DataFrame(p.transform(df_new[['log_price', 'log_size']]),
columns=p.get_feature_names(['log_price', 'log_size']))
poly_df.head(10)
poly_df = poly_df.drop(columns=['log_price', 'log_size', '1'])
poly_df.index = df_new.index
df_new = pd.concat([df_new, poly_df.reindex(df_new.index)], axis=1)
df_new.head(5)
df_new['len_of_app_title'] = df_new.App.apply(len)
df_new['count_of_app_title'] = df_new.App.str.split(' ').apply(len)
# Let's change to lower case, remove special characters, leave only Latin letters and numbers.
df_new['cleantext'] = df_new.App.str.lower()
df_new['cleantext'] = df_new.cleantext.str.replace('[-_]', ' ')
df_new['cleantext'] = df_new.cleantext.str.replace('[^0-9A-Za-z ]+', '')
df_new[['App', 'cleantext']].head(10)
df_new['len_of_cleantext_title'] = df_new.cleantext.apply(len)
df_new['count_of_cleantext_title'] = df_new.cleantext.str.split(' ').apply(len)
df_new['diff_len_title'] = df_new['len_of_app_title'] - df_new['len_of_cleantext_title']
df_new['diff_count_title'] = df_new['count_of_app_title'] - df_new['count_of_cleantext_title']
df_new[['diff_len_title', 'diff_count_title']].describe()
df_new[['App', 'cleantext']][df_new['diff_count_title'] == -5]
len_vocab = len(set(' '.join(df_new.cleantext.tolist()).split(' ')))
max_count_of_cleantext_title = round(df_new.count_of_cleantext_title.max(), 0)
print('Number of unique words: {}'.format(len_vocab))
print('Maximum number of words in a title: {}'.format(max_count_of_cleantext_title))
df_new = df_new.drop(columns=['App', 'cleantext'])
df_new = df_new.replace({"Content Rating": con_rat_dict})
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# df_new = pd. ?
df_new.shape
scaled_data = StandardScaler().fit_transform(df_new)
%%time
np.random.seed(1)
X_umap = umap.UMAP(n_components=2, random_state=2).fit_transform(scaled_data)
plt.figure(figsize=[15, 9])
plt.title('UMAP')
for i in Y.unique():
indx = np.where(Y.values == i)
plt.scatter(X_umap[indx, 0], X_umap[indx, 1], marker='.', alpha=0.5, label=i)
plt.legend()
plt.show()
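# KMeans below assigns each app to one of 30 clusters; the cluster id is later one-hot encoded
# and appended to the feature matrix as extra categorical features.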
cls = cluster.KMeans(n_clusters=30, random_state=42)
kmeans_labels = cls.fit_predict(scaled_data)
# For Google Colab below
# Unfortunately, Google collab doesn't allow to reproduce KMeans model training in spite of the random_state.
# Without it we can't reproduce our clusters and distributions inside them. That's why we're loading pretrained cluster
# model. If you want train your own cluster model you can use the commented code
# m_file = BytesIO(requests.get(CLUSTER_F).content)
# cls = load(m_file)
# kmeans_labels = cls.predict(scaled_data)
plt.figure(figsize=[15, 9])
ax = plt.subplot()
for i in np.unique(kmeans_labels):
indx = np.where(kmeans_labels == i)
plt.scatter(X_umap[indx, 0], X_umap[indx, 1], marker='.', alpha=0.5, label=i)
ax.legend(loc='upper center', bbox_to_anchor=(1.05, 1.0), ncol=1, fancybox=True, shadow=True, fontsize=10)
plt.show()
df_clstr = pd.DataFrame({'Cluster': kmeans_labels, 'Rating':Y})
g = sns.catplot("Rating", col="Cluster", col_wrap=5, data=df_clstr, kind="count", height=2.5, aspect=.8, size=4)
df_clstr = df_clstr.drop(columns=['Rating'])
df_clstr['Cluster'] = df_clstr['Cluster'].astype('category')
df_clstr = pd.get_dummies(df_clstr)
df_clstr.index = df_new.index
df_new=pd.concat([df_new, df_clstr], axis=1)
df_new.shape
X_train, X_test, y_train, y_test = train_test_split(df_new, Y,
test_size=0.3,
random_state=42)
np.random.seed(123)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# with ?:
# clf = GridSearchCV( ? ).fit( ? )
learning_time = timer.elapsed_time
print('Best score:', clf.best_score_)
print('Best params:', clf.best_params_)
with timer:
predict = clf.predict(X_test)
predict_time = timer.elapsed_time
calculate_metrics(predict, y_test)
results = results.append({'method': 'Manual features',
'model': 'LR',
'val score': clf.best_score_,
'test score': f1_score(clf.predict(X_test), y_test, average='macro'),
'learning time':learning_time,
'predict time': predict_time},
ignore_index=True)
model = PCA()
pca_data = model.fit_transform(StandardScaler().fit_transform(df_new))
plt.figure(figsize=[13, 5])
plt.title('Principal Component Analysis (PCA)')
plt.plot(range(len(model.explained_variance_ratio_)), model.explained_variance_ratio_, '--o')
plt.ylabel('Explained variance ratio')
plt.xlabel('Components')
plt.tight_layout()
plt.show()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = PCA(n_components=?)
# pca_data = model.fit_transform(?)
assert pca_data.shape[1] == 48
pca_data.shape
# %%time
# X_train, X_test, y_train, y_test = train_test_split(df_new, Y,
# test_size=0.3,
# random_state=42)
# scaled_data = StandardScaler().fit_transform(X_train)
# scaled_X = pd.DataFrame(scaled_data, index=X_train.index, columns=X_train.columns)
# clf_logreg = LogisticRegression(random_state=42, C=0.05, penalty='l2')
# parameters_grid = {'C': [0.05]}
# columns = X_train.columns
# important_features = []
# features_scores = []
# np.random.seed(123)
# for j in range(len(columns)-1):
# print('{}\{}'.format(j, len(columns)))
# col_for_del = []
# scores = []
# for i in columns:
# cols = columns[columns != i]
# clf = GridSearchCV(clf_logreg, cv=3,
# param_grid=parameters_grid,
# scoring='f1_macro',
# verbose=0,
# n_jobs=6).fit(scaled_X[cols], y_train)
# scores.append(clf.best_score_)
# max_col = columns[np.argmax(scores)]
# print(max_col, clf.best_score_)
# important_features.append(max_col)
# features_scores.append(max(scores))
# columns = columns[columns != max_col]
# print(j, '\r', end='')
# features_scores.append(0)
# important_features.append(columns[0])
# feature_importances = pd.DataFrame({'features': important_features,
# 'feature_importances': features_scores,
# 'iteration': range(len(features_scores))})
# feature_importances = feature_importances.sort_values('iteration', ascending=True)
# feature_importances.to_csv(FEATURE_IMPORTANCE_F, index=False)
feature_importances = pd.read_csv(FEATURE_IMPORTANCE_F)
plt.figure(figsize=[15, 6])
plt.title('Brute force')
plt.plot(feature_importances.iteration[:-1],
feature_importances.feature_importances[:-1], '-o')
plt.ylabel('f1_macro')
plt.xlabel('iteration')
plt.tight_layout()
plt.show()
feature_importances.features[::-1][:20]
feature_importances[feature_importances.feature_importances == feature_importances.feature_importances.max()]
max_id = feature_importances[
feature_importances.feature_importances == feature_importances.feature_importances.max()
].index[0]
print('We can leave {} features'.format(len(feature_importances)-max_id))
selected_features = feature_importances.features[max_id:]
# Greedy features
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# X_train, X_test, y_train, y_test = train_test_split(df_new[ ? ], Y,
# test_size=0.3,
# random_state=42)
pipe = Pipeline([('scale', StandardScaler()),
('clf', LogisticRegression(random_state=42))])
params = {
'clf__C': [0.01, 0.05, 0.1, 0.5, 0.9, 0.99],
'clf__penalty': ['l1', 'l2', 'elasticnet']
}
np.random.seed(123)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# with ? :
# clf = GridSearchCV( ? ).fit( ? )
learning_time = timer.elapsed_time
with timer:
predict = clf.predict(X_test)
predict_time = timer.elapsed_time
calculate_metrics(predict, y_test)
results = results.append({'method': 'Greedy selection',
'model': 'LR',
'val score': clf.best_score_,
'test score': f1_score(clf.predict(X_test), y_test, average='macro'),
'learning time': learning_time,
'predict time': predict_time},
ignore_index=True)
# PCA features
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# X_train, X_test, y_train, y_test = train_test_split( ? )
np.random.seed(123)
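# With a float in (0, 1), PCA keeps the smallest number of components whose cumulative
# explained variance reaches that fraction (here 99%).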
pipe = Pipeline([('scale', StandardScaler()),
('pca', PCA(n_components=0.99)),
('clf', LogisticRegression(random_state=42))])
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# with ?
# clf = ?
learning_time = timer.elapsed_time
with timer:
predict = clf.predict(X_test)
predict_time = timer.elapsed_time
calculate_metrics(clf.predict(X_test), y_test)
results = results.append({'method': 'PCA features',
'model': 'LR',
'val score': clf.best_score_,
'test score': f1_score(clf.predict(X_test), y_test, average='macro'),
'learning time': learning_time,
'predict time': predict_time},
ignore_index=True)
results
# from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
# from sklearn.linear_model import LogisticRegression
# from catboost import CatBoostClassifier
# from lightgbm import LGBMClassifier
# log_params = {'clf__C': uniform(0.01, 0.99),
# 'clf__penalty': ['l1', 'l2']}
# rf_params = {'clf__n_estimators': randint(100, 1000),
# 'clf__max_depth': randint(1,10),
# 'clf__min_samples_leaf': randint(1, 10),
# 'clf__min_samples_split': randint(2, 11)}
# gb_params = {'clf__learning_rate': uniform(0.01, 0.49),
# 'clf__n_estimators': randint(100, 700),
# 'clf__max_depth': randint(1, 10),
# 'clf__subsample': uniform(0.6, 0.4)}
# models_dict = {'Logistic Regression':[log_params, LogisticRegression(random_state=42)],
# 'Random Forest':[rf_params, RandomForestClassifier(random_state=42)],
# 'Gradient Boosting': [gb_params, GradientBoostingClassifier(random_state=42)],
# 'XGB':[gb_params, xgb.XGBClassifier(random_state=42)],
# 'CatBoost':[gb_params, CatBoostClassifier(bootstrap_type='Bernoulli', random_state=42, verbose=0)],
# 'LightBoost':[gb_params, LGBMClassifier(random_state=42)]}
# ext_results = pd.DataFrame(columns=['method', 'model', 'val score', 'test score', 'learning time', 'predict time'])
# def custom_pipe(clf, pca='False'):
# if pca:
# return Pipeline([('clf', clf)])
# else:
# return Pipeline([('scale', StandardScaler()),
# ('clf', clf)])
# X_train_gs, X_test_gs = train_test_split(df_new[selected_features],test_size=0.3, random_state=42)
# X_train_all, X_test_all = train_test_split(df_new,test_size=0.3, random_state=42)
# pca_model = PCA(n_components=0.99)
# scaler = StandardScaler()
# X_train_pca = pca_model.fit_transform(scaler.fit_transform(X_train_all))
# X_test_pca = pca_model.transform(scaler.transform(X_test_all))
# data_dict = {'Manual features':(X_train_all, X_test_all),
# 'Greedy selection':(X_train_gs, X_test_gs),
# 'PCA':(X_train_pca, X_test_pca)}
# np.random.seed(123)
# for method, (X_train, X_test) in data_dict.items():
# print(f'Method: {method}\n')
# if method == "PCA":
# pca_flag = True
# else:
# pca_flag = False
# for clf_name, vals in models_dict.items():
# print(clf_name, '\n')
# pipe = custom_pipe(vals[1], pca_flag)
# with timer:
# clf = RandomizedSearchCV(pipe, cv=3, random_state=123,
# param_distributions=vals[0],
# n_jobs=4,
# verbose=1,
# n_iter=100,
# scoring='f1_macro').fit(X_train, y_train)
# print(clf.best_params_ , '\n')
# learning_time = int(timer.elapsed_time / 300)
# with timer:
# predict = clf.predict(X_test)
# predict_time = timer.elapsed_time
# ext_results = ext_results.append({'method': method,
# 'model': clf_name,
# 'val score': clf.best_score_,
# 'test score': f1_score(predict, y_test, average='macro'),
# 'learning time': learning_time,
# 'predict time': predict_time},
# ignore_index=True)
# ext_results.to_csv(BONUS_F, index=False)
ext_results = pd.read_csv(BONUS_F)
ext_results
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hm', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: At this point, we have our stations and trips data loaded into memory.
Step2: Then, let's iterate over the stations DataFrame, and add in the node attributes.
Step3: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here.
Step4: Exercise
Step5: Exercise
Step6: Let's now try drawing the graph.
Step7: Exercise
Step8: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph.
|
<ASSISTANT_TASK:>
Python Code:
# This block of code checks to make sure that a particular directory is present.
if "divvy_2013" not in os.listdir('datasets/'):
print('Unzip the divvy_2013.zip file in the datasets folder.')
stations = pd.read_csv('datasets/divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], index_col='id', encoding='utf-8')
stations
trips = pd.read_csv('datasets/divvy_2013/Divvy_Trips_2013.csv',
parse_dates=['starttime', 'stoptime'],
index_col=['trip_id'])
trips = trips.sort_index()  # DataFrame.sort() was removed in newer pandas; sort by the trip_id index instead
trips
G = nx.DiGraph()
for r, d in stations.iterrows(): # call the pandas DataFrame row-by-row iterator
G.add_node(r, attr_dict=d.to_dict())
# # Run the following code at your own risk :)
# for r, d in trips.iterrows():
# start = d['from_station_id']
# end = d['to_station_id']
# if (start, end) not in G.edges():
# G.add_edge(start, end, count=1)
# else:
# G.edge[start][end]['count'] += 1
for (start, stop), d in trips.groupby(['from_station_id', 'to_station_id']):
G.add_edge(start, stop, count=len(d))
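# A minimal sketch (not part of the original analysis) of the centrality measures mentioned
# above, for ranking station importance; both helpers are standard networkx functions.
# deg_cent = nx.degree_centrality(G)        # fraction of stations each station connects to
# btw_cent = nx.betweenness_centrality(G)   # how often a station lies on shortest paths
# top_stations = sorted(deg_cent, key=deg_cent.get, reverse=True)[:10]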
from collections import Counter
# Count the number of edges that have x trips recorded on them.
trip_count_distr = Counter([d['count'] for _, _, d in G.edges(data=True)])
# Then plot the distribution of these
plt.scatter(list(trip_count_distr.keys()), list(trip_count_distr.values()), alpha=0.1)
plt.yscale('log')
plt.xlabel('num. of trips')
plt.ylabel('num. of edges')
# Filter the edges to just those with more than 100 trips.
G_filtered = G.copy()
for u, v, d in G.edges(data=True):
if d['count'] < 100:
G_filtered.remove_edge(u,v)
len(G_filtered.edges())
nx.draw(G_filtered)
nodes = sorted(G_filtered.nodes(), key=lambda x:len(G.neighbors(x)))
edges = G_filtered.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/divvy.png', dpi=300)
nx.write_gpickle(G, 'datasets/divvy_2013/divvy_graph.pkl')
G = nx.read_gpickle('datasets/divvy_2013/divvy_graph.pkl')
G.nodes(data=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <h2> Explore data </h2>
Step4: Let's write a query to find the unique values for a given column and see how the number of babies and their average weight are distributed across those values. This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value.
Step5: We'll use the get_distinct_values function above to explore how the variables num_babies and avg_wt are distributed across the features is_male, mother_age, plurality and gestation_weeks
|
<ASSISTANT_TASK:>
Python Code:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
# Create SQL query using natality data after the year 2000
query_string = """
SELECT
  weight_pounds,
  is_male,
  mother_age,
  plurality,
  gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
  publicdata.samples.natality
WHERE
  year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
df = bq.query(query_string + "LIMIT 100").to_dataframe()
df.head()
def get_distinct_values(column_name):
sql_query = """
SELECT
  {0},
  COUNT(1) AS num_babies,
  AVG(weight_pounds) AS avg_wt
FROM
  publicdata.samples.natality
WHERE
  year > 2000
GROUP BY
  {0}
""".format(column_name)
return bq.query(sql_query).to_dataframe()
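# Example: get_distinct_values("is_male") returns one row per distinct value of is_male,
# with the number of babies and their average weight for that value.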
# Bar plot to see is_male with avg_wt linear and num_babies logarithmic
df = get_distinct_values("is_male")
df.plot(x = "is_male", y = "num_babies", kind = "bar");
df.plot(x = "is_male", y = "avg_wt", kind = "bar");
# Line plots to see mother_age with avg_wt linear and num_babies logarithmic
df = get_distinct_values("mother_age")
df = df.sort_values("mother_age")
df.plot(x = "mother_age", y = "num_babies");
df.plot(x = "mother_age", y = "avg_wt");
# Bar plot to see plurality(singleton, twins, etc.) with avg_wt linear and num_babies logarithmic
df = get_distinct_values("plurality")
df = df.sort_values("plurality")
df.plot(x = "plurality", y = "num_babies", logy = True, kind = "bar");
df.plot(x = "plurality", y = "avg_wt", kind = "bar");
# Bar plot to see gestation_weeks with avg_wt linear and num_babies logarithmic
df = get_distinct_values("gestation_weeks")
df = df.sort_values("gestation_weeks")
df.plot(x = "gestation_weeks", y = "num_babies", logy = True, kind = "bar");
df.plot(x = "gestation_weeks", y = "avg_wt", kind = "bar");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tweet Count Analysis
Step2: Count 'Em
Step3: Less than $1\%$ of our tweets are duplicates, so we have approximately the quantity of tweets that we thought we did.
Step4: Now we have the tweet counts as a dictionary of (week_index, count) pairs. Before we go further, we should fill in the any missing weeks with a 0 value.
Step5: After we've filled in missing weeks with a 0 value, we sort the pairs, then repackage them as a tuple of week indexes and a tuple of counts. Then we can pass these week indexes and counts to a bar plot function as x- and y-values, respectively.
Step6: Unfortunately, we can't automatically display the figure in a Jupyter notebook on NYU's HPC server. So, we saved it to a file, and now we can display it below
Step7: Ignoring the frequency of weeks containing 0 tweets, it seems roughly that there are two overlapping normal curves
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
# From https://stackoverflow.com/a/36218558 .
def sparkImport(module_name, module_directory):
Convenience function.
Tells the SparkContext sc (must already exist) to load
module module_name on every computational node before
executing an RDD.
Args:
module_name: the name of the module, without ".py".
module_directory: the path, absolute or relative, to
the directory containing module
module_Name.
Returns: none.
module_path = os.path.abspath(
module_directory + "/" + module_name + ".py")
sc.addPyFile(module_path)
# Add all scripts from repository to local path.
# From https://stackoverflow.com/a/35273613 .
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import preprocessing
sparkImport("preprocessing", "..")
tweets = sc.textFile("tweets.csv") \
.filter(preprocessing.format_is_correct) \
.map(preprocessing.split_record)
initial_count = tweets.count()
print("Total number of tweets: " + str(initial_count))
tweet_ids = tweets \
.map(lambda record: record[preprocessing.field_index['id']]) \
.distinct()
final_count = tweet_ids.count()
print("Number of duplicates: " + str(initial_count - final_count))
print("Number of distinct tweets: " + str(final_count))
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as pyplot
def get_week(unix_timestamp):
# Add 3 to the day, because Unix timestamp 0 is on a Thursday.
return (int(unix_timestamp) // secondsPerDay + 3) // 7  # integer division so the result is a whole week index
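# Example of the shift: days 0-3 (Thu 1970-01-01 through Sun 1970-01-04) map to week 0,
# and every later week index starts on a Monday.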
secondsPerDay = 24*60*60
weekly_tweet_counts = tweets \
.map(
lambda record:
(get_week(record[preprocessing.field_index['timestamp']]), 1)) \
.countByKey()
for week_index in range(min(weekly_tweet_counts.keys()), max(weekly_tweet_counts.keys())):
if week_index not in weekly_tweet_counts.keys():
weekly_tweet_counts[week_index] = 0
weekly_tweet_counts_list = sorted(weekly_tweet_counts.items())
weekly_tweet_counts_xy = zip(*weekly_tweet_counts_list)
week_indexes = weekly_tweet_counts_xy[0]
week_counts = weekly_tweet_counts_xy[1]
currentFigure = pyplot.figure()
pyplot.figure(currentFigure.number)
pyplot.bar(week_indexes, week_counts, width=1.0)
pyplot.title('Tweet Count per Week')
pyplot.xlabel('Week Index')
pyplot.ylabel('Tweet Count')
pyplot.xlim([min(week_indexes), max(week_indexes)])
pyplot.ylim([0, max(week_counts)])
pyplot.savefig("tweet_count_per_week.png")
sorted_week_counts = sorted(week_counts)
currentFigure = pyplot.figure()
pyplot.figure(currentFigure.number)
pyplot.hist(sorted_week_counts, 40)
pyplot.title("Distribution of Weekly Tweet Counts")
pyplot.xlabel("Weekly Tweet Count")
pyplot.ylabel("Frequency")
pyplot.savefig("distribution_of_weekly_counts.png")
c_min = 150000
def get_day(unix_timestamp):
return int(unix_timestamp) // (24*60*60)  # integer division to get a whole day index
tweets_per_day = tweets \
.map(lambda record:
(get_day(record[preprocessing.field_index['timestamp']]), 1)) \
.countByKey()
for day in range(min(tweets_per_day.keys()), max(tweets_per_day.keys())):
if day not in tweets_per_day.keys():
tweets_per_day[day] = 0
num_valid_days = 0
for day in range(min(tweets_per_day.keys()), max(tweets_per_day.keys())):
# check if day has enough tweets
valid_days = range(day - 31, day)
valid_day_counts = [tweets_per_day[past_day]
for past_day in valid_days]
if sum(valid_day_counts) > 4*c_min:
num_valid_days = num_valid_days + 1
print("Number of days satisfying our rule: " + str(num_valid_days))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us plot the first five examples of the train data (first row) and test data (second row).
Step2: Then we import shogun components and convert the data to shogun objects
Step3: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect.
Step4: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time
Step5: We have the prediction for each of the 13 k's now and can quickly compute the accuracies
Step6: So k=3 seems to have been the optimal choice.
Step7: So we can significantly speed it up. Let's do a more systematic comparison. For that, a helper function is defined to run the evaluation for KNN.
Step8: Evaluate KNN with and without Cover Tree. This takes a few seconds
Step9: Generate plots with the data collected in the evaluation
Step10: Although simple and elegant, KNN is generally very resource costly. Because all the training samples have to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, prediction is slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if, in addition, the data samples are very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of object (not just numerical data) - as long as one can design a suitable distance function. In practice, k-NN used with bagging can produce improved and more robust results.
Step11: Let's apply the SVM to the same test data set to compare results
Step12: Since the SVM performs way better on this task - let's apply it to all data we did not use in training.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat, savemat
from numpy import random
from os import path
mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = mat['data']
Yall = np.array(mat['label'].squeeze(), dtype=np.double)
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
random.seed(0)
subset = random.permutation(len(Yall))
Xtrain = Xall[:, subset[:5000]]
Ytrain = Yall[subset[:5000]]
Xtest = Xall[:, subset[5000:6000]]
Ytest = Yall[subset[5000:6000]]
Nsplit = 2
all_ks = range(1, 21)
print(Xall.shape)
print(Xtrain.shape)
print(Xtest.shape)
%matplotlib inline
import pylab as P
def plot_example(dat, lab):
for i in range(5):
ax=P.subplot(1,5,i+1)
P.title(int(lab[i]))
ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')
ax.set_xticks([])
ax.set_yticks([])
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xtrain, Ytrain)
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xtest, Ytest)
from shogun import MulticlassLabels, features
from shogun import KNN, EuclideanDistance
labels = MulticlassLabels(Ytrain)
feats = features(Xtrain)
k=3
dist = EuclideanDistance()
knn = KNN(k, dist, labels)
labels_test = MulticlassLabels(Ytest)
feats_test = features(Xtest)
knn.train(feats)
pred = knn.apply_multiclass(feats_test)
print("Predictions", pred.get_int_labels()[:5])
print("Ground Truth", Ytest[:5])
from shogun import MulticlassAccuracy
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(pred, labels_test)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(pred != Ytest)[0]
Xbad=Xtest[:,idx]
Ybad=Ytest[idx]
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
knn.put('k', 13)
multiple_k=knn.classify_for_multiple_k()
print(multiple_k.shape)
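# multiple_k holds one column per k: column j contains the labels predicted with k = j + 1,
# so the accuracies for k = 1..13 can all be computed from this single pass.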
for k in range(13):
print("Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest)))
from shogun import Time, KNN_COVER_TREE, KNN_BRUTE
start = Time.get_curtime()
knn.put('k', 3)
knn.put('knn_solver', KNN_BRUTE)
pred = knn.apply_multiclass(feats_test)
print("Standard KNN took %2.1fs" % (Time.get_curtime() - start))
start = Time.get_curtime()
knn.put('k', 3)
knn.put('knn_solver', KNN_COVER_TREE)
pred = knn.apply_multiclass(feats_test)
print("Covertree KNN took %2.1fs" % (Time.get_curtime() - start))
def evaluate(labels, feats, use_cover_tree=False):
from shogun import MulticlassAccuracy, CrossValidationSplitting
import time
split = CrossValidationSplitting(labels, Nsplit)
split.build_subsets()
accuracy = np.zeros((Nsplit, len(all_ks)))
acc_train = np.zeros(accuracy.shape)
time_test = np.zeros(accuracy.shape)
for i in range(Nsplit):
idx_train = split.generate_subset_inverse(i)
idx_test = split.generate_subset_indices(i)
for j, k in enumerate(all_ks):
#print "Round %d for k=%d..." % (i, k)
feats.add_subset(idx_train)
labels.add_subset(idx_train)
dist = EuclideanDistance(feats, feats)
knn = KNN(k, dist, labels)
knn.set_store_model_features(True)
if use_cover_tree:
knn.put('knn_solver', KNN_COVER_TREE)
else:
knn.put('knn_solver', KNN_BRUTE)
knn.train()
evaluator = MulticlassAccuracy()
pred = knn.apply_multiclass()
acc_train[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
feats.add_subset(idx_test)
labels.add_subset(idx_test)
t_start = time.perf_counter()  # time.clock() was removed in Python 3.8
pred = knn.apply_multiclass(feats)
time_test[i, j] = (time.perf_counter() - t_start) / labels.get_num_labels()
accuracy[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
return {'eout': accuracy, 'ein': acc_train, 'time': time_test}
labels = MulticlassLabels(Ytest)
feats = features(Xtest)
print("Evaluating KNN...")
wo_ct = evaluate(labels, feats, use_cover_tree=False)
wi_ct = evaluate(labels, feats, use_cover_tree=True)
print("Done!")
import matplotlib
fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')
P.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')
P.legend(["Test Accuracy", "Training Accuracy"])
P.xlabel('K')
P.ylabel('Accuracy')
P.title('KNN Accuracy')
P.tight_layout()
fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')
P.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')
P.xlabel("K")
P.ylabel("time")
P.title('KNN time')
P.legend(["Plain KNN", "CoverTree KNN"], loc='center right')
P.tight_layout()
from shogun import GaussianKernel, GMNPSVM
width=80
C=1
gk=GaussianKernel()
gk.set_width(width)
svm=GMNPSVM(C, gk, labels)
_=svm.train(feats)
out=svm.apply(feats_test)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_test)
print("Accuracy = %2.2f%%" % (100*accuracy))
Xrem=Xall[:,subset[6000:]]
Yrem=Yall[subset[6000:]]
feats_rem=features(Xrem)
labels_rem=MulticlassLabels(Yrem)
out=svm.apply(feats_rem)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The parameter <tt>box_l</tt> sets the size of the simulation box. In general, one should check for finite
Step2: The skin is used for constructing
Step3: The <tt>periodicity</tt> parameter indicates that the system is periodic in all three
Step4: We set up the central bead and the other beads are initialized at random positions on the surface of the colloid. The beads are then allowed to relax using
Step5: The best way to ensure a relatively uniform distribution
Step6: Now that the beads are arranged in the shape of a raspberry, the surface beads are made virtual particles
Step7: 5 Inserting Counterions and Salt Ions
Step8: We then check that charge neutrality is maintained
Step9: A WCA potential acts between all of the ions. This potential represents a purely repulsive
Step10: After inserting the ions, again a short integration is performed with a force cap to
Step11: 6 Electrostatics
Step12: Generally a Bjerrum length of $2$ is appropriate when using WCA interactions with $\sigma=1$, since a typical ion has a radius of $0.35\ \mathrm{nm}$, while the Bjerrum
Step13: 7 Lattice-Boltzmann
Step14: The important parameters for the LB fluid are the density, the viscosity, the time step,
Step15: A logical way of picking a specific set of parameters is to choose them such that the hydrodynamic radius of an ion roughly matches its physical radius determined by the
Step16: 8 Simulating Electrophoresis
Step17: Plot the raspberry trajectory with <tt>matplotlib</tt>
|
<ASSISTANT_TASK:>
Python Code:
import espressomd
espressomd.assert_features(["ELECTROSTATICS", "ROTATION", "ROTATIONAL_INERTIA", "EXTERNAL_FORCES",
"MASS", "VIRTUAL_SITES_RELATIVE", "CUDA", "LENNARD_JONES"])
from espressomd import interactions
from espressomd import electrostatics
from espressomd import lb
from espressomd.virtual_sites import VirtualSitesRelative
import numpy as np
# System parameters
#############################################################
box_l = 40. # size of the simulation box
skin = 0.3 # Skin parameter for the Verlet lists
time_step = 0.01
eq_tstep = 0.001
n_cycle = 1000
integ_steps = 150
# Interaction parameters (Lennard-Jones for raspberry)
#############################################################
radius_col = 3.
harmonic_radius = 3.0
# the subscript c is for colloid and s is for salt (also used for the surface beads)
eps_ss = 1. # LJ epsilon between the colloid's surface particles.
sig_ss = 1. # LJ sigma between the colloid's surface particles.
eps_cs = 48. # LJ epsilon between the colloid's central particle and surface particles.
sig_cs = radius_col # LJ sigma between the colloid's central particle and surface particles (colloid's radius).
a_eff = 0.32 # effective hydrodynamic radius of a bead due to the discreteness of LB.
# System setup
#############################################################
system = espressomd.System(box_l=[box_l] * 3)
system.time_step = time_step
system.cell_system.skin = skin
system.periodicity = [True, True, True]
# the LJ potential with the central bead keeps all the beads from simply collapsing into the center
system.non_bonded_inter[1, 0].wca.set_params(epsilon=eps_cs, sigma=sig_cs)
# the LJ potential (WCA potential) between surface beads causes them to be roughly equidistant on the
# colloid surface
system.non_bonded_inter[1, 1].wca.set_params(epsilon=eps_ss, sigma=sig_ss)
# the harmonic potential pulls surface beads towards the central colloid bead
col_center_surface_bond = interactions.HarmonicBond(k=3000., r_0=harmonic_radius)
system.bonded_inter.add(col_center_surface_bond)
# for the warmup we use a Langevin thermostat with an extremely low temperature and high friction coefficient
# such that the trajectories roughly follow the gradient of the potential while not accelerating too much
system.thermostat.set_langevin(kT=0.00001, gamma=40., seed=42)
print("# Creating raspberry")
center = system.box_l / 2
colPos = center
# Charge of the colloid
q_col = -40
# Number of particles making up the raspberry (surface particles + the central particle).
n_col_part = int(4 * np.pi * np.power(radius_col, 2) + 1)
# Place the central particle
system.part.add(id=0, pos=colPos, type=0, q=q_col, fix=(True, True, True),
rotation=(1, 1, 1)) # Create central particle
# Create surface beads uniformly distributed over the surface of the central particle
for i in range(1, n_col_part):
colSurfPos = np.random.randn(3)
colSurfPos = colSurfPos / np.linalg.norm(colSurfPos) * radius_col + colPos
system.part.add(id=i, pos=colSurfPos, type=1)
system.part[i].add_bond((col_center_surface_bond, 0))
print("# Number of colloid beads = {}".format(n_col_part))
# Relax bead positions. The LJ potential with the central bead combined with the
# harmonic bond keep the monomers roughly radius_col away from the central bead. The LJ
# between the surface beads cause them to distribute more or less evenly on the surface.
system.force_cap = 1000
system.time_step = eq_tstep
print("Relaxation of the raspberry surface particles")
for i in range(n_cycle):
system.integrator.run(integ_steps)
# Restore time step
system.time_step = time_step
# this loop moves the surface beads such that they are once again exactly radius_col away from the center
# For the scalar distance, we use system.distance() which considers periodic boundaries
# and the minimum image convention
colPos = system.part[0].pos
for p in system.part[1:]:
p.pos = (p.pos - colPos) / np.linalg.norm(system.distance(p, system.part[0])) * radius_col + colPos
# Select the desired implementation for virtual sites
system.virtual_sites = VirtualSitesRelative()
# Setting min_global_cut is necessary when there is no interaction defined with a range larger than
# the colloid such that the virtual particles are able to communicate their forces to the real particle
# at the center of the colloid
system.min_global_cut = radius_col
# Calculate the center of mass position (com) and the moment of inertia (momI) of the colloid
com = np.average(system.part[1:].pos, 0) # system.part[:].pos returns an n-by-3 array
momI = 0
for i in range(n_col_part):
momI += np.power(np.linalg.norm(com - system.part[i].pos), 2)
# note that the real particle must be at the center of mass of the colloid because of the integrator
print("\n# moving central particle from {} to {}".format(system.part[0].pos, com))
system.part[0].fix = [False, False, False]
system.part[0].pos = com
system.part[0].mass = n_col_part
system.part[0].rinertia = np.ones(3) * momI
# Convert the surface particles to virtual sites related to the central particle
# The id of the central particles is 0, the ids of the surface particles start at 1.
for p in system.part[1:]:
p.vs_auto_relate_to(0)
print("# Adding the positive ions")
salt_rho = 0.001 # Number density of ions
volume = system.volume()
N_counter_ions = int(round((volume * salt_rho) + abs(q_col)))
i = 0
while i < N_counter_ions:
pos = np.random.random(3) * system.box_l
# make sure the ion is placed outside of the colloid
if (np.power(np.linalg.norm(pos - center), 2) > np.power(radius_col, 2) + 1):
system.part.add(pos=pos, type=2, q=1)
i += 1
print("# Added {} positive ions".format(N_counter_ions))
print("\n# Adding the negative ions")
N_co_ions = N_counter_ions - abs(q_col)
i = 0
while i < N_co_ions:
pos = np.random.random(3) * system.box_l
# make sure the ion is placed outside of the colloid
if (np.power(np.linalg.norm(pos - center), 2) > np.power(radius_col, 2) + 1):
system.part.add(pos=pos, type=3, q=-1)
i += 1
print("# Added {} negative ions".format(N_co_ions))
# Check charge neutrality
assert np.abs(np.sum(system.part[:].q)) < 1E-10
# WCA interactions for the ions, essentially giving them a finite volume
system.non_bonded_inter[0, 2].lennard_jones.set_params(
epsilon=eps_ss, sigma=sig_ss,
cutoff=sig_ss * pow(2., 1. / 6.), shift="auto", offset=sig_cs - 1 + a_eff)
system.non_bonded_inter[0, 3].lennard_jones.set_params(
epsilon=eps_ss, sigma=sig_ss,
cutoff=sig_ss * pow(2., 1. / 6.), shift="auto", offset=sig_cs - 1 + a_eff)
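# Added note (interpretation of the parameters above, not from the original text):
# the `offset` of sig_cs - 1 + a_eff shifts the ion-colloid repulsion outward so
# that ions effectively feel the colloid's hydrodynamic surface rather than its
# center; a_eff accounts for the finite resolution of the LB fluid.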
system.non_bonded_inter[2, 2].wca.set_params(epsilon=eps_ss, sigma=sig_ss)
system.non_bonded_inter[2, 3].wca.set_params(epsilon=eps_ss, sigma=sig_ss)
system.non_bonded_inter[3, 3].wca.set_params(epsilon=eps_ss, sigma=sig_ss)
print("\n# Equilibrating the ions (without electrostatics):")
# Langevin thermostat for warmup before turning on the LB.
temperature = 1.0
system.thermostat.set_langevin(kT=temperature, gamma=1.)
print("Removing overlap between ions")
ljcap = 100
CapSteps = 100
for i in range(CapSteps):
system.force_cap = ljcap
system.integrator.run(integ_steps)
ljcap += 5
system.force_cap = 0
# Turning on the electrostatics
# Note: Production runs would typically use a target accuracy of 10^-4
print("\n# Tuning P3M parameters...")
bjerrum = 2.
p3m = electrostatics.P3M(prefactor=bjerrum * temperature, accuracy=0.001)
system.actors.add(p3m)
print("# Tuning complete")
E = 0.1 # an electric field of 0.1 is the upper limit of the linear response regime for this model
Efield = np.array([E, 0, 0])
for p in system.part:
p.ext_force = p.q * Efield
system.part[:].v = (0, 0, 0)
lb = espressomd.lb.LBFluidGPU(kT=temperature, seed=42, dens=1., visc=3., agrid=1., tau=system.time_step)
system.actors.add(lb)
system.thermostat.turn_off()
system.thermostat.set_lb(LB_fluid=lb, seed=123, gamma=20.0)
# Reset the simulation clock
system.time = 0
initial_pos = system.part[0].pos
num_iterations = 1000
num_steps_per_iteration = 1000
with open('posVsTime.dat', 'w') as f: # file where the raspberry trajectory will be written to
for i in range(num_iterations):
system.integrator.run(num_steps_per_iteration)
pos = system.part[0].pos - initial_pos
f.write("%.2f %.4f %.4f %.4f\n" % (system.time, pos[0], pos[1], pos[2]))
print("# time: {:.0f} ({:.0f}%), col_pos: {}".format(
system.time, (i + 1) * 100. / num_iterations, np.around(pos, 1), end='\r'))
print("\n# Finished")
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
trajectory_file = 'posVsTime.dat'
trajectory = np.loadtxt(trajectory_file)[:, 1:4]
# optional: trajectory smoothing with a running average
N = 6
trajectory = np.array(
[np.convolve(trajectory[:, i], np.ones((N,)) / N, mode='valid') for i in range(3)])
# calculate bounding box (cubic box to preserve scaling)
trajectory_range = np.max(trajectory, axis=1) - np.min(trajectory, axis=1)
mid_range = np.median(trajectory, axis=1)
max_range = 1.01 * np.max(np.abs(trajectory_range))
bbox = np.array([mid_range - max_range / 2, mid_range + max_range / 2])
# 3D plot
fig = plt.figure(figsize=(9, 6))
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
ax.set_xlim(*bbox[:, 0])
ax.set_ylim(*bbox[:, 1])
ax.set_zlim(*bbox[:, 2])
ax.text(*trajectory[:, 0], '\u2190 start', 'y')
ax.scatter(*trajectory[:, 0])
ax.plot(*trajectory)
plt.tight_layout()
plt.rcParams.update({'font.size': 14})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mh', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Select data
Step2: Train models
|
<ASSISTANT_TASK:>
Python Code:
import os
import warnings
import tqdm
import numpy as np
import pandas as pd
warnings.simplefilter(action='ignore', category=pd.errors.PerformanceWarning)
%load_ext autoreload
%autoreload 2
import socceraction.spadl as spadl
import socceraction.vaep.features as fs
import socceraction.vaep.labels as lab
# Configure file and folder names
datafolder = "../data-fifa"
spadl_h5 = os.path.join(datafolder, "spadl-statsbomb.h5")
features_h5 = os.path.join(datafolder, "features.h5")
labels_h5 = os.path.join(datafolder, "labels.h5")
predictions_h5 = os.path.join(datafolder, "predictions.h5")
# Create a train and test set of games
games = pd.read_hdf(spadl_h5, "games")
traingames = games[:len(games)//2]
testgames = games[len(games)//2:]
print(len(traingames), len(testgames))
# Select shots from the data and all available info about these shots
def get_shots(games):
shots = []
with pd.HDFStore(spadl_h5) as spadlstore,\
pd.HDFStore(features_h5) as featurestore:
for game_id in tqdm.tqdm(games.game_id, desc="Selecting features"):
ai = spadl.add_names(spadlstore[f"actions/game_{game_id}"])
shot_idx = ai.type_name.str.contains("shot")
Xi = featurestore[f"game_{game_id}"]
shots.append(Xi[shot_idx])
return pd.concat(shots)
train_shots = get_shots(traingames)
test_shots = get_shots(testgames)
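# Quick sanity check (added; not part of the original notebook): the size of each
# split and the fraction of shots that were goals, using the result_success_a0
# column that also serves as the label below.
print("train: %d shots, %.3f goal rate" % (len(train_shots), train_shots.result_success_a0.mean()))
print("test: %d shots, %.3f goal rate" % (len(test_shots), test_shots.result_success_a0.mean()))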
# Decide which features to use to compute the expected goals value of the shots
from re import match
xfns = [
fs.actiontype_onehot,
fs.bodypart_onehot,
fs.startlocation,
fs.movement,
fs.space_delta,
fs.startpolar,
fs.team,
]
nb_prev_actions = 2
f = fs.feature_column_names(xfns, nb_prev_actions)
f = list(filter(lambda v: not match('type_[a-z_]+_a0', v), f))
f.remove("dx_a0")
f.remove("dy_a0")
f.remove("movement_a0")
f
# Create features-matrix X and label-vector y.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss, log_loss
from xgboost import XGBClassifier
def Xy(f,shots):
return shots[f], shots.result_success_a0
# Logistic regression
X,y = Xy(f, train_shots)
model = LogisticRegression().fit(X, y)
X,y = Xy(f, test_shots)
pred = [p[1] for p in model.predict_proba(X)]
print("ROC AUC: %.3f" % roc_auc_score(y, pred))
print("Brier score: %.3f" % brier_score_loss(y, pred))
print("Log loss: %.3f" % log_loss(y, pred))
# XGBoost
X,y = Xy(f, train_shots)
model = XGBClassifier().fit(X, y)
X,y = Xy(f, test_shots)
pred = [p[1] for p in model.predict_proba(X)]
print("ROC AUC: %.3f" % roc_auc_score(y, pred))
print("Brier score: %.3f" % brier_score_loss(y, pred))
print("Log loss: %.3f" % log_loss(y, pred))
# Naive baseline, always predict class distribution
X,y = Xy(f, train_shots)
avgP = np.mean(y)
X,y = Xy(f, test_shots)
pred = [avgP for _i in y]
print("ROC AUC: %.3f" % roc_auc_score(y, pred))
print("Brier score: %.3f" % brier_score_loss(y, pred))
print("Log loss: %.3f" % log_loss(y, pred))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> 1. Refactor the input </h2>
Step2: <h2> 2. Refactor the way features are created. </h2>
Step3: <h2> Create and train the model </h2>
Step4: <h3> Evaluate model </h3>
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
from google.cloud import bigquery
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
# TODO: Create an appropriate input function read_dataset
def read_dataset(filename, mode):
#TODO Add CSV decoder function and dataset creation and methods
return dataset
def get_train_input_fn():
return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
def get_valid_input_fn():
return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)
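# One possible implementation of the read_dataset TODO above (a sketch, not the
# official lab solution). It redefines the stub so the cells below can run; the
# batch size of 512 and the shuffle buffer size are assumptions.
def read_dataset(filename, mode, batch_size=512):
    def decode_csv(row):
        columns = tf.io.decode_csv(row, record_defaults=DEFAULTS)
        features = dict(zip(CSV_COLUMNS, columns))
        label = features.pop('fare_amount')
        return features, label
    dataset = tf.data.TextLineDataset(filename).map(decode_csv)
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.shuffle(buffer_size=10 * batch_size).repeat(None)
    else:
        dataset = dataset.repeat(1)
    return dataset.batch(batch_size)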
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.compat.v1.estimator.LinearRegressor(
feature_columns = feature_cols, model_dir = OUTDIR)
model.train(input_fn = get_train_input_fn, steps = 200)
metrics = model.evaluate(input_fn = get_valid_input_fn, steps = None)
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Allow the Cloud ML Engine service account to read/write to the bucket containing training data.
Step2: <h2> Packaging up the code </h2>
Step3: <h2> Find absolute paths to your data </h2>
Step4: <h2> Running the Python module from the command-line </h2>
Step5: <h2> Running locally using gcloud </h2>
Step6: When I ran it (due to random seeds, your results will be different), the average_loss (Mean Squared Error) on the evaluation dataset was 187, meaning that the RMSE was around 13.
Step7: <h2> Submit training job using gcloud </h2>
Step8: Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud.
Step9: <h2> Prediction </h2>
Step10: <h2> Train on larger dataset </h2>
|
<ASSISTANT_TASK:>
Python Code:
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1' # Choose an available region for Cloud MLE from https://cloud.google.com/ml-engine/docs/regions.
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.1' # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
PROJECT_ID=$PROJECT
AUTH_TOKEN=$(gcloud auth print-access-token)
SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \
| python -c "import json; import sys; response = json.load(sys.stdin); \
print(response['serviceAccount'])")
echo "Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET"
gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET
gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored
gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET
!find taxifare
!cat taxifare/trainer/model.py
%%bash
echo $PWD
rm -rf $PWD/taxi_trained
cp $PWD/../tensorflow/taxi-train.csv .
cp $PWD/../tensorflow/taxi-valid.csv .
head -1 $PWD/taxi-train.csv
head -1 $PWD/taxi-valid.csv
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m trainer.task \
--train_data_paths="${PWD}/taxi-train*" \
--eval_data_paths=${PWD}/taxi-valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=1000 --job-dir=./tmp
%%bash
ls $PWD/taxi_trained/export/exporter/
%%writefile ./test.json
{"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2}
## local predict doesn't work with Python 3 yet
#%bash
#model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
#gcloud ai-platform local predict \
# --model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
# --json-instances=./test.json
%%bash
rm -rf taxifare.tar.gz taxi_trained
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
-- \
--train_data_paths=${PWD}/taxi-train.csv \
--eval_data_paths=${PWD}/taxi-valid.csv \
--train_steps=1000 \
--output_dir=${PWD}/taxi_trained
!ls $PWD/taxi_trained
%%bash
echo $BUCKET
gsutil -m rm -rf gs://${BUCKET}/taxifare/smallinput/
gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/taxifare/smallinput/
%%bash
OUTDIR=gs://${BUCKET}/taxifare/smallinput/taxi_trained
JOBNAME=lab3a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=2.1 \
--python-version=3.7 \
-- \
--train_data_paths="gs://${BUCKET}/taxifare/smallinput/taxi-train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/smallinput/taxi-valid*" \
--output_dir=$OUTDIR \
--train_steps=10000
%%bash
gsutil cp -r ${PWD}/taxi_trained gs://${BUCKET}/taxifare/smallinput/
gsutil ls gs://${BUCKET}/taxifare/smallinput/taxi_trained/export/exporter
%%bash
MODEL_NAME="taxifare"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/taxifare/smallinput/taxi_trained/export/exporter | tail -1)
echo "Run these commands one-by-one (the very first time, you'll create a model and then create a version)"
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION --region global
%%bash
gcloud ai-platform predict --model=taxifare --version=v1 --json-instances=./test.json
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
request_data = {'instances':
[
{
'pickuplon': -73.885262,
'pickuplat': 40.773008,
'dropofflon': -73.987232,
'dropofflat': 40.732403,
'passengers': 2,
}
]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'taxifare', 'v1')
response = api.projects().predict(body=request_data, name=parent).execute()
print("response={0}".format(response))
%%bash
XXXXX this takes 60 minutes. if you are sure you want to run it, then remove this line.
OUTDIR=gs://${BUCKET}/taxifare/ch3/taxi_trained
JOBNAME=lab3a_$(date -u +%y%m%d_%H%M%S)
CRS_BUCKET=cloud-training-demos # use the already exported data
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=2.1 \
--python-version=3.7 \
-- \
--train_data_paths="gs://${CRS_BUCKET}/taxifare/ch3/train.csv" \
--eval_data_paths="gs://${CRS_BUCKET}/taxifare/ch3/valid.csv" \
--output_dir=$OUTDIR \
--train_steps=100000
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
l = [('A', 'a', '1'), ('A', 'b', '2'), ('B','a', '1'), ('A', 'b', '1'), ('B','b', '1'), ('A', 'a', '2')]
np.random.seed(1)
df = pd.DataFrame(np.random.randn(5, 6), columns=l)
def g(df):
df=df[sorted(df.columns.to_list())]
df.columns = pd.MultiIndex.from_tuples(df.columns, names=['Caps','Middle','Lower'])
return df
df = g(df.copy())
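# Quick check (added): the columns are now a sorted three-level MultiIndex, so the
# frame can be sliced by the outer levels.
print(df.columns)
print(df['A']['a'])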
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fully-Connected Neural Nets
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
Step9: Loss layers
Step10: Two-layer network
Step11: Solver
Step12: Multilayer network
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Step20: Test you model
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print 'Testing affine_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print 'Testing initialization ... '
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print 'Testing test-time forward pass ... '
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print 'Testing training loss (no regularization)'
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print 'Running numeric gradient check with reg = ', reg
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
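# One possible configuration for the TODO above (a sketch; the hyperparameters are
# assumptions, not a reference solution). It reuses the `model` created above so the
# visualization cell below has a trained `solver` to plot.
solver = Solver(model, data,
                update_rule='sgd',
                optim_config={'learning_rate': 1e-3},
                num_epochs=10, batch_size=100,
                print_every=100)
solver.train()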
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-4
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-5
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print 'next_w error: ', rel_error(next_w, expected_next_w)
print 'velocity error: ', rel_error(expected_velocity, config['velocity'])
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'cache error: ', rel_error(expected_cache, config['cache'])
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'v error: ', rel_error(expected_v, config['v'])
print 'm error: ', rel_error(expected_m, config['m'])
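# For reference, the update being tested is the standard Adam rule with bias
# correction (details such as when t is incremented are defined in optim.py):
#   m = beta1 * m + (1 - beta1) * dw
#   v = beta2 * v + (1 - beta2) * dw**2
#   m_hat = m / (1 - beta1**t);  v_hat = v / (1 - beta2**t)
#   w -= learning_rate * m_hat / (np.sqrt(v_hat) + epsilon)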
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might  #
# find batch normalization and dropout useful. Store your best model in the   #
# best_model variable. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
y_test_pred = np.argmax(best_model.loss(X_test), axis=1)
y_val_pred = np.argmax(best_model.loss(X_val), axis=1)
print 'Validation set accuracy: ', (y_val_pred == y_val).mean()
print 'Test set accuracy: ', (y_test_pred == y_test).mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Operations on Tensors
Step2: Point-wise operations
Step3: NumPy Interoperability
Step4: You can convert a native TF tensor to a NumPy array using .numpy()
Step5: Linear Regression
Step6: Let's also create a test dataset to evaluate our models
Step7: Loss Function
Step8: Using mean squared error, our loss is
Step9: This value of the MSE loss will give us a baseline against which to compare how a more complex model is doing.
Step10: Gradient Function
Step11: Training Loop
Step12: Now let's compare the test loss for this linear regression to the test loss from the baseline model that always outputs the mean of the training set
Step13: This is indeed much better!
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1 || pip install tensorflow==2.1
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
print(tf.__version__)
x = tf.constant([2, 3, 4])
x
x = tf.Variable(2.0, dtype=tf.float32, name="my_variable")
x.assign(45.8)
x
x.assign_add(4)
x
x.assign_sub(3)
x
# TODO 1a
a = # TODO -- Your code here.
b = # TODO -- Your code here.
c = # TODO -- Your code here.
d = # TODO -- Your code here.
print("c:", c)
print("d:", d)
# TODO 1b
a = # TODO -- Your code here.
b = # TODO -- Your code here.
c = # TODO -- Your code here.
d = # TODO -- Your code here.
print("c:", c)
print("d:", d)
# TODO 1c
# tf.math.exp expects floats so we need to explicitly give the type
a = # TODO -- Your code here.
b = # TODO -- Your code here.
print("b:", b)
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py)
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np)
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf)
a_tf.numpy()
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
print(f"X:{X}")
print(f"Y:{Y}")
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
print(f"X_test:{X_test}")
print(f"Y_test:{Y_test}")
y_mean = Y.numpy().mean()
def predict_mean(X):
y_hat = [y_mean] * len(X)
return y_hat
Y_hat = predict_mean(X_test)
errors = (Y_hat - Y) ** 2
loss = tf.reduce_mean(errors)
loss.numpy()
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
# TODO 2
def compute_gradients(X, Y, w0, w1):
# TODO -- Your code here.
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
# TODO 3
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
dw0, dw1 = # TODO -- Your code here.
if step % 100 == 0:
loss = # TODO -- Your code here.
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-(X**2))
%matplotlib inline
plt.plot(X, Y)
def make_features(X):
f1 = tf.ones_like(X) # Bias.
f2 = X
f3 = tf.square(X)
f4 = tf.sqrt(X)
f5 = tf.exp(X)
return tf.stack([f1, f2, f3, f4, f5], axis=1)
def predict(X, W):
return tf.squeeze(X @ W, -1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, W):
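    # Note: loss_mse below is evaluated on the global feature matrix Xf rather than
    # the X argument passed in, so this function relies on Xf being defined outside.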
with tf.GradientTape() as tape:
loss = loss_mse(Xf, Y, W)
return tape.gradient(loss, W)
STEPS = 2000
LEARNING_RATE = 0.02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
dW = compute_gradients(X, Y, W)
W.assign_sub(dW * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
print(f"STEP: {STEPS} MSE: {loss_mse(Xf, Y, W)}")
plt.figure()
plt.plot(X, Y, label="actual")
plt.plot(X, predict(Xf, W), label="predicted")
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A three-dimensional list can be created in a similar way.
Step2: Most often, two-dimensional lists with the same number of elements in every row are used. Such a two-dimensional list can be called a matrix.
Step3: Accessing and modifying elements
Step4: Creating an n x m matrix
Step5: But it is not quite that simple...
Step6: Not only the second row changed, but all the other rows as well. This happened because, after repeating the list, every row points to the same underlying list.
Step7: Reading a matrix from input
Step8: Or, more concisely, with a list comprehension
|
<ASSISTANT_TASK:>
Python Code:
lst_2d = [
[2, 4, 'unicorn'],
[False, 39],
[None],
]
lst_3d = [
[[1, 1, 2], [3, 5], [8, 13]],
[[21, 34], [55]]
]
matrix = [
[0, 0, 1, 5],
[1, 0, 2, 0],
[0, 3, 1, 0],
]
matrix = [
[0, 0, 1, 5],
[1, 0, 2, 0],
[0, 3, 1, 0],
]
print(matrix[1][2])
matrix[0][1] = 9
for row in matrix:
print(*row)
n, m = 3, 4
matrix = [[0] * m] * n
n, m = 3, 4
matrix = [[0] * m] * n
# Change the third element in the second row
matrix[1][2] = 1
for row in matrix:
print(*row)
n, m = 3, 4
matrix = [[0] * m for i in range(n)]
matrix[1][2] = 1
for row in matrix:
print(*row)
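# Illustrative check (not in the original text): each row built by the list
# comprehension is a distinct list object, so mutating one row does not affect the rest.
print(len({id(row) for row in matrix}))  # prints 3: three distinct row objects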
n, m = map(int, input().split())
matrix = []
for i in range(n):
matrix.append(list(map(int, input().split())))
n, m = map(int, input().split())
matrix = [list(map(int, input().split())) for i in range(n)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2 - Outline of the Assignment
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: <table style="width
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
W, b = parameters['W' + str(l)], parameters['b' + str(l)]
A, cache = linear_activation_forward(A_prev, W, b, 'relu')
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
W, b = parameters['W' + str(L)], parameters['b' + str(L)]
AL, cache = linear_activation_forward(A, W, b, 'sigmoid')
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
logprobs = np.multiply(np.log(AL), Y) + np.multiply((1-Y), np.log(1 - AL))
cost = - np.sum(logprobs) / m
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1.0 / m * np.dot(dZ, A_prev.T)
db = 1.0 / m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, 'sigmoid')
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+2)], current_cache, 'relu')
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.1
Step2: 1.1
Step3: 1.2
Step4: 1.3
Step5: 1.4
Step6: 1.5 (Multi-Part)
Step7: b. Use the print() function to print your list.
Step8: c. Use the print() function to print out the middle element.
Step9: d. Now replace the middle element with a different item, your favorite song, or song bird.
Step10: e. Use the same print statement from b. to print your new list. Check out the differences.
Step11: f. Add a new element to the end. Read about append().
Step12: g. Add a new element to the beginning. Read about insert().
Step13: h. Add a new element somewhere other than the beginning or the end.
Step14: 1.6
Step15: Question 2
Step16: a. look up the motif for a particular SacII enzyme
Step17: b. add below two enzymes and their motifs to dictionary
Step18: 2.2
Step19: 2.3
Step20: Extra Practice
|
<ASSISTANT_TASK:>
Python Code:
#type your code here
runningTotal = 0
listOfNumbers = [4,7,9,1,8,6]
#type your code here
print(listOfNumbers)
print("The average of these numbers is {0:.2f}".format(average))
word = "Python"
print(len(word))
#type your code here
numbers = (1, 2, 3, 4, 5, 6, 7, 8, 9) # Declaring the tuple
count_odd = 0
count_even = 0
#type your code here
print("Number of even numbers :",count_even)
print("Number of odd numbers :",count_odd)
motif = "GAATTC"
count = 0
dna_strings = ['AGTGAACCGTCAGATCCGCTAGCGCGAATTC','GGAGACCGACACCCTCCTGCTATGGGTGCTGCTGCTC','TGGGTGCCCGGCAGCACCGGCGACGCACCGGTCGC',
'CACCATGGTGAGCAAGGGCGAGGAGAATAACATGGCC','ATCATCAAGGAGTTCATGCGCTTCAAGAATTC','CATGGAGGGCTCCGTGAACGGCCACGAGTTCGAGA'
,'TCGAGGGCGAGGGCGAGGGCCGCCCCTACGAGGCCTT']
#type your code
#type your code here
#type your code
#type your code
#type your code
#type your code
#type your code
#type your code
#type your code
#type your code
#type your code
#type your code
#type your code
#type your code
dna = 'AAATTCGTGACTGTAA'
#type your code here
#type your code here
sequences=['ATGCCCGGCCCGGC','GCGTGCTAGCAATACGATAAACCGG', 'ATATATATCGAT','ATGGGCCC']
#type your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Frequency analysis
Step3: Now let's take a look at the spatial distributions of the PSD.
Step4: Alternatively, you can also create PSDs from Epochs objects with functions
Step5: Notably,
Step6: Lastly, we can also retrieve the unaggregated segments by passing
Step7: Time-frequency analysis
Step8: Inspect power
Step9: Joint Plot
Step10: Inspect ITC
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
# Richard Höchenberger <richard.hoechenberger@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet, psd_multitaper, psd_welch
from mne.datasets import somato
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False)
# Construct Epochs
event_id, tmin, tmax = 1, -1., 3.
baseline = (None, 0)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6),
preload=True)
epochs.resample(200., npad='auto') # resample to reduce computation time
epochs.plot_psd(fmin=2., fmax=40., average=True, spatial_colors=False)
epochs.plot_psd_topomap(ch_type='grad', normalize=True)
f, ax = plt.subplots()
psds, freqs = psd_multitaper(epochs, fmin=2, fmax=40, n_jobs=1)
psds = 10. * np.log10(psds)
psds_mean = psds.mean(0).mean(0)
psds_std = psds.mean(0).std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD (gradiometers)', xlabel='Frequency (Hz)',
ylabel='Power Spectral Density (dB)')
plt.show()
# Estimate PSDs based on "mean" and "median" averaging for comparison.
kwargs = dict(fmin=2, fmax=40, n_jobs=1)
psds_welch_mean, freqs_mean = psd_welch(epochs, average='mean', **kwargs)
psds_welch_median, freqs_median = psd_welch(epochs, average='median', **kwargs)
# Convert power to dB scale.
psds_welch_mean = 10 * np.log10(psds_welch_mean)
psds_welch_median = 10 * np.log10(psds_welch_median)
# We will only plot the PSD for a single sensor in the first epoch.
ch_name = 'MEG 0122'
ch_idx = epochs.info['ch_names'].index(ch_name)
epo_idx = 0
_, ax = plt.subplots()
ax.plot(freqs_mean, psds_welch_mean[epo_idx, ch_idx, :], color='k',
ls='-', label='mean of segments')
ax.plot(freqs_median, psds_welch_median[epo_idx, ch_idx, :], color='k',
ls='--', label='median of segments')
ax.set(title='Welch PSD ({}, Epoch {})'.format(ch_name, epo_idx),
xlabel='Frequency (Hz)', ylabel='Power Spectral Density (dB)')
ax.legend(loc='upper right')
plt.show()
psds_welch_unagg, freqs_unagg = psd_welch(epochs, average=None, **kwargs)
print(psds_welch_unagg.shape)
# define frequencies of interest (log-spaced)
freqs = np.logspace(*np.log10([6, 35]), num=8)
n_cycles = freqs / 2. # different number of cycle per frequency
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True,
return_itc=True, decim=3, n_jobs=1)
power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power')
power.plot([82], baseline=(-0.5, 0), mode='logratio', title=power.ch_names[82])
fig, axis = plt.subplots(1, 2, figsize=(7, 4))
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,
baseline=(-0.5, 0), mode='logratio', axes=axis[0],
title='Alpha', show=False)
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,
baseline=(-0.5, 0), mode='logratio', axes=axis[1],
title='Beta', show=False)
mne.viz.tight_layout()
plt.show()
power.plot_joint(baseline=(-0.5, 0), mode='mean', tmin=-.5, tmax=2,
timefreqs=[(.5, 10), (1.3, 8)])
itc.plot_topo(title='Inter-Trial coherence', vmin=0., vmax=1., cmap='Reds')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use XLA with tf.function
Step2: Then define some necessary constants and prepare the MNIST dataset.
Step3: Finally, define the model and the optimizer. The model uses a single dense layer.
Step4: Define the training function
Step5: Train and test the model
Step6: And, finally, check the accuracy
Step7: Behind the scenes, the XLA compiler has compiled the entire TF function to HLO, which has enabled fusion optimizations. Using the introspection facilities, we can see the HLO code (other interesting possible values for "stage" are optimized_hlo for HLO after optimizations and optimized_hlo_dot for a Graphviz graph)
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
# Size of each input image, 28 x 28 pixels
IMAGE_SIZE = 28 * 28
# Number of distinct number labels, [0..9]
NUM_CLASSES = 10
# Number of examples in each training batch (step)
TRAIN_BATCH_SIZE = 100
# Number of training steps to run
TRAIN_STEPS = 1000
# Loads MNIST dataset.
train, test = tf.keras.datasets.mnist.load_data()
train_ds = tf.data.Dataset.from_tensor_slices(train).batch(TRAIN_BATCH_SIZE).repeat()
# Casting from raw data to the required datatypes.
def cast(images, labels):
images = tf.cast(
tf.reshape(images, [-1, IMAGE_SIZE]), tf.float32)
labels = tf.cast(labels, tf.int64)
return (images, labels)
layer = tf.keras.layers.Dense(NUM_CLASSES)
optimizer = tf.keras.optimizers.Adam()
@tf.function(jit_compile=True)
def train_mnist(images, labels):
images, labels = cast(images, labels)
with tf.GradientTape() as tape:
predicted_labels = layer(images)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=predicted_labels, labels=labels
))
layer_variables = layer.trainable_variables
grads = tape.gradient(loss, layer_variables)
optimizer.apply_gradients(zip(grads, layer_variables))
for images, labels in train_ds:
if optimizer.iterations > TRAIN_STEPS:
break
train_mnist(images, labels)
images, labels = cast(test[0], test[1])
predicted_labels = layer(images)
correct_prediction = tf.equal(tf.argmax(predicted_labels, 1), labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Prediction accuracy after training: %s" % accuracy)
print(train_mnist.experimental_get_compiler_ir(images, labels)(stage='hlo'))
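# As noted above, other introspection stages can be requested as well; these calls are
# left commented out here because their output is large (illustrative, not required):
# print(train_mnist.experimental_get_compiler_ir(images, labels)(stage='optimized_hlo'))
# print(train_mnist.experimental_get_compiler_ir(images, labels)(stage='optimized_hlo_dot'))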
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
Step4: What are the metrics for "holding the position"?
|
<ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = -1 # 252*4 + 28
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_train_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
n_levels=10)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.9999,
dyna_iterations=20,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 7
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
import pickle
with open('../../data/dyna_10000_states_full_training.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
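# Illustrative: the pickled agent can be restored later with
# with open('../../data/dyna_10000_states_full_training.pkl', 'rb') as f:
#     loaded_agent = pickle.load(f)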
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import scipy.integrate
import math
import numpy as np
def NDfx(x):
return((1/math.sqrt((2*math.pi)))*(math.e**((-.5)*(x**2))))
x = 2.5
u = 1
o2 = 3
norm = (x-u)/o2
prob = scipy.integrate.quad(NDfx, -np.inf, norm)[0]
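# Illustrative cross-check (using the same standardized value computed above): the
# closed-form standard-normal CDF at that point should match the numerical integral.
from scipy.stats import norm as std_normal
print(prob, std_normal.cdf(norm))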
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run solver
Step2: Plot dipole activations
Step3: Show the evoked response and the residual for gradiometers
Step4: Generate stc from dipoles
Step5: View in 2D and 3D ("glass" brain like 3D plot)
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left visual'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked = mne.pick_channels_evoked(evoked)
# We make the window slightly larger than what you'll eventually be interested
# in ([-0.05, 0.3]) to avoid edge effects.
evoked.crop(tmin=-0.1, tmax=0.4)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
# alpha_space regularization parameter is between 0 and 100 (100 is high)
alpha_space = 30. # spatial regularization parameter
# alpha_time parameter promotes temporal smoothness
# (0 means no temporal regularization)
alpha_time = 1. # temporal regularization parameter
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
loose=loose, depth=depth)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
evoked, forward, cov, alpha_space, alpha_time, loose=loose, depth=depth,
maxit=200, tol=1e-6, weights=stc_dspm, weights_min=8., debias=True,
wsize=16, tstep=4, window=0.05, return_as_dipoles=True,
return_residual=True)
# Crop to remove edges
for dip in dipoles:
dip.crop(tmin=-0.05, tmax=0.3)
evoked.crop(tmin=-0.05, tmax=0.3)
residual.crop(tmin=-0.05, tmax=0.3)
plot_dipole_amplitudes(dipoles)
# Plot dipole location of the strongest dipole with MRI slices
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# # Plot dipole locations of all dipoles with MRI slices
# for dip in dipoles:
# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
# subjects_dir=subjects_dir, mode='orthoview',
# idx='amplitude')
ylim = dict(grad=[-120, 120])
evoked.pick_types(meg='grad', exclude='bads')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
proj=True)
residual.pick_types(meg='grad', exclude='bads')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
proj=True)
stc = make_stc_from_dipoles(dipoles, forward['src'])
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1, fig_name="TF-MxNE (cond %s)"
% condition, modes=['sphere'], scale_factors=[1.])
time_label = 'TF-MxNE time=%0.2f ms'
clim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])
brain = stc.plot('sample', 'inflated', 'rh', views='medial',
clim=clim, time_label=time_label, smoothing_steps=5,
subjects_dir=subjects_dir, initial_time=150, time_unit='ms')
brain.add_label("V1", color="yellow", scalar_thresh=.5, borders=True)
brain.add_label("V2", color="red", scalar_thresh=.5, borders=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we load the competition data
Step2: Data visualization and initial study
Step3: As can be seen, there are 12 columns in total, 5 of which are categorical. From the numeric data shown in the describe table, the following can be deduced
Step4: As we can see, we are missing roughly 77% of the Cabin values, which is not good. For age we are missing almost 20% of the data, and 0.22% of the Embarked values are missing.
Step5: Now we will study the features of the DataFrame, comparing survivors against non-survivors.
Step6: A few conclusions can be drawn from the plots. Regarding age, most of the people on the Titanic were between 20 and 40 years old; roughly speaking, most of those between 20 and 30 did not survive, while those between 30 and 40 had a slightly better chance of surviving. We can also see that children between 0 and 10 years old had a better chance of surviving.
Step7: First, note that the diagonal equals 1. This is always the case: the correlation of a variable with itself is always maximal.
Step8: We can see that women between 20 and 40 years old had a higher probability of surviving, whereas for men this does not hold. On the other hand, boys from newborn up to about 10 years old had a better chance of surviving, and the same is observed for older women.
Step9: From these plots we can deduce that a larger proportion of the men who died belonged to class 3 compared with the other classes. For the male survivors, on the other hand, the ratio of class 1 survivors to class 3 survivors is almost identical.
Step10: In these plots we can see the age ranges in which men and women had the highest probability of surviving.
Step11: From the previous plot we can see that class 3 is mostly made up of young people, while class 1, although it covers the widest age range, is concentrated between 30 and 50 years of age.
Step12: From these plots we can see (this is the analysis done by Heads or Tails) that within first class there are two groups: the rich and the super rich.
Step13: Missing values
Step14: We can see that a large share of the 'Cabin' variable is missing (77.10% missing values). There is not much we can do about it.
Step15: They are two women, one aged 38 and one aged 62, neither traveling with relatives. We will decide which "Embarked" value fits these women best.
Step16: Here I simply compared the women who resemble the two with the missing value. Looking at the 'Fare' variable of this group, we see that only 1 woman paid 83.1583 while 3 women paid 86.5, so that is the group where I will place the two women.
Step17: Let's see who is missing the 'Fare' value in the test dataframe
Step18: What we will do is assign the average of all the 'Fare' values that share the same SibSp, Parch, Sex, Embarked and Pclass
Step19: Feature Engineering
Step20: Children
Step21: From this study we can see that very few children traveled in class 1, and apparently all of them survived. In classes 2 and 3 there were more boys and girls, a large percentage of whom survived. We can say that children in classes 2 and 3 had better chances of surviving.
Step22: From the first plot we can conclude that if someone survived, there is a higher probability (about 40%) that their age is known. Conversely, a person whose age is unknown had a survival chance of about 30%.
Step23: Here we can see what we had been studying earlier: people traveling with several relatives had a higher probability of surviving. We observe that the ideal was to travel with 1, 2 or up to 3 additional relatives.
Step24: As seen before, traveling alone meant a lower chance of surviving: 30.35% of those who traveled alone survived, versus 50% of those who did not travel alone.
Step25: Title
Step26: Now we need to look for patterns in the titles. To do so, we will see which different titles exist and how many people hold each one, and also compute the average age for each title.
Step27: As we can see, the Rev. and Dr. titles are held by very few people, so they would not contribute much information. On the other hand, we can observe that
Step28: We get ready for modeling
Step29: Modeling
Step30: Perceptron
|
<ASSISTANT_TASK:>
Python Code:
#%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy import stats
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
print('Train Dataset info')
train.head(10)
print('La cantidad de columnas en train es: %i y la cantidad de renglones es: %i'\
%(len(train.columns), len(train.index)))
train.describe()
print(train.info())
train.isnull().sum()
total = len(train.index)
sobrev = train[train['Survived'] == 1]
nosobrev = train[train['Survived'] == 0]
s = float(len(sobrev))
ns = float(len(nosobrev))
s_percent = s/total*100.0
sobre_color = 'green'
nosobre_color = 'red'
print('Sobrevivió el %.2f porciento. En el DataFrame hay un total de %i sobrevivientes y un\
total de %i no sobrevivientes' %(s_percent, s, ns))
plt.figure()
# Age study
sns.distplot(sobrev['Age'].dropna().values, bins=range(0, 100, 1), color=sobre_color, label='Sobrevivio')
sns.distplot(nosobrev['Age'].dropna().values, bins=range(0, 100, 1), color=nosobre_color, axlabel='Age', label='No Sobrevivio')
plt.show()
plt.figure()
# Sex study
sns.barplot('Sex', 'Survived', data=train, palette='Reds_d')
plt.show()
# Pclass study
plt.figure()
sns.barplot('Pclass','Survived', data=train)
plt.show()
# SibSp study
plt.figure()
sns.barplot('SibSp', 'Survived', data=train)
plt.show()
# Parch study
plt.figure()
sns.barplot('Parch', 'Survived', data=train)
plt.show()
# Embarked Study
plt.figure()
sns.barplot('Embarked', 'Survived', data=train)
plt.show()
# Fare study
plt.figure()
sns.distplot(sobrev['Fare'].dropna().values, bins=range(0, 513, 1), color=sobre_color, label='Sobrevivio')
sns.distplot(nosobrev['Fare'].dropna().values, bins=range(0, 513, 1), color=nosobre_color, label='No Sobrevivio', axlabel="Fare")
plt.show()
plt.figure(figsize=(14,12))
cm = sns.heatmap(train.drop('PassengerId', axis=1).corr(), vmax=0.6, square=True, annot=True)
plt.show()
msobre = train[(train['Survived'] == 1) & (train['Sex'] == 'male')]
mNosobre = train[(train['Survived'] == 0) & (train['Sex'] == 'male')]
fsobre = train[(train['Survived'] == 1) & (train['Sex'] == 'female')]
fNosobre = train[(train['Survived'] == 0) & (train['Sex'] == 'female')]
plt.figure(figsize=(18,6))
plt.subplot(121)
sns.distplot(msobre['Age'].dropna().values, bins=range(0, 100, 1), color=sobre_color, label='Sobrevivido')
sns.distplot(mNosobre['Age'].dropna().values, bins=range(0, 100, 1), color=nosobre_color, label='No Sobrevivido',
axlabel='Edad hombres')
plt.subplot(122)
sns.distplot(fsobre['Age'].dropna().values, bins=range(0, 100, 1), color=sobre_color, label='Sobrevivido')
sns.distplot(fNosobre['Age'].dropna().values, bins=range(0, 100, 1), color=nosobre_color, label='No Sobrevivido',
axlabel="Edad Mujeres")
plt.show()
male = train[(train['Sex'] == 'male')]
female = train[(train['Sex'] == 'female')]
tabMale = pd.crosstab(male['Survived'], male['Pclass'])
print('Hombres - Clase - Sobrevivientes')
print(tabMale)
tabFemale = pd.crosstab(female['Survived'], female['Pclass'])
print('\n\nMujeres - Clase - Sobrevivientes')
print(tabFemale)
plt.figure(figsize=(18,6))
plotMale = tabMale.div(tabMale.sum(1).astype(float), axis=0).plot(kind="bar", stacked=True)
plt.xlabel('Sobreviviente')
plt.ylabel('Porcentaje')
plt.title("Hombres")
plt.show()
plt.figure(figsize=(18,6))
plotFemale = tabFemale.div(tabFemale.sum(1).astype(float), axis=0).plot(kind="bar", stacked=True)
plt.title("Mujeres")
plt.xlabel('Sobreviviente')
plt.ylabel('Porcentaje')
plt.show()
# Class 1 Male
msobreClass1 = train[(train['Survived'] == 1) & (train['Sex'] == 'male') & (train['Pclass'] == 1)]
mnosobreClass1 = train[(train['Survived'] == 0) & (train['Sex'] == 'male') & (train['Pclass'] == 1)]
# Class 2 Male
msobreClass2 = train[(train['Survived'] == 1) & (train['Sex'] == 'male') & (train['Pclass'] == 2)]
mnosobreClass2 = train[(train['Survived'] == 0) & (train['Sex'] == 'male') & (train['Pclass'] == 2)]
# Class 3 Male
msobreClass3 = train[(train['Survived'] == 1) & (train['Sex'] == 'male') & (train['Pclass'] == 3)]
mnosobreClass3 = train[(train['Survived'] == 0) & (train['Sex'] == 'male') & (train['Pclass'] == 3)]
# Class 1 Female
fsobreClass1 = train[(train['Survived'] == 1) & (train['Sex'] == 'female') & (train['Pclass'] == 1)]
fnosobreClass1 = train[(train['Survived'] == 0) & (train['Sex'] == 'female') & (train['Pclass'] == 1)]
# Class 2 Female
fsobreClass2 = train[(train['Survived'] == 1) & (train['Sex'] == 'female') & (train['Pclass'] == 2)]
fnosobreClass2 = train[(train['Survived'] == 0) & (train['Sex'] == 'female') & (train['Pclass'] == 2)]
# Class 3 Female
fsobreClass3 = train[(train['Survived'] == 1) & (train['Sex'] == 'female') & (train['Pclass'] == 3)]
fnosobreClass3 = train[(train['Survived'] == 0) & (train['Sex'] == 'female') & (train['Pclass'] == 3)]
print("Graficos Hombres")
plt.figure(figsize=(16,6))
plt.subplot(131)
sns.distplot(msobreClass1['Age'].dropna().values, bins=range(0,100,1),
color=sobre_color, label='Sobreviviente Clase 1')
sns.distplot(mnosobreClass1['Age'].dropna().values, bins=range(0,100,1),
color=nosobre_color, label='No sobreviviente Clase 1', axlabel="Edad")
plt.title("Class 1")
plt.subplot(132)
sns.distplot(msobreClass2['Age'].dropna().values, bins=range(0,100,1),
color=sobre_color, label='Sobreviviente Clase 2')
sns.distplot(mnosobreClass2['Age'].dropna().values, bins=range(0,100,1),
color=nosobre_color, label='No sobreviviente Clase 2', axlabel="Edad")
plt.title("Class 2")
plt.subplot(133)
sns.distplot(msobreClass3['Age'].dropna().values, bins=range(0,100,1),
color=sobre_color, label='Sobreviviente Clase 3')
sns.distplot(mnosobreClass3['Age'].dropna().values, bins=range(0,100,1),
color=nosobre_color, label='No sobreviviente Clase 3', axlabel="Edad")
plt.title("Class 3")
plt.show()
print("Graficos mujeres")
plt.figure(figsize=(16,6))
plt.subplot(131)
sns.distplot(fsobreClass1['Age'].dropna().values, bins=range(0,100,1), color=sobre_color, label='Sobreviviente Clase 1')
sns.distplot(fnosobreClass1['Age'].dropna().values, bins=range(0,100,1), color=nosobre_color, label='No sobreviviente Clase 1')
plt.title("Class 1")
plt.subplot(132)
sns.distplot(fsobreClass2['Age'].dropna().values, bins=range(0,100,1), color=sobre_color, label='Sobreviviente Clase 2')
sns.distplot(fnosobreClass2['Age'].dropna().values, bins=range(0,100,1), color=nosobre_color, label='No sobreviviente Clase 2')
plt.title("Class 2")
plt.subplot(133)
sns.distplot(fsobreClass3['Age'].dropna().values, bins=range(0,100,1), color=sobre_color, label='Sobreviviente Clase 3')
sns.distplot(fnosobreClass3['Age'].dropna().values, bins=range(0,100,1), color=nosobre_color, label='No sobreviviente Clase 3')
plt.title("Class 3")
plt.show()
class1 = train[(train['Pclass'] == 1)]
class2 = train[(train['Pclass'] == 2)]
class3 = train[(train['Pclass'] == 3)]
plt.figure()
ax = sns.distplot(class1['Age'].dropna().values, bins=range(0, 100, 1),
label="Class 1")
ax.legend(loc="best")
ax = sns.distplot(class2['Age'].dropna().values, bins=range(0, 100, 1),
label='Class 2')
ax.legend(loc="best")
ax = sns.distplot(class3['Age'].dropna().values, bins=range(0, 100, 1),
label='Class 3', axlabel='Clases')
ax.legend(loc="best")
plt.show()
plt.figure(figsize=(12,10))
# Class 1
plt.subplot(311)
ax1 = sns.distplot(np.log10(sobrev['Fare'][sobrev['Pclass'] == 1].dropna().values+1),
color=sobre_color)
ax1 = sns.distplot(np.log10(nosobrev['Fare'][nosobrev['Pclass'] == 1].dropna().values+1),
color=nosobre_color, axlabel='Fare')
ax1.set_xlim(0, np.max(np.log10(train['Fare'].dropna().values)))
ax1.legend(loc="best")
# Class 2
plt.subplot(312)
ax2 = sns.distplot(np.log10(sobrev['Fare'][sobrev['Pclass'] == 2].dropna().values+1),
color=sobre_color)
ax2 = sns.distplot(np.log10(nosobrev['Fare'][nosobrev['Pclass'] == 2].dropna().values+1),
color=nosobre_color, axlabel='Fare')
ax2.set_xlim(0, np.max(np.log10(train['Fare'].dropna().values)))
ax2.legend(loc="best")
# Class 3
plt.subplot(313)
ax3 = sns.distplot(np.log10(sobrev['Fare'][sobrev['Pclass'] == 3].dropna().values+1),
color=sobre_color)
ax3 = sns.distplot(np.log10(nosobrev['Fare'][nosobrev['Pclass'] == 3].dropna().values+1),
color=nosobre_color, axlabel='Fare')
ax3.set_xlim(0, np.max(np.log10(train['Fare'].dropna().values)))
ax3.legend(loc="best")
# Config Plot
plt.subplots_adjust(top=1, bottom=0.08, left=0.10, right=1, hspace=0.25, wspace=0.35)
plt.show()
plt.figure(figsize=(12,10))
ax = sns.boxplot(x="Pclass", y="Fare", hue="Survived", data=train);
ax.set_yscale('log')
plt.show()
print("Train Dataframe")
print(train.info())
print("Test Dataframe")
print(test.info())
print(train[train['Embarked'].isnull()])
combine = pd.concat([train, test])
combine.where((combine['Pclass'] < 1.5) &
(combine['Sex'] == "female") &
(combine['SibSp'] == 0.0) &
(combine['Parch'] == 0.0) ).groupby(['Embarked','Pclass','Sex','Parch','SibSp', 'Fare', 'Survived']).size()
train['Embarked'].iloc[61] = "S"
train['Embarked'].iloc[829] = "S"
print(test[test['Fare'].isnull()])
combine = pd.concat([train, test])
test['Fare'].iloc[152] = combine['Fare'][(combine['Pclass'] == 3) &
(combine['Sex'] == "male") &
(combine['SibSp'] == 0.0) &
(combine['Parch'] == 0.0) &
(combine['Embarked'] == "S")].dropna().median()
print("El valor que se agregó fue: " + str(test['Fare'].iloc[152]))
# Defining combine
combine = pd.concat([train.drop('Survived', 1), test])
survived = train['Survived']
# creating feature eng
combine['Child'] = combine['Age'] <= 12
combine['Age_know'] = combine["Age"].isnull() == False
combine['Family'] = combine['SibSp'] + combine['Parch']
combine['Alone'] = (combine['SibSp'] + combine['Parch']) == 0
combine['Title'] = combine['Name'].str.split(', ', expand=True)[1].str.split('. ', expand=True)[0]
combine['Young'] = (combine['Age'] <= 30) & (combine['Age'] >= 12)
combine['Old'] = combine['Age'] >= 60
# come back to train and test data
train = combine.iloc[:len(train)]
test = combine.iloc[len(train):]
# add survived again
train['Survived'] = survived
# update sobrev and nosobrev
sobrev = train[train['Survived'] == 1]
nosobrev = train[train['Survived'] == 0]
tab = pd.crosstab(train['Child'], train['Pclass'])
print("\nNiños vs Clases")
print(tab)
tab = pd.crosstab(train['Child'], train['Sex'])
print("\nNiños vs Sexo")
print(tab)
tab = pd.crosstab(train['Child'], train['Survived'])
print("\nNiños vs Sobrevivientes")
print(tab)
plt.figure()
sns.factorplot(x="Sex", y="Survived", hue="Child", col='Pclass',
data=train, kind='bar')
plt.show()
tab = pd.crosstab(train['Age_know'], train['Survived'])
print("\nEdad conocida vs Sobrevivientes")
print(tab)
plt.figure()
buff = tab.div(tab.sum(1).astype('float'), axis=0).plot(kind='bar',stacked=True)
plt.xlabel('Edad conocida')
plt.ylabel('Porcentaje')
plt.show()
tab = pd.crosstab(train['Age_know'], train['Pclass'])
print("\nKnown age vs Class")
print(tab)
plt.figure()
buff = tab.div(tab.sum(1).astype('float'), axis=0).plot(kind='bar',stacked=True)
plt.xlabel('Known age')
plt.ylabel('Percentage')
plt.show()
tab = pd.crosstab(train['Age_know'], train['Sex'])
print("\nKnown age vs Sex")
print(tab)
plt.figure()
buff = tab.div(tab.sum(1).astype('float'), axis=0).plot(kind='bar',stacked=True)
plt.xlabel('Known age')
plt.ylabel('Percentage')
plt.show()
tab = pd.crosstab(train['Family'], train['Survived'])
print("\nFamily size vs Survivors")
print(tab)
plt.figure(figsize=(18,14))
buff = tab.div(tab.sum(1).astype('float'), axis=0).plot(kind='bar',stacked=True)
plt.xlabel('Number of family members')
plt.ylabel('Percentage')
plt.show()
tab = pd.crosstab(train['Alone'], train['Survived'])
print("\nSolo travelers vs Survivors")
print(tab)
plt.figure(figsize=(18,14))
buff = tab.div(tab.sum(1).astype('float'), axis=0).plot(kind='bar',stacked=True)
plt.xlabel('Solo travelers')
plt.ylabel('Percentage')
plt.show()
stats.binom_test(x=374,n=163+374,p=175/(175.+179.))
train.loc[:,['Name', 'Age']].head(10)
train.loc[:,['Name', 'Age', 'Title']].head(10)
print(combine['Age'].groupby(combine['Title']).count())
print(combine['Age'].groupby(combine['Title']).mean())
plt.figure(figsize=[12,10])
title = combine[combine['Title'].isin(['Mr', 'Mrs', 'Miss', 'Master', 'Rev', 'Dr'])]
foo = title['Age'].hist(by=title['Title'], bins=np.arange(0,80,1))
plt.show()
print(combine[combine['Title'].isin(["Mrs"])]['Age'].describe())
print("Young")
tab = pd.crosstab(train['Survived'], train['Young'])
print(tab)
sns.barplot('Young', 'Survived', data=train)
plt.show()
print("Old")
tab = pd.crosstab(train['Survived'], train['Old'])
print(tab)
sns.barplot('Old', 'Survived', data=train)
plt.show()
combine = pd.concat([train.drop('Survived',1), test])
survived = train['Survived']
combine['Sex'] = combine['Sex'].astype('category')
combine['Sex'].cat.categories = [0,1]
combine['Sex'] = combine['Sex'].astype('int')
combine["Embarked"] = combine["Embarked"].astype("category")
combine["Embarked"].cat.categories = [0,1,2]
combine["Embarked"] = combine["Embarked"].astype("int")
test = combine.iloc[len(train):]
train = combine.iloc[:len(train)]
train['Survived'] = survived
training, testing = train_test_split(train, test_size=0.2, random_state=0)
cols = ['Sex', 'Pclass', 'Child', 'Alone', 'Family', 'Age_know', 'Young', 'Old']
tcols = np.append(['Survived'], cols)
df = training.loc[:,tcols].dropna()
X = df.loc[:,cols]
y = np.ravel(df.loc[:,['Survived']])
clf_log = LogisticRegression()
clf_log = clf_log.fit(X,y)
score_log = cross_val_score(clf_log, X, y, cv=5).mean()
print(score_log)
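# Illustrative extra step (not part of the original): inspect the fitted
# logistic-regression coefficients next to the feature names in `cols`.
coef = pd.DataFrame({'feature': cols, 'coefficient': clf_log.coef_[0]})
print(coef.sort_values('coefficient'))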
clf_pctr = Perceptron(
class_weight='balanced'
)
clf_pctr = clf_pctr.fit(X,y)
score_pctr = cross_val_score(clf_pctr, X, y, cv=5).mean()
print(score_pctr)
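# Illustrative sketch (assumes the held-out `testing` split needs the same
# preprocessing as the training split): score both fitted classifiers on it.
df_holdout = testing.loc[:, tcols].dropna()
X_holdout = df_holdout.loc[:, cols]
y_holdout = np.ravel(df_holdout.loc[:, ['Survived']])
print('Logistic regression holdout accuracy:', clf_log.score(X_holdout, y_holdout))
print('Perceptron holdout accuracy:', clf_pctr.score(X_holdout, y_holdout))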
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lecture 7
Step3: Documenting Invariants
Step4: Accessing Documentation (1)
Step5: Accessing Documentation (2)
Step6: Testing
Step7: Principles of Testing
Step8: Test at the boundaries
Step9: We can also check to make sure the $a=0$ case is handled okay
Step11: When you get an error
Step12: Let's put our tests into one file.
Step15: Code Coverage
Step16: Run the tests and check code coverage
Step17: Run the tests, report code coverage, and report missing lines.
Step18: Run tests, including the doctests, report code coverage, and report missing lines.
Step19: Let's put some tests in for the linear roots function.
Step20: Now run the tests and check code coverage.
|
<ASSISTANT_TASK:>
Python Code:
def quad_roots(a=1.0, b=2.0, c=0.0):
Returns the roots of a quadratic equation: ax^2 + bx + c = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
def quad_roots(a=1.0, b=2.0, c=0.0):
Returns the roots of a quadratic equation: ax^2 + bx + c.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
NOTES
=====
PRE:
- a, b, c have numeric type
- three or fewer inputs
POST:
- a, b, and c are not changed by this function
- raises a ValueError exception if a = 0
- returns a 2-tuple of roots
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
quad_roots.__doc__.splitlines()
import pydoc
pydoc.doc(quad_roots)
import doctest
doctest.testmod(verbose=True)
def test_quadroots():
assert quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j))
test_quadroots()
def test_quadroots_types():
try:
quad_roots("", "green", "hi")
except TypeError as err:
assert(type(err) == TypeError)
test_quadroots_types()
def test_quadroots_zerocoeff():
try:
quad_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
test_quadroots_zerocoeff()
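# One more boundary case (illustrative addition): a zero discriminant should
# give a repeated root, e.g. x^2 + 2x + 1 = 0 has the double root x = -1.
def test_quadroots_repeated_root():
    assert quad_roots(1.0, 2.0, 1.0) == ((-1+0j), (-1+0j))

test_quadroots_repeated_root()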
%%file roots.py
def quad_roots(a=1.0, b=2.0, c=0.0):
Returns the roots of a quadratic equation: ax^2 + bx + c = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
%%file test_roots.py
import roots
def test_quadroots_result():
assert roots.quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j))
def test_quadroots_types():
try:
roots.quad_roots("", "green", "hi")
except TypeError as err:
assert(type(err) == TypeError)
def test_quadroots_zerocoeff():
try:
roots.quad_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
!pytest
%%file roots.py
def linear_roots(a=1.0, b=0.0):
Returns the roots of a linear equation: ax+ b = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of linear term
b: float, optional, default value is 0
Coefficient of constant term
RETURNS
========
roots: 1-tuple of real floats
Has the form (root) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> linear_roots(1.0, 2.0)
-2.0
if a == 0:
raise ValueError("The linear coefficient is zero. This is not a linear equation.")
else:
return ((-b / a))
def quad_roots(a=1.0, b=2.0, c=0.0):
Returns the roots of a quadratic equation: ax^2 + bx + c = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
!pytest --cov
!pytest --cov --cov-report term-missing
!pytest --doctest-modules --cov --cov-report term-missing
%%file test_roots.py
import roots
def test_quadroots_result():
assert roots.quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j))
def test_quadroots_types():
try:
roots.quad_roots("", "green", "hi")
except TypeError as err:
assert(type(err) == TypeError)
def test_quadroots_zerocoeff():
try:
roots.quad_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
def test_linearoots_result():
assert roots.linear_roots(2.0, -3.0) == 1.5
def test_linearroots_types():
try:
roots.linear_roots("ocean", 6.0)
except TypeError as err:
assert(type(err) == TypeError)
def test_linearroots_zerocoeff():
try:
roots.linear_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
!pytest --doctest-modules --cov --cov-report term-missing
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you get something like "Pandas version
Step2: Remind yourself
Step3: Question. Can you see consolidation here?
Step4: Answer these questions below. Code is sufficient, but it's often helpful to add comments to remind yourself what you did, and why.
Step5: Question 4. Japan's aging population
Step6: Comment. Now we have the number of people in any five-year age group running down columns. The column labels are the years.
Step7: Question 5. Dynamics of the yield curve
Step8: With the dataframe ylds
|
<ASSISTANT_TASK:>
Python Code:
# to make sure things are working, run this
import pandas as pd
print('Pandas version: ', pd.__version__)
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
%matplotlib inline
url = 'http://pages.stern.nyu.edu/~dbackus/Data/beer_production_1947-2004.xlsx'
beer = pd.read_excel(url, skiprows=12, index_col=0)
print('Dimensions:', beer.shape)
beer[list(range(1,11))].head(3)
vars = list(range(1,101)) # extract top 100 firms
pdf = beer[vars].T # transpose (flip rows and columns)
pdf[[1947, 1967, 1987, 2004]].head()
# a basic plot
fig, ax = plt.subplots()
pdf[1947].plot(ax=ax, logy=True)
pdf[1967].plot(ax=ax, logy=True)
pdf[1987].plot(ax=ax, logy=True)
pdf[2004].plot(ax=ax, logy=True)
ax.legend()
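# Illustrative follow-up on the consolidation question (assumption: the top
# 100 firms approximate the whole market): share of sales held by the four
# largest firms in each year plotted above.
cr4 = pdf.loc[1:4].sum() / pdf.sum()
print(cr4[[1947, 1967, 1987, 2004]])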
# for help
ax.set_title?
# this is easier if we put the basic plot in a function
def make_plot():
fig, ax = plt.subplots()
pdf[1947].plot(ax=ax, logy=True)
pdf[1967].plot(ax=ax, logy=True)
pdf[1987].plot(ax=ax, logy=True)
pdf[2004].plot(ax=ax, logy=True)
ax.legend()
return ax
ax = make_plot()
ax.set_title('Beer sales by industry rank', fontsize=14)
# line width: put lw=2 in each of the plot statements
ax = make_plot()
ax.set_xlabel('Industry Rank')
ax.set_ylabel('Sales (log scale)')
# log scale: otherwise the differences are too large
# we can't show the alternative because some of the numbers are zero
# color: we add color='somecolor' in each of the plot statements
# data input (takes about 20 seconds on a wireless network)
url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/'
url3 = 'WPP2015_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLS'
url = url1 + url2 + url3
cols = [2, 4, 5] + list(range(6,28))
prj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols, na_values=['…'])
print('Dimensions: ', prj.shape)
print('Column labels: ', prj.columns)
# rename some variables
pop = prj
pop = pop.rename(columns={'Reference date (as of 1 July)': 'Year',
'Major area, region, country or area *': 'Country',
'Country code': 'Code'})
# select countries and years
countries = ['Japan']
years = [2015, 2035, 2055, 2075, 2095]
pop = pop[pop['Country'].isin(countries) & pop['Year'].isin(years)]
pop = pop.drop(['Country', 'Code'], axis=1)
pop = pop.set_index('Year').T
pop = pop/1000 # convert population from thousands to millions
pop.head()
pop.tail()
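# Illustrative check of ageing (assumption: the rows are the UN's 5-year age
# bins starting at 0-4, so rows 13 onward cover ages 65 and above).
share_65_plus = pop.iloc[13:].sum() / pop.sum()
print(share_65_plus)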
pop[[2015]].plot()
pop[[2015]].plot(kind='bar')
# my fav
pop[[2015]].plot(kind='barh')
fig, ax = plt.subplots(figsize=(10,6))
pop.plot(ax=ax)
ax.set_title('Population by age')
ax.set_xlabel('Age Range')
ax.set_ylabel('Population (millions)')
pop.plot(kind='bar', subplots=True, figsize=(6,8), sharey=True)
# data input (takes about 20 seconds on a wireless network)
url = 'http://pages.stern.nyu.edu/~dbackus/Data/feds200628.csv'
gsw = pd.read_csv(url, skiprows=9, index_col=0, usecols=list(range(11)), parse_dates=True)
print('Dimensions: ', gsw.shape)
print('Column labels: ', gsw.columns)
print('Row labels: ', gsw.index)
# grab recent data
df = gsw[gsw.index >= dt.datetime(2010,1,1)]
# convert to annual, last day of year
df = df.resample('A', how='last').sort_index()
df.head()
df.columns = list(range(1,11))
ylds = df.T
ylds.head(3)
fig, ax = plt.subplots()
ylds.plot(ax=ax)
ax.set_title('US Treasury Yields')
ax.set_ylabel('Yield')
ax.set_xlabel('Maturity in Years')
ybar = ylds.mean(axis=1)
ybar.plot(ax=ax, color='black', linewidth=3, linestyle='dashed')
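# Illustrative extra: the slope of the curve (10-year minus 1-year yield) is a
# simple summary of how its shape changes across the years plotted above.
slope = ylds.loc[10] - ylds.loc[1]
print(slope)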
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After doing a pip install, click on Reset Session so that the Python environment picks up the new package
Step2: Preprocess JPEG images to TF Records
Step3: Run as a Python module
Step4: Then, run it on Cloud ML Engine with --use_tpu
Step5: Monitoring training with TensorBoard
Step6: Deploying and predicting with model
Step7: To predict with the model, let's take one of the example images that is available on Google Cloud Storage (a sunflower photo).
Step8: The online prediction service expects images to be base64 encoded as described here.
Step9: Send it to the prediction service
|
<ASSISTANT_TASK:>
Python Code:
%%bash
pip install apache-beam[gcp]
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = 'tpu'
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['MODEL_TYPE'] = MODEL_TYPE
os.environ['TFVERSION'] = '1.8' # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
%%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l
gsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l
%%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/flowersmodeltpu
gsutil -m rm -rf gs://${BUCKET}/tpu/flowers/data
python -m trainer.preprocess \
--train_csv gs://cloud-ml-data/img/flower_photos/train_set.csv \
--validation_csv gs://cloud-ml-data/img/flower_photos/eval_set.csv \
--labels_file /tmp/labels.txt \
--project_id $PROJECT \
--output_dir gs://${BUCKET}/tpu/flowers/data
%%bash
gsutil ls gs://${BUCKET}/tpu/flowers/data/
%%bash
WITHOUT_TPU="--train_batch_size=2 --train_steps=5"
OUTDIR=./flowers_trained
rm -rf $OUTDIR
export PYTHONPATH=${PYTHONPATH}:${PWD}/flowersmodeltpu
python -m flowersmodeltpu.task \
--output_dir=$OUTDIR \
--num_train_images=3300 \
--num_eval_images=370 \
$WITHOUT_TPU \
--learning_rate=0.01 \
--project=${PROJECT} \
--train_data_path=gs://${BUCKET}/tpu/flowers/data/train* \
--eval_data_path=gs://${BUCKET}/tpu/flowers/data/validation*
%%bash
WITH_TPU="--train_batch_size=256 --train_steps=3000 --batch_norm --use_tpu"
WITHOUT_TPU="--train_batch_size=2 --train_steps=5"
OUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE}_delete
JOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=flowersmodeltpu.task \
--package-path=${PWD}/flowersmodeltpu \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_TPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--num_train_images=3300 \
--num_eval_images=370 \
$WITH_TPU \
--learning_rate=0.01 \
--project=${PROJECT} \
--train_data_path=gs://${BUCKET}/tpu/flowers/data/train-* \
--eval_data_path=gs://${BUCKET}/tpu/flowers/data/validation-*
%%bash
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)
saved_model_cli show --dir $MODEL_LOCATION --all
from google.datalab.ml import TensorBoard
TensorBoard().start('gs://{}/flowers/trained_{}'.format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print 'Stopped TensorBoard with pid {}'.format(pid)
%%bash
MODEL_NAME="flowers"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud alpha ml-engine versions create ${MODEL_VERSION} --machine-type mls1-c4-m4 --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
%%bash
gcloud alpha ml-engine models list
%%bash
IMAGE_URL=gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg
# Copy the image to local disk.
gsutil cp $IMAGE_URL flower.jpg
# Base64 encode and create request message in json format.
python -c 'import base64, sys, json; img = base64.b64encode(open("flower.jpg", "rb").read()).decode(); print(json.dumps({"image_bytes":{"b64": img}}))' &> request.json
%%bash
gcloud ml-engine predict \
--model=flowers2 \
--version=${MODEL_TYPE} \
--json-instances=./request.json
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0.1 Directory Set up
Step2: 0.2 Display images
Step3: 1. Photometry set up
Step4: Define starting values. Fill in values here
Step5: Aperture photometry set up. Play around with adjusting the aperture radii sizes and see the resulting image under 'Tests'
Step8: 1.1 Centroiding
Step9: Use centroiding algorithm to find the actual centers of the target and comparison.
Step10: Inspect PSF to see whether shift makes sense
Step12: 1.2 Aperture Photometry
Step14: Sky annulus
Step15: Extract values from regions
Step16: Define new regions where the target and comparison are centered.
Step17: Place mask on region
Step18: Place mask on sky annulus slice.
Step19: 1.3 Tests
Step20: b. Display image with aperture mask and sky annulus
Step21: 2. Photometry
Step22: Sum all flux inside target and comparison apertures and divide by number of pixels to get average count per pixel.
Step23: 2.2 Optimize photometry aperture
Step24: 2.3 Calculate the target's magnitude and uncertainty
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import astropy.io.fits as fits
## make matplotlib appear in the notebook rather than in a new window
%matplotlib inline
datadir = ''
objname = '2016HO3'
def plotfits(imno):
img = fits.open(datadir+objname+'_{0:02d}.fits'.format(numb))[0].data
f = plt.figure(figsize=(10,12))
#im = plt.imshow(img, cmap='hot')
im = plt.imshow(img[480:580, 460:600], cmap='hot')
plt.clim(1800, 2800)
plt.colorbar(im, fraction=0.034, pad=0.04)
plt.savefig("figure{0}.png".format(imno))
plt.show()
numb = 1
plotfits(numb)
numb = 2
plotfits(numb)
partimg = fits.open(datadir+objname+'_01.fits'.format(numb))[0].data[480:580, 460:600]
targcen = np.array([##,##]) ## target center
compcen = np.array([##,##]) ## comparison center
searchr = 6 ## search box size
ap_r = 2 ## aperture radius
sky_inner = 3
sky_outer = 5
def cent_weight(n):
Assigns centroid weights
wghts=np.zeros((n),np.float)
for i in range(n):
wghts[i]=float(i-n/2)+0.5
return wghts
def calc_CoM(psf, weights):
Finds Center of Mass of image
cent=np.zeros((2),np.float)
### Write Equations for finding Center of Mass here ###
return cent
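# One possible weighted-mean form for the centroid (illustrative sketch, not
# necessarily the intended solution): marginal sums of the cutout multiplied
# by the pixel-offset weights, normalised by the total counts.
def calc_CoM_example(psf, weights):
    total = np.sum(psf)
    row_offset = np.sum(np.sum(psf, axis=1) * weights) / total
    col_offset = np.sum(np.sum(psf, axis=0) * weights) / total
    return np.array([row_offset, col_offset])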
## Cut a box between search limits, centered around targcen
targbox = partimg[targcen[0]-searchr : targcen[0]+searchr, targcen[1]-searchr : targcen[1]+searchr]
weights = cent_weight(len(targbox))
tcenoffset = calc_CoM(targbox, weights)
print(tcenoffset)
tcenter = targcen + tcenoffset
plt.plot(sum(targbox))
plt.show()
compbox = partimg[compcen[0]-searchr : compcen[0]+searchr, compcen[1]-searchr : compcen[1]+searchr]
compw = cent_weight(len(compbox))
ccenoffset = calc_CoM(compbox,compw)
ccenter = compcen + ccenoffset
print(tcenter)
compw
def circle(npix, r1):
Builds a circle
pup=np.zeros((npix,npix),np.int)
for i in range(npix):
for j in range(npix):
r=np.sqrt((float(i-npix/2)+0.5)**2+(float(j-npix/2)+0.5)**2)
if r<=r1:
pup[i,j]=1
return pup
def annulus(npix, r_inner,r_outer=-1.):
Builds an annulus
pup=np.zeros((npix,npix),np.int)
for i in range(npix):
for j in range(npix):
#### Fill in annulus form here ####
if ((r<=r_outer)&(r>=r_inner)):
pup[i,j]=1
return pup
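# Illustrative sketch of the intended fill-in (assumption: the radius is
# computed exactly as in circle() above; plain int replaces the deprecated
# np.int alias so the sketch runs on current numpy).
def annulus_example(npix, r_inner, r_outer=-1.):
    pup = np.zeros((npix, npix), int)
    for i in range(npix):
        for j in range(npix):
            r = np.sqrt((float(i - npix/2) + 0.5)**2 + (float(j - npix/2) + 0.5)**2)
            if (r <= r_outer) & (r >= r_inner):
                pup[i, j] = 1
    return pup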
circmask = circle(searchr*2, ap_r)
annmask = annulus(searchr*2, sky_inner, sky_outer)
newtarg = partimg[int(round(tcenter[0]))-searchr : int(round(tcenter[0]))+searchr, int(round(tcenter[1]))-searchr : int(round(tcenter[1]))+searchr]
newcomp = partimg[int(round(ccenter[0]))-searchr : int(round(ccenter[0]))+searchr, int(round(ccenter[1]))-searchr : int(round(ccenter[1]))+searchr]
targaper = newtarg * circmask
compaper = newcomp * circmask
targann = newtarg * annmask
compann = newcomp * annmask
im = plt.imshow(partimg, cmap='hot')
plt.clim(1800, 2800)
plt.scatter(targcen[1], targcen[0], c='g', marker='x')
plt.scatter(compcen[1], compcen[0], c='g', marker='x')
plt.scatter(tcenter[1], tcenter[0], c='b', marker='x')
plt.scatter(ccenter[1], ccenter[0], c='b', marker='x')
plt.show()
im = plt.imshow(targaper, cmap='hot')
plt.clim(1800, 2800)
plt.show()
im = plt.imshow(targann, cmap='hot')
plt.clim(1800, 2800)
plt.show()
def calcsnr(target, bg):
signal = target - bg
noise = np.sqrt(signal + bg)
snr = signal / noise
return snr, noise
targc = np.sum(targaper) / np.sum(circmask)
targbg= np.sum(targann) / np.sum(annmask)
compc = np.sum(compaper) / np.sum(circmask)
compbg= np.sum(compann) / np.sum(annmask)
snr, noise = calcsnr(targc, targbg)
print(snr)
snr, noise = calcsnr(compc, compbg)
print(snr)
## Write code here that tries a range of photometry apertures and finds the best SNR ##
print(bestaper)
print(snr)
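# Illustrative sketch of one way to do the aperture search above (assumptions:
# integer trial radii from 1 pixel up to the inner sky radius, and the same
# sky annulus for every trial aperture).
best_snr, best_aperture = -np.inf, None
for trial_r in range(1, sky_inner + 1):
    trial_mask = circle(searchr*2, trial_r)
    trial_flux = np.sum(newtarg * trial_mask) / np.sum(trial_mask)
    trial_snr, _ = calcsnr(trial_flux, targbg)
    if trial_snr > best_snr:
        best_snr, best_aperture = trial_snr, trial_r
print(best_aperture, best_snr)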
targc = circle(searchr*2, ap_r)*newtarg
targskyc = annulus(searchr*2, sky_inner, sky_outer)*newtarg
compc = circle(searchr*2, ap_r)*newcomp
compskyc = annulus(searchr*2, sky_inner, sky_outer)*newcomp
ratio = np.sum(compc)/np.sum(targc)
### complete here ###
### complete here ###
### complete here ###
refmag = 19.4
### complete here ###
print("Measured Magnitude = {:0.3f} ± {:0.3f}".format(mag, sigmamag))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-hr', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Import dataset
Step2: Throughout the Machine Learning part of this project we will be using scikit-learn, an open source machine learning library for the Python programming language.
Step3: We are explaining 1% of the variance with our model. Definitely nothing to write home about... This was to be expected anyway, since the numerical data we have added pertain to a whole zip code, not each individual restaurant.
Step4: b. Linear Regression on averaged data
Step5: It is of paramount importance to remember that we are NOT supposed to be extracting any useful information from this regression model we are about to apply. After all, we only have a tiny number of rows, each one corresponding to an entire Zip Code. However we might still be able to create a model that will yield some strong trends which would help us make, if not a prediction, an educated guess about the average score of all restaurants in a Zip Code with given demographic data.
Step6: This is a not a very exciting score for our model; We are explaining 35.2% of the variance, but we have to keep into account that
Step7: We can see from the scatterplot and the Pearson Correlation Coefficient that there is indeed a weak positive correlation between the two quantities. Recall that for Simple Linear Regression models like ours, the square of the Pearson Correlation Coefficient is equal to the R-squared parameter we have calculated above, i.e. the fraction of the variance in our data explained by our model.
Step8: I was expecting a much lower score for this model compared to the previous one. Population isn't really a good indicator of restaurants' health inspection scores, since the more populous Zip Codes (for instance
Step9: It is quite interesting that the correlation between the two quantities is negative. Of course this could just mean that the really "small" Zip Codes correspond to rather affluent (and hence, sparsely populated) neighborhoods with very few restaurants that probably score really well in the inspection.
Step10: Predict Score using zip code's Home Ownership
Step11: This is quite an interesting and even unexpected result: we were predisposed to believe that, since Median Income and Home Ownership percentages are (or should be) strongly correlated, the percentage of Home Ownership would be a respectable indicator (if not a predictor) of successful restaurants in the context of Health Inspections. It seems it doesn't play such a role though.
Step12: This "best fit" line illustrates how disastrous this model is. Since the percentage of home owners in a Zip Code is bounded between 0 and 100%, the model "predicts" that all restaurants, regardless of Home_Ownership fraction, would score between 90 and 92, without taking into account uncertainty for the slope...
Step13: The three attempts for individual linear regression predictive models have not really made us any wiser, even though they did at least show us some broad trends pertinent to our data set, which we could loosely extrapolate to other urban areas with the characteristics of Austin. (good luck with that!)
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#Reading the dataset in a dataframe using Pandas
df = pd.read_csv('../data/master.csv')
#Print first observations
df.head()
# There are some NaN values in our numerics (UT Campus and ABIA)
# Let us remove rows from those zip codes from the DataFrame:
df = df[np.isfinite(df['Population'])]
# create X and y
feature_cols = ['Med_Income', 'Population', 'Home_Ownership']
X = df[feature_cols]
y = df.Score
# follow the usual sklearn pattern: import, instantiate, fit
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X, y)
# print intercept and coefficients
print lm.intercept_
print lm.coef_
# pair the feature names with the coefficients
zip(feature_cols, lm.coef_)
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % lm.score(X,y))
# Let us check how many rows we are left with after excluding the UT Campus and ABIA areas:
len(df)
average_scores = df.groupby('Zip_Code').mean()
len(average_scores)
average_scores.head()
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
X = average_scores['Med_Income'].values
y = average_scores['Score'].values
# Check the shapes of the X and y vectors:
print X.shape
print y.shape
# Reshape to get them to work when we fit the model:
X = X.reshape(34,1);
y = y.reshape(34,1);
# Fit linear regression model:
lm.fit(X, y)
# print intercept and coefficients
print lm.intercept_
print lm.coef_
# And score it:
lm.score(X,y)
# Visualization using Seaborn:
sns.lmplot(x="Med_Income", y="Score", data=average_scores);
print "The Pearson Correlation coefficient between Median Income and Average Score is {0}".\
format(np.corrcoef(average_scores['Med_Income'].values, average_scores['Score'].values)[1][0])
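# (Added sketch) Sanity check: for simple linear regression the squared Pearson
# coefficient should match the R-squared returned by lm.score() above.
r = np.corrcoef(average_scores['Med_Income'].values, average_scores['Score'].values)[1][0]
print("r squared = {0}, lm.score = {1}".format(r**2, lm.score(X, y)))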
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
X = average_scores['Population'].values
y = average_scores['Score'].values
# Check the shapes of the X and y vectors:
print X.shape
print y.shape
# Reshape to get them to work when we fit the model:
X = X.reshape(34,1);
y = y.reshape(34,1);
# Fit linear regression model:
lm.fit(X, y)
# print intercept and coefficients
print lm.intercept_
print lm.coef_
# And score it:
lm.score(X,y)
# Visualization using Seaborn:
sns.lmplot(x="Population", y="Score", data=average_scores);
print "The Pearson Correlation coefficient between Population and Average Score is {0}".\
format(np.corrcoef(average_scores['Population'].values, average_scores['Score'].values)[1][0])
print "For a predicted score: {0} (just below the cutoff), the population would have to be {1}".\
format(lm.predict(450000)[0][0], 450000)
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
X = average_scores['Home_Ownership'].values
y = average_scores['Score'].values
# Reshape the X and y vectors to get them to work when we fit the model:
X = X.reshape(34,1);
y = y.reshape(34,1);
# Fit linear regression model:
lm.fit(X, y)
# print intercept and coefficients
print lm.intercept_
print lm.coef_
# And score it:
lm.score(X,y)
# Visualization using Seaborn:
sns.lmplot(x="Home_Ownership", y="Score", data=average_scores);
print "The Pearson Correlation coefficient between Home Ownership Percentage and Average Score is {0}".\
format(np.corrcoef(average_scores['Home_Ownership'].values, average_scores['Score'].values)[1][0])
feature_cols = ['Med_Income', 'Population', 'Home_Ownership']
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
X = average_scores[feature_cols]
y = average_scores['Score'].values
# Check the shapes of the X and y vectors:
print X.shape
print y.shape
y = y.reshape(34,1);
# Fit linear regression model:
lm.fit(X, y)
# print intercept and coefficients
print lm.intercept_
print lm.coef_
# And score it:
lm.score(X,y)
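# (Added sketch) Pair each averaged feature with its fitted coefficient,
# mirroring what was done for the per-restaurant model earlier:
print(list(zip(feature_cols, lm.coef_[0])))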
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up our source space.
Step2: Export source positions to nift file
|
<ASSISTANT_TASK:>
Python Code:
# Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>
#
# License: BSD (3-clause)
import os.path as op
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne import setup_volume_source_space
from mne import make_forward_solution
from mne.minimum_norm import make_inverse_operator, apply_inverse
from nilearn import plotting
# Set dir
data_path = sample.data_path()
subject = 'sample'
data_dir = op.join(data_path, 'MEG', subject)
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
# Set file names
fname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)
fname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')
fname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)
fname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)
fname_evoked = data_dir + '/sample_audvis-ave.fif'
fname_trans = data_dir + '/sample_audvis_raw-trans.fif'
fname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'
fname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'
# List substructures we are interested in. We select only the
# sub structures we want to include in the source space
labels_vol = ['Left-Amygdala',
'Left-Thalamus-Proper',
'Left-Cerebellum-Cortex',
'Brain-Stem',
'Right-Amygdala',
'Right-Thalamus-Proper',
'Right-Cerebellum-Cortex']
# Get a surface-based source space. We could set one up like this::
#
# >>> src = setup_source_space(subject, fname=None, spacing='oct6',
# add_dist=False, subjects_dir=subjects_dir)
#
# But we already have one saved:
src = mne.read_source_spaces(op.join(bem_dir, 'sample-oct-6-src.fif'))
# Now we create a mixed src space by adding the volume regions specified in the
# list labels_vol. First, read the aseg file and the source space bounds
# using the inner skull surface (here using 10mm spacing to save time):
vol_src = setup_volume_source_space(
subject, mri=fname_aseg, pos=7.0, bem=fname_model,
volume_label=labels_vol, subjects_dir=subjects_dir, verbose=True)
# Generate the mixed source space
src += vol_src
# Visualize the source space.
src.plot(subjects_dir=subjects_dir)
n = sum(src[i]['nuse'] for i in range(len(src)))
print('the src space contains %d spaces and %d points' % (len(src), n))
# We could write the mixed source space with::
#
# >>> write_source_spaces(fname_mixed_src, src, overwrite=True)
#
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True)
plotting.plot_img(nii_fname, cmap='nipy_spectral')
plt.show()
# Compute the fwd matrix
fwd = make_forward_solution(fname_evoked, fname_trans, src, fname_bem,
mindist=5.0, # ignore sources<=5mm from innerskull
meg=True, eeg=False, n_jobs=1)
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
src_fwd = fwd['src']
n = sum(src_fwd[i]['nuse'] for i in range(len(src_fwd)))
print('the fwd src space contains %d spaces and %d points' % (len(src_fwd), n))
# Load data
condition = 'Left Auditory'
evoked = mne.read_evokeds(fname_evoked, condition=condition,
baseline=(None, 0))
noise_cov = mne.read_cov(fname_cov)
# Compute inverse solution and for each epoch
snr = 3.0 # use smaller SNR for raw data
inv_method = 'MNE' # sLORETA, MNE, dSPM
parc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'
lambda2 = 1.0 / snr ** 2
# Compute inverse operator
inverse_operator = make_inverse_operator(evoked.info, fwd, noise_cov,
depth=None, fixed=False)
stcs = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori=None)
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels_parc = mne.read_labels_from_annot(subject, parc=parc,
subjects_dir=subjects_dir)
# Average the source estimates within each label of the cortical parcellation
# and each sub structure contained in the src space
# If mode = 'mean_flip' this option is used only for the surface cortical label
src = inverse_operator['src']
label_ts = mne.extract_label_time_course([stcs], labels_parc, src,
mode='mean',
allow_empty=True,
return_generator=False)
# plot the times series of 2 labels
fig, axes = plt.subplots(1)
axes.plot(1e3 * stcs.times, label_ts[0][0, :], 'k', label='bankssts-lh')
axes.plot(1e3 * stcs.times, label_ts[0][71, :].T, 'r',
label='Brain-stem')
axes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')
axes.legend()
mne.viz.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2
|
<ASSISTANT_TASK:>
Python Code:
plt.figure(figsize=(10,6));
plt.scatter(Peaks,Energy);
plt.xlim(0,240)
plt.ylim(0,1000)
plt.xlabel('x (mm)');
plt.ylabel('y (mm)');
plt.plot(xlots,yfit);
plt.legend(['data','Fit'],loc='lower right');
plt.text(5,900,'a = %.3f +/- %.3f keV' % (plsq[0], np.sqrt(pcov[0,0])),size=17)
plt.text(5,800,'b = %.3f +/- %.3f keV MCA$^{-1}$' % (plsq[1], np.sqrt(pcov[1,1])),size=17)
plt.text(5,700,'c = (%.4f +/- %.4f)$\cdot 10^{-3}$ keV MCA$^{-2}$' % (plsq[2]*1e3, np.sqrt(pcov[2,2])*1e3),size=17)
plt.xlabel('MCA Number',fontsize=20);
plt.ylabel('Energy (keV)',fontsize = 20);
plt.xticks(size = 13);
plt.yticks(size = 13);
plt.savefig('LinearMCAFit')
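# (Added note, not in the original notebook) deltaE below propagates the fitted
# calibration uncertainties (da, db, dc) and the channel uncertainty dN into an
# energy uncertainty: the fractional errors of each term are combined in
# quadrature (first-order propagation, assuming independent uncertainties), the
# three contributions are then summed in quadrature, and the 1e-3 factor
# converts the result to keV.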
def deltaE(N,dN):
daN = np.sqrt((da/a)**2 + (dN/N)**2)*(a*N)
dbN2 = np.sqrt((db/b)**2 + 4*(dN/N)**2)*(b*N**2)
dcN3 = np.sqrt((dc/c)**2 + 9*(dN/N)**2)*(c*N**3)
dEMeas = np.sqrt(daN**2 + dbN2**2 + dcN3**2)*1e-3 #Convert to KeV
return dEMeas
N = np.array([102.40]) #Channel number of 60 degree scattered photopeak
dN = np.array([9.00])
theta = np.array([60])*np.pi/180
EMeas = myfun(N,a,b,c)
EMeas
dEMeas = deltaE(N,dN)
dEMeas
Eo = 661.7 #Initial keV energy of gamma rays (before scattering)
mc2 = 511 #electron mass in keV
def ECompton(Eo,mc2,theta):
return Eo/(1+(Eo/mc2)*(1-np.cos(theta)))
EComp = ECompton(Eo,mc2,theta)
EComp
thetas = np.linspace(-np.pi,np.pi,50);
plt.figure(figsize=(10,6));
plt.plot(thetas,ECompton(Eo,mc2,thetas),label='Compton');
plt.errorbar(theta,EMeas,dEMeas);
plt.scatter(theta,EMeas,dEMeas,label='Measured');
plt.legend();
plt.xlabel('Scattering Angle [Radians]',fontsize=20);
plt.ylabel('Final Energy (keV)',fontsize = 20);
plt.xticks(size = 13);
plt.yticks(size = 13);
plt.xlim(-np.pi,np.pi);
#plt.savefig('Sample')
def Thomson(theta):
ro = 2.82*1e-15
return (1/2)*(ro**2)*(1+np.cos(theta)**2) #set b = 1
def KleinNishina(theta):
ro = 2.82*1e-15
gamma = Eo/mc2
return (1/2)*(ro**2)*(1+np.cos(theta)**2)*((1+gamma*(1-np.cos(theta)))**(-2))*(1+((gamma*(1-np.cos(theta)))**2)/((1+np.cos(theta)**2)*(1+gamma*(1-np.cos(theta)))))
thetas = np.linspace(-np.pi,np.pi,50);
plt.figure(figsize=(10,6));
plt.plot(thetas,Thomson(thetas),label='Thomson');
plt.plot(thetas,KleinNishina(thetas),label='Klein-Nishina');
plt.legend();
plt.xlabel('Scattering Angle [Radians]',fontsize=20);
plt.ylabel('Differential Cross section',fontsize = 20);
plt.xticks(size = 13);
plt.yticks(size = 13);
plt.xlim(-np.pi,np.pi);
#plt.savefig('Sample')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CONTENTS
Step2: Check the shape of these NumPy arrays to make sure we have loaded the database correctly.
Step3: Visualizing Data
Step4: Let's have a look at the average of all the images of training and testing data.
Step5: Testing
Step6: Now, we will initialize a DataSet with our training examples, so we can use it in our algorithms.
Step7: Moving forward we can use MNIST_DataSet to test our algorithms.
Step8: It is obvious that this Learner is not very efficient. In fact, it will guess correctly in only 1135/10000 of the samples, roughly 10%. It is very fast though, so it might have its use as a quick first guess.
Step9: To make sure that the output we got is correct, let's plot that image along with its label.
Step10: k-Nearest Neighbors
Step11: To make sure that the output we got is correct, let's plot that image along with its label.
Step12: Hurray! We've got it correct. Don't worry if our algorithm predicted a wrong class; with this technique we only get ~97% accuracy on this dataset.
Step13: Visualizing Data
Step14: Let's now see how many times each class appears in the training and testing data
Step15: Unlike Digits, in Fashion all items appear the same number of times.
Step16: Plurality Learner
Step17: Naive-Bayes
Step18: Let's check if we got the right output.
Step19: K-Nearest Neighbors
Step20: The output is 1, which means the item at index 211 is a trouser. Let's see if the prediction is correct
|
<ASSISTANT_TASK:>
Python Code:
from learning import *
from notebook import *
train_img, train_lbl, test_img, test_lbl = load_MNIST()
print("Training images size:", train_img.shape)
print("Training labels size:", train_lbl.shape)
print("Testing images size:", test_img.shape)
print("Testing labels size:", test_lbl.shape)
# takes 5-10 seconds to execute this
show_MNIST(train_lbl, train_img)
# takes 5-10 seconds to execute this
show_MNIST(test_lbl, test_img)
print("Average of all images in training dataset.")
show_ave_MNIST(train_lbl, train_img)
print("Average of all images in testing dataset.")
show_ave_MNIST(test_lbl, test_img)
print(train_img.shape, train_lbl.shape)
temp_train_lbl = train_lbl.reshape((60000,1))
training_examples = np.hstack((train_img, temp_train_lbl))
print(training_examples.shape)
# takes ~10 seconds to execute this
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
pL = PluralityLearner(MNIST_DataSet)
print(pL(177))
%matplotlib inline
print("Actual class of test image:", test_lbl[177])
plt.imshow(test_img[177].reshape((28,28)))
# takes ~45 Secs. to execute this
nBD = NaiveBayesLearner(MNIST_DataSet, continuous = False)
print(nBD(test_img[0]))
%matplotlib inline
print("Actual class of test image:", test_lbl[0])
plt.imshow(test_img[0].reshape((28,28)))
# takes ~20 Secs. to execute this
kNN = NearestNeighborLearner(MNIST_DataSet, k=3)
print(kNN(test_img[211]))
%matplotlib inline
print("Actual class of test image:", test_lbl[211])
plt.imshow(test_img[211].reshape((28,28)))
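# (Added sketch) A rough accuracy estimate for the k-NN learner. Exact nearest
# neighbour search over 60k training images is slow, so only a handful of test
# samples are checked here.
n_check = 20
correct = sum(kNN(test_img[i]) == test_lbl[i] for i in range(n_check))
print("kNN accuracy on the first", n_check, "test images:", correct / n_check)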
train_img, train_lbl, test_img, test_lbl = load_MNIST(fashion=True)
# takes 5-10 seconds to execute this
show_MNIST(train_lbl, train_img, fashion=True)
# takes 5-10 seconds to execute this
show_MNIST(test_lbl, test_img, fashion=True)
print("Average of all images in training dataset.")
show_ave_MNIST(train_lbl, train_img, fashion=True)
print("Average of all images in testing dataset.")
show_ave_MNIST(test_lbl, test_img, fashion=True)
temp_train_lbl = train_lbl.reshape((60000,1))
training_examples = np.hstack((train_img, temp_train_lbl))
# takes ~10 seconds to execute this
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
pL = PluralityLearner(MNIST_DataSet)
print(pL(177))
%matplotlib inline
print("Actual class of test image:", test_lbl[177])
plt.imshow(test_img[177].reshape((28,28)))
# takes ~45 Secs. to execute this
nBD = NaiveBayesLearner(MNIST_DataSet, continuous = False)
print(nBD(test_img[24]))
%matplotlib inline
print("Actual class of test image:", test_lbl[24])
plt.imshow(test_img[24].reshape((28,28)))
# takes ~20 Secs. to execute this
kNN = NearestNeighborLearner(MNIST_DataSet, k=3)
print(kNN(test_img[211]))
%matplotlib inline
print("Actual class of test image:", test_lbl[211])
plt.imshow(test_img[211].reshape((28,28)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The purpose of the exercise is to manipulate and plot the current weather for a number of European cities. The data has been downloaded from OpenWeather and has been loaded for you below using the given function read_weather.
Step2: Part 1
Step3: Optional extension
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pickle
%matplotlib inline
def read_weather():
with open('data/weather.pkl', 'rb') as f:
return pickle.load(f)
# The file weather.pkl contains a list of dictionaries
Data = read_weather()
Data[0]
# Implement Q1 part 1 here:
# -------------------------
tempr_dict = {}
for countrydata in Data:
tempr_dict[countrydata['name']] = (countrydata['coord']['lat'], countrydata['main']['temp'])
print(tempr_dict)
# Run this cell to tests if you have completed part 1 correctly:
assert(all(key in tempr_dict for key in ['Berlin', 'Kiev', 'London', 'Moscow', 'Southampton'])),\
'keys of your weather dictionary should be Berlin, Kiev, London, Moscow, Southampton'
assert(len(tempr_dict['Moscow']) == 2), "Entries in your dictionary should be a tuple of two values"
# Part 2:
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
for name, value in tempr_dict.items():
ax.plot(value[0], value[1], 'bx')
ax.annotate(name, value)
ax.set_title('City Temperatures')
ax.set_xlabel('Latitude')
ax.set_ylabel('Temp (K)')
# Implement Q2 pt 1 here
# -----------------------
weather_dict = {}
for countrydata in Data:
weather_dict[countrydata['name']] = countrydata['coord']['lat'], countrydata['main']
print(weather_dict)
def plot_weather_lattitude(weather_dictionary, var_name):
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
for name, value in weather_dictionary.items():
ax.plot(value[0], value[1][var_name], 'bx')
ax.annotate(name, (value[0], value[1][var_name]))
ax.set_xlabel('Latitude')
ax.set_ylabel(var_name)
# Here are the variables that should be in your data
weather_vars = ['temp', 'temp_max', 'temp_min', 'pressure', 'humidity']
# EXERCISE: Loop over the variable strings above and plot them
for var_name in weather_vars:
plot_weather_lattitude(weather_dict, var_name)
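# (Added sketch) A quick quantitative check of the latitude/temperature trend
# seen in the scatter plot above:
lats = np.array([v[0] for v in tempr_dict.values()])
temps = np.array([v[1] for v in tempr_dict.values()])
print('Correlation between latitude and temperature:', np.corrcoef(lats, temps)[0, 1])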
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating stats files with cProfile
Step2: Table/Grid View
Step3: Chart View
Step5: Comparing Alternative Implementations
Step6: Comparison View
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import cProfile
from pstatsviewer import StatsViewer
from qgrid import nbinstall
nbinstall()
# Construct two 5000 x 8 frames with random floats.
df1 = pd.DataFrame(
np.random.randn(5000, 8),
columns=[chr(ord('A') + i) for i in range(8)],
index=range(5000),
)
df2 = pd.DataFrame(
np.random.randn(5000, 8),
columns=[chr(ord('A') + i) for i in range(8)],
index=range(5000, 10000),
)
df1.head(5)
from qgrid import show_grid
def concat_naive():
for i in range(500):
pd.concat([df1, df2])
cProfile.run(
'concat_naive()',
'naive.stats',
)
slow = StatsViewer("naive.stats")
slow.table()
slow.chart()
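# (Added sketch) The same .stats file can also be inspected with the standard
# library's pstats module, independently of pstatsviewer:
import pstats
pstats.Stats('naive.stats').sort_stats('cumulative').print_stats(5)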
def concat_fast():
"""Concatenate using numpy primitives instead of pd.concat."""
for i in range(500):
pd.DataFrame(
np.vstack([df1.values, df2.values]),
columns=df1.columns,
index=np.hstack([
df1.index.values,
df2.index.values,
])
)
cProfile.run(
'concat_fast()',
'fast.stats',
)
fast = StatsViewer("fast.stats")
slow.compare_table(fast, lsuffix="_slow", rsuffix="_fast")
slow.compare_chart(fast, 'tottime', 25)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Making new Layers and Models via subclassing
Step2: The Layer class
Step3: You would use a layer by calling it on some tensor input(s), much like a Python
Step4: Note that the weights w and b are automatically tracked by the layer upon
Step5: Note you also have access to a quicker shortcut for adding weight to a layer
Step6: Layers can have non-trainable weights
Step7: It's part of layer.weights, but it gets categorized as a non-trainable weight
Step8: Best practice
Step9: In many cases, you may not know in advance the size of your inputs, and you
Step10: The __call__() method of your layer will automatically run build the first time
Step11: Implementing build() separately as shown above nicely separates creating weights
Step12: The add_loss() method
Step13: These losses (including those created by any inner layer) can be retrieved via
Step14: In addition, the loss property also contains regularization losses created
Step15: These losses are meant to be taken into account when writing training loops,
Step16: The add_metric() method
Step17: Metrics tracked in this way are accessible via layer.metrics
Step18: Just like for add_loss(), these metrics are tracked by fit()
Step19: You can optionally enable serialization on your layers
Step20: Note that the __init__() method of the base Layer class takes some keyword
Step21: If you need more flexibility when deserializing the layer from its config, you
Step26: Privileged mask argument in the call() method
Step27: Let's write a simple training loop on MNIST
Step28: Note that since the VAE is subclassing Model, it features built-in training
Step29: Beyond object-oriented development
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype="float32"),
trainable=True,
)
b_init = tf.zeros_initializer()
self.b = tf.Variable(
initial_value=b_init(shape=(units,), dtype="float32"), trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
class ComputeSum(keras.layers.Layer):
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
print("weights:", len(my_sum.weights))
print("non-trainable weights:", len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print("trainable_weights:", my_sum.trainable_weights)
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
# At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)
# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)
class MLPBlock(keras.layers.Layer):
def __init__(self):
super(MLPBlock, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
def __init__(self, rate=1e-2):
super(ActivityRegularizationLayer, self).__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
class OuterLayer(keras.layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # This is the loss created during the call above
class OuterLayerWithKernelRegularizer(keras.layers.Layer):
def __init__(self):
super(OuterLayerWithKernelRegularizer, self).__init__()
self.dense = keras.layers.Dense(
32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)
)
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayerWithKernelRegularizer()
_ = layer(tf.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
import numpy as np
inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)
# If there is a loss passed in `compile`, the regularization
# losses get added to it
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# It's also possible not to pass any loss in `compile`,
# since the model already has a loss to minimize, via the `add_loss`
# call during the forward pass!
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
layer = LogisticEndpoint()
targets = tf.ones((2, 2))
logits = tf.ones((2, 2))
y = layer(targets, logits)
print("layer.metrics:", layer.metrics)
print("current accuracy value:", float(layer.metrics[0].result()))
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
class Linear(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(Linear, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(Linear, self).get_config()
config.update({"units": self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
class CustomDropout(keras.layers.Layer):
def __init__(self, rate, **kwargs):
super(CustomDropout, self).__init__(**kwargs)
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
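# (Added sketch, not from the original guide) call() can also expose a privileged
# `mask` argument; Keras forwards the mask generated by a previous layer (e.g. an
# Embedding created with mask_zero=True). A minimal consumer might look like this:
class MaskConsumer(keras.layers.Layer):
    def call(self, inputs, mask=None):
        if mask is not None:
            # zero out masked timesteps before reducing over the time axis
            inputs = inputs * tf.cast(mask, inputs.dtype)[:, :, None]
        return tf.reduce_sum(inputs, axis=1)


# Hypothetical wiring, shown for illustration only:
# x = keras.Input(shape=(None,), dtype="int32")
# h = keras.layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(x)
# y = MaskConsumer()(h)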
from tensorflow.keras import layers
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
super(Encoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_output = layers.Dense(original_dim, activation="sigmoid")
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(
self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name="autoencoder",
**kwargs
):
super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
epochs = 2
# Iterate over epochs.
for epoch in range(epochs):
print("Start of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
with tf.GradientTape() as tape:
reconstructed = vae(x_batch_train)
# Compute reconstruction loss
loss = mse_loss_fn(x_batch_train, reconstructed)
loss += sum(vae.losses) # Add KLD regularization loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
loss_metric(loss)
if step % 100 == 0:
print("step %d: mean loss = %.4f" % (step, loss_metric.result()))
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=2, batch_size=64)
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name="encoder_input")
x = layers.Dense(intermediate_dim, activation="relu")(original_inputs)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name="encoder")
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name="z_sampling")
x = layers.Dense(intermediate_dim, activation="relu")(latent_inputs)
outputs = layers.Dense(original_dim, activation="sigmoid")(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name="decoder")
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name="vae")
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this tutorial we focus on plotting functions of
Step2: Notice that evoked is a list of
Step3: Let's start with a simple one. We plot event related potentials / fields
Step4: All plotting functions of MNE-python returns a handle to the figure instance.
Step5: Now let's make it a bit fancier and only use MEG channels. Many of the
Step6: Notice the legend on the left. The colors would suggest that there may be two
Step7: By default the topomaps are drawn from evenly spread out points of time over
Step8: Or we can automatically select the peaks.
Step9: You can take a look at the documentation of
Step10: Notice that we created five axes, but had only four categories. The fifth
Step11: Sometimes, you may want to compare two conditions at a selection of sensors,
Step12: We can also plot the activations as images. The time runs along the x-axis
Step13: Finally we plot the sensor data as a topographical view. In the simple case
Step14: Visualizing field lines in 3D
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evoked)
evoked_l_aud = evoked[0]
evoked_r_aud = evoked[1]
evoked_l_vis = evoked[2]
evoked_r_vis = evoked[3]
fig = evoked_l_aud.plot(exclude=())
fig.tight_layout()
picks = mne.pick_types(evoked_l_aud.info, meg=True, eeg=False, eog=False)
evoked_l_aud.plot(spatial_colors=True, gfp=True, picks=picks)
evoked_l_aud.plot_topomap()
times = np.arange(0.05, 0.151, 0.05)
evoked_r_aud.plot_topomap(times=times, ch_type='mag')
evoked_r_aud.plot_topomap(times='peaks', ch_type='mag')
fig, ax = plt.subplots(1, 5)
evoked_l_aud.plot_topomap(times=0.1, axes=ax[0], show=False)
evoked_r_aud.plot_topomap(times=0.1, axes=ax[1], show=False)
evoked_l_vis.plot_topomap(times=0.1, axes=ax[2], show=False)
evoked_r_vis.plot_topomap(times=0.1, axes=ax[3], show=True)
ts_args = dict(gfp=True)
topomap_args = dict(sensors=False)
evoked_r_aud.plot_joint(title='right auditory', times=[.07, .105],
ts_args=ts_args, topomap_args=topomap_args)
conditions = ["Left Auditory", "Right Auditory", "Left visual", "Right visual"]
evoked_dict = dict()
for condition in conditions:
evoked_dict[condition.replace(" ", "/")] = mne.read_evokeds(
fname, baseline=(None, 0), proj=True, condition=condition)
print(evoked_dict)
colors = dict(Left="Crimson", Right="CornFlowerBlue")
linestyles = dict(Auditory='-', visual='--')
pick = evoked_dict["Left/Auditory"].ch_names.index('MEG 1811')
mne.viz.plot_compare_evokeds(evoked_dict, picks=pick,
colors=colors, linestyles=linestyles)
evoked_r_aud.plot_image(picks=picks)
title = 'MNE sample data (condition : %s)'
evoked_l_aud.plot_topo(title=title % evoked_l_aud.comment)
colors = 'yellow', 'green', 'red', 'blue'
mne.viz.plot_evoked_topo(evoked, color=colors,
title=title % 'Left/Right Auditory/Visual')
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
maps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',
subjects_dir=subjects_dir, n_jobs=1)
# explore several points in time
field_map = evoked_l_aud.plot_field(maps, time=.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is what the simulated data look like. We use the pylab module from the plotting library matplotlib.
Step2: Model Specification
Step3: Now we build our model, which we will present in full first, then explain each part line-by-line.
Step4: The first line,
Step5: Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship
Step6: By default, this uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to find the maximum of the log-posterior, but it also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP.
Step7: It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.
Step8: The sample function returns a trace object that can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows
Step9: Posterior analysis
Step10: The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients.
Step11: Case study 1
Step12: Model Specification
Step13: Notice that we transform the log volatility process s into the volatility process by exp(-2*s). Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.
Step14: We can check our samples by looking at the traceplot for nu and log_sigma.
Step15: Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph. Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly.
Step16: Case study 2
Step17: One approach for dealing with excess zeros is to use a mixture model. The mixture model contains two components
Step18: Notice that since the latent occupancy indicators are discrete, we cannot use a gradient-based MCMC step method like HMC or NUTS for this variable. Instead, we will sample using a BinaryMetropolis sampler that proposes only binary values at each iteration for z; for the continuous-valued parameters, theta and p we will use a standard Metropolis sampler.
Step19: The resulting posteriors for the unknown parameters suggest an occupancy rate in the neighborhood of 0.3 to 0.4, and an expected count (conditional on occupancy) of just over 2.
Step20: Arbitrary deterministics
Step21: An important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.
Step22: Generalized Linear Models
Step23: The model can then be very concisely specified in one line of code.
Step24: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
Step25: Backends
Step26: The stored trace can then later be loaded using the load command
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# Intialize random number generator
np.random.seed(123)
# True parameter values
alpha, sigma = 1, 1
beta = [1, 2.5]
# Size of dataset
size = 100
# Predictor variable
X1 = np.linspace(0, 1, size)
X2 = np.linspace(0,.2, size)
# Simulate outcome variable
Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma
%matplotlib inline
import pylab as pl
fig, axes = pl.subplots(1, 2, sharex=True, figsize=(10,4))
axes[0].scatter(X1, Y)
axes[1].scatter(X2, Y)
axes[0].set_ylabel('Y'); axes[0].set_xlabel('X1'); axes[1].set_xlabel('X2');
from pymc3 import Model, Normal, HalfNormal
basic_model = Model()
with basic_model:
# Priors for unknown model parameters
alpha = Normal('alpha', mu=0, sd=10)
beta = Normal('beta', mu=0, sd=10, shape=2)
sigma = HalfNormal('sigma', sd=1)
# Expected value of outcome
mu = alpha + beta[0]*X1 + beta[1]*X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
help(Normal) #try help(Model), help(Uniform) or help(basic_model)
from pymc3 import find_MAP
map_estimate = find_MAP(model=basic_model)
print(map_estimate)
from scipy import optimize
map_estimate = find_MAP(model=basic_model, fmin=optimize.fmin_powell)
print(map_estimate)
from pymc3 import NUTS, sample
with basic_model:
# obtain starting values via MAP
start = find_MAP(fmin=optimize.fmin_powell)
# instantiate sampler
step = NUTS(scaling=start)
# draw 500 posterior samples
trace = sample(500, step, start=start)
trace['alpha'][-5:]
from pymc3 import traceplot
traceplot(trace);
from pymc3 import summary
summary(trace)
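# (Added sketch) Posterior summaries can also be computed directly from the trace,
# since indexing the trace by a variable name returns a NumPy array of samples:
print('posterior mean of alpha:', trace['alpha'].mean())
print('posterior mean of beta :', trace['beta'].mean(axis=0))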
n = 400
returns = np.genfromtxt("data/SP500.csv")[-n:]
pl.plot(returns);
from pymc3 import Exponential, T, logtransform, exp, Deterministic
from pymc3.distributions.timeseries import GaussianRandomWalk
with Model() as sp500_model:
nu = Exponential('nu', 1./10, testval=.1)
sigma, log_sigma = sp500_model.TransformedVar('sigma', Exponential.dist(1./.02, testval=.1),
logtransform)
s = GaussianRandomWalk('s', sigma**-2, shape=n)
volatility_process = Deterministic('volatility_process', exp(-2*s))
r = T('r', nu, lam=volatility_process, observed=returns)
import scipy
with sp500_model:
start = find_MAP(vars=[s], fmin=scipy.optimize.fmin_l_bfgs_b)
step = NUTS(scaling=start)
trace = sample(50, step, progressbar=False)
# Start next run at the last sampled position.
step = NUTS(scaling=trace[-1], gamma=.25)
trace = sample(400, step, start=trace[-1])
#figsize(12,6)
traceplot(trace, [nu, log_sigma]);
pl.title(str(volatility_process));
pl.plot(trace[volatility_process][::10].T,'b', alpha=.03);
pl.xlabel('time');
pl.ylabel('log volatility');
y = np.array([0, 2, 1, 0, 4, 2, 0, 0, 4, 0, 0, 0, 0, 0, 3, 0, 0, 6, 0, 0, 0, 2, 1,
2, 0, 0, 0, 1, 0, 0, 0, 4, 2, 0, 0, 0, 1, 0, 2, 4, 0, 0, 1, 0, 0, 0,
0, 0, 2, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0,
0, 0, 3, 0, 2, 0, 1, 2, 2, 2, 2, 3, 0, 0, 0, 0, 1, 0, 3, 1, 0, 0, 0,
0, 0, 2, 0, 0, 1, 0, 0])
pl.hist(y, bins=range(7));
from pymc3 import Beta, Bernoulli, ZeroInflatedPoisson, Uniform, Poisson
with Model() as zip_model:
# Estimated occupancy
p = Beta('p', 1, 1)
# Latent variable for occupancy
z = Bernoulli('z', p, shape=y.shape)
# Estimated mean count
theta = Uniform('theta', 0, 100)
# Poisson likelihood
yd = ZeroInflatedPoisson('y', theta, z, observed=y)
from pymc3 import Metropolis, BinaryMetropolis, sample
with zip_model:
start = {'p': 0.5, 'z': (y > 0), 'theta': 5, 'yd_missing': np.array([1,1])}
step1 = Metropolis([theta, p])
step2 = BinaryMetropolis([z])
trace = sample(10000, [step1, step2], start)
traceplot(trace[5000:], vars=['p', 'theta']);
import theano.tensor as T
from theano.compile.ops import as_op
@as_op(itypes=[T.lscalar], otypes=[T.lscalar])
def crazy_modulo3(value):
if value > 0:
return value % 3
else :
return (-value + 1) % 3
with Model() as model_deterministic:
a = Poisson('a', 1)
b = crazy_modulo3(a)
from pymc3.distributions import Continuous
class Beta(Continuous):
def __init__(self, mu, *args, **kwargs):
super(Beta, self).__init__(*args, **kwargs)
self.mu = mu
self.mode = mu
def logp(self, value):
mu = self.mu
return beta_logp(value - mu)
@as_op(itypes=[T.dscalar], otypes=[T.dscalar])
def beta_logp(value):
return -1.5 * np.log(1 + (value)**2)
with Model() as model:
beta = Beta('slope', mu=0, testval=0)
# Convert X and Y to a pandas DataFrame
import pandas
df = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y})
from pymc3.glm import glm
with Model() as model_glm:
glm('y ~ x1 + x2', df)
from pymc3.glm.families import Binomial
df_logistic = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y > 0})
with Model() as model_glm_logistic:
glm('y ~ x1 + x2', df_logistic, family=Binomial())
from pymc3.backends import SQLite
with model_glm_logistic:
backend = SQLite('trace.sqlite')
trace = sample(5000, Metropolis(), trace=backend)
summary(trace, vars=['x1', 'x2'])
from pymc3.backends.sqlite import load
with basic_model:
trace_loaded = load('trace.sqlite')
trace_loaded
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Either create a new ipyrad assembly or load an existing one
Step2: Or load a finished assembly from its JSON file
Step3: Look at the stats summary for this assembly
Step4: Load R-language extension
Step5: Transfer Python object to R
Step6: Now R knows about statsDF
Step7: Let's transfer more data from Python to R
Step8: Plot coverage among samples
Step9: Plot the distribution of SNPs among loci
|
<ASSISTANT_TASK:>
Python Code:
## import ipyrad and give it a shorter name
import ipyrad as ip
## create a test assembly
data = ip.Assembly("data")
data.set_params('project_dir', 'test')
data.set_params('raw_fastq_path', 'ipsimdata/rad_example_R1_.fastq.gz')
data.set_params('barcodes_path', 'ipsimdata/rad_example_barcodes.txt')
## Assemble data set; runs steps 1-7
data.run('1')
## load the JSON file for this assembly
data = ip.load_json("test/data.json")
## Data can be accessed from the object's stats and stats_df attributes
print data.stats
## This requires that you have the Python module `rpy2` installed.
## If you do not, it can be installed in anaconda with:
## conda install rpy2
%load_ext rpy2.ipython
## rename data.stats as statsDF
statsDF = data.stats
## import statsDF into R namespace
%R -i statsDF
%%R
print(statsDF)
%%R -w 350 -h 350
## the dimensions above tell IPython how big to make the embedded figure
## alternatively you can adjust the size when you save the figure
plot(statsDF$reads_raw,
statsDF$reads_filtered,
pch=20, cex=3)
### Other stats from our assembly are also available.
### First store names and then import into R
s5 = data.stats_dfs.s5
s7L = data.stats_dfs.s7_loci
s7S = data.stats_dfs.s7_snps
s7N = data.stats_dfs.s7_samples
## no spaces allowed between comma-separated names when
## transferring multiple objects to R
%R -i s5,s7L,s7S,s7N
%%R -w 800 -h 320
##
barplot(s7N$sample_coverage,
col='grey30', names=rownames(s7N),
ylab="N loci",
xlab="Sample")
%%R -w 450 -h 400
print(s7S)
barplot(s7S$var,
col=rgb(0,0,1,1/4),
names=rownames(s7S),
ylab="N loci", ylim=c(0, 400),
xlab="N variable sites")
barplot(s7S$pis,
col=rgb(1,0,0,1/4),
names=rownames(s7S),
ylab="N loci", ylim=c(0, 400),
xlab="N variable sites",
add=TRUE)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define path to working csv files, input and output
Step2: Create a dataframe from the csv file
Step3: Add a new column named "survived"
Step4: Then, apply the lambda function created to a new column "survived"
Step5: Rename the index and column to comply with the required submission format
Step6: Done with indexing, from this point, create a new csv file for gender based probability and submit
|
<ASSISTANT_TASK:>
Python Code:
from pandas import Series, DataFrame
import pandas as pd
f = r'/home/hase/Documents/ZHAW/InfoEng/Lectures/Scripting/data/titanic3_test.csv'
fo = r'/home/hase/Documents/ZHAW/InfoEng/Lectures/Scripting/data/submit/titanic3_test_gender.csv'
df = pd.read_csv(f, sep=';', index_col='id', usecols=['id', 'sex'])
df.head() # Get the first five rows of the dataframe
def gender(row):
if row['sex'] == 'female':
return 1
else:
return 0
df['survived'] = df.apply(lambda row: gender(row),axis=1) # axis=1 means it applies to a row level
# Needs to be a lambda to pass a function to df.apply?
df.head()
df.drop('sex', axis=1, inplace=True) # axis=1 means column-wise, and inplace=True does operation in place
# i.e. no need to do df = df.drop(....)
df.head()
df.index.name = 'key'
df.index.name
df.rename(columns={'survived':'value'}, inplace=True)
df.head()
df.to_csv(fo, sep=';')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the client
Step2: Apply for an auth key
Step3: The return value is a Document
Step4: The doc['srl'] field holds the semantic role labeling results; each 4-tuple has the format [argument or predicate, semantic role label, start index, end index]. The predicate's semantic role label is PRED, and the start/end indices refer to the first token array whose key begins with tok.
Step5: Iterate over the predicate-argument structures:
Step6: Run semantic role labeling on sentences that are already tokenized:
|
<ASSISTANT_TASK:>
Python Code:
!pip install hanlp_restful -U
from hanlp_restful import HanLPClient
HanLP = HanLPClient('https://www.hanlp.com/api', auth=None, language='zh') # leave auth empty for anonymous access; language: zh for Chinese, mul for multilingual
doc = HanLP('2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', tasks='srl')
print(doc)
doc.pretty_print()
for i, pas in enumerate(doc['srl'][0]):
    print(f'Predicate-argument structure #{i+1}:')
for form, role, begin, end in pas:
print(f'{form} = {role} at [{begin}, {end}]')
HanLP(tokens=[
["HanLP", "为", "生产", "环境", "带来", "次世代", "最", "先进", "的", "多语种", "NLP", "技术", "。"],
["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"]
], tasks='srl', skip_tasks='tok*').pretty_print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convolution Layer
Step2: Pooling layer
Step3: Fully Connected Layer
Step5: Building The Model
Step6: Convolutional Neural Network
Step7: Cost Function
Step8: Stochastic Gradient Descent Methods (SGD)
Step9: Momentum
Step10: RMSProp
Step11: Training and Validating
Step12: Training the Model
|
<ASSISTANT_TASK:>
Python Code:
#### Libraries
# Third Party Libraries
import numpy as np
from sklearn.model_selection import train_test_split
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d
from theano.tensor.signal import pool
class ConvLayer(object):
def __init__(self, input, filter_shape, image_shape, padding=(0, 0),
stride=(1, 1), activation_fn=None):
assert image_shape[1] == filter_shape[1]
# rng = np.random.RandomState(seed)
self.input = input
self.filter_shape = filter_shape
self.image_shape = image_shape
self.activation_fn = activation_fn
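        # Glorot/Xavier-style uniform initialisation: the bound below scales
        # with the fan-in and fan-out of the convolution filter.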
fan_in = np.prod(filter_shape[1:])
fan_out = filter_shape[0]*np.prod(filter_shape[2:]) // 2
W_bound = np.sqrt(6/(fan_in+fan_out))
w = np.random.uniform(low=-W_bound, high=W_bound, size=filter_shape)
b_vals = np.random.uniform(size=filter_shape[0])
# Initiliaze weights with random variables
self.W = theano.shared(name='weights',
value=w.astype(theano.config.floatX),
borrow=True)
self.b = theano.shared(name='bias',
value=b_vals.astype(theano.config.floatX),
borrow=True)
conv_out = conv2d(input=input, filters=self.W, border_mode=padding,
subsample=stride, filter_shape=filter_shape,
input_shape=image_shape)
l_output = conv_out + self.b.dimshuffle(('x', 0, 'x', 'x'))
self.output = (l_output if activation_fn is None
else activation_fn(l_output))
# Parameters of the model
self.params = [self.W, self.b]
class PoolingLayer(object):
def __init__(self, input, pool_shape=(2, 2), ignore_border=True,
activation_fn=None):
self.input = input
self.pool_shape = pool_shape
self.ignore_border = ignore_border
l_output = pool.pool_2d(input=input, ds=pool_shape,
ignore_border=self.ignore_border)
self.output = (l_output if activation_fn is None
else activation_fn(l_output))
class FC(object):
def __init__(self, input, n_in, n_out, W=None, b=None, seed=35,
activation_fn=None):
# rng = np.random.RandomState(seed)
self.input = input
if W is None:
W_values = np.random.uniform(low=-np.sqrt(6./(n_in+n_out)),
high=np.sqrt(6./(n_in+n_out)),
size=(n_out, n_in)).astype(theano.config.floatX)
if activation_fn == theano.tensor.nnet.sigmoid:
W_values *= 4
W = theano.shared(name='Weights', value=W_values, borrow=True)
if b is None:
b_values = np.zeros(n_out, dtype=theano.config.floatX)
b = theano.shared(name='bias', value=b_values, borrow=True)
self.W = W
self.b = b
l_output = (T.dot(self.W, input.T)).T + self.b
self.output = (l_output if activation_fn is None
else activation_fn(l_output))
self.params = [self.W, self.b]
def elu(x, alpha=1.0):
return T.switch(x > 0, x, T.exp(x)-1)
def l2_reg(x, lmbd=0.05):
    """L_2 regularization."""
l2 = 0
for elements in x:
l2 += T.sum(elements[0]**2)
return lmbd / 2 * l2
X = T.tensor4(name='X', dtype=theano.config.floatX)
Y = T.imatrix(name='Y')
y = T.ivector(name='y')
lr = T.scalar(name='learning_rate', dtype=theano.config.floatX)
nkerns = [8, 32]
batch_size = 256
act_f = elu
conv_layer1 = ConvLayer(input=X,
filter_shape=(nkerns[0], 1, 3, 3),
image_shape=(batch_size, 1, 28, 28),
activation_fn=None)
pool_layer1 = PoolingLayer(input=conv_layer1.output,
activation_fn=act_f)
conv_layer2 = ConvLayer(input=pool_layer1.output,
filter_shape=(nkerns[1], nkerns[0], 5, 5),
image_shape=(batch_size, nkerns[0], 13, 13),
activation_fn=None)
pool_layer2 = PoolingLayer(input=conv_layer2.output,
activation_fn=act_f)
# outputs from convolution network need to be flattend before being
# passed through to the the fully-connected layer
fc_layer_input = pool_layer2.output.flatten(2)
fc_layer1 = FC(input=fc_layer_input,
n_in=nkerns[1] * 4 * 4,
n_out=512,
activation_fn=act_f)
fc_layer2 = FC(input=fc_layer1.output,
n_in=512,
n_out=10,
activation_fn=act_f)
params = fc_layer2.params + fc_layer1.params\
+ conv_layer2.params + conv_layer1.params
cost_input = T.nnet.nnet.softmax(fc_layer2.output)
cost = T.mean(T.nnet.nnet.categorical_crossentropy(cost_input, Y)) \
+ l2_reg(params)
grads = T.grad(cost, params)
def sgd(l_rate, parameters, grads):
updates = []
for param, grad in zip(parameters, grads):
updates.append((param, param - l_rate * grad))
return updates
def momentum(l_rate, parameters, grads, momentum=0.9):
def update_rule(param, velocity, df):
v_next = momentum * velocity - l_rate * df
updates = (param, param+v_next), (velocity, v_next)
return updates
assert momentum <=1 and momentum >= 0
velocities = [theano.shared(name='v_%s' % param,
value=param.get_value() * 0.,
broadcastable=param.broadcastable)
for param in parameters]
updates = []
for p, v, g in zip(parameters, velocities, grads):
param_updates, vel_updates = update_rule(p, v, g)
updates.append(param_updates)
updates.append(vel_updates)
return updates
def rmsprop(l_rate, d_rate=0.9, epsilon=1e-6, parameters=None, grads=None):
one = T.constant(1.0)
def update_rule(param, cache, df):
cache_val = d_rate * cache + (one-d_rate) * df**2
x = l_rate * df / (T.sqrt(cache_val) + epsilon)
updates = (param, param-x), (cache, cache_val)
return updates
caches = [theano.shared(name='c_{}'.format(param),
value=param.get_value() * 0.,
broadcastable=param.broadcastable)
for param in parameters]
updates = []
for p, c, g in zip(parameters, caches, grads):
param_updates, cache_updates = update_rule(p, c, g)
updates.append(param_updates)
updates.append(cache_updates)
return updates
train = theano.function(inputs=[X, Y, lr], outputs=cost,
updates=rmsprop(l_rate=lr, parameters=params,
grads=grads),
allow_input_downcast=True)
# Validation results
pred_result = cost_input.argmax(axis=1)
accu = theano.function(inputs=[X, y], outputs=T.sum(T.eq(pred_result, y)),
allow_input_downcast=True)
pred = theano.function(inputs=[X], outputs=pred_result,
allow_input_downcast=True)
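# NOTE: `scalar_to_vec` is used by `train_model` below but is not defined in
# this excerpt; a minimal assumed one-hot encoder is sketched here.
def scalar_to_vec(label, n_classes):
    vec = np.zeros(n_classes, dtype=theano.config.floatX)
    vec[label] = 1.0
    return vec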
def train_model(training_data, validation_data, test_data=None,
learning_rate=1e-4, epochs=100):
print('---Training Model---')
predicted_results = []
total_values, total_val_values = len(training_data), len(validation_data)
for epoch in range(epochs):
print('Currently on epoch {}'.format(epoch+1))
np.random.shuffle(training_data)
mini_batches = [training_data[k: k+batch_size]
for k in range(0, total_values, batch_size)]
validation_batches = [validation_data[m: m+batch_size]
for m in range(0, total_val_values, batch_size)]
training_cost, accuracy = 0, 0
training_cost_list, accuracy_list = [], []
for mini_batch in mini_batches:
labels = mini_batch[:, 0]
label_matrix = np.zeros(shape=(256, 10), dtype=theano.config.floatX)
for i, label in enumerate(labels):
vec = scalar_to_vec(int(label), 10)
label_matrix[i] = vec
digits = mini_batch[:, 1:]/255
digits = digits.reshape(-1, 1, 28, 28)
cost_ij = train(digits, label_matrix, learning_rate)
training_cost += cost_ij
        for val_batch in validation_batches:
            # evaluate accuracy on the validation batch
            labels = val_batch[:, 0]
            digits = val_batch[:, 1:]/255
            digits = digits.reshape(-1, 1, 28, 28)
            accuracy += accu(digits, labels)
training_cost_list.append(training_cost/total_values)
accuracy_list.append(accuracy/total_val_values)
print('The accuracy is: {}'.format(accuracy/total_val_values))
print('The loss is: {}'.format(training_cost/total_values))
print('--------------------------')
if np.any(test_data):
print('===================================')
print('Using test data to predict results')
total_values = len(test_data)
mini_batches = [test_data[k: k+batch_size]
for k in range(0, total_values, batch_size)]
for mini_batch in mini_batches:
digits = mini_batch[:, :]/255
digits = digits.reshape(-1, 1, 28, 28)
result = pred(digits)
predicted_results = np.append(predicted_results, result)
print('Done')
return training_cost_list, accuracy_list, predicted_results
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The sequences below were generated here.
Step2: We make a list of amplicons (sequences with pairs of primers from the Dseqrecords)
Step3: We need a list of golden gate linkers, these could be generated automatically in some other way.
Step4: we zip together the golden gate linkers and sequences to a flat list.
Step5: The optional settings below are important. Sequences with a size equal to or shorter than maxlink will be incorporated in the primers.
Step6: We get only three sequences, since the golden gate linkers are incorporated in the primers.
Step7: We can assemble these by setting the limit to 4 and using the terminal_overlap algorithm.
Step8: We got three circular products. The second one should be the same as the theoretical one below
Step9: The candidate and the correct sequence has the same cseguid, so they represent the same circular sequence.
|
<ASSISTANT_TASK:>
Python Code:
from pydna.all import *
frags = parse('''
>1|random sequence|A: 0.25|C: 0.25|G: 0.25|T: 0.25|length: 50 bp
ccagaatacagtgccttagatctacggatcgtatctgcgatttggccgat
>2|random sequence|A: 0.25|C: 0.25|G: 0.25|T: 0.25|length: 50 bp
gccctgcttggtagatcaggcgagccaataacattctatagtgtagcctt
>3|random sequence|A: 0.25|C: 0.25|G: 0.25|T: 0.25|length: 50 bp
gagagcgctcctgtttcaatgcttgcaaactctagcagctatactgtagg ''' )
frags
amplicons = [primer_design(f) for f in frags]
golden_gate_linkers = [Dseqrecord(lnk) for lnk in "GAAT GATC AATT GAAT".split()]
golden_gate_linkers
from itertools import chain, zip_longest
seqlist = list( chain.from_iterable( zip_longest(golden_gate_linkers, amplicons)))[:-1]
seqlist
a,b,c = assembly_fragments( seqlist, maxlink=4, overlap=4 )
a.locus, b.locus, c.locus = "sequenceA", "sequenceB", "sequenceC"
a.figure()
b.figure()
c.figure()
from pydna.assembly import terminal_overlap
asm = Assembly((a,b,c), limit=4, algorithm=terminal_overlap)
asm
correct = Dseqrecord("")
for s in seqlist[1:]:
correct += s
correct = correct.looped()
correct.cseguid()
candidate = asm.assemble_circular()[1]
candidate.cseguid()
from Bio.Restriction import BsaI
BsaI.site
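# Prepend a BsaI recognition site plus a single spacer base to each primer so the
# final PCR products can be digested and joined by Golden Gate assembly.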
for f in (a,b,c):
f.forward_primer = BsaI.site + "a" + f.forward_primer
f.reverse_primer = BsaI.site + "a" + f.reverse_primer
print(f.name)
print(f.forward_primer.format("tab"))
print(f.reverse_primer.format("tab"))
print(f.figure())
first_prod = pcr(a.forward_primer, a.reverse_primer, a.template)
first_prod.figure()
first_prod.cut(BsaI)
first_prod.cut(BsaI)[1].seq
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Optimizing Real World Problems
Step12: The Generic Problem Class
Step14: Great. Now that the class and its basic methods are defined, let's extend it for the POM3 problem.
Step21: Utility functions for genetic algorithms.
Step22: Putting it all together and making the GA
Step23: Visualize
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "pwang13"
class O:
    """
    Basic Class which
    - Helps dynamic updates
    - Pretty Prints
    """
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
# Few Utility functions
def say(*lst):
    """Print without going to a new line."""
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
    """Generate a random number between low and high.
    decimals indicates the number of decimal places.
    """
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
    """Shuffle a list."""
random.shuffle(lst)
return lst
class Decision(O):
    """Class indicating a Decision of a problem."""
def __init__(self, name, low, high):
        """
        @param name: Name of the decision
        @param low: minimum value
        @param high: maximum value
        """
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
    """Class indicating an Objective of a problem."""
def __init__(self, name, do_minimize=True, low=0, high=1):
        """
        @param name: Name of the objective
        @param do_minimize: Flag indicating if objective has to be minimized or maximized
        """
O.__init__(self, name=name, do_minimize=do_minimize, low=low, high=high)
def normalize(self, val):
return (val - self.low)/(self.high - self.low)
class Point(O):
    """Represents a member of the population."""
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions[:])
new.objectives = self.objectives[:]
return new
class Problem(O):
    """Class representing a generic optimization problem."""
def __init__(self, decisions, objectives):
        """
        Initialize Problem.
        :param decisions - Metadata for Decisions
        :param objectives - Metadata for Objectives
        """
O.__init__(self)
self.decisions = decisions
self.objectives = objectives
@staticmethod
def evaluate(point):
assert False
return point.objectives
@staticmethod
def is_valid(point):
return True
def generate_one(self, retries = 20):
        for _ in range(retries):
point = Point([random_value(d.low, d.high) for d in self.decisions])
if self.is_valid(point):
return point
raise RuntimeError("Exceeded max runtimes of %d" % 20)
class POM3(Problem):
from pom3.pom3 import pom3 as pom3_helper
helper = pom3_helper()
def __init__(self):
        """Initialize the POM3 classes."""
names = ["Culture", "Criticality", "Criticality Modifier", "Initial Known",
"Inter-Dependency", "Dynamism", "Size", "Plan", "Team Size"]
lows = [0.1, 0.82, 2, 0.40, 1, 1, 0, 0, 1]
highs = [0.9, 1.20, 10, 0.70, 100, 50, 4, 5, 44]
# TODO 2: Use names, lows and highs defined above to code up decision
# and objective metadata for POM3.
decisions = [Decision(n, l, h) for n , l, h in zip(names, lows, highs)]
objectives = [Objective("Cost", True, 0, 1000), Objective("Score", False, 0, 1),
Objective("Completion", False, 0, 1), Objective("Idle", True, 0, 1)]
# objectives = [Objective("Cost", True, 0, 1000), Objective("Score", False, 0, 1),
# Objective("Completion", False, 0, 1), Objective("idle". True, 0, 1)]
Problem.__init__(self, decisions, objectives)
@staticmethod
def evaluate(point):
if not point.objectives:
point.objectives = POM3.helper.simulate(point.decisions)
return point.objectives
pom3 = POM3()
one = pom3.generate_one()
print(POM3.evaluate(one))
def populate(problem, size):
    """Create a Point list of length size."""
population = []
for _ in range(size):
population.append(problem.generate_one())
return population
def crossover(mom, dad):
    """Create a new point which contains decisions from
    the first half of mom and second half of dad.
    """
n = len(mom.decisions)
return Point(mom.decisions[:n//2] + dad.decisions[n//2:])
def mutate(problem, point, mutation_rate=0.01):
    """Iterate through all the decisions in the point
    and if the probability is less than mutation rate
    change the decision (randomly set it between its max and min).
    """
for i, decision in enumerate(problem.decisions):
if random.random() < mutation_rate:
point.decisions[i] = random_value(decision.low, decision.high)
return point
def bdom(problem, one, two):
    """Return whether one dominates two based
    on binary domination.
    """
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
for i, obj in enumerate(problem.objectives):
better = lt if obj.do_minimize else gt
if better(objs_one[i], objs_two[i]):
dominates = True
elif objs_one[i] != objs_two[i]:
return False
return dominates
def fitness(problem, population, point, dom_func):
    """Evaluate fitness of a point based on the definition in the previous block.
    For example, if a point dominates 5 members of the population,
    then the fitness of that point is 5.
    """
return len([1 for another in population if dom_func(problem, point, another)])
def elitism(problem, population, retain_size, dom_func):
    """Sort the population with respect to the fitness
    of the points and return the top 'retain_size' points of the population.
    """
fitnesses = []
for point in population:
fitnesses.append((fitness(problem, population, point, dom_func), point))
    # sort on fitness only, since Point objects are not orderable
    population = [tup[1] for tup in sorted(fitnesses, key=lambda tup: tup[0], reverse=True)]
return population[:retain_size]
def ga(pop_size = 100, gens = 250, dom_func=bdom):
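    # Generational GA: start from a random population, then repeatedly breed
    # children via crossover + mutation and keep the `pop_size` fittest points
    # (fitness = binary-domination count) for the next generation.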
problem = POM3()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size, dom_func)
gen += 1
print("")
return initial_population, population
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[1] for i in initial_objs]
initial_y = [i[2] for i in initial_objs]
final_x = [i[1] for i in final_objs]
final_y = [i[2] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Score")
plt.xlabel("Completion")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga(gens=50)
plot_pareto(initial, final)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's illustrate again with a standard normal base measure. We can construct a function base_measure that generates samples from it.
Step2: Because the normal distribution has continuous support, we can generate samples from it forever and we will never see the same sample twice (in theory). We can illustrate this by drawing from the distribution ten thousand times and seeing that we get ten thousand unique values.
Step3: However, when we feed the base measure through the stochastic memoization procedure and then sample, we get many duplicate samples. The number of unique samples goes up as $\alpha$ increases (and down as $\alpha$ decreases).
Step4: At this point, we have a function dp_draws that returns samples from a probability distribution (specifically, a probability distribution sampled from $\text{DP}(\alpha H_0)$). We can use dp_draws as a base distribution for another Dirichlet process!
Step5: How do we interpret this? norm_dp is a sampler from a probability distribution that looks like the standard normal distribution. norm_hdp is a sampler from a probability distribution that "looks like" the distribution norm_dp samples from.
Step6: And here is a histogram for samples drawn from norm_hdp, our second sampler.
Step7: The second plot doesn't look very much like the first! The level to which a sample from a Dirichlet process approximates the base distribution is a function of the dispersion parameter $\alpha$. Because I set $\alpha=10$ (which is relatively small), the approximation is fairly course. In terms of memoization, a small $\alpha$ value means the stochastic memoizer will more frequently reuse values already seen instead of drawing new ones.
Step8: Since the Hierarchical DP is a Dirichlet Process inside of a Dirichlet process, we must provide it with both a first and second level $\alpha$ value.
Step9: We can sample directly from the probability distribution drawn from the Hierarchical Dirichlet Process.
Step10: norm_hdp is not equivalent to the Hierarchical Dirichlet Process; it samples from a single distribution sampled from this HDP. Each time we instantiate the norm_hdp variable, we are getting a sampler for a unique distribution. Below we sample five times and get five different distributions.
|
<ASSISTANT_TASK:>
Python Code:
from numpy.random import choice
from scipy.stats import beta
class DirichletProcessSample():
def __init__(self, base_measure, alpha):
self.base_measure = base_measure
self.alpha = alpha
self.cache = []
self.weights = []
self.total_stick_used = 0.
def __call__(self):
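        # Stick-breaking draw: reuse a cached value with probability equal to its
        # stick weight; otherwise break off a Beta(1, alpha) fraction of the
        # remaining stick and draw a fresh value from the base measure.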
remaining = 1.0 - self.total_stick_used
i = DirichletProcessSample.roll_die(self.weights + [remaining])
if i is not None and i < len(self.weights) :
return self.cache[i]
else:
stick_piece = beta(1, self.alpha).rvs() * remaining
self.total_stick_used += stick_piece
self.weights.append(stick_piece)
new_value = self.base_measure()
self.cache.append(new_value)
return new_value
@staticmethod
def roll_die(weights):
if weights:
return choice(range(len(weights)), p=weights)
else:
return None
from scipy.stats import norm
base_measure = lambda: norm().rvs()
from pandas import Series
ndraws = 10000
print("Number of unique samples after {} draws:".format(ndraws),)
draws = Series([base_measure() for _ in range(ndraws)])
print(draws.unique().size)
norm_dp = DirichletProcessSample(base_measure, alpha=100)
print("Number of unique samples after {} draws:".format(ndraws),)
dp_draws = Series([norm_dp() for _ in range(ndraws)])
print(dp_draws.unique().size)
norm_hdp = DirichletProcessSample(norm_dp, alpha=10)
import matplotlib.pyplot as plt
Series(norm_dp() for _ in range(10000)).hist()
_=plt.title("Histogram of Samples from norm_dp")
Series(norm_hdp() for _ in range(10000)).hist()
_=plt.title("Histogram of Samples from norm_hdp")
class HierarchicalDirichletProcessSample(DirichletProcessSample):
def __init__(self, base_measure, alpha1, alpha2):
first_level_dp = DirichletProcessSample(base_measure, alpha1)
self.second_level_dp = DirichletProcessSample(first_level_dp, alpha2)
def __call__(self):
return self.second_level_dp()
norm_hdp = HierarchicalDirichletProcessSample(base_measure, alpha1=10, alpha2=20)
Series(norm_hdp() for _ in range(10000)).hist()
_=plt.title("Histogram of samples from distribution drawn from Hierarchical DP")
for i in range(5):
norm_hdp = HierarchicalDirichletProcessSample(base_measure, alpha1=10, alpha2=10)
_=Series(norm_hdp() for _ in range(100)).hist()
_=plt.title("Histogram of samples from distribution drawn from Hierarchical DP")
_=plt.figure()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lesson
Step2: Project 1
Step5: Transforming Text into Numbers
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
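# NOTE: the original notebook builds a vocabulary, a word-to-index mapping and an
# input layer before this point; they are not shown in this excerpt, so a minimal
# assumed version is provided here for the functions below.
vocab = set(total_counts.keys())
vocab_size = len(vocab)
word2index = {word: i for i, word in enumerate(vocab)}
layer_0 = np.zeros((1, vocab_size))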
def update_input_layer(review):
    """
    Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
    # count how many times each word occurs in the review
    # (word2index is the assumed mapping defined above)
    for word in review.split(" "):
        if word in word2index:
            layer_0[0][word2index[word]] += 1
def get_target_for_label(label):
    """
    Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
if label == 'POSITIVE':
return 1
else:
return 0
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Spectral Representations of Natural Images
Step2: Image Upload
Step3: We rescale images to a reasonable resolution, otherwise this would take very long. Note that we will have $h \times w$ nodes in the resulting graph, where $h$ and $w$ are the height and width of the image.
Step4: Helper Functions
Step5: By using a sparse matrix representation of the Laplacian, we save on memory significantly.
Step6: After we have computed the Laplacian, we can compute its eigenvectors.
Step7: The Laplacian is always positive semidefinite.
Step8: Keeping the Top $m$ Components
Step9: Saving Results
Step10: You can download the images from this folder as a zipped folder by running the cells below.
|
<ASSISTANT_TASK:>
Python Code:
#@title License
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
import io
import itertools
import os
import matplotlib.pyplot as plt
import numpy as np
import PIL
import scipy.sparse
import scipy.sparse.linalg
from google.colab import files
imgs = files.upload()
def open_as_array(img_bytes):
img_pil = PIL.Image.open(io.BytesIO(img_bytes))
img_pil = img_pil.resize((img_width, img_height))
return np.asarray(img_pil)
img_name, img_bytes = list(imgs.items())[0]
img_data = open_as_array(img_bytes)
plt.axis('off')
_ = plt.imshow(img_data)
img_width = 50
img_height = 40
def get_index(x, y, img_width, img_height):
return y * img_width + x;
def get_neighbours(x, y, img_width, img_height):
neighbours_x_pos = [max(0, x - 1), x, min(x + 1, img_width - 1)]
neighbours_y_pos = [max(0, y - 1), y, min(y + 1, img_height - 1)]
    neighbours = itertools.product(neighbours_x_pos, neighbours_y_pos)
neighbours = set(neighbours)
neighbours.discard((x, y))
return neighbours
def compute_sparse_laplacian(img_width, img_height):
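    # Build the graph Laplacian L = D - A of the pixel grid in sparse COO form:
    # each pixel is a node joined to its (up to 8) neighbours, every neighbour
    # pair contributes a -1 off-diagonal entry, and each diagonal entry holds
    # the node's degree.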
neighbours_fn = functools.partial(get_neighbours,
img_width=img_width, img_height=img_height)
index_fn = functools.partial(get_index,
img_width=img_width, img_height=img_height)
senders = []
recievers = []
values = []
for x in range(img_width):
for y in range(img_height):
pos = (x, y)
pos_index = index_fn(*pos)
degree = 0.
for neighbour in neighbours_fn(*pos):
neigh_index = index_fn(*neighbour)
senders.append(pos_index)
recievers.append(neigh_index)
values.append(-1.)
degree += 1.
senders.append(pos_index)
recievers.append(pos_index)
values.append(degree)
num_nodes = img_width * img_height
laplacian_shape = (num_nodes, num_nodes)
    return scipy.sparse.coo_matrix((values, (senders, recievers)), shape=laplacian_shape)
laplacian = compute_sparse_laplacian(img_width, img_height)
num_eigenvecs = 1500
v0 = np.ones(img_width * img_height)
eigenvals, eigenvecs = scipy.sparse.linalg.eigsh(laplacian, k=num_eigenvecs,
which='SM', v0=v0)
assert np.all(eigenvals >= 0)
plt.hist(eigenvals, bins=100)
plt.title('Histogram of Laplacian Eigenvalues')
plt.show()
def keep_first_components(img_data, num_components):
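    # Project each colour channel onto the first `num_components` Laplacian
    # eigenvectors and reconstruct the image from those spectral coefficients.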
orig_shape = img_data.shape
img_reshaped = np.reshape(img_data, (-1, 3))
chosen_eigenvecs = eigenvecs[:, :num_components]
spectral_coeffs = chosen_eigenvecs.T @ img_reshaped
upd_img_data_reshaped = chosen_eigenvecs @ spectral_coeffs
return np.reshape(upd_img_data_reshaped, orig_shape).astype(int)
plt.axis('off')
plt.imshow(keep_first_components(img_data, 200))
plt.savefig('test.png', bbox_inches='tight', pad_inches=0)
save_dir = 'processed'
os.mkdir(save_dir)
for img_name, img_bytes in imgs.items():
base_name = os.path.basename(img_name).split('.')[0]
    img_data = open_as_array(img_bytes)
for num_components in [1, 2, 5, 10, 20, 100, 200, 500]:
upd_img_data = keep_first_components(img_data, num_components)
upd_img_name = f'{base_name}-{num_components}.png'
plt.axis('off')
plt.imshow(upd_img_data)
_ = plt.savefig(f'{save_dir}/{upd_img_name}', bbox_inches='tight',
pad_inches=0)
!zip -r processed.zip processed
files.download('processed.zip')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now for a bit of exploratory data analysis so we can get to know our data
Step2: Plot the data
Step3: I'm sure there are more creative and informative ways to plot the data, but for now it's time to move on.
Step4: Find the relevant features
Step5: Try various machine learning algorithms
Step6: How to format the data for the Kaggle contest submission based on the sampleSubmission.csv file
Step7: sklearn.neighbors.KNeighborsRegressor
Step8: sklearn.linear_model.LinearRegression
Step9: sklearn.neural_network.MLPRegressor
Step10: Restaurant Revenue Prediction Kaggle solution
Step11: Concatenate the train and test data together into a single dataframe to pre-process and featurize both consistently
Step12: Now is the time for us to impute values for the rare restaurant types (DT and MB).
Step13: Here we can define and train a model to impute restaurant type.
Step14: Now we can binarize the "P" columns with dummy variables
Step15: To finish up our data preprocessing, we need to scale all input features to between 0 and 1 (this is especially important for KNN or SVM (SVR) models).
Step16: Now we can define and train a Ridge Regression model.
Step17: So, now we're ready for our final submission to Kaggle
Step18: One last quick comparison.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import display
%matplotlib inline
train_data = pd.read_csv("train.csv")
train_data = train_data.drop('Id', axis=1)
test_data = pd.read_csv("test.csv")
test_data = test_data.drop('Id', axis=1)
display(train_data[:10])
display(test_data[:10])
train_data.describe()
test_data.describe()
train_data.head()
train_data.tail()
train_data.sample(5)
train_data.keys()
test_data.keys()
test_data.keys()
feature_columns = train_data[['P1', 'P2', 'P3', 'P4',
'P5', 'P6', 'P7', 'P8', 'P9', 'P10', 'P11', 'P12', 'P13', 'P14', 'P15',
'P16', 'P17', 'P18', 'P19', 'P20', 'P21', 'P22', 'P23', 'P24', 'P25',
'P26', 'P27', 'P28', 'P29', 'P30', 'P31', 'P32', 'P33', 'P34', 'P35',
'P36', 'P37']]
feature_columns.plot.box(figsize=(20, 20))
# Cribbed from https://www.kaggle.com/ani310/restaurant-revenue-prediction/restaurant-revenue
# Format the data so that dates are easier to work with.
# Create a column that contains data about the number of days the restaurant has been open.
# Remove the column that has the restaurant's opening date.
train_data['Open Date'] = pd.to_datetime(train_data['Open Date'], format='%m/%d/%Y')
test_data['Open Date'] = pd.to_datetime(test_data['Open Date'], format='%m/%d/%Y')
train_data['OpenDays'] = ""
test_data['OpenDays'] = ""
date_last_train = pd.DataFrame({'Date':np.repeat(['01/01/2015'], [len(train_data)])})
date_last_test = pd.DataFrame({'Date':np.repeat(['01/01/2015'], [len(test_data)])})
date_last_train['Date'] = pd.to_datetime(date_last_train['Date'], format='%m/%d/%Y')
date_last_test['Date'] = pd.to_datetime(date_last_test['Date'], format='%m/%d/%Y')
train_data['OpenDays'] = date_last_train['Date'] - train_data['Open Date']
test_data['OpenDays'] = date_last_test['Date'] - test_data['Open Date']
train_data['OpenDays'] = train_data['OpenDays'].astype('timedelta64[D]').astype(int)
test_data['OpenDays'] = test_data['OpenDays'].astype('timedelta64[D]').astype(int)
train_data = train_data.drop('Open Date', axis=1)
test_data = test_data.drop('Open Date', axis=1)
# Compare the revenue generated by the restaurants in Big Cities vs Other:
city_perc = train_data [["City Group", "revenue"]].groupby(['City Group'], as_index=False).mean()
sns.barplot(x='City Group', y='revenue', data=city_perc)
plt.title("Revenue by city size")
# Convert data from 'City Group' and create columns of indicator variables for 'Big Cities' or 'Other':
city_group_dummy = pd.get_dummies(train_data['City Group'])
train_data = train_data.join(city_group_dummy)
city_group_dummy_test = pd.get_dummies(test_data['City Group'])
test_data = test_data.join(city_group_dummy_test)
train_data = train_data.drop('City Group', axis=1)
test_data = test_data.drop('City Group', axis=1)
# Create scatterplot showing how long a restaurant has been open impacts revenue.
# This will also show any outliers.
plt.scatter(train_data['OpenDays'], train_data['revenue'])
plt.xlabel("Days Open")
plt.ylabel("Revenue")
plt.title("Restaurant revenue by location age")
from sklearn.feature_selection import SelectFromModel
# from sklearn.linear_model import LassoCV
from sklearn.ensemble import ExtraTreesClassifier
# drop the target column so revenue is not used as a feature
X_train = train_data.iloc[:, 2:].drop('revenue', axis=1)
y = train_data['revenue']
print("X_train.shape: {}".format(X_train.shape))
clf = ExtraTreesClassifier()
clf = clf.fit(X_train, y)
print("clf.feature_.importances_: \n{}".format(clf.feature_importances_))
model = SelectFromModel(clf, prefit=True)
print(model)
X_train_new = model.transform(X_train)
print("X_train_new.shape: {}".format(X_train_new.shape))
X_train_new = pd.DataFrame(X_train_new)
print(X_train_new[:5])
from sklearn.ensemble import RandomForestRegressor
# Tweak seaborn visualizations and adapt to Jupyter notebooks:
sns.set_context("notebook", font_scale=1.1)
sns.set_style("ticks")
# Make dataframes for train and test:
X_train = pd.DataFrame({'OpenDaysLog':train_data['OpenDays'].apply(np.log),
'Big Cities':train_data['Big Cities'], 'Other':train_data['Other'],
'P2':train_data['P2'], 'P8':train_data['P8'], 'P22':train_data['P22'],
'P24':train_data['P24'], 'P28':train_data['P28'], 'P26':train_data['P26']})
y_train = train_data['revenue'].apply(np.log)
X_test = pd.DataFrame({'OpenDaysLog':test_data['OpenDays'].apply(np.log),
'Big Cities':test_data['Big Cities'], 'Other':test_data['Other'],
'P2':test_data['P2'], 'P8':test_data['P8'], 'P22':test_data['P22'],
'P24':test_data['P24'], 'P28':test_data['P28'], 'P26':test_data['P26']})
# Time to build the models and make some predictions:
from sklearn import linear_model
cls = RandomForestRegressor(n_estimators=150)
cls.fit(X_train, y_train)
pred = cls.predict(X_test)
pred = np.exp(pred)
pred
cls.score(X_train, y_train)
test_data = pd.read_csv("test.csv")
submission = pd.DataFrame({
"Id": test_data["Id"],
"Prediction": pred
})
# submission.to_csv('RandomForestSimple.csv', header=True, index=False)
from sklearn.neighbors import KNeighborsRegressor
# Use dataframes from sklearn.ensemble.RandomForestRegressor example above.
knn_cls = KNeighborsRegressor(n_neighbors=2)
knn_cls.fit(X_train, y_train)
knn_pred = knn_cls.predict(X_test)
knn_pred = np.exp(knn_pred)
knn_cls.score(X_train, y_train)
from sklearn.linear_model import LinearRegression
# Use dataframes from sklearn.ensemble.RandomForestRegressor example above.
lr_cls = LinearRegression()
lr_cls.fit(X_train, y_train)
lr_pred = lr_cls.predict(X_test)
lr_pred = np.exp(lr_pred)
lr_cls.score(X_train, y_train)
from sklearn.neural_network import MLPRegressor
mlp_cls = MLPRegressor(solver='lbfgs')
mlp_cls.fit(X_train, y_train)
mlp_pred = mlp_cls.predict(X_test)
mlp_pred = np.exp(mlp_pred)
mlp_cls.score(X_train, y_train)
import datetime
%pylab inline
from sklearn.model_selection import LeaveOneOut
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import ExtraTreesClassifier
# Regressors considered:
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
# Regressor chosen by the author for final submission:
from sklearn.linear_model import Ridge
# Kaggle added ~311.5 "fake" data points to the test for each real data point.
# Dividing by this number gives more accurate counts of the "real" data in the test set.
FAKE_DATA_RATIO = 311.5
# Set a random seed:
SEED = 0
# Read in the data provided by Kaggle:
train = pd.read_csv('train.csv', index_col=0, parse_dates=[1])
test = pd.read_csv('test.csv', index_col=0, parse_dates=[1])
print("Training data dimensions: \n{}".format(train.shape))
print("Test data dimensions: \n{}".format(test.shape))
df = pd.concat((test, train), ignore_index=True)
df.describe()
# Convert date strings to "days open" numerical value:
df["Open Date"] = df["Open Date"].apply(pd.to_datetime)
last_date = df["Open Date"].max()
# Create a datetime delta object:
df["Open Date"] = last_date - df["Open Date"]
# Convert the delta object to an int:
df["Open Date"] = df["Open Date"].dt.days + 1
# Scale "days since opening" so that the marginal impact decreases over time.
# This and the similar log transform of City Count below are the modifications
# that were not in the official competition submission.
df["Log Days Opened"] = df["Open Date"].apply(np.log)
df = df.drop(["Open Date"], axis=1)
# Resize plots:
pylab.rcParams['figure.figsize'] = (8, 6)
df[["Log Days Opened", "revenue"]].plot(x="Log Days Opened", y="revenue",
kind='scatter', title="Log (Days Opened) vs Revenue")
# There is a certain set of columns that are either all zero or all non-zero.
# We have added a feature to mark this -- the 'zeros' feature will be 17 for
# these rows and 0 or 1 for the rows which are rarely or never zero.
# Here are the features with the notable zero behavior:
zero_cols = ['P14', 'P15', 'P16', 'P17', 'P18', 'P24', 'P25', 'P26', 'P27',
'P30', 'P31', 'P32', 'P33', 'P34', 'P35', 'P36', 'P37']
# We make a feature that holds this count of zero columns in the above list:
df['zeros'] = (df[zero_cols] == 0).sum(1)
pylab.rcParams['figure.figsize'] = (20, 8)
fig, axs = plt.subplots(1,2)
fig.suptitle("Distribution of new Zeros features:", fontsize=18)
# There is only one row with a zero count between 0 and 17 in the training set:
df['zeros'].ix[pd.notnull(df.revenue)].value_counts().plot(
title="Training Set", kind='bar', ax=axs[0])
# In the test set, however, there are many rows with an intermediate count of zeros.
# This is probably an artifact of how the fake test data was generated, and might
# indicate that conditional dependence between columns was not preserved.
df['zeros'].ix[pd.isnull(df.revenue)].value_counts().plot(
title="Test Set", kind='bar', ax=axs[1], color='red')
# Here we convert two categorical variables, "Restaurant Type", and "City
# Group (Size)" to dummy variables:
pylab.rcParams['figure.figsize'] = (6, 4)
# The two categories of City Group both appear very frequently:
train["City Group"].value_counts().plot(
title="City Group Distribution in the Training Set", kind='bar')
# Two of the four Restaurant Types (DT and MB) are very rare:
train["Type"].value_counts().plot(
title="Restaurant Type Distribution in the Training Set", kind='bar')
(test["Type"].value_counts() / FAKE_DATA_RATIO).plot(
title="Approximate Restaurant Type Distribution in True Test Set",
kind='bar', color='red')
df = df.join(pd.get_dummies(df['City Group'], prefix="CG"))
df = df.join(pd.get_dummies(df['Type'], prefix="T"))
# Since only n-1 columns are needed to binarize n categories, drop one
# of the new columns and drop the original columns.
# In addition, drop the rare restaurant types.
df = df.drop(["City Group", "Type", "CG_Other", "T_MB", "T_DT"], axis=1)
print(df.shape)
df.describe(include='all')
# Replace city names with the count of their frequency in the training +
# estimated frequency in the test set.
city_counts = (test["City"].value_counts() /
FAKE_DATA_RATIO).add(train["City"].value_counts(), fill_value=0)
df["City"] = df["City"].replace(city_counts)
print("Some example estimated counts of restaurants per city: \n{}".format(
city_counts.head()))
# Take the natural logarithm of city count so that the marginal effect decreases:
df["Log City Count"] = df["City"].apply(np.log)
df = df.drop(["City"], axis=1)
# The last vertical spread of points below are restaurants in Istanbul.
pylab.rcParams['figure.figsize'] = (8, 6)
df[["Log City Count", "revenue"]].plot(x="Log City Count", y="revenue",
kind='scatter', title="Log City Count vs Revenue")
# tofit are the rows in the training set that belong to one of the common restaurant types:
tofit = df.ix[((df.T_FC==1) | (df.T_IL==1)) & (pd.notnull(df.revenue))]
# tofill are rows in either train or test that belong to one of the rare types:
tofill = df.ix[((df.T_FC==0) & (df.T_IL==0))]
print("Type training set shape: \n{}".format(tofit.shape))
print("Data to impute: \n{}".format(tofill.shape))
# Restaurants with type FC are labeled 1, those with type IL are labeled 0.
y = tofit.T_FC
# Drop the label columns and revenue (which is not in the test set):
X = tofit.drop(["T_FC", "T_IL", "revenue"], axis=1)
model_grid = {'max_depth': [None, 8], 'min_samples_split': [4,9,16],
'min_samples_leaf': [1,4], 'max_features': ['sqrt', 0.5, None]}
type_model = ExtraTreesClassifier(n_estimators=25, random_state=SEED)
grid = RandomizedSearchCV(type_model, model_grid, n_iter=10, cv=5, scoring="roc_auc")
grid.fit(X, y)
print("Best parameters for Type Model: \n{}".format(grid.best_params_))
type_model.set_params(**grid.best_params_)
type_model.fit(X, y)
imputations = type_model.predict(tofill.drop(["T_FC", "T_IL", "revenue"], axis=1))
df.loc[(df.T_FC==0) & (df.T_IL==0), "T_FC"] = imputations
df = df.drop(["T_IL"], axis=1)
df[:7]
print("% labeled FC in the training set: \n{}".format(df.T_FC.mean()))
print("% of imputed values labeled FC: \n{}".format(np.mean(imputations)))
print("Pre-binarizing columns: {}".format(len(df.columns)))
for col in df.columns:
if col[0] == 'P':
print(col, len(df[col].unique()), "Unique Values")
df = df.join(pd.get_dummies(df[col], prefix=col))
df = df.drop([col, df.columns[-1]], axis=1)
print("Post-binarizing columns: {}".format(len(df.columns)))
min_max_scaler = MinMaxScaler()
rev = df.revenue
df = df.drop(['revenue'], axis=1)
df = pd.DataFrame(data=min_max_scaler.fit_transform(df), columns=df.columns, index=df.index)
df = df.join(rev)
# Now that preprocessing is finished, let's have a look at the data before modeling with it:
df.describe()
# Recover the original train and test rows based on revenue (which is null for test rows)
train = df.ix[pd.notnull(df.revenue)]
test = df.ix[pd.isnull(df.revenue)].drop(['revenue'], axis=1)
# Scale revenue by sqrt.
# The reason is to decrease the influence of the few very large revenue values.
y = train.revenue.apply(np.sqrt)
X = train.drop(["revenue"], axis=1)
model_grid = [{'normalize': [True, False], 'alpha': np.logspace(0,10)}]
model = Ridge()
# Use a grid search and leave-one-out CV on the train set to find the best regularization parameter to use.
grid = GridSearchCV(model, model_grid, scoring='neg_mean_squared_error')
grid.fit(X, y)
print("Best parameters set found on development set: \n{}".format(
grid.best_params_))
# Retrain model on the full training set using the best parameters found in the last step:
model.set_params(**grid.best_params_)
model.fit(X, y)
# Predict on the test set using the trained model:
submission = pd.DataFrame(columns=['Prediction'], index=test.index,
data=model.predict(test))
# Convert back to revenue from sqrt(revenue):
submission.Prediction = submission.Prediction.apply(np.square)
submission.Prediction[:7]
# Add required column name for Kaggle's submission parser:
submission.index.name='Id'
# Write out the submission:
# submission.to_csv("TFI_Ridge.csv")
# Quick sanity check on the submission:
submission.describe().astype(int)
# Revenue from training set for comparison:
train[['revenue']].describe().astype(int)
train[['revenue']].plot(kind='kde', title="Training Set Revenue Distribution")
submission.columns = ["predicted revenue"]
submission.plot(kind='kde', title="Prediction Revenue Distribution", color='red')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Here is the code from previous chapters that we'll reuse.
Step5: In the previous chapter we defined metrics that quantify the performance of this bike sharing system. In this chapter we see how those metrics depend on the parameters of the system, like the rate at which customers arrive at the bike stations.
Step6: When you run State, it returns a new State object
Step7: Not all functions have return values. For example, when you run step, it updates the State object but does not return a value.
Step8: add_five takes a parameter, x, which could be any number. It computes x + 5 and returns the result.
Step9: As a more useful example, here's a version of run_simulation that
Step10: We can call run_simulation like this
Step11: The result is a State object that represents the final state of the system, including the metrics we'll use to evaluate the performance of the system
Step12: The simulation we just ran starts with olin=10 and wellesley=2, and uses the values p1=0.3, p2=0.2, and num_steps=60.
Step13: The arguments indicate where the sequence should start and stop, and how many elements it should contain.
Step14: When this loop runs, it assigns each element of the array to p1, one at a time, and runs the body of the loop.
Step15: Each time through the loop, we run a simulation with a different value of p1.
Step16: And add values like this
Step17: The result is a SweepSeries that maps from each value of p1 to the resulting number of unhappy customers at Olin.
Step18: NumPy provides functions that compute a variety of summary statistics, like mean, median, and std (which computes standard deviation).
Step19: In this example, computing the mean might not be useful, but in the exercises below, it will be.
Step20: Exercise
Step21: Exercise
Step22: Exercise
Step23: Optional Exercises
Step24: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
def step(state, p1, p2):
    """Simulate one time step.

    state: bikeshare State object
    p1: probability of an Olin->Wellesley ride
    p2: probability of a Wellesley->Olin ride
    """
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
def bike_to_olin(state):
    """Move one bike from Wellesley to Olin.

    state: bikeshare State object
    """
if state.wellesley == 0:
state.wellesley_empty += 1
return
state.wellesley -= 1
state.olin += 1
def bike_to_wellesley(state):
    """Move one bike from Olin to Wellesley.

    state: bikeshare State object
    """
if state.olin == 0:
state.olin_empty += 1
return
state.olin -= 1
state.wellesley += 1
from numpy import sqrt
root_2 = sqrt(2)
root_2
bikeshare = State(olin=10, wellesley=2)
bikeshare
def add_five(x):
return x + 5
add_five(3)
def run_simulation(p1, p2, num_steps):
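    """Simulate the bikeshare system.

    p1: probability of an Olin->Wellesley customer arrival
    p2: probability of a Wellesley->Olin customer arrival
    num_steps: number of time steps

    returns: State object with the final counts and metrics
    """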
state = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0)
for i in range(num_steps):
step(state, p1, p2)
return state
final_state = run_simulation(0.3, 0.2, 60)
print(final_state.olin_empty,
final_state.wellesley_empty)
from numpy import linspace
p1_array = linspace(0, 1, 5)
p1_array
for p1 in p1_array:
print(p1)
p1_array = linspace(0, 0.6, 6)
p2 = 0.2
num_steps = 60
for p1 in p1_array:
final_state = run_simulation(p1, p2, num_steps)
print(p1, final_state.olin_empty)
sweep = SweepSeries()
for p1 in p1_array:
final_state = run_simulation(p1, p2, num_steps)
sweep[p1] = final_state.olin_empty
sweep.plot(label='Olin')
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Customer rate at Olin (p1 in customers/min)',
ylabel='Number of unhappy customers')
from numpy import mean
mean(sweep)
# Solution
def make_state():
state = State(olin=10, wellesley=2)
return state
# Solution
init = make_state()
# Solution
p1_array = linspace(0, 1, 101)
p1_array
# Solution
def sweep_p1(p1_array):
p2 = 0.2
num_steps = 60
sweep = SweepSeries()
for p1 in p1_array:
state = run_simulation(p1, p2, num_steps)
sweep[p1] = state.olin_empty
return sweep
# Solution
p1_array = linspace(0, 1, 101)
sweep = sweep_p1(p1_array)
sweep.plot(label='Olin')
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Customer rate at Olin (p1 in customers/min)',
ylabel='Number of unhappy customers')
# Solution
def sweep_p2(p2_array):
p1 = 0.5
num_steps = 60
sweep = SweepSeries()
for p2 in p2_array:
state = run_simulation(p1, p2, num_steps)
sweep[p2] = state.olin_empty
return sweep
# Solution
p2_array = linspace(0, 1, 101)
sweep = sweep_p2(p2_array)
sweep.plot(label='Olin')
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Customer rate at Wellesley (p2 in customers/min)',
ylabel='Number of unhappy customers')
# Solution
def run_multiple_simulations(p1, p2, num_steps, num_runs):
totals = TimeSeries()
for i in range(num_runs):
state = run_simulation(p1, p2, num_steps)
totals[i] = state.olin_empty + state.wellesley_empty
return totals
# Solution
p1 = 0.3
p2 = 0.3
num_steps = 60
num_runs = 10
totals = run_multiple_simulations(p1, p2, num_steps, num_runs)
show(totals)
# Solution
mean(totals)
# Solution
p1_array = linspace(0, 1, 20)
p2 = 0.3
num_steps = 60
num_runs = 20
sweep = SweepSeries()
for p1 in p1_array:
totals = run_multiple_simulations(p1, p2, num_steps, num_runs)
sweep[p1] = mean(totals)
# Solution
sweep.plot(label='total', color='green')
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Customer rate at Olin (p1 in customers/min)',
ylabel='Average total unhappy customers')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Clean up the wadiz_df data
Step2: 3. Process the Project_money data
Step3: 4. Merge Project_money_all and Wadiz_df
Step4: 5. Add data
Step5: 5. Comment Crawling
Step6: Spell check
Step7: Select only projects that have comments from the project creator
|
<ASSISTANT_TASK:>
Python Code:
wadiz_df = pd.DataFrame(columns=["project_id", "title", "area", "category", "target", "result", "duration", "comment_all", "comment_user",
"comment_provider", "money_supporter", "sign_supporter"])
project_money_all = pd.DataFrame()
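# Crawl each Wadiz project page: project-level info (title, area, category,
# funding target/result, comment and supporter counts) from the detail page,
# and individual pledge amounts/dates from the backer page.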
for page in range(1, 100):
try:
project_id = page
response = requests.get("http://www.wadiz.kr/web/campaign/detail/{page_num}".format(page_num=project_id))
#print(project_id)
dom = BeautifulSoup(response.content, "html.parser")
title_1 = dom.select("div.wd-ui-title-wrap h1.wd-h1")
title = title_1[0].text
area_1 = dom.select("div.wd-ui-campaign-info li.wd-data-area")
area = area_1[0].text
category_1 = dom.select("div.wd-ui-campaign-info li.wd-data-tag")
category = category_1[0].text
target_1 = dom.select("div.wd-info-target em.wd-data-target")
target = target_1[0].text
result_1 = dom.select("div.wd-ui-target-old span.wd-data-collection")
result = result_1[0].find("em").text
comment_all_1 = dom.select_one("div.wd-ui-tab-wrap")
comment_all = comment_all_1.find_all("li")[1].text[18:-3]
comment_provider_1 = dom.select("ul.wd-list-reply")
comment_provider = len(comment_provider_1)
comment_user = int(comment_all) - comment_provider
number_join_all = dom.select_one("li.wd-last").text[17:-6]
number_money_1 = dom.select("li.wd-data-money")
number_money = number_money_1[0].text[6:-1][:-1]
number_supporter_1 = dom.select("li.wd-data-sign")
number_supporter = number_supporter_1[0].text[6:-1][:-1]
duration_1 = dom.select("li.wd-data-date")
duration = duration_1[0].text[-23:]
#print(title, area, category, target, result)
wadiz_df.loc[len(wadiz_df)] = [project_id, title, area, category, target, result, duration, comment_all,
comment_user, comment_provider, number_money, number_supporter]
p_id = page
response_1 = requests.get("http://www.wadiz.kr/web/campaign/detailBacker/{project_num}".format(project_num = p_id))
dom_1 = BeautifulSoup(response_1.content, "html.parser")
dom_1.select("span.wd-data-sponsor")
a = pd.Series(dom_1.select("span.wd-data-sponsor strong")[1::2])
b = a.apply(lambda x: x.text[97:-93])
b = b.apply(lambda x: x.replace(",", ""))
time = dom_1.select("span.wd-data-sponsor script")
date = pd.Series()
for i in time:
date_1 = i.text[67:-53]
date.loc[len(date)] = date_1
date = date.apply(lambda x: x[0:10])
p = pd.DataFrame(columns=["project_id"])
project_money = pd.concat([p, b, date], axis=1).fillna(p_id)
project_money = project_money.rename(columns={0 : "funding_money", 1: "funding_date"})
project_money = project_money.loc[project_money["funding_money"] != ""]
project_money_all = project_money_all.append(project_money)
#print(p_id)
except:
continue
#project_money_all = project_money_all[project_money_all['funding_money'] != ""]
project_money_all.index = np.arange(len(project_money_all))
# Save the data
wadiz_df.to_csv('wadiz_df_0329.csv', encoding='utf-8')
project_money_all.to_csv('project_money_all_0329.csv')
# Keep only projects whose final funded amount is greater than 0
wadiz_df = wadiz_df[wadiz_df["result"] > 0]
# Process the date column
date = np.vstack(wadiz_df.duration.astype(str).apply(lambda x: list(map(str, x.split('-')))).values)
wadiz_df["date_start"] = date[:,0]
wadiz_df["date_end"] = date[:,1]
wadiz_df.drop("duration", axis=1, inplace=True)
# Handle date outliers
wadiz_df = wadiz_df[wadiz_df['date_start'] != '\t\t\t\t\t\t\t\t\t\t ']
wadiz_df["date_start"] = pd.to_datetime(wadiz_df["date_start"])
wadiz_df["date_end"] = pd.to_datetime(wadiz_df["date_end"])
# Add the funding duration (date_duration)
wadiz_df["date_duration"] = wadiz_df["date_end"] - wadiz_df["date_start"]
wadiz_df.head()
# Extract year and month
wadiz_df['year'] = wadiz_df['date_start'].apply(lambda x : x.year)
wadiz_df['month'] = wadiz_df['date_start'].apply(lambda x: x.month)
# Convert Korean labels to English
wadiz_df["area"][wadiz_df["area"] == u'서울특별시'] = 'seoul'
wadiz_df["area"][wadiz_df["area"] == u'경기도'] = 'kyungki'
wadiz_df["area"][wadiz_df["area"] == u'부산광역시'] = 'busan'
wadiz_df["area"][wadiz_df["area"] == u'인천광역시'] = 'incheon'
wadiz_df["area"][wadiz_df["area"] == u'경상북도'] = 'kyungbuk'
wadiz_df["area"][wadiz_df["area"] == u'전라북도'] = 'jeonbuk'
wadiz_df["area"][wadiz_df["area"] == u'강원도'] = 'kangwon'
wadiz_df["area"][wadiz_df["area"] == u'대구광역시'] = 'deagu'
wadiz_df["area"][wadiz_df["area"] == u'충청남도'] = 'chungnam'
wadiz_df["area"][wadiz_df["area"] == u'충청북도'] = 'chungbuk'
wadiz_df["area"][wadiz_df["area"] == u'대전광역시'] = 'deajeon'
wadiz_df["area"][wadiz_df["area"] == u'광주광역시'] = 'gwangju'
wadiz_df["area"][wadiz_df["area"] == u'경상남도'] = 'kyungnam'
wadiz_df["area"][wadiz_df["area"] == u'제주특별자치도'] = 'jeju'
wadiz_df["area"][wadiz_df["area"] == u'울산광역시'] = 'ulsan'
wadiz_df["area"][wadiz_df["area"] == u'전라남도'] = 'jeonnam'
wadiz_df["area"][wadiz_df["area"] == u'세종특별자치시'] = 'sejong'
wadiz_df["category"][wadiz_df["category"] == u"나눔/공익"] = 'share/public'
wadiz_df["category"][wadiz_df["category"] == u"라이프/패션"] = 'life/fashion'
wadiz_df["category"][wadiz_df["category"] == u"테크/디자인"] = 'tech/design'
wadiz_df["category"][wadiz_df["category"] == u"교육"] = 'education'
wadiz_df["category"][wadiz_df["category"] == u"책/영화"] = 'book/movie'
wadiz_df["category"][wadiz_df["category"] == u"음악/공연"] = 'music/concert'
wadiz_df["category"][wadiz_df["category"] == u"미술/사진/전시"] = 'art/photo/exhibit'
wadiz_df["category"][wadiz_df["category"] == u"환경"] = 'environment'
wadiz_df["category"][wadiz_df["category"] == u"스포츠"] = 'sports'
wadiz_df["category"][wadiz_df["category"] == u"여행"] = 'travel'
wadiz_df["category"][wadiz_df["category"] == u"게임/만화"] = 'game/comics'
wadiz_df["category"][wadiz_df["category"] == u"피규어/웹툰"] = 'figure/webtoon'
# Handle missing/outlier category values
wadiz_df["category"].fillna('etc', inplace = True)
# area, category -> LabelEncoding
le = LabelEncoder()
wadiz_df["category_label"] = le.fit_transform(wadiz_df["category"])
wadiz_df["area_label"] = le.fit_transform(wadiz_df["area"])
# area, category -> OneHotEncoding
category_dummy = pd.get_dummies(wadiz_df['category'], prefix = 'category_label')
area_dummy = pd.get_dummies(wadiz_df['area'], prefix = 'area_label')
month = pd.get_dummies(wadiz_df.month, prefix="month")
year = pd.get_dummies(wadiz_df.year, prefix="year")
wadiz_df = pd.concat([wadiz_df, category_dummy, area_dummy, year, month], axis=1)
# Remove commas
wadiz_df['result'] = wadiz_df['result'].apply(lambda x: x.replace(",", ""))
wadiz_df['target'] = wadiz_df['target'].apply(lambda x: x.replace(",", ""))
# Convert to int
wadiz_df['result'] = wadiz_df['result'].apply(lambda x : int(x))
wadiz_df['target'] = wadiz_df['target'].apply(lambda x : int(x))
# Create funding_rate
# Split into success/fail
wadiz_df["funding_rate"] = wadiz_df["result"] / wadiz_df["target"]
wadiz_df["success"] = wadiz_df["result"] / wadiz_df["target"]
wadiz_df["success"][wadiz_df['funding_rate']>=1] = 1
wadiz_df["success"][wadiz_df['funding_rate']<1] = 0
wadiz_df['project_id'] = wadiz_df['project_id'].apply(lambda x: int(x))
# Create a DataFrame for date calculations
date_difference = pd.merge(project_money_all, wadiz_df, on="project_id")
# Process funding_date
project_money_all["funding_date"] = pd.to_datetime(project_money_all["funding_date"])
date_difference["funding_date"] = pd.to_datetime(date_difference["funding_date"])
date_difference["date_start"] = pd.to_datetime(date_difference["date_start"])
date_difference["funding_date"] - date_difference["date_start"]
# Difference between the project start date and each backer's funding date
project_money_all["date_difference"] = date_difference["funding_date"] - date_difference["date_start"]
# Remove NaN values (outliers)
project_money_all['date_difference'] = project_money_all['date_difference'].fillna('-1')
project_money_all = project_money_all[project_money_all['date_difference'] >= '0 days']
# Convert the date difference to days
project_money_all["date_difference"] = project_money_all["date_difference"].apply(lambda x: int(x)/8.640000e+13)
#project_money_all = project_money_all[project_money_all["date_difference"] >= 0]
project_money_all
type(project_money_all['date_difference'][0])
project_money_all["0day_difference"] = np.ones(len(project_money_all))
# Keep only amounts funded within 0-5 days of the project start
for i in np.arange(6):
number = i
project_money_all["{number}day_difference".format(number = i)] = np.ones(len(project_money_all))
project_money_all["{number}day_difference".format(number = i)][project_money_all["date_difference"] <= number] = "short"
project_money_all["{number}day_difference".format(number = i)][project_money_all["date_difference"] > number] = "long"
project_money_all['funding_money'] = project_money_all['funding_money'].apply(lambda x: int(x))
zero_day = project_money_all.loc[project_money_all["0day_difference"] == "short"]
one_day = project_money_all.loc[project_money_all["1day_difference"] == "short"]
two_day = project_money_all.loc[project_money_all["2day_difference"] == "short"]
three_day = project_money_all.loc[project_money_all["3day_difference"] == "short"]
four_day = project_money_all.loc[project_money_all["4day_difference"] == "short"]
five_day = project_money_all.loc[project_money_all["5day_difference"] == "short"]
zero_day = zero_day.groupby("project_id", as_index=False).sum()
one_day = one_day.groupby("project_id", as_index=False).sum()
two_day = two_day.groupby("project_id", as_index=False).sum()
three_day = three_day.groupby("project_id", as_index=False).sum()
four_day = four_day.groupby("project_id", as_index=False).sum()
five_day = five_day.groupby("project_id", as_index=False).sum()
zero_day = zero_day.rename(columns={"funding_money" : "0day_funding_money"})
one_day = one_day.rename(columns={"funding_money" : "1day_funding_money"})
two_day = two_day.rename(columns={"funding_money" : "2day_funding_money"})
three_day = three_day.rename(columns={"funding_money" : "3day_funding_money"})
four_day = four_day.rename(columns={"funding_money" : "4day_funding_money"})
five_day = five_day.rename(columns={"funding_money" : "5day_funding_money"})
zero_day = zero_day.rename(columns={"date_difference" : "0day_date"})
one_day = one_day.rename(columns={"date_difference" : "1day_date"})
two_day = two_day.rename(columns={"date_difference" : "2day_date"})
three_day = three_day.rename(columns={"date_difference" : "3day_date"})
four_day = four_day.rename(columns={"date_difference" : "4day_date"})
five_day = five_day.rename(columns={"date_difference" : "5day_date"})
wadiz_df = pd.merge(wadiz_df, zero_day, on = "project_id", how='outer')
wadiz_df = pd.merge(wadiz_df, one_day, on = "project_id", how='outer')
wadiz_df = pd.merge(wadiz_df, two_day, on = "project_id", how='outer')
wadiz_df = pd.merge(wadiz_df, three_day, on = "project_id", how='outer')
wadiz_df = pd.merge(wadiz_df, four_day, on = "project_id", how='outer')
wadiz_df = pd.merge(wadiz_df, five_day, on = "project_id", how='outer')
# Check for NaN values
# NaN means no amount was funded within 0-5 days
for i in wadiz_df.columns:
column = i
print(len(wadiz_df.loc[wadiz_df["{column}".format(column = i)].isnull() == True]))
wadiz_df.fillna(0, inplace=True)
# Re-check for NaN values
for i in wadiz_df.columns:
column = i
print(len(wadiz_df.loc[wadiz_df["{column}".format(column = i)].isnull() == True]))
# Create the funding_rate columns
for i in np.arange(6):
number = i
wadiz_df["{number}day_funding_rate".format(number = i)] = \
wadiz_df["{number}day_funding_money".format(number = i)]/wadiz_df["target"]
# Check how many values have funding_rate >= 1
for i in np.arange(6):
number = i
print(len(wadiz_df.loc[wadiz_df["{number}day_funding_rate".format(number = i)] >= 1]))
# funding_rate -> log scale
for i in np.arange(6):
number = i
wadiz_df["{number}day_log_funding_rate".format(number = i)] = wadiz_df["{number}day_funding_rate"\
.format(number = i)].apply(lambda x: np.log(x))
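# Illustrative aside (not part of the original pipeline): rows with zero funding in
# the window give log(0) = -inf above; np.log1p is one common hedge if those rows
# need to stay finite for later modelling.
np.log1p(wadiz_df["0day_funding_rate"]).describe()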
wadiz_df.to_csv('wadiz_df_0329_1.csv', encoding='utf-8')
project_id = wadiz_df.project_id
user_data = pd.DataFrame(columns=['project_id', 'user_id', 'comment', 'date'])
user_data_all = pd.DataFrame()
provider_data = pd.DataFrame(columns=['project_id', 'provider_id', 'comment', 'date'])
provider_data_all = pd.DataFrame()
for i in project_id[0:]:
project_id_list = i
response = requests.get('https://www.wadiz.kr/web/campaign/detail/qa/{project_id_list}'.format(project_id_list = i))
dom = BeautifulSoup(response.content, 'html.parser')
user_all = dom.select('div.wd-ui-recommend li.')
print(project_id_list)
if len(user_all) == 0:
pass
else:
for number in np.arange(len(user_all)):
user = user_all[number]
user_url = user.select_one('a.wd-data-name').get('href')
user_comment = user.select_one('span').text
try:
user_date = user.select_one('span.wd-data-whenCreated').text
except:
continue
user_data.loc[len(user_data)] = [project_id_list, user_url, user_comment, user_date]
provider_all = dom.select('ul.wd-list-reply')
#print(project_id_list)
for number in np.arange(len(provider_all)):
provider = provider_all[number]
provider_url = provider.select_one('a.wd-data-name').get('href')
provider_comment = provider.select('span')[-2].text
provider_date = provider.select('span')[-1].text
provider_data.loc[len(provider_data)] = [project_id_list, provider_url, provider_comment, provider_date]
user_data_all = user_data_all.append(user_data)
provider_data_all = provider_data_all.append(provider_data)
user_data_all.to_csv('user_data_all_0329.csv', encoding='utf-8')
provider_data_all.to_csv('provider_data_all_0329.csv', encoding='utf-8')
comment_analysis = pd.DataFrame(columns={'project_id', 'provider_id',
'result', 'original', 'checked', 'words', 'time', 'comment_length'})
for i in np.arange(len(provider_data_all)):
try:
result = spell_checker.check(provider_data_all['comment'][i])
comment = pd.DataFrame(provider_data_all.loc[i]).T
comment_result = pd.DataFrame([result])
comment_result.index = comment.index
comment_result_df = comment.join(comment_result)
comment_analysis = comment_analysis.append(comment_result_df)
comment_analysis['comment_length'][i] = len(comment_analysis['words'][i])
if i in 100*np.arange(220):
print(i)
except:
continue
# Create comment_error
comment_error = pd.DataFrame([comment_analysis.project_id, comment_analysis.errors,
comment_analysis.provider_id, comment_analysis.comment_length]).T
# Convert the data to int type
comment_error['errors'] = comment_error['errors'].apply(lambda x: int(x))
comment_error['comment_length'] = comment_error['comment_length'].apply(lambda x: int(x))
# Process the comment errors
comment_error['errors'] = comment_error['errors'].apply(lambda x: int(x))
comment_error['comment_length'] = comment_error['comment_length'].apply(lambda x: int(x))
# Group by project id
comment_error = comment_error.groupby(by='project_id', as_index=False).sum()
# Create grammar_level
# Divide the errors in each comment by the total number of words
comment_error['provider_grammar_level'] = comment_error['errors']/comment_error['comment_length']
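# Quick illustrative check of the feature we just built: distribution of the
# per-project grammar error rate (lower means fewer spelling errors per word).
comment_error['provider_grammar_level'].describe()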
comment_analysis.to_csv('comment_analysis.csv', encoding='utf-8')
wadiz_provider_analysis = pd.merge(wadiz_df, comment_error, how='inner', on= 'project_id')
wadiz_provider_analysis.to_csv('wadiz_provider_analysis_0329.csv', encoding='utf-8')
wadiz_df.head()
wadiz_provider_analysis
project_money_all
provider_data_all
user_data_all
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load LendingClub Loans dataset
Step2: The target column (label column) of the dataset that we are interested in is called bad_loans. In this column 1 means a risky (bad) loan 0 means a safe loan.
Step3: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical features.
Step4: Now, let's look at the head of the dataset.
Step5: Performing one-hot encoding with Pandas
Step6: Let's explore what the "grade_A" column looks like.
Step7: This column is set to 1 if the loan grade is A and 0 otherwise.
Step8: Using the list of the training data indices and the test data indices to get a DataFrame with the training data and a DataFrame with the test data.
Step9: Decision tree implementation
Step10: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
Step11: Function to pick best feature to split on
Step12: Now, creating a list of the features we are considering for the decision tree to test the above function. Not including the 0th element on the list since it corresponds to the "safe loans" column, the label we are trying to predict.
Step13: To test your best_splitting_feature function, run the following code
Step14: Building the tree
Step15: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions
Step16: Here is a recursive function to count the nodes in your tree
Step17: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step18: Build the tree!
Step19: Making predictions with a decision tree
Step20: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
Step21: Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class
Step22: Quiz question
Step23: Quiz question
Step24: Quiz question
Step25: Evaluating your decision tree
Step26: Now, let's use this function to evaluate the classification error on the test set.
Step27: Quiz Question
Step28: Printing out a decision stump
Step29: Quiz Question
Step30: Exploring the intermediate left subtree
Step31: Exploring the left subtree of the left subtree
Step32: Quiz question
Step33: Quiz question
|
<ASSISTANT_TASK:>
Python Code:
import json
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
loans = pd.read_csv("lending-club-data_assign_2.csv")
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.drop('bad_loans', 1)
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
loans.head(5)
loans_one_hot_enc = pd.get_dummies(loans)
loans_one_hot_enc["grade_A"].head(5)
with open('module-5-assignment-2-train-idx.json', 'r') as f:
train_idx_lst = json.load(f)
train_idx_lst = [int(entry) for entry in train_idx_lst]
with open('module-5-assignment-2-test-idx.json', 'r') as f:
test_idx_lst = json.load(f)
test_idx_lst = [int(entry) for entry in test_idx_lst]
train_data = loans_one_hot_enc.ix[train_idx_lst]
test_data = loans_one_hot_enc.ix[test_idx_lst]
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
N_count_plu_1 = (labels_in_node == 1).sum()
# Count the number of -1's (risky loans)
N_count_neg_1 = (labels_in_node == -1).sum()
# Return the number of mistakes that the majority classifier makes.
return min(N_count_plu_1, N_count_neg_1)
# Test case 1
example_labels = np.array([-1, -1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 1 failed... try again!'
# Test case 2
example_labels = np.array([-1, -1, 1, 1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 2 failed... try again!'
# Test case 3
example_labels = np.array([-1, -1, -1, -1, -1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 3 failed... try again!'
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
    # Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
left_mistakes = intermediate_node_num_mistakes(left_split[target].values)
# Calculate the number of misclassified examples in the right split.
right_mistakes = intermediate_node_num_mistakes(right_split[target].values)
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
error = (left_mistakes + right_mistakes)/num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
if error < best_error:
best_error = error
best_feature = feature
return best_feature # Return the best feature we found
feature_lst = train_data.columns.values.tolist()[1:]
print feature_lst
if best_splitting_feature(train_data, feature_lst, 'safe_loans') == 'term_ 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True }
# Count the number of data points that are +1 and -1 in this node.
num_ones = (target_values == 1).sum()
num_minus_ones = (target_values == -1).sum()
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1
else:
leaf['prediction'] = -1
# Return the leaf node
return leaf
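# Illustrative sanity check (not part of the assignment template): on a toy label
# array where -1 is the majority class, create_leaf should predict -1.
example_leaf = create_leaf(np.array([-1, -1, -1, 1]))
print example_leaf['prediction'] # expected: -1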
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target].values
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1
# (Check if there are mistakes at current node.
# Recall you wrote a function intermediate_node_num_mistakes to compute this.)
if intermediate_node_num_mistakes(target_values) == 0: ## YOUR CODE HERE
print "Stopping condition 1 reached."
# If not mistakes at current node, make current node a leaf node
return create_leaf(target_values)
# Stopping condition 2 (check if there are remaining features to consider splitting on)
    if len(remaining_features) == 0: ## YOUR CODE HERE
print "Stopping condition 2 reached."
# If there are no remaining features to consider, make current node a leaf node
return create_leaf(target_values)
# Additional stopping condition (limit tree depth)
if current_depth >= max_depth : ## YOUR CODE HERE
print "Reached maximum depth. Stopping for now."
# If the max tree depth has been reached, make current node a leaf node
return create_leaf(target_values)
# Find the best splitting feature (recall the function best_splitting_feature implemented above)
splitting_feature = best_splitting_feature(data, remaining_features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target])
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target])
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
small_data_decision_tree = decision_tree_create(train_data, feature_lst, 'safe_loans', max_depth = 3)
if count_nodes(small_data_decision_tree) == 13:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there : 13'
my_decision_tree = decision_tree_create(train_data, feature_lst, 'safe_loans', max_depth = 6)
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
test_data.iloc[0]
print 'Predicted class: %s ' % classify(my_decision_tree, test_data.iloc[0])
classify(my_decision_tree, test_data.iloc[0], annotate=True)
print "term_ 36 months"
print "grade_D"
print "grade_D"
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
predictions = data.apply(lambda x: classify(tree, x, annotate=False) , axis = 1)
# Once you've made the predictions, calculate the classification error and return it
number_mistakes = (predictions != data['safe_loans'].values).sum()
total_examples = float(len(predictions))
classification_error = number_mistakes/total_examples
return classification_error
evaluate_classification_error(my_decision_tree, test_data)
print "Classification error of my_decision_tree on \
the test_data: %.2f" %(evaluate_classification_error(my_decision_tree, test_data))
def print_stump(tree, name = 'root'):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('_')
print ' %s' % name
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0] [{0} == 1] '.format(split_name)
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
print_stump(my_decision_tree)
print "term_ 36 months"
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])
print "term_ 36 months, grade_A, grade_B"
print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['right']['right'], my_decision_tree['right']['splitting_feature'])
print "term_ 36 months, grade_D, no third feature because second split resulted in leaf"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Image As Greyscale
Step2: Save Image
|
<ASSISTANT_TASK:>
Python Code:
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread('images/plane.jpg', cv2.IMREAD_GRAYSCALE)
# Show image
plt.imshow(image, cmap='gray'), plt.axis("off")
plt.show()
# Save image
cv2.imwrite('images/plane_new.jpg', image)
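# Optional sanity check (illustrative): cv2.imwrite returns True on success, and
# reloading the saved file should give an array with the same shape as the original.
reloaded = cv2.imread('images/plane_new.jpg', cv2.IMREAD_GRAYSCALE)
print(reloaded.shape == image.shape)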
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Insults package
Step2: Identifying quotation marks
Step3: We need to clean up the open and closed inverted commas with straight ones.
Step4: Making recommended Google search queries
|
<ASSISTANT_TASK:>
Python Code:
# print("\x1b[30;1m\"red\"\x1b[0m")
# print("\x1b[31;1m\"red\"\x1b[0m")
# print("\x1b[32;1m\"red\"\x1b[0m")
# print("\x1b[33;1m\"red\"\x1b[0m")
# print("\x1b[34;1m\"red\"\x1b[0m")
# print("\x1b[35;1m\"red\"\x1b[0m")
# print("\x1b[36;1m\"red\"\x1b[0m")
# print("\x1b[37;1m\"red\"\x1b[0m")
# print("\x1b[30;1m\"red\"\x1b[0m")
# print("\x1b[31;1m\"red\"\x1b[0m")
# print("\x1b[32;1m\"red\"\x1b[0m")
# print("\x1b[33;1m\"red\"\x1b[0m")
# print("\x1b[34;1m\"red\"\x1b[0m")
# print("\x1b[35;1m\"red\"\x1b[0m")
# print("\x1b[36;1m\"red\"\x1b[0m")
# print("\x1b[37;1m\"red\"\x1b[0m")
# print("\x1b[30m\"red\"\x1b[0m")
# print("\x1b[31m\"red\"\x1b[0m")
# print("\x1b[32m\"red\"\x1b[0m")
# print("\x1b[33m\"red\"\x1b[0m")
# print("\x1b[34m\"red\"\x1b[0m")
# print("\x1b[35m\"red\"\x1b[0m")
# print("\x1b[36m\"red\"\x1b[0m")
# print("\x1b[37m\"red\"\x1b[0m")
from __future__ import print_function
import regex as re
# for x in range(256):
# # # formatting = "\x1b[" + str(x) + "m\"red\"\x1b[0m"
# # # formatting = "\x1b[" + str(x) + "m k \x1b[0m"
# formatting = "\033[38;5;" + str(x) + "mo \x1b[0m"
# print(formatting, end=" ")
# if x%36 == 15:
# print("")
def print_this_way(string, x, y):
    # Map (x, y) in [0, 1) onto an xterm-256 color index and print the string in that color.
    formatting = '\x1b[38;5;' + str(16 + int(4*x) + 36*int(4*y)) + 'm' + string + '\x1b[0m'
    print(formatting, end=" ")
import numpy as np
for x in np.arange(0,1,0.2):
for y in np.arange(0,1,0.2):
print_this_way("o", x, y)
print("")
# formatting_ = "\x1b[37m\"red\"\x1b[0m"
# print(formatting_, end=" ")
# print(type(formatting))
# print(type("\x1b[37m\"red\"\x1b[0m"))
# "\x1b[37m\"red\"\x1b[0m"
print('\x1b[1;31m'+'Hello world'+'\x1b[0m' + " noobs")
import sys
from termcolor import colored, cprint
text = colored('Hello, World!', 'red', attrs=['reverse', 'blink'])
print(text)
cprint('Hello, World!', 'green', 'on_red')
print_red_on_cyan = lambda x: cprint(x, 'red', 'on_cyan')
print_red_on_cyan('Hello, World!')
print_red_on_cyan('Hello, Universe!')
for i in range(10):
cprint(i, 'magenta', end=' ')
cprint("Attention!", 'red', attrs=['bold'], file=sys.stderr)
from insults import Insults
Insults.load_model()
comment = "You eat people?"
Insults.rate_comment(comment)
comments = ["You called me a \"dickhead\", so I'll say you're a cunt.", "These shitakes taste like shit."]
print(Insults.foul_language(comments, context=False))
fake_news = u'''
Prime Minister Lee Hsien Loong has been addressed as a dictator by many Singaporeans and rightly so, but few can point out what he actually did that defines him as one – after all, everything is legal right? For a start, dictators write their own rules and never subject themselves to the rules they wrote. The one trait shared between Lee Hsien Loong and dictators like Kim Jong Un, Hitler, Stalin and Mao is when they are all above the laws, it is still legal. One just need to look at the joke of a Presidential Election conducted this year to understand what a dictator is all about:
Writing his own laws
Knowing that having a race-based election infringes the Constitution, Lee Hsien Loong single-handedly abused his majority power in Parliament to re-write the Constitution. After making a mistake about Halimah Yacob’s Indian race, Lee Hsien Loong appointed a committee of his cronies to re-write the definition of the Malay race.
Creating proxies to act on his behalf
The 16-member committee to decide “Malayness” was appointed by Lee Hsien Loong and the committee actively consults the dictator for directions. The Council of Presidential Advisers (CPA) restricting the powers of a President also acts by Lee Hsien Loong’s bidding. Halimah Yacob herself is a walking puppet and proxy of Lee Hsien Loong. These proxies are created to create a facade that due processes are in place, but a sharper mind knows better.
Breaching the laws he created
When opposition party’s ex-member Yaw Shin Long resigned from his MP position, a by-election was called and insisted so by Lee Hsien Loong. However, when Halimah Yacob resigned from her MP position, Lee Hsien Loong feigned ignorance and claimed no by-election is needed. However the fact remains that the Parliament lost one representative voice (not that it matters since it is still PAP anyway), so a by-election should be called for Yew Tee-Marsiling GRC.
Abuse laws to his political advantage
A walkover is legal, according to the election laws he written. Therefore, the Lee Hsien Loong-endorsed candidate becoming a President through a walkover is wholly legit. Banning the two opponent Presidential contestants because they fail to meet the S$500 million financial requirements is also perfectly legit. Halimah Yacob, an Indian, is now a legit Malay because a committee said so. Everything is legal yet corrupted, the colour white is actually a stain itself.
His words are final
Like all dictators, Lee Hsien Loong’s words are final. Yes there are unhappiness, he said, “but I did the right thing”. Lee Hsien Loong’s words exemplify the saying that the road to hell is paved with good intentions. A civil court case launched by Dr Tan Cheng Bock was blocked by the High Court. Opposition MP Sylvia Lim’s question in Parliament is denied, twice. Nobody could ask a damn question, because his words are final. Gutter politics at it’s finest.
Whether Singapore is corrupted depends on one’s acceptance of the political system. If Singapore is perceived as a democracy, it breaches every value of the Singapore pledge and the government is corrupted inside out. Whereas if Singapore is perceived as a dictatorship, it is fully legitimised and corruption-free.
'''
import unidecode
fake_news_ = unidecode.unidecode(fake_news)
import regex as re
print(re.findall(r'"(.*?)"', fake_news_))
#todo
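# A minimal sketch of the "recommended Google search queries" step flagged above
# (assumption: the intent is to turn each quoted phrase into a search URL for
# manual fact-checking; the query format here is illustrative, not prescribed).
try:
    from urllib.parse import quote_plus # Python 3
except ImportError:
    from urllib import quote_plus # Python 2
quotes = re.findall(r'"(.*?)"', fake_news_)
search_queries = ['https://www.google.com/search?q=' + quote_plus(q) for q in quotes]
for query in search_queries:
    print(query)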
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define parameters
Step2: Let's investigate spatial filter with max power ratio.
Step3: Let's also look at the power spectrum of that source and compare it to
Step4: Epoched data
|
<ASSISTANT_TASK:>
Python Code:
# Author: Denis A. Engemann <denis.engemann@gmail.com>
# Victoria Peterson <victoriapeterson09@gmail.com>
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import Epochs
from mne.datasets.fieldtrip_cmc import data_path
from mne.decoding import SSD
fname = data_path() / 'SubjectCMC.ds'
# Prepare data
raw = mne.io.read_raw_ctf(fname)
raw.crop(50., 110.).load_data() # crop for memory purposes
raw.resample(sfreq=250)
raw.pick_types(meg=True, eeg=False, ref_meg=False)
freqs_sig = 9, 12
freqs_noise = 8, 13
ssd = SSD(info=raw.info,
reg='oas',
sort_by_spectral_ratio=False, # False for purpose of example.
filt_params_signal=dict(l_freq=freqs_sig[0], h_freq=freqs_sig[1],
l_trans_bandwidth=1, h_trans_bandwidth=1),
filt_params_noise=dict(l_freq=freqs_noise[0], h_freq=freqs_noise[1],
l_trans_bandwidth=1, h_trans_bandwidth=1))
ssd.fit(X=raw.get_data())
pattern = mne.EvokedArray(data=ssd.patterns_[:4].T,
info=ssd.info)
pattern.plot_topomap(units=dict(mag='A.U.'), time_format='')
# The topographies suggest that we picked up a parietal alpha generator.
# Transform
ssd_sources = ssd.transform(X=raw.get_data())
# Get psd of SSD-filtered signals.
psd, freqs = mne.time_frequency.psd_array_welch(
ssd_sources, sfreq=raw.info['sfreq'], n_fft=4096)
# Get spec_ratio information (already sorted).
# Note that this is not necessary if sort_by_spectral_ratio=True (default).
spec_ratio, sorter = ssd.get_spectral_ratio(ssd_sources)
# Plot spectral ratio (see Eq. 24 in Nikulin 2011).
fig, ax = plt.subplots(1)
ax.plot(spec_ratio, color='black')
ax.plot(spec_ratio[sorter], color='orange', label='sorted eigenvalues')
ax.set_xlabel("Eigenvalue Index")
ax.set_ylabel(r"Spectral Ratio $\frac{P_f}{P_{sf}}$")
ax.legend()
ax.axhline(1, linestyle='--')
# We can see that the initial sorting based on the eigenvalues
# was already quite good. However, when using few components only
# the sorting might make a difference.
below50 = freqs < 50
# for highlighting the freq. band of interest
bandfilt = (freqs_sig[0] <= freqs) & (freqs <= freqs_sig[1])
fig, ax = plt.subplots(1)
ax.loglog(freqs[below50], psd[0, below50], label='max SNR')
ax.loglog(freqs[below50], psd[-1, below50], label='min SNR')
ax.loglog(freqs[below50], psd[:, below50].mean(axis=0), label='mean')
ax.fill_between(freqs[bandfilt], 0, 10000, color='green', alpha=0.15)
ax.set_xlabel('log(frequency)')
ax.set_ylabel('log(power)')
ax.legend()
# We can clearly see that the selected component enjoys an SNR that is
# way above the average power spectrum.
# Build epochs as sliding windows over the continuous raw file.
events = mne.make_fixed_length_events(raw, id=1, duration=5.0, overlap=0.0)
# Epoch length is 5 seconds.
epochs = Epochs(raw, events, tmin=0., tmax=5,
baseline=None, preload=True)
ssd_epochs = SSD(info=epochs.info,
reg='oas',
filt_params_signal=dict(l_freq=freqs_sig[0],
h_freq=freqs_sig[1],
l_trans_bandwidth=1,
h_trans_bandwidth=1),
filt_params_noise=dict(l_freq=freqs_noise[0],
h_freq=freqs_noise[1],
l_trans_bandwidth=1,
h_trans_bandwidth=1))
ssd_epochs.fit(X=epochs.get_data())
# Plot topographies.
pattern_epochs = mne.EvokedArray(data=ssd_epochs.patterns_[:4].T,
info=ssd_epochs.info)
pattern_epochs.plot_topomap(units=dict(mag='A.U.'), time_format='')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get epochs
Step2: Run beamformers and look at maximum outputs
Step3: We can also look at the spatial distribution
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
# sphinx_gallery_thumbnail_number = 3
import matplotlib.pyplot as plt
import numpy as np
import mne
from mne.datasets import sample
from mne.beamformer import make_lcmv, apply_lcmv
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bads channels
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')
# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])
# Re-normalize our empty-room projectors, so they are fine after subselection
raw.info.normalize_proj()
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=True,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()
forward = mne.read_forward_solution(fname_fwd)
forward = mne.convert_forward_solution(forward, surf_ori=True)
# Compute regularized noise and data covariances
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
evoked.plot(time_unit='s')
pick_oris = [None, 'normal', 'max-power']
names = ['free', 'normal', 'max-power']
descriptions = ['Free orientation, voxel: %i', 'Normal orientation, voxel: %i',
'Max-power orientation, voxel: %i']
colors = ['b', 'k', 'r']
fig, ax = plt.subplots(1)
max_voxs = list()
for pick_ori, name, desc, color in zip(pick_oris, names, descriptions, colors):
# compute unit-noise-gain beamformer with whitening of the leadfield and
# data (enabled by passing a noise covariance matrix)
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori=pick_ori,
weight_norm='unit-noise-gain')
# apply this spatial filter to source-reconstruct the evoked data
stc = apply_lcmv(evoked, filters, max_ori_out='signed')
# View activation time-series in maximum voxel at 100 ms:
time_idx = stc.time_as_index(0.1)
max_idx = np.argmax(stc.data[:, time_idx])
# we know these are all left hemi, so we can just use vertices[0]
max_voxs.append(stc.vertices[0][max_idx])
ax.plot(stc.times, stc.data[max_idx, :], color, label=desc % max_idx)
ax.set(xlabel='Time (ms)', ylabel='LCMV value', ylim=(-0.8, 2.2),
title='LCMV in maximum voxel')
ax.legend()
mne.viz.utils.plt_show()
# take absolute value for plotting
np.abs(stc.data, out=stc.data)
# Plot last stc in the brain in 3D with PySurfer if available
brain = stc.plot(hemi='lh', subjects_dir=subjects_dir,
initial_time=0.1, time_unit='s')
brain.show_view('lateral')
for color, vertex in zip(colors, max_voxs):
brain.add_foci([vertex], coords_as_verts=True, scale_factor=0.5,
hemi='lh', color=color)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fit ICA model using the FastICA algorithm, detect and inspect components
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.preprocessing import ICA, create_ecg_epochs
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30, method='iir')
raw.pick_types(meg=True, eeg=False, exclude='bads', stim=True)
# longer + more epochs for more artifact exposure
events = mne.find_events(raw, stim_channel='STI 014')
epochs = mne.Epochs(raw, events, event_id=None, tmin=-0.2, tmax=0.5)
ica = ICA(n_components=0.95, method='fastica').fit(epochs)
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs)
ica.plot_components(ecg_inds)
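# Illustrative follow-up (assumption: we want the detected ECG components removed):
# mark them for exclusion and apply the ICA solution to a copy of the raw data.
ica.exclude = ecg_inds
raw_clean = ica.apply(raw.copy())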
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows
Step2: Download EEG Data
Step3: Explore Data
Step4: Now let's take a look at the top of the frame
Step5: The first 14 columns are numeric values that represent EEG measurements from the headset. The "eyeDetection" column is the response. There is an additional column called "split" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the same splits). I randomly divided the dataset into three partitions
Step6: To select a subset of the columns to look at, typical Pandas indexing applies
Step7: Now let's select a single column, for example -- the response column, and look at the data more closely
Step8: It looks like a binary response, but let's validate that assumption
Step9: If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default.
Step10: Now we can check that there are two levels in our response column
Step11: We can query the categorical "levels" as well ('0' and '1' stand for "eye open" and "eye closed") to see what they are
Step12: We may want to check if there are any missing values, so let's look for NAs in our dataset. For tree-based methods like GBM and RF, H2O handles missing feature values automatically, so it's not a problem if we are missing certain feature values. However, it is always a good idea to check to make sure that you are not missing any of the training labels.
Step13: The isna method doesn't directly answer the question, "Does the response column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look
Step14: Great, no missing labels.
Step15: The sum is still zero, so there are no missing values in any of the cells.
Step16: Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Step17: Split H2O Frame into a train and test set
Step18: Machine Learning in H2O
Step19: We first create a model object of class, "H2OGradientBoostingEstimator". This does not actually do any training, it just sets the model up for training by specifying model parameters.
Step20: Specify the predictor set and response
Step21: Now that we have specified x and y, we can train the model
Step22: Inspect Model
Step23: Model Performance on a Test Set
Step24: Individual model performance metrics can be extracted using methods like r2, auc and mse. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC).
Step25: Cross-validated Performance
Step26: This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the auc method again, and you can specify train or xval as True to get the correct metric.
Step27: Grid Search
Step28: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters
Step29: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid.
Step30: Compare Models
Step31: The "best" model in terms of validation set AUC is listed first in auc_table.
Step32: The last thing we may want to do is generate predictions on the test set using the "best" model, and evaluate the test set AUC.
|
<ASSISTANT_TASK:>
Python Code:
import h2o
# Start an H2O Cluster on your local machine
h2o.init()
# This will not actually do anything since it's a fake IP address
# h2o.init(ip="123.45.67.89", port=54321)
#csv_url = "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
csv_url = "https://h2o-public-test-data.s3.amazonaws.com/eeg_eyestate_splits.csv"
data = h2o.import_file(csv_url)
data.shape
data.head()
data.columns
columns = ['AF3', 'eyeDetection', 'split']
data[columns].head()
y = 'eyeDetection'
data[y]
data[y].unique()
data[y] = data[y].asfactor()
data[y].nlevels()
data[y].levels()
data.isna()
data[y].isna()
data[y].isna().sum()
data.isna().sum()
data[y].table()
n = data.shape[0] # Total number of training samples
data[y].table()['Count']/n
train = data[data['split']=="train"]
train.shape
valid = data[data['split']=="valid"]
valid.shape
test = data[data['split']=="test"]
test.shape
# Import H2O GBM:
from h2o.estimators.gbm import H2OGradientBoostingEstimator
model = H2OGradientBoostingEstimator(distribution='bernoulli',
ntrees=100,
max_depth=4,
learn_rate=0.1)
x = list(train.columns)
x
del x[12:14] #Remove the 13th and 14th columns, 'eyeDetection' and 'split'
x
model.train(x=x, y=y, training_frame=train, validation_frame=valid)
print(model)
perf = model.model_performance(test)
print(perf.__class__)
perf.r2()
perf.auc()
perf.mse()
cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli',
ntrees=100,
max_depth=4,
learn_rate=0.1,
nfolds=5)
cvmodel.train(x=x, y=y, training_frame=data)
print(cvmodel.auc(train=True))
print(cvmodel.auc(xval=True))
ntrees_opt = [5,50,100]
max_depth_opt = [2,3,5]
learn_rate_opt = [0.1,0.2]
hyper_params = {'ntrees': ntrees_opt,
'max_depth': max_depth_opt,
'learn_rate': learn_rate_opt}
from h2o.grid.grid_search import H2OGridSearch
gs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params = hyper_params)
gs.train(x=x, y=y, training_frame=train, validation_frame=valid)
print(gs)
# print out the auc for all of the models
auc_table = gs.sort_by('auc(valid=True)',increasing=False)
print(auc_table)
best_model = h2o.get_model(auc_table['Model Id'][0])
best_model.auc()
best_perf = best_model.model_performance(test)
best_perf.auc()
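# Illustrative: the "best" model can also generate per-row predictions on the test
# set; predict() returns an H2OFrame with the predicted class and class probabilities.
best_preds = best_model.predict(test)
best_preds.head()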
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
Step2: With the nuclides we defined, we will now create three distinct materials for water, clad and fuel.
Step3: With our materials, we can now create a MaterialsFile object that can be exported to an actual XML file.
Step4: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
Step5: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step6: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step7: We now must create a geometry that is assigned a root universe, put the geometry into a GeometryFile object, and export it to XML.
Step8: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 190 active batches each with 10,000 particles.
Step9: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
Step10: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
Step11: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
Step12: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
Step13: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step14: Tally Data Processing
Step15: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. This is necessary for the openmc.mgxs module to properly process the tally data. We first create a Summary object and link it with the statepoint.
Step16: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
Step17: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Step18: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
Step19: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
Step20: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
Step21: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
Step22: Verification with OpenMOC
Step23: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
Step24: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
Step25: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
Step26: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
Step27: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias
Step28: Now, we use matplotlib and seaborn to plot the continuous-energy and multi-group cross sections on a single plot.
Step29: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
Step30: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import openmc
import openmc.mgxs as mgxs
from openmc.source import Source
from openmc.stats import Box
import openmoc
from openmoc.compatible import get_openmoc_geometry
import pyne.ace
%matplotlib inline
# Instantiate some Nuclides
h1 = openmc.Nuclide('H-1')
o16 = openmc.Nuclide('O-16')
u235 = openmc.Nuclide('U-235')
u238 = openmc.Nuclide('U-238')
zr90 = openmc.Nuclide('Zr-90')
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide(u235, 3.7503e-4)
fuel.add_nuclide(u238, 2.2625e-2)
fuel.add_nuclide(o16, 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide(h1, 4.9457e-2)
water.add_nuclide(o16, 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide(zr90, 7.2758e-3)
# Instantiate a MaterialsFile, add Materials
materials_file = openmc.MaterialsFile()
materials_file.add_material(fuel)
materials_file.add_material(water)
materials_file.add_material(zircaloy)
materials_file.default_xs = '71c'
# Export to "materials.xml"
materials_file.export_to_xml()
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.region = +min_x & -max_x & +min_y & -max_y
root_cell.fill = pin_cell_universe
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe
# Instantiate a GeometryFile
geometry_file = openmc.GeometryFile()
geometry_file.geometry = openmc_geometry
# Export to "geometry.xml"
geometry_file.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a SettingsFile
settings_file = openmc.SettingsFile()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True, 'summary': True}
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
settings_file.source = Source(space=Box(
bounds[:3], bounds[3:], only_fissionable=True))
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups()
coarse_groups.group_edges = np.array([0., 0.625e-6, 20.])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups()
fine_groups.group_edges = np.array([0., 0.058e-6, 0.14e-6, 0.28e-6,
0.625e-6, 4.e-6, 5.53e-3, 821.e-3, 20.])
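# Quick check (illustrative): the number of groups is one less than the
# number of group edges, so the structures above are 2-group and 8-group.
print(coarse_groups.num_groups, fine_groups.num_groups)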
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
xs_library[cell.id] = {}
xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
xs_library[cell.id]['nu-fission'] = mgxs.NuFissionXS(groups=fine_groups)
xs_library[cell.id]['nu-scatter'] = mgxs.NuScatterMatrixXS(groups=fine_groups)
xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1E-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
for mgxs_type in xs_library[cell.id]:
xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
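# Because trigger_active was enabled in the settings above, the run may
# continue past `batches` (up to trigger_max_batches) until every triggered
# tally reaches the requested 1E-2 standard deviation.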
# Instantiate an empty TalliesFile
tallies_file = openmc.TalliesFile()
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
# Set the cross sections domain type to the cell
xs_library[cell.id][rxn_type].domain = cell
xs_library[cell.id][rxn_type].domain_type = 'cell'
# Tally cross sections by nuclide
xs_library[cell.id][rxn_type].by_nuclide = True
# Add OpenMC tallies to the tallies file for XML generation
for tally in xs_library[cell.id][rxn_type].tallies.values():
tallies_file.add_tally(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
executor = openmc.Executor()
executor.run_simulation(output=True)
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.080.h5')
# Load the summary file and link it with the statepoint
su = openmc.Summary('summary.h5')
sp.link_with_summary(su)
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U-235', 'U-238'])
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
# Create an OpenMOC Geometry from the OpenCG Geometry
openmoc_geometry = get_openmoc_geometry(su.opencg_geometry)
# Get all OpenMOC cells in the gometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
openmoc_geometry.initializeFlatSourceRegions()
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
openmoc_geometry = get_openmoc_geometry(su.opencg_geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
openmoc_geometry.initializeFlatSourceRegions()
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
# Instantiate a PyNE ACE continuous-energy cross sections library
pyne_lib = pyne.ace.Library('../../../../data/nndc/293.6K/U_235_293.6K.ace')
pyne_lib.read('92235.71c')
# Extract the U-235 data from the library
u235 = pyne_lib.tables['92235.71c']
# Extract the continuous-energy U-235 fission cross section data
fission = u235.reactions[18]
# Create a loglog plot of the U-235 continuous-energy fission cross section
plt.loglog(u235.energy, fission.sigma, color='b', linewidth=1)
# Extract energy group bounds and MGXS values to plot
nufission = xs_library[fuel_cell.id]['fission']
energy_groups = nufission.energy_groups
x = energy_groups.group_edges
y = nufission.get_xs(nuclides=['U-235'], order_groups='decreasing', xs_type='micro')
# Fix low energy bound to the value defined by the ACE library
x[0] = u235.energy[0]
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
plt.plot(x, y, drawstyle='steps', color='r', linewidth=3)
plt.title('U-235 Fission Cross Section')
plt.xlabel('Energy [MeV]')
plt.ylabel('Micro Fission XS')
plt.legend(['Continuous', 'Multi-Group'])
plt.xlim((x.min(), x.max()))
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H-1']['mean']
o16 = df[df['nuclide'] == 'O-16']['mean']
# Cast DataFrames as NumPy arrays
h1 = h1.as_matrix()
o16 = o16.as_matrix()
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
plt.grid()
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
plt.grid()
# Show the plot on screen
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: errorbarjitter function
Step2: Example 1
Step3: Iris dataset example
Step4: References
Step5: Learning Index
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style('darkgrid')
sns.set_context('talk')
def errorbarjitter(df, groupByCol, statsCol, fig=None, xlab='group', ylab='units', rotate = 0):
grouped = df.groupby([groupByCol])
stats = grouped.aggregate({statsCol:[np.std, np.mean]})
    groups = stats.index  # use the aggregated group order so labels match means/devs
means, devs = stats[statsCol]['mean'], stats[statsCol]['std']
    if fig is None:
        fig = plt.figure(figsize=(15, 7))
for (i, (m, s)) in enumerate(zip(means, devs)):
pts = np.array(df[df[groupByCol]==groups[i]][statsCol])
x = i*np.ones(len(pts)) + 0.2*np.random.rand(len(pts))-0.1
plt.scatter(x, pts, c='k', alpha=0.5)
delta = 0.22
plt.scatter(i+delta, m, edgecolor='k', facecolor='none', linewidth=3, s=25)
plt.plot([i+delta, i+delta], [m-s, m+s], '-', c=[0, 0, 0], lw=2.0)
plt.xticks(range(len(groups)), groups, rotation=rotate);
plt.xlabel(xlab)
plt.ylabel(ylab)
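# Note on the function above: the jitter is a uniform horizontal offset in
# [-0.1, 0.1) added to each point, and the open circle with the vertical bar
# to the right of each group marks its mean +/- one standard deviation.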
path = "ex-data.csv";
exdata = pd.read_csv(path)
exdata.head()
grouped = exdata.groupby(['runner'])
stats = grouped.aggregate({'time':[np.std, np.mean]})
stats
errorbarjitter(exdata, 'runner', 'time', xlab = 'runner', ylab = 'time (s)')
plt.title('5 runners average course time')
path = "iris.csv";
data = pd.read_csv(path)
data.head()
fig = plt.figure()
errorbarjitter(data, 'Species', 'Sepal.Length', xlab = 'species', ylab = 'sepal length')
plt.title('Sepal length by species')
fig = plt.figure()
errorbarjitter(data, 'Species', 'Sepal.Width', xlab = 'species', ylab = 'sepal width')
plt.title('Sepal width by species')
fig = plt.figure()
errorbarjitter(data, 'Species', 'Petal.Length', xlab = 'species', ylab = 'petal length')
plt.title('Petal length by species')
fig = plt.figure()
errorbarjitter(data, 'Species', 'Petal.Width', xlab = 'species', ylab = 'petal width')
plt.title('Petal width by species')
sns.set_style('darkgrid')
fig = plt.figure()
errorbarjitter(data, 'Species', 'Petal.Width', xlab = 'species', ylab = 'petal width')
plt.title('Petal width by species')
sns.set_style('whitegrid')
fig = plt.figure()
errorbarjitter(data, 'Species', 'Petal.Width', xlab = 'species', ylab = 'petal width')
plt.title('Petal width by species')
sns.set_style('dark')
fig = plt.figure()
errorbarjitter(data, 'Species', 'Petal.Width', xlab = 'species', ylab = 'petal width')
plt.title('Petal width by species')
sns.set_style('white')
fig = plt.figure()
errorbarjitter(data, 'Species', 'Petal.Width', xlab = 'species', ylab = 'petal width')
plt.title('Petal width by species')
sns.set_style('ticks')
fig = plt.figure()
errorbarjitter(data, 'Species', 'Petal.Width', xlab = 'species', ylab = 'petal width')
plt.title('Petal width by species')
path = "learning-data.csv";
learning = pd.read_csv(path)
sns.set_style('darkgrid')
fig = plt.figure()
errorbarjitter(learning, 'animal', 'pi', xlab = 'animal', ylab = 'performance index')
plt.title('learning assay')
plt.ylim(-1,1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Default Values and placeholder
Step2: JupyterJSWidgets work with EasyForm
|
<ASSISTANT_TASK:>
Python Code:
from beakerx import *
f = EasyForm("Form and Run")
f.addTextField("first")
f.addTextField("last")
f['first'] = "First"
f['last'] = "Last"
f.addButton("Go!", tag="run")
f
"Good morning " + f["first"] + " " + f["last"]
f['last'][::-1] + '...' + f['first']
f['first'] = 'Beaker'
f['last'] = 'Berzelius'
h = EasyForm(title="Form and Run")
h.addTextField("first", width=10)
h.addTextField("default")
h.addTextArea("Text Area 1", height=5, width=20)
h.addTextArea("Text Area 2")
h.addTextArea("Text Area 3", height=10)
h.addTextArea("Text Area 4",width=20)
h
g2 = EasyForm("Field Types")
options = ["a", "b", "c", "d", "e", "f"]
g2.addList("List Single", options, multi=False)
g2.addList("List Two Row", options, rows=2)
g2
f['last']+ ", "+f['first']
f['last'] = "new Value"
f['first'] = "new Value2"
# All Kinds of Fields
g = EasyForm("Field Types")
g.addTextField("Short Text Field", width=10)
g.addTextField("Text Field")
g.addPasswordField("Password Field", width=10)
g.addTextArea("Text Area")
g.addTextArea("Tall Text Area", 10, 5)
g.addCheckBox("Check Box")
options = ["a", "b", "c", "d"]
g.addComboBox("Combo Box", options)
g.addComboBox("Combo Box editable", options, editable=True)
g.addList("List", options)
g.addList("List Single", options, multi=False)
g.addList("List Two Row", options, rows=2)
g.addCheckBoxes("Check Boxes", options)
g.addCheckBoxes("Check Boxes H", options, orientation=EasyForm.HORIZONTAL)
g.addRadioButtons("Radio Buttons", options)
g.addRadioButtons("Radio Buttons H", options, orientation=EasyForm.HORIZONTAL)
g.addDatePicker("Date")
g.addButton("Go!", tag="run2")
g
result = dict()
for child in g:
result[child] = g[child]
result
gdp = EasyForm("Field Types")
gdp.addDatePicker("Date")
gdp
gdp['Date']
f.put("first", "Micheal")
f.put("last", "Fox")
# Read values from form
firstName = f.get("first")
lastName = f.get("last")
print("Good morning " + firstName + " " + lastName)
f = EasyForm("actionPerformed demo")
f.addTextField("first")
f['first'] = "First"
b = f.addButton("Action!")
b.actionPerformed = lambda: print("clicked "+f["first"])
f
import operator
f1 = EasyForm("OnInit and OnChange")
f1.addTextField("first", width=15)
f1.addTextField("last", width=15)\
.onInit(lambda: operator.setitem(f1, 'last', "setinit1"))\
.onChange(lambda text: operator.setitem(f1, 'first', text + ' extra'))
button = f1.addButton("action", "action_button")
button.actionPerformed = lambda: operator.setitem(f1, 'last', 'action done')
f1
f3c = EasyForm("form3")
f3c = EasyForm("form3")
f3c.addTextArea("Default Value", value = "Initial value")
f3c.addTextArea("Place Holder", placeholder = "Put here some text")
f3c.addCheckBox("Default Checked", value = True)
f3c
result = dict()
for child in f3c:
result[child] = f3c[child]
result
from beakerx import *
from ipywidgets import *
w = IntSlider()
f = EasyForm("Form and Run")
f.addTextField("first")
f.addTextField("last")
f.addWidget("slider", w)
f['first'] = "First"
f['last'] = "Last"
f.addButton("Go!", tag="run")
f
f['slider']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using Provenance in Adama
Step2: Connect the adama Python object to your API server of choice. The official one is https
Step3: Create a random namespace
Step4: Add the service prov, already included in this notebook
Step5: If you are interested in checking out the metadata file of this service, evaluate the next cell
Step6: Do a simple search and save the result
Step7: result can be treated as a standard Python list (it is actually a subclass of list). An additional method .prov returns the provenance of the response. It is equivalent to inspecting the headers of the HTTP response and following the link to the provenance object.
Step8: The prov format returns the PROV object as a native Python object, which can be manipulated with the prov library (already included in this notebook/container)
Step9: If this is being evaluated in an IPython notebook, requesting the png format should display the image of the provenance graph (double-click on the image to see at full resolution). The image can also be saved to a file by passing an extra argument
|
<ASSISTANT_TASK:>
Python Code:
import requests
import string
import random
lorem = requests.get('http://loripsum.net/api/plaintext').text
WORDS = [word.lower()
for word in filter(lambda c: c not in string.punctuation, lorem).split()]
def random_words(n=2):
return '_'.join(random.choice(WORDS) for i in range(n))
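# random_words(2) joins two randomly chosen lorem-ipsum words with an
# underscore; we use it below only to generate a throwaway namespace name.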
import requests
requests.packages.urllib3.disable_warnings()
import adamalib
reload(adamalib.adamalib)
API = 'https://adama-dev.cloudapp.net/community/v0.3'
TOKEN = 'mytoken'
adama = adamalib.Adama(API, token=TOKEN, verify=False)
namespace = adama.namespaces.add(name=random_words())
namespace
import provn.main
service = namespace.services.add(provn.main)
service
!cat provn/metadata.yml
result = service.search()
result
result.prov()
result.prov(format='prov')
result.prov(format='png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Inference
Step2: 2.1 Helper functions
Step3: 2.2 Convert the model with TF-TRT
Step4: 2.3 Run inference with converted model
Step5: Compare to the original function
Step6: 3. Dynamic sequence length
Step7: The converted model is optimized for a sequence length of 128 (and batch size 1, as used in the conversion above). If we infer the converted model using a different sequence length, then two things can happen
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q tf-models-official
import tensorflow as tf
import tensorflow_hub as hub
tfhub_handle_encoder = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3'
bert_saved_model_path = 'bert_base'
bert_model = hub.load(tfhub_handle_encoder)
tf.saved_model.save(bert_model, bert_saved_model_path)
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from timeit import default_timer as timer
tf.get_logger().setLevel('ERROR')
def get_func_from_saved_model(saved_model_dir):
saved_model_loaded = tf.saved_model.load(
saved_model_dir, tags=[tag_constants.SERVING])
graph_func = saved_model_loaded.signatures[
signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
return graph_func, saved_model_loaded
def predict_and_benchmark_throughput(input_dict, model, N_warmup_run=50, N_run=500,
result_key='predictions', batch_size=None):
elapsed_time = []
for val in input_dict.values():
input_batch_size = val.shape[0]
break
if batch_size is None or batch_size > input_batch_size:
batch_size = input_batch_size
print('Benchmarking with batch size', batch_size)
elapsed_time = np.zeros(N_run)
for i in range(N_warmup_run):
preds = model(**input_dict)
# Force device synchronization with .numpy()
tmp = preds[result_key][0].numpy()
for i in range(N_run):
start_time = timer()
preds = model(**input_dict)
# Synchronize
tmp += preds[result_key][0].numpy()
end_time = timer()
elapsed_time[i] = end_time - start_time
if i>=50 and i % 50 == 0:
print('Steps {}-{} average: {:4.1f}ms'.format(i-50, i, (elapsed_time[i-50:i].mean()) * 1000))
latency = elapsed_time.mean() * 1000
print('Latency: {:5.2f}+/-{:4.2f}ms'.format(latency, elapsed_time.std() * 1000))
print('Throughput: {:.0f} samples/s'.format(N_run * batch_size / elapsed_time.sum()))
return latency
def trt_convert(input_path, output_path, input_shapes, explicit_batch=False,
dtype=np.float32, precision='FP32', prof_strategy='Optimal'):
conv_params=trt.TrtConversionParams(
precision_mode=precision, minimum_segment_size=50,
max_workspace_size_bytes=12*1<<30, maximum_cached_engines=1)
converter = trt.TrtGraphConverterV2(
input_saved_model_dir=input_path, conversion_params=conv_params,
use_dynamic_shape=explicit_batch,
dynamic_shape_profile_strategy=prof_strategy)
converter.convert()
def input_fn():
for shapes in input_shapes:
# return a list of input tensors
yield [np.ones(shape=x).astype(dtype) for x in shapes]
converter.build(input_fn)
converter.save(output_path)
def random_input(batch_size, seq_length):
# Generate random input data
mask = tf.convert_to_tensor(np.ones((batch_size, seq_length), dtype=np.int32))
type_id = tf.convert_to_tensor(np.zeros((batch_size, seq_length), dtype=np.int32))
word_id = tf.convert_to_tensor(np.random.randint(0, 1000, size=[batch_size, seq_length], dtype=np.int32))
return {'input_mask':mask, 'input_type_ids': type_id, 'input_word_ids':word_id}
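# For example, random_input(1, 128) returns int32 tensors of shape (1, 128)
# under the keys 'input_mask', 'input_type_ids' and 'input_word_ids'.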
bert_trt_path = bert_saved_model_path + '_trt'
input_shapes = [[(1, 128), (1, 128), (1, 128)]]
trt_convert(bert_saved_model_path, bert_trt_path, input_shapes, True, np.int32, precision='FP16')
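# Note: each inner list of input_shapes holds one shape per model input
# (mask, type ids and word ids), all (batch=1, seq_len=128) here; the build
# step feeds these shapes to the converter through input_fn.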
trt_func, _ = get_func_from_saved_model(bert_trt_path)
input_dict = random_input(1, 128)
result_key = 'bert_encoder_1' # 'classifier'
res = predict_and_benchmark_throughput(input_dict, trt_func, result_key=result_key)
func, model = get_func_from_saved_model(bert_saved_model_path)
res = predict_and_benchmark_throughput(input_dict, func, result_key=result_key)
seq1 = random_input(1, 128)
res1 = func(**seq1)
seq2 = random_input(1, 180)
res2 = func(**seq2)
bert_trt_path = bert_saved_model_path + '_trt2'
input_shapes = [[(1, 128), (1, 128), (1, 128)], [(1, 180), (1, 180), (1, 180)]]
trt_convert(bert_saved_model_path, bert_trt_path, input_shapes, True, np.int32, precision='FP16',
prof_strategy='Range')
trt_func_dynamic, _ = get_func_from_saved_model(bert_trt_path)
trt_res = trt_func_dynamic(**seq1)
result_key = 'bert_encoder_1' # 'classifier'
res = predict_and_benchmark_throughput(seq1, trt_func_dynamic, result_key=result_key)
res = predict_and_benchmark_throughput(seq2, trt_func_dynamic, result_key=result_key)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A scout game
Step2: A solution without dictionaries, but mainly one in which it would not be entirely easy to add further questions.
Step3: A solution in which adding, changing or removing any question means changing only one place in the list of questions.
|
<ASSISTANT_TASK:>
Python Code:
cisla = [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25), (6,36), (7, 49), (8, 64), (9, 81), (10, 100)]
mocniny = dict(cisla)
print(mocniny)
import random
while True:
odpoved = input('Na kolik odpovědí chceš hrát? ')
try:
odpoved = int(odpoved)
break
except ValueError:
print('Musíš napsat celé číslo!')
kdo = []
s_kym = []
co_delali = []
kde = []
kdy = []
proc = []
vsechno = [kdo,s_kym,co_delali,kde,kdy,proc]
for i in range (odpoved):
kdo1 = input ('Kdo? ')
s_kym1 = input ('S kým? ')
co_delali1 = input ('Co dělali? ')
kde1 = input ('Kde? ')
kdy1 = input ('Kdy? ')
proc1 = input ('Proč? ')
print('')
kdo.append(kdo1)
s_kym.append(s_kym1)
co_delali.append(co_delali1)
kde.append(kde1)
kdy.append(kdy1)
proc.append(proc1)
a=random.choice(kdo)
b=random.choice(s_kym)
c=random.choice(co_delali)
d=random.choice(kde)
e=random.choice(kdy)
f=random.choice(proc)
print(a, b, c, d, e, f)
import random
kdo_seznam = []
s_kym_seznam = []
kde_seznam = []
co_delali_seznam = []
proc_seznam = []
for pocet in range(3):
kdo = input('Kdo? ')
kdo_seznam.append(kdo)
s_kym = input('S kým? ')
s_kym_seznam.append(s_kym)
kde = input('Kde? ')
kde_seznam.append(kde)
co_delali = input('Co dělali? ')
co_delali_seznam.append(co_delali)
proc = input('Proč? ')
proc_seznam.append(proc)
print(random.choice(kdo_seznam), 's', random.choice(s_kym_seznam),'v', random.choice(kde_seznam), random.choice(co_delali_seznam), random.choice(proc_seznam))
from random import choice
otazky = ['Kdo', 'S kym', 'Co delali', 'Kde']
odpovedi = {}
for otazka in otazky:
odpovedi[otazka] = []
# Alternative approach
# odpovedi = {otazka: [] for otazka in otazky}
for otazka in otazky:
while True:
odpoved = input('Zadej odpoved na otazku {}? '.format(otazka))
if not odpoved:
break
else:
odpovedi[otazka].append(odpoved)
veta = ''
for otazka in otazky:
veta = veta + choice(odpovedi[otazka]) + ' '
print(veta)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Program 1
Step2: Program 2
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
def calc_diff_err_p1(n):
h = 2.*np.pi/n
x = tf.range(1., n+1.)[:, None]*h - np.pi
u = tf.exp(tf.sin(x))
u_prime = tf.cos(x) * u
e = tf.cast(tf.ones(tf.cast(n, tf.int64)), n.dtype)
indices = tf.cast(tf.concat([
tf.tile(tf.range(n)[:, None], [4,1]),
tf.concat([tf.roll(tf.range(n)[:, None], i, 0) for i in [-1, -2, 1, 2]], axis=0)
],
axis=1
), tf.int64)
values = tf.concat(
[2.*e/3., -e/12., -2.*e/3, e/12.], axis=0
)/h
D = tf.sparse.reorder(tf.sparse.SparseTensor(indices, values, tf.cast((n, n), tf.int64)))
err = tf.norm(tf.sparse.sparse_dense_matmul(D, u) - u_prime, np.inf)
return err
n_vec = 2.**tf.range(3., 13.)
fig = plt.figure(figsize=(10,8))
_ = plt.loglog(n_vec, tf.map_fn(calc_diff_err_p1, tf.cast(n_vec, tf.float64)),
linestyle='none', marker='.', label='float64')
_ = plt.loglog(n_vec, tf.map_fn(calc_diff_err_p1, tf.cast(n_vec, tf.float32)),
linestyle='none', marker='x', label='float32')
_ = plt.title('Error for 4th order method with differing precision')
_ = plt.xlabel('N')
_ = plt.ylabel('error')
_ = plt.legend()
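# Quick sanity check (illustrative, assumes eager execution as above): the
# 4th-order error can also be evaluated at a single N directly.
print(calc_diff_err_p1(tf.constant(64., dtype=tf.float64)).numpy())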
def calc_diff_err_p2(n):
# n = tf.constant(8., dtype=tf.float64) # n_vec[0]
h = 2.*np.pi/n
x = tf.range(1., n+1.)[:, None]*h - np.pi
u = tf.exp(tf.sin(x))
u_prime = tf.cos(x) * u
d = 0.5*((-1.)**np.arange(1., n))/np.tan(np.arange(1., n)*h/2.)
col = np.append([0.], d.tolist())
row = np.append(col[0], col[-1:0:-1])
D = tf.cast(tf.linalg.LinearOperatorToeplitz(col, row).to_dense(), n.dtype)
err = tf.norm(D @ u - u_prime, np.inf)
return err
n_vec = tf.range(2., 100., 2.)
fig = plt.figure(figsize=(10,8))
_ = plt.loglog(n_vec, tf.map_fn(calc_diff_err_p2, tf.cast(n_vec, tf.float64)),
linestyle='none', marker='.', label='float64')
_ = plt.loglog(n_vec, tf.map_fn(calc_diff_err_p2, tf.cast(n_vec, tf.float32)),
linestyle='none', marker='x', label='float32')
_ = plt.title('Error for Spectral method with differing precision')
_ = plt.xlabel('N')
_ = plt.ylabel('error')
_ = plt.legend()
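# Side-by-side check at a single N (illustrative): the spectral error should
# sit many orders of magnitude below the 4th-order finite-difference one.
n_check = tf.constant(24., dtype=tf.float64)
print(calc_diff_err_p1(n_check).numpy(), calc_diff_err_p2(n_check).numpy())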
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: \begin{equation} B = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{pmatrix} \end{equation}
Step4: \begin{array}{r | r | l | l | l | l} \hline
|
<ASSISTANT_TASK:>
Python Code:
import pytablewriter
from textwrap import dedent

writer = pytablewriter.LatexMatrixWriter()
writer.table_name = "B"
writer.value_matrix = [
["a_{11}", "a_{12}", "\\ldots", "a_{1n}"],
["a_{21}", "a_{22}", "\\ldots", "a_{2n}"],
[r"\vdots", "\\vdots", "\\ddots", "\\vdots"],
["a_{n1}", "a_{n2}", "\\ldots", "a_{nn}"],
]
writer.write_table()
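# The original notebook defines `table_name`, `header_list` and `data` in an
# earlier cell that is not shown here; the values below are only an
# illustrative stand-in so that the cells that follow can run.
from datetime import datetime
table_name = "example_table"
header_list = ["int", "float", "str", "bool", "datetime"]
data = [
    [0, 0.1, "hoge", True, datetime(2017, 1, 1, 0, 0, 0)],
    [2, -2.2, "foo", False, datetime(2017, 12, 23, 12, 34, 51)],
    [3, 0.0, "bar", True, datetime(2017, 3, 3, 22, 44, 55)],
]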
writer = pytablewriter.LatexTableWriter()
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.MarkdownTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.MarkdownTableWriter()
writer.table_name = "write example with a margin"
writer.header_list = header_list
writer.value_matrix = data
writer.margin = 1 # add a whitespace for both sides of each cell
writer.write_table()
writer = pytablewriter.MediaWikiTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.NumpyTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.PandasDataFrameWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.PandasDataFrameWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.is_datetime_instance_formatting = False
writer.write_table()
writer = pytablewriter.PythonCodeTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.PythonCodeTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.is_datetime_instance_formatting = False
writer.write_table()
writer = pytablewriter.RstGridTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.RstSimpleTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.RstCsvTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.LtsvTableWriter()
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.TomlTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
from datetime import datetime
import pytablewriter as ptw
writer = ptw.JavaScriptTableWriter()
writer.header_list = ["header_a", "header_b", "header_c"]
writer.value_matrix = [
[-1.1, "2017-01-02 03:04:05", datetime(2017, 1, 2, 3, 4, 5)],
[0.12, "2017-02-03 04:05:06", datetime(2017, 2, 3, 4, 5, 6)],
]
print("// without type hints: column data types detected automatically by default")
writer.table_name = "without type hint"
writer.write_table()
print("// with type hints: Integer, DateTime, String")
writer.table_name = "with type hint"
writer.type_hint_list = [ptw.Integer, ptw.DateTime, ptw.String]
writer.write_table()
from datetime import datetime
import pytablewriter as ptw
writer = ptw.PythonCodeTableWriter()
writer.value_matrix = [
[-1.1, float("inf"), "2017-01-02 03:04:05", datetime(2017, 1, 2, 3, 4, 5)],
[0.12, float("nan"), "2017-02-03 04:05:06", datetime(2017, 2, 3, 4, 5, 6)],
]
# column data types detected automatically by default
writer.table_name = "python variable without type hints"
writer.header_list = ["float", "infnan", "string", "datetime"]
writer.write_table()
# set type hints
writer.table_name = "python variable with type hints"
writer.header_list = ["hint_int", "hint_str", "hint_datetime", "hint_str"]
writer.type_hint_list = [ptw.Integer, ptw.String, ptw.DateTime, ptw.String]
writer.write_table()
writer = pytablewriter.MarkdownTableWriter()
writer.from_csv(
    dedent(
        """\
        "i","f","c","if","ifc","bool","inf","nan","mix_num","time"
        1,1.10,"aa",1.0,"1",True,Infinity,NaN,1,"2017-01-01 00:00:00+09:00"
        2,2.20,"bbb",2.2,"2.2",False,Infinity,NaN,Infinity,"2017-01-02 03:04:05+09:00"
        3,3.33,"cccc",-3.0,"ccc",True,Infinity,NaN,NaN,"2017-01-01 00:00:00+09:00"
        """
    )
)
writer.write_table()
writer = pytablewriter.MarkdownTableWriter()
writer.table_name = "ps"
writer.from_csv(
    dedent(
        """\
        USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
        root 1 0.0 0.4 77664 8784 ? Ss May11 0:02 /sbin/init
        root 2 0.0 0.0 0 0 ? S May11 0:00 [kthreadd]
        root 4 0.0 0.0 0 0 ? I< May11 0:00 [kworker/0:0H]
        root 6 0.0 0.0 0 0 ? I< May11 0:00 [mm_percpu_wq]
        root 7 0.0 0.0 0 0 ? S May11 0:01 [ksoftirqd/0]
        """
    ),
delimiter=" ",
)
writer.write_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Observations
Step2: Obtaining the parameters from the database
Step3: The RefLightCurves Class
Step4: Find the number of objects in the table
Step5: Get all ids on the table
Step6: The astrophysical object properties can be obtained by using the following function in the form of a pd.DataFrame
Step7: Get the parameters for an object
Step8: The instance of the class representing the astrophysical object itself can be obtained by the following method for SN
Step9: Following the usual methods in sims.catUtils.supernovae.SNObject the properties of this SN can be seen using
Step10: Simpler methods for Twinkles
Step11: The tableName, idCol, objectTypeID, and columns will change from one astrophysical object to another.
Step12: We can get the light curves for each band by
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import, division, print_function
import os
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# Import from monitor
from desc.monitor import RefLightCurves
import desc.monitor as monitor
data_dir = os.path.join(os.environ['MONITOR_DIR'], 'data')
opsimCsv = os.path.join(data_dir, 'SelectedKrakenVisits.csv')
opsimdf = pd.read_csv(opsimCsv, index_col='obsHistID')
df = opsimdf[['expMJD', 'filter', 'fiveSigmaDepth']]
import pymssql
from lsst.utils import getPackageDir
import lsst.sims.catUtils.baseCatalogModels as bcm
from lsst.daf.persistence import DbAuth
config = bcm.BaseCatalogConfig()
config.load(os.path.join(getPackageDir("sims_catUtils"), "config", "db.py"))
username = DbAuth.username(config.host, config.port)
password = DbAuth.password(config.host, config.port)
hostname = config.host
DBConnection = pymssql.connect(user=username,
password=password,
host=hostname,
database=config.database,
port=config.port)
db = DBConnection.cursor()
# The ids are obtained from Instance Catalogs
reflc = RefLightCurves(idSequence=(6144007055260714, 6144158471480362),
tableName='TwinkSN',
dbConnection=DBConnection,
dbCursor=db)
reflc.dbConnection
print(reflc.get_numObjects())
reflc.get_numObjects()
ids = reflc.allIdinTable(chunksize=None)
print(ids.astype(int).values.flatten())
ids = reflc.allIdinTable(chunksize=10)
print(ids.next().astype(int).values.flatten())
reflc.get_params(6144007055260714)
allParamsInIdSequence = reflc.get_params()
allParamsInIdSequence
reflcAll = RefLightCurves(tableName='TwinkSN',
dbConnection=DBConnection,
dbCursor=db)
# Slow because all rows
allParams = reflcAll.get_params()
allParams.head()
sn = reflc.astro_object(idValue=6144007055260714)
from lsst.sims.photUtils import BandpassDict
# Get the `bandpassDict` instance from files using catsim methods
lsstBP = BandpassDict.loadBandpassesFromFiles()
# This is a tuple, the first component gives the total bandpass, while the second gives the hardware bandpass
# Pass the bandpass and get the lightcurve for the observations
reflc.lightCurve(idValue=6144007055260714, observations=df, bandPassDict=lsstBP[0])
reflcTwink = RefLightCurves.fromTwinklesData(tableName='TwinkSN',
idCol='id',
objectTypeID=42,
dbHostName=None,
columns=('id', 'redshift', 'snra', 'sndec', 't0',
'x0', 'x1', 'c'),
idSequence=None)
reflcTwink.idCol
reflcTwink.lightCurve(idValue=6144007055260714)
reflcTwink.lightCurve(idValue=6144007055260714, bandName='r')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's get all the available data from the IPython community. For now, this is just the mailing lists. One day, BigBang will also get its issue tracker data! That will be very exciting.
Step2: Now let's isolate the messages involving Fernando Perez.
Step3: Note that our way of finding Fernando Perez was not very precise. We've picked up another Fernando.
Step4: In future iterations, we will use a more sensitive entity recognition technique to find Fernando. This will have to do for now.
Step5: We now have two Archives made from the original Archive, with the same range of dates, but one with and the other without Fernando. Both contain emails from many addresses. We want to get a single metric of activity.
Step6: Let's make a stackplot of this data to see how much of the conversation on the IPython developer's list has been Fernando, over time.
|
<ASSISTANT_TASK:>
Python Code:
from bigbang.archive import Archive
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
url = "ipython-user"
arx = Archive(url)
fernandos = Archive(arx.data[arx.data.From.map(lambda x: 'Fernando' in x)])
fernandos.data[:3]
[x for x in fernandos.get_activity()]
not_fernandos = Archive(arx.data[arx.data.From.map(lambda x: 'Fernando' not in x)])
not_fernandos.data[:3]
not_fernandos.get_activity().sum(1).values.shape
nf = pd.DataFrame(not_fernandos.get_activity().sum(1))
f = pd.DataFrame(fernandos.get_activity().sum(1))
both = pd.merge(nf,f,how="outer",left_index=True,right_index=True,suffixes=("_nf","_f")).fillna(0)
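# `both` now holds, per day, the message count without the Fernandos ('0_nf')
# and from the Fernandos ('0_f'); days missing on either side are filled with 0.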
fig = plt.figure(figsize=(12.5, 7.5))
fa = fernandos.get_activity()
d = np.row_stack((both['0_f'],
both['0_nf']))
plt.stackplot(both.index.values,d,linewidth=0)
fig.axes[0].xaxis_date()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The control freak sequence
Step2: Below we write a generic function that takes the functions $u(t)$, $v(t)$ and $w(t)$ as arguments and then visualizes the pumping process in $d$-space. We will use this function to explore the control freak sequence and, later on, also the not so control freak sequence.
Step3: Now let us see what happens as time proceeds!
Step4: Now that we have explored the momentum-space behaviour, let us again look at a small real-space sample! First we define a function that generates Rice-Mele-type finite lattice Hamiltonians for given values of $u$, $v$ and $w$.
Step5: Next we define a class that we will mainly use to hold data about our pumping sequence. The information in these objects will be used to visualize the spectrum and wavefunctions of bulk and edge localized states.
Step6: Now let us create an instance of the above class with the data of the control freak pump sequence
Step7: Finally we write a simple function to visualize the spectrum and the wavefunctions in a similar fashion as we did for the SSH model. We shall now explicitly mark the edge states in the spectrum with red and blue.
Step8: We can now interact with the above function and see the evolution of the surface states.
Step9: To complete the analysis of the control freak sequence we now investigate the flow of Wannier centers in time in a chain with periodic boundary conditions. We again first define a class that holds the appropriate data and then write a plotting function.
Step10: An alternative way to visualize the Wannier flow of a periodic system is shown below. The inner circle represents $t/T=0$ and the outer one $t/T=1$; the sections of the disc correspond to unit cells.
Step11: If we investigate pumping in a finite sample without periodic boundary conditions, we will see that the edge states cross the gap!
Step12: We have now done all the heavy lifting as far as coding is concerned. Now we can reuse all the plotting and data-generating classes and functions for other sequences.
Step13: The $d$ space story can now be easily explored via the seq_and_d function we have defined earlier.
Step14: Similarly the spectrum and wavefunctions can also be investigated via the pumpdata class
Step15: Finally, the wannierflow class lets us see the movement of the Wannier centers.
|
<ASSISTANT_TASK:>
Python Code:
# The usual imports
%pylab inline
from ipywidgets import *
# Some extra imports for 3D
from mpl_toolkits.mplot3d import *
# These are only needed to make things pretty..
# they are mostly refered to in the formatting part of the figures
# and enshure us to have the figures also present in the book.
from matplotlib.patches import FancyArrowPatch
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
# this generates a parameter mesh in momentum and time
kran,tran=meshgrid(linspace(-pi,pi,30),linspace(0,1,51))
# a helper function for defining the d vector
def dkt(k,t,uvw):
'''
A simple function that returns the d vector of the RM model.
'''
return [uvw(t)[1]+uvw(t)[2]*cos(k),uvw(t)[2]*sin(k),uvw(t)[0]]
def f(t):
'''
A piecewise function for the control freak sequence
used to define u(t),v(t),w(t)
'''
t=mod(t,1);
return (
8*t*((t>=0)&(t<1/8))+\
(0*t+1)*((t>=1/8)&(t<3/8))+\
(4-8*t)*((t>=3/8)&(t<1/2))+\
0*t*((t>=1/2)&(t<1)));
def uvwCF(t):
'''
u,v and w functions of the control freak sequence
'''
return array([f(t)-f(t-1/2),2*f(t+1/4),f(t-1/4)])
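# Quick check (illustrative): at t = 0 the control freak sequence gives
# (u, v, w) = (0, 2, 0), i.e. only intracell hopping, and the d vector at
# k = 0 is then (v + w cos k, w sin k, u) = (2, 0, 0).
print(uvwCF(0.0))
print(dkt(0.0, 0.0, uvwCF))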
def seq_and_d(funcs,ti=10):
'''
A figure generating function for the Rice Mele model.
It plots the functions defining the sequence and the d-space structure.
'''
figsize(10,5)
fig=figure()
func=eval(funcs);
ax1=fig.add_subplot(121)
ftsz=20
# plotting the functions defining the sequence
plot(tran[:,0],func(tran[:,0])[1],'k-',label=r'$v$',linewidth=3)
plot(tran[:,0],func(tran[:,0])[2],'g--',label=r'$w$',linewidth=3)
plot(tran[:,0],func(tran[:,0])[0],'m-',label=r'$u$',linewidth=3)
plot([tran[ti,0],tran[ti,0]],[-3,3],'r-',linewidth=3)
# this is just to make things look like in the book
ylim(-1.5,2.5)
legend(fontsize=20,loc=3)
xlabel(r'time $t/T$',fontsize=ftsz)
xticks(linspace(0,1,5),[r'$0$',r'$0.25$',r'$0.5$',r'$0.75$',r'$1$'],fontsize=ftsz)
ylabel(r'amplitudes $u,v,w$',fontsize=ftsz)
yticks([-1,0,1,2],[r'$-1$',r'$0$',r'$1$',r'$2$'],fontsize=ftsz)
grid(True)
ax2=fig.add_subplot(122, projection='3d')
# plotting d space image of the pumping sequence
plot(*dkt(kran[ti,:],tran[ti,:],func),marker='o',mec='red',mfc='red',ls='-',lw=6,color='red')
plot(*dkt(kran.flatten(),tran.flatten(),func),color='blue',alpha=0.5)
# this is just to make things look like in the book
# basically everything below is just to make things look nice..
ax2.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))
ax2.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))
ax2.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))
ax2.set_axis_off()
ax2.grid(False)
arrprop=dict(mutation_scale=20, lw=1,arrowstyle='-|>,head_length=1.4,head_width=0.6',color="k")
ax2.add_artist(Arrow3D([-2,4],[0,0],[0,0], **arrprop))
ax2.add_artist(Arrow3D([0,0],[-2,3.3],[0,0], **arrprop))
ax2.add_artist(Arrow3D([0,0],[0,0],[-1,2], **arrprop))
ftsz2=30
ax2.text(4.4, -1, 0, r'$d_x$', None,fontsize=ftsz2)
ax2.text(0.3, 3.0, 0, r'$d_y$', None,fontsize=ftsz2)
ax2.text(0, 0.6, 2.0, r'$d_z$', None,fontsize=ftsz2)
ax2.plot([0],[0],[0],'ko',markersize=8)
ax2.view_init(elev=21., azim=-45)
ax2.set_aspect(1.0)
ax2.set_zlim3d(-0.5, 2)
ax2.set_ylim3d(-0.5, 2)
ax2.set_xlim3d(-0.5, 2)
tight_layout()
interact(seq_and_d,funcs=fixed('uvwCF'),ti=(0,len(tran[:,0])-1));
def H_RM_reals(L,u,v,w,**kwargs):
'''
A function to bulid a finite RM chain.
The number of unitcells is L.
As usual v is intracell and w ins intercell hopping.
We also have now an asymmetric sublattice potential u.
'''
idL=eye(L); # identity matrix of dimension L
odL=diag(ones(L-1),1);# upper off diagonal matrix with ones of size L
odc=matrix(diag([1],-L+1));#lower corner for periodic boundary condition
U=matrix([[u,v],[v,-u]]) # intracell
T=matrix([[0,0],[1,0]]) # intercell
p=0
if kwargs.get('periodic',False):
p=1
H=(kron(idL,U)+
kron(odL,w*T)+
kron(odL,w*T).H+
p*(kron(odc,w*T)+kron(odc,w*T).H))
return H
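# Quick check (illustrative): the finite-chain Hamiltonian is a 2L x 2L
# symmetric matrix; with w = 0 and u = 0 it decouples into identical dimers.
Hcheck = H_RM_reals(3, 0.0, 1.0, 0.0)
print(Hcheck.shape, allclose(Hcheck, Hcheck.T))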
class pumpdata:
'''
A class that holds information on spectrum and wavefunctions
of a pump sequence performed on a finite lattice model.
Default values are tailored to the control freak sequence.
'''
def __init__(self,L=10,numLoc=1,norm_treshold=0.99,func=uvwCF,**kwargs):
'''
Initialization function. The default values are set in such a way that they correspond
to the control freak sequence.
'''
self.L=L
self.dat=[] # We will collect the data to be
self.vecdat=[] # plotted in these arrays.
self.lefty=[]
self.righty=[]
self.lefty=[]
self.righty=[]
tlim=kwargs.get('edge_tlim',(0,1)) # We can use this to restrict classification
# of left and right localized states in time
for t in tran[:,0]:
u,v,w=func(t) # obtain u(t),v(t) and w(t)
H=H_RM_reals(L,u,v,w) #
eigdat=eigh(H); # for a given t here we calculate the eigensystem (values and vectors)
if tlim[0]<t<tlim[1]:
# for the interesting time intervall we look for states localized to the edge
for i in range(2*L):
if sum((array(eigdat[1][0::2,i])**2+array(eigdat[1][1::2,i])**2)[0:2*numLoc:2])>norm_treshold:
self.lefty=append(self.lefty,[[t,eigdat[0][i]]]);
if sum((array(eigdat[1][0::2,i])**2+array(eigdat[1][1::2,i])**2)[:L-2*numLoc:-2])>norm_treshold:
self.righty=append(self.righty,[[t,eigdat[0][i]]]);
self.dat=append(self.dat,eigdat[0]);
self.vecdat=append(self.vecdat,eigdat[1]);
self.dat=reshape(self.dat,[len(tran[:,0]),2*L]); # rewraping the data
self.vecdat=reshape(self.vecdat,[len(tran[:,0]),2*L,2*L]) # to be more digestable
# Filling up data for the control freak sequence
CFdata=pumpdata(edge_tlim=(0.26,0.74))
def enpsi(PD,ti=10,n=10):
figsize(14,5)
subplot(121)
lcol='#53a4d7'
rcol='#d7191c'
# Plotting the eigenvalues and
# a marker showing for which state
# we are exploring the wavefunction
plot(tran[:,0],PD.dat,'k-');
(lambda x:plot(x[:,0],x[:,1],'o',mec=lcol,mfc=lcol,
                   markersize=10))(reshape(PD.lefty,(PD.lefty.size//2,2)))
(lambda x:plot(x[:,0],x[:,1],'o',mec=rcol,mfc=rcol,
                   markersize=10))(reshape(PD.righty,(PD.righty.size//2,2)))
plot(tran[ti,0],PD.dat[ti,n],'o',markersize=13,mec='k',mfc='w')
# Make it look like the book
xlabel(r'$t/T$',fontsize=25);
xticks(linspace(0,1,5),fontsize=25)
ylabel(r'energy $E$',fontsize=25);
yticks(fontsize=25)
ylim(-2.99,2.99)
grid()
subplot(122)
# Plotting the sublattice resolved wavefunction
bar(array(range(0,2*PD.L,2)), real(array(PD.vecdat[ti][0::2,n].T)),0.9,color='grey',label='A') # sublattice A
bar(array(range(0,2*PD.L,2))+1,real(array(PD.vecdat[ti][1::2,n].T)),0.9,color='white',label='B') # sublattice B
# Make it look like the book
xticks(2*(array(range(10))),[' '+str(i) for i in array(range(11))[1:]],fontsize=25)
ylim(-1.2,1.2)
yticks(linspace(-1,1,5),fontsize=25,x=1.2)
ylabel('Wavefunction',fontsize=25,labelpad=-460,rotation=-90)
grid()
legend(loc='lower right')
xlabel(r'cell index $m$',fontsize=25);
tight_layout()
interact(enpsi,PD=fixed(CFdata),ti=(0,len(tran[:,0])-1),n=(0,19));
class wannierflow:
'''
A class that holds information on Wannier center flow.
'''
def __init__(self,L=6,func=uvwCF,periodic=True,tspan=linspace(0,1,200),**kwargs):
self.L=L
self.func=func
self.periodic=periodic
self.tspan=tspan
# get position operator
if self.periodic:
POS=matrix(kron(diag(exp(2.0j*pi*arange(L)/(L))),eye(2)))
else:
POS=matrix(kron(diag(arange(1,L+1)),eye(2)))
Lwanflow=[]
Hwanflow=[]
Lwane=[]
Hwane=[]
for t in tspan:
u,v,w=self.func(t)
H=H_RM_reals(L,u,v,w,periodic=periodic)
sys=eigh(H)
Lval=sys[0][sys[0]<0]
Lvec=matrix(sys[1][:,sys[0]<0])
LP=Lvec*Lvec.H
LW=LP*POS*LP
LWval,LWvec=eig(LW)
LWvec=LWvec[:,abs(LWval)>1e-10]
LWe=real(diag(LWvec.H*H*LWvec))
Hval=sys[0][sys[0]>0]
Hvec=matrix(sys[1][:,sys[0]>0])
HP=Hvec*Hvec.H
HW=HP*POS*HP
HWval,HWvec=eig(HW)
HWvec=HWvec[:,abs(HWval)>1e-10]
HWe=real(diag(HWvec.H*H*HWvec))
Lwane=append(Lwane,LWe)
Hwane=append(Hwane,HWe)
if periodic:
Lwanflow=append(Lwanflow,L/(2*pi)*sort(angle(LWval[abs(LWval)>1e-10])))
Hwanflow=append(Hwanflow,L/(2*pi)*sort(angle(HWval[abs(HWval)>1e-10])))
else:
Lwanflow=append(Lwanflow,sort(LWval[abs(LWval)>1e-10]))
Hwanflow=append(Hwanflow,sort(HWval[abs(HWval)>1e-10]))
self.Lwanflow=Lwanflow
self.Hwanflow=Hwanflow
self.Lwane=Lwane
self.Hwane=Hwane
def plot_w_vs_t(self,LorH='Lower band',*args,**kwargs):
'''
A function for plotting the Wannier flow.
The Wannier centers against time are plotted.
'''
#figsize(7,5)
data=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')
for i in range(self.L):
descr=(LorH if i==0 else '')
plot(real(data[i::self.L]),self.tspan,*args,label=descr,**kwargs)
if self.periodic:
xticks(arange(self.L)-self.L/2+0.5*mod(self.L,2),fontsize=25)
else:
xticks(arange(self.L)+1,fontsize=25)
yticks(linspace(0,1,5),fontsize=25)
xlabel(r'position $\langle \hat{x}\rangle$',fontsize=25);
ylabel(r"time $t/T$",fontsize=25);
grid()
def plot_w_vs_e(self,LorH='Lower band',*args,**kwargs):
'''
A function for plotting the Wannier flow.
The Wannier centers against energy are plotted.
'''
#figsize(7,5)
dataw=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')
datae=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wane')
for i in range(self.L):
descr=(LorH if i==0 else '')
plot(dataw[i::self.L],datae[i::self.L],*args,label=descr,**kwargs)
pos=100
vx=real(dataw[i::self.L][pos:(pos+2)])
vy=real(datae[i::self.L][pos:(pos+2)])
#plot(vx[0],vy[0],'bo')
arrow(vx[0],vy[0],
(vx[1]-vx[0])/2,
(vy[1]-vy[0])/2,fc='k',zorder=1000,
head_width=0.3, head_length=0.1)
if self.periodic:
xticks(arange(self.L)-self.L/2+0.5*mod(self.L,2),fontsize=25)
else:
xticks(arange(self.L)+1,fontsize=25)
yticks(fontsize=25)
xlabel(r'position $\langle \hat{x}\rangle$',fontsize=25);
ylabel(r'energy $\langle \hat{H}\rangle$',fontsize=25);
grid()
def polar_w_vs_t(self,LorH='Lower band',*args,**kwargs):
'''
A function for plotting the Wannier flow.
A figure in polar coordinates is produced.
'''
if self.periodic==False:
print('This feature is only supported for periodic boundary conditions')
return
#figsize(7,7)
data=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')
for i in range(self.L):
descr=(LorH if i==0 else '')
plot((self.tspan+0.5)*cos((2*pi)/self.L*data[i::self.L]),
(self.tspan+0.5)*sin((2*pi)/self.L*data[i::self.L]),
*args,label=descr,**kwargs)
phi=linspace(0,2*pi,100);
plot(0.5*sin(phi),0.5*cos(phi),'k-',linewidth=2);
plot(1.5*sin(phi),1.5*cos(phi),'k-',linewidth=2);
xlim(-1.5,1.5);
ylim(-1.5,1.5);
phiran=linspace(-pi,pi,self.L+1)
for i in range(len(phiran)-1):
phi0=0
plot([0.5*sin(phiran[i]+phi0),1.5*sin(phiran[i]+phi0)],
[0.5*cos(phiran[i]+phi0),1.5*cos(phiran[i]+phi0)],'k--')
text(1.3*cos(phiran[i]+pi/self.L/2),1.3*sin(phiran[i]+pi/self.L/2),i+1,fontsize=20)
axis('off')
text(-0.45,-0.1,r'$t/T=0$',fontsize=20)
text(1.1,-1.1,r'$t/T=1$',fontsize=20)
CFwan=wannierflow()
figsize(12,4)
subplot(121)
CFwan.plot_w_vs_t('Lower band','ko',ms=10)
CFwan.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')
legend(fontsize=15,numpoints=100);
subplot(122)
CFwan.plot_w_vs_e('Lower band','k.')
CFwan.plot_w_vs_e('Higher band','.',mec='grey',mfc='grey')
#legend(fontsize=15,numpoints=100);
tight_layout()
figsize(6,6)
CFwan.polar_w_vs_t('Lower band','ko',ms=10)
CFwan.polar_w_vs_t('Higher band','o',mec='grey',mfc='grey')
legend(numpoints=100,fontsize=15,ncol=2,bbox_to_anchor=(1,0));
CFwan_finite=wannierflow(periodic=False)
figsize(6,4)
CFwan_finite.plot_w_vs_t('Lower band','ko',ms=10)
CFwan_finite.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')
legend(fontsize=15,numpoints=100);
xlim(0,7);
def uvwNSCF(t):
'''
The u, v and w functions of the not-so-control-freak sequence.
For the time being we assume vbar to be fixed.
'''
vbar=1
return array([sin(t*(2*pi)),vbar+cos(t*(2*pi)),1*t**0])
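# Quick illustrative check (not part of the original notebook): over one period the pump
# parameters (u, v, w) trace a circle of radius 1 around (0, vbar, 1), with w held constant.
print(uvwNSCF(0.0), uvwNSCF(0.25), uvwNSCF(0.5), uvwNSCF(0.75))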
interact(seq_and_d,funcs=fixed('uvwNSCF'),ti=(0,len(tran[:,0])-1));
# Generating the not-so control freak data
NSCFdata=pumpdata(numLoc=2,norm_treshold=0.6,func=uvwNSCF)
interact(enpsi,PD=fixed(NSCFdata),ti=(0,len(tran[:,0])-1),n=(0,19));
NSCFwan=wannierflow(periodic=True,func=uvwNSCF)
figsize(12,4)
subplot(121)
NSCFwan.plot_w_vs_t('Lower band','ko',ms=10)
NSCFwan.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')
legend(fontsize=15,numpoints=100);
subplot(122)
NSCFwan.plot_w_vs_e('Lower band','k.')
NSCFwan.plot_w_vs_e('Higher band','.',mec='grey',mfc='grey')
#legend(fontsize=15,numpoints=100);
tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
        labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
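# Real images use smoothed targets of (1 - smooth) = 0.9 so the discriminator never becomes
# perfectly confident; the generator is trained against targets of 1 on the fake logits,
# i.e. it is rewarded for making the discriminator classify its samples as real.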
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
!mkdir checkpoints
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What the data looks like
Step2: It is big.
Step3: Idea behind the competition
Step4: The code below was built after several attempts, guided by the warnings returned by the dask module.
Step5: We add the boolean target column, which indicates the presence of an additive starting with 'e'.
Step6: We restrict ourselves to products for which some information about the contents is available.
Step7: The next part takes a while and is not very efficient. A dask-like tool that skips dask and only uses pandas dataframes would make it faster. Coffee break.
Step8: I think I will really end up writing something like dask built only on pandas.
Step9: Split...
Step10: Almost forgot: the additives column has to be reworked to avoid leaking the target, and the hasE column is recomputed to be sure.
Step11: We put everything back together.
Step12: There are probably still one or two data leaks in the other columns.
Step13: First model
Step14: ROC
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import os
os.stat("c:/temp/fr.openfoodfacts.org.products.csv").st_size / 2**30, 'Go'
import pyensae
%load_ext pyensae
%head -n 2 c:/temp/fr.openfoodfacts.org.products.csv
import pandas
df = pandas.read_csv("c:/temp/fr.openfoodfacts.org.products.csv",
sep="\t", encoding="utf-8", nrows=10000, low_memory=False)
df.head().T.to_excel("e.xlsx")
df[df.additives.notnull() & df.additives.str.contains("E4")].head().T
import dask
import dask.dataframe as dd
ddf = dd.read_csv("c:/temp/fr.openfoodfacts.org.products.csv", sep="\t", encoding="utf-8", low_memory=False,
dtype={'allergens': 'object',
'cities_tags': 'object',
'emb_codes': 'object',
'emb_codes_tags': 'object',
'first_packaging_code_geo': 'object',
'generic_name': 'object',
'ingredients_from_palm_oil_tags': 'object',
'labels': 'object',
'labels_fr': 'object',
'labels_tags': 'object',
'manufacturing_places': 'object',
'manufacturing_places_tags': 'object',
'origins': 'object',
'origins_tags': 'object',
'stores': 'object',
'code': 'object','allergens_fr': 'object',
'cities': 'object',
'created_t': 'object',
'last_modified_t': 'object'})
ddf.head()
print(type(ddf))
ddfe = ddf.assign(hasE=ddf.apply(lambda row: isinstance(row.additives, str) and "en:e" in row.additives,
axis=1, meta=bool))
ddfe.head()
g100 = [_ for _ in ddf.columns if '100g' in _]
g100
ddfe.compute().shape
import numpy
ddfe100 = ddfe.assign(s100=ddf.apply(lambda row: sum(0 if numpy.isnan(row[g]) else 1 for g in g100),
axis=1, meta=float))
ddfe100 = ddfe100[ddfe100.s100 > 0]
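# s100 counts how many of the per-100g nutrition columns are filled for each product;
# products with no nutritional information at all are dropped from the study.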
ddfe100.head()
ddfe100.to_csv("ddfe100*.csv", sep="\t", encoding="utf-8", index=False)
dffefiles = [_ for _ in os.listdir(".") if "ddfe" in _]
dffefiles
types = {k:v for k, v in zip(ddfe100.columns, ddfe100.dtypes)}
from sklearn.model_selection import train_test_split
for i, name in enumerate(dffefiles):
print("name", name)
df = pandas.read_csv(name, sep="\t", encoding="utf-8", dtype=types)
df_train, df_test = train_test_split(df, test_size =0.5)
df_test, df_eval = train_test_split(df_test, test_size =0.5)
df_train.to_csv("off_train{0}.txt".format(i), sep="\t", index=False, encoding="utf-8")
df_test.to_csv("off_test{0}.txt".format(i), sep="\t", index=False, encoding="utf-8")
df_eval.to_csv("off_eval{0}.txt".format(i), sep="\t", index=False, encoding="utf-8")
df[["additives", "hasE"]].head()
import re
reg = re.compile("[[](.*?)[]]")
addi = re.compile("(en[:]e[0-9])")
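# Based on how they are used below: `reg` extracts every bracketed "[ ... ]" annotation from the
# additives string, and `addi` detects E-number tags of the form "en:e<digit>" inside an annotation.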
def has_emachine(v):
if isinstance(v, (list, pandas.core.series.Series)):
rem = []
add = []
for _ in v:
if isinstance(_, str):
fd = reg.findall(_)
for __ in fd:
if " en:e" in __ and addi.search(__):
add.append(__)#.split("->")[-1].strip())
elif " en:" not in __:
continue
else:
rem.append(__.split("->")[-1].strip())
else:
continue
return add, list(sorted(set(rem)))
elif isinstance(v, float) and numpy.isnan(v):
return [], []
elif isinstance(v, str):
if "," in v:
raise Exception('{0}\n{1}'.format(type(v), v))
return has_emachine([v])
else:
# ???
raise Exception('{0}\n{1}'.format(type(v), v))
hasE, clean = has_emachine(df.loc[1,"additives"])
hasE, clean
off = [_ for _ in os.listdir(".") if "off" in _ and "all" not in _]
for cont in ['train', 'test', 'eval']:
sub = [_ for _ in off if cont in _]
dfs = []
for name in sub:
df = pandas.read_csv(name, sep="\t", encoding="utf-8", dtype=types)
print("name", name, df.shape)
df["hasE"] = df["additives"].apply(lambda x: len(has_emachine(x)[0]) > 0)
df["additives"] = df["additives"].apply(lambda x: ";".join(has_emachine(x)[1]))
dfs.append(df)
df = pandas.concat(dfs, axis=0)
print("merged", df.shape)
df.to_csv("off_{0}_all.txt".format(cont), sep="\t", index=False, encoding="utf-8")
len(types)
df_eval = pandas.read_csv("off_eval_all.txt", sep="\t", dtype=types, encoding="utf-8")
df_eval_X = df_eval.drop("hasE", axis=1)
df_eval_X.to_csv("off_eval_all_X.txt")
df_eval[["hasE"]].to_csv("off_eval_all_Y.txt")
df_train = pandas.read_csv("off_train_all.txt", sep="\t", dtype=types, encoding="utf-8")
df_train.shape
X = df_train[g100].fillna(0)
Y = df_train['hasE']
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X, Y)
pred = clf.predict(X)
from sklearn.metrics import confusion_matrix
confusion_matrix(Y, pred)
df_test = pandas.read_csv("off_test_all.txt", sep="\t", dtype=types, encoding="utf-8")
X_test = df_test[g100].fillna(0)
Y_test = df_test['hasE']
pred = clf.predict(X_test)
confusion_matrix(Y_test, pred)
y_proba = clf.predict_proba(X_test)
y_pred = clf.predict(X_test)
print(y_proba[:3])
print(y_pred[:3])
y_test = Y_test.values
type(y_pred), type(Y_test), type(y_test)
import numpy
prob_pred = numpy.array([(y_proba[i, 1] if c else y_proba[i, 0]) for i, c in enumerate(y_pred)])
prob_pred[:3]
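# prob_pred keeps, for every sample, the probability the model assigned to its own predicted class
# (its confidence); the ROC below therefore measures how well this confidence separates correct
# predictions from incorrect ones.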
from sklearn.metrics import roc_curve
fpr, tpr, th = roc_curve(y_pred == y_test, prob_pred)
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve')
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("Proportion mal classée")
plt.ylabel("Proportion bien classée")
plt.title('ROC')
plt.legend(loc="lower right")
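# A possible follow-up (not part of the original analysis): summarize the ROC curve by its area.
from sklearn.metrics import auc
print("AUC:", auc(fpr, tpr))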
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data from *.csv file
Step2: Explore the correct data
Step3: Prepare the Data for CNN
Step4: Prepare the data for CNN
Step5: Model 1 - Overfitting the data TODO not overfitting with 35k data
Step6: As can be seen from this result, the model already overfits the training data at iteration 400, while reaching a test accuracy of only 28%.
Step7: Model 2 - 4 x Convolutional Layers, 1x Fully Connected
Step8: Computational graph - 6 layers, Conv-ReLU-Maxpool, 1 fully connected layer
Step9: Saving the trained graph to a TensorFlow checkpoint file
Step10: Feeding the CNN with some data (camera/file)
|
<ASSISTANT_TASK:>
Python Code:
import random
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import csv
import scipy.misc
import time
import collections
import os
import utils as ut
import importlib
import copy
importlib.reload(ut)
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (20.0, 20.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
#Data Visualization
# Path of the raw FER2013 CSV data (48x48 grayscale face images with emotion labels)
emotions_dataset_dir = 'fer2013_full.csv'
#obtaining the number of lines of the csv file
file = open(emotions_dataset_dir)
numline = len(file.readlines())
print ('Number of data in the dataset:',numline)
#Load the file in csv
ifile = open(emotions_dataset_dir, "rt")
reader = csv.reader(ifile)
hist_threshold = 350 # images above this threshold will be removed
hist_div = 100 #parameter of the histogram
print('Loading Images. It may take a while, depending on the database size.')
images, emotions, strange_im, num_strange, num_skipped = ut.load_dataset(reader, numline, hist_div, hist_threshold)
ifile.close()
print('Skipped', num_skipped, 'happy class images.')
print(str( len(images) ) + ' are left after \'strange images\' removal.')
print('Deleted ' + str( num_strange ) + ' strange images. Images are shown below')
# showing strange images
plt.rcParams['figure.figsize'] = (5.0, 5.0) # set default size of plots
idxs = np.random.choice(range(1,num_strange ), 6, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i
plt.subplot(1, 6, plt_idx+1)
plt.imshow(strange_im[idx])
plt.axis('off')
if(i == 0):
plt.title('Some of the images removed from dataset (max(histogram) thresholded)')
plt.show()
classes = [0,1,2,3,4,5]
str_emotions = ['angry','scared','happy','sad','surprised','normal']
num_classes = len(classes)
samples_per_class = 6
plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots
for y, cls in enumerate(classes):
idxs = np.flatnonzero(emotions == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(images[idx])
y_h, x_h = np.histogram( images[idx], hist_div );
plt.axis('off')
if(i == 0):
plt.title(str_emotions[y] )
plt.show()
print('number of clean data:' + str(images.shape[0]) + ' 48x48 pix , 0-255 greyscale images')
n_all = images.shape[0];
n_train = 64; # number of samples held out for testing; also used as the mini-batch size
# dividing the input data
train_data_orig = images[0:n_all-n_train,:,:]
train_labels = emotions[0:n_all-n_train]
test_data_orig = images[n_all-n_train:n_all,:,:]
test_labels = emotions[n_all-n_train:n_all]
# Convert to float
train_data_orig = train_data_orig.astype('float32')
y_train = train_labels.astype('float32')
test_data_orig = test_data_orig.astype('float32')
y_test = test_labels.astype('float32')
print('orig train data ' + str(train_data_orig.shape))
print('orig train labels ' + str(train_labels.shape) + 'from ' + str(train_labels.min()) + ' to ' + str(train_labels.max()) )
print('orig test data ' + str(test_data_orig.shape))
print('orig test labels ' + str(test_labels.shape)+ 'from ' + str(test_labels.min()) + ' to ' + str(test_labels.max()) )
for i in range (0, 6):
    print('TRAIN: number of' , i, 'labels',len(train_labels[train_labels == i]))
for i in range (0, 6):
    print('TEST: number of', i, 'labels',len(test_labels[test_labels == i]))
# Data pre-processing
n = train_data_orig.shape[0];
train_data = np.zeros([n,48**2])
for i in range(n):
xx = train_data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
train_data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1])
n = test_data_orig.shape[0]
test_data = np.zeros([n,48**2])
for i in range(n):
xx = test_data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
test_data[i] = np.reshape(xx,[-1])
#print(train_data.shape)
#print(test_data.shape)
#print(train_data_orig[0][2][2])
#print(test_data[0][2])
plt.rcParams['figure.figsize'] = (2.0, 2.0) # set default size of plots
plt.imshow(train_data[4].reshape([48,48]));
plt.title('example image after processing');
# Convert label values to one_hot vector
train_labels = ut.convert_to_one_hot(train_labels,num_classes)
test_labels = ut.convert_to_one_hot(test_labels,num_classes)
print('train labels shape',train_labels.shape)
print('test labels shape',test_labels.shape)
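# For reference, ut.convert_to_one_hot presumably maps each integer label k to a length-num_classes
# vector with a 1 at position k, e.g. something like np.eye(num_classes)[labels.astype(int)]
# (assumption -- the actual helper lives in utils.py).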
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# CG inputs
xin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape())
y_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape())
#d = tf.placeholder(tf.float32);
# Convolutional layer
K0 = 8 # size of the patch
F0 = 64 # number of filters
ncl0 = K0*K0*F0
Wcl0 = tf.Variable(tf.truncated_normal([K0,K0,1,F0], stddev=tf.sqrt(2./tf.to_float(ncl0)) )); print('Wcl=',Wcl0.get_shape())
#bcl0 = tf.Variable(tf.zeros([F0])); print('bcl=',bcl0.get_shape())
bcl0 = bias_variable([F0]); print('bcl0=',bcl0.get_shape()) #in ReLu case, small positive bias added to prevent killing of gradient when input is negative.
x_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d=',x_2d0.get_shape())
x = tf.nn.conv2d(x_2d0, Wcl0, strides=[1, 1, 1, 1], padding='SAME')
x += bcl0; print('x2=',x.get_shape())
# ReLU activation
x = tf.nn.relu(x)
# Dropout
#x = tf.nn.dropout(x, 0.25)
# Fully Connected layer
nfc = 48*48*F0
x = tf.reshape(x, [batch_size,-1]); print('x3=',x.get_shape())
Wfc = tf.Variable(tf.truncated_normal([nfc,nc], stddev=tf.sqrt(2./tf.to_float(nfc+nc)) )); print('Wfc=',Wfc.get_shape())
bfc = tf.Variable(tf.zeros([nc])); print('bfc=',bfc.get_shape())
y = tf.matmul(x, Wfc); print('y1=',y.get_shape())
y += bfc; print('y2=',y.get_shape())
# Softmax
y = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
#train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss)
train_step = tf.train.AdamOptimizer(0.004).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run one optimization step of the computational graph on the current training batch
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y})
# Run CG for test set
if not i%100:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict={xin: test_data, y_label: test_labels})
print('test accuracy=',acc_test)
d = train_data.shape[1]
#Defining the network building blocks
def weight_variable2(shape, nc10):
initial2 = tf.random_normal(shape, stddev=tf.sqrt(2./tf.to_float(ncl0)) )
return tf.Variable(initial2)
def conv2dstride2(x,W):
return tf.nn.conv2d(x,W,strides=[1, 2, 2, 1], padding='SAME')
def conv2d(x,W):
return tf.nn.conv2d(x,W,strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=1/np.sqrt(d/2) )
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.01,shape=shape)
return tf.Variable(initial)
tf.reset_default_graph()
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# CG inputs
xin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape())
y_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape())
#d = tf.placeholder(tf.float32);
# Convolutional layer
K0 = 7 # size of the patch
F0 = 16 # number of filters
ncl0 = K0*K0*F0
K1 = 5 # size of the patch
F1 = 16 # number of filters
ncl0 = K1*K1*F1
K2 = 3 # size of the patch
F2 = 2 # number of filters
ncl0 = K2*K2*F2
nfc = int(48*48*F0/4)
nfc1 = int(48*48*F1/4)
nfc2 = int(48*48*F2/4)
keep_prob_input=tf.placeholder(tf.float32)
#First set of conv followed by conv stride 2 operation and dropout 0.5
W_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape())
b_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape())
x_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d0=',x_2d0.get_shape())
h_conv1=tf.nn.relu(conv2d(x_2d0,W_conv1)+b_conv1); print('h_conv1=',h_conv1.get_shape())
h_conv1= tf.nn.dropout(h_conv1,keep_prob_input);
# 2nd convolutional layer
W_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape())
b_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape())
h_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape())
h_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape())
# reshaping for fully connected
h_conv2_pooled_rs = tf.reshape(h_conv2_pooled, [batch_size,-1]); print('x_rs',h_conv2_pooled_rs.get_shape());
W_norm3 = weight_variable([nfc1, nfc]); print('W_norm3=',W_norm3.get_shape())
b_conv3 = bias_variable([nfc1]); print('b_conv3=',b_conv3.get_shape())
# fully connected layer
h_full3 = tf.matmul( W_norm3, tf.transpose(h_conv2_pooled_rs) ); print('h_full3=',h_full3.get_shape())
h_full3 = tf.transpose(h_full3); print('h_full3=',h_full3.get_shape())
h_full3 += b_conv3; print('h_full3=',h_full3.get_shape())
h_full3=tf.nn.relu(h_full3); print('h_full3=',h_full3.get_shape())
h_full3=tf.nn.dropout(h_full3,keep_prob_input); print('h_full3_dropout=',h_full3.get_shape())
#reshaping back to conv
h_full3_rs = tf.reshape(h_full3, [batch_size, 24,24,-1]); print('h_full3_rs=',h_full3_rs.get_shape())
#Second set of conv followed by conv stride 2 operation
W_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape())
b_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape())
h_conv4=tf.nn.relu(conv2d(h_full3_rs,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape())
h_conv4 = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4.get_shape())
# reshaping for fully connected
h_conv4_pooled_rs = tf.reshape(h_conv4, [batch_size,-1]); print('x2_rs',h_conv4_pooled_rs.get_shape());
W_norm4 = weight_variable([ 2304, nc]); print('W_norm4=',W_norm4.get_shape())
b_conv4 = tf.Variable(tf.zeros([nc])); print('b_conv4=',b_conv4.get_shape())
# fully connected layer
h_full4 = tf.matmul( h_conv4_pooled_rs, W_norm4 ); print('h_full4=',h_full4.get_shape())
h_full4 += b_conv4; print('h_full4=',h_full4.get_shape())
y = h_full4;
## Softmax
y = tf.nn.softmax(y); print('y(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
#train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss)
train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(15001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run one optimization step of the computational graph on the current training batch
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.2})
# Run CG for test set
if not i%50:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0})
print('test accuracy=',acc_test)
tf.reset_default_graph()
# implementation of Conv-Relu-COVN-RELU - pool
# based on : http://cs231n.github.io/convolutional-networks/
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# CG inputs
xin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape())
y_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape())
#d = tf.placeholder(tf.float32);
#for the first conc-conv
# Convolutional layer
K0 = 8 # size of the patch
F0 = 22 # number of filters
ncl0 = K0*K0*F0
#for the second conc-conv
K1 = 4 # size of the patch
F1 = F0 # number of filters
ncl1 = K1*K1*F1
#drouput probability
keep_prob_input=tf.placeholder(tf.float32)
#1st set of conv followed by conv2d operation and dropout 0.5
W_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape())
b_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape())
x_2d1 = tf.reshape(xin, [-1,48,48,1]); print('x_2d1=',x_2d1.get_shape())
#conv2d
h_conv1=tf.nn.relu(conv2d(x_2d1, W_conv1) + b_conv1); print('h_conv1=',h_conv1.get_shape())
#h_conv1= tf.nn.dropout(h_conv1,keep_prob_input);
# 2nd convolutional layer + max pooling
W_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape())
b_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape())
# conv2d + max pool
h_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape())
h_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape())
#3rd set of conv
W_conv3=weight_variable([K0,K0,F0,F0]); print('W_conv3=',W_conv3.get_shape())
b_conv3=bias_variable([F1]); print('b_conv3=',b_conv3.get_shape())
x_2d3 = tf.reshape(h_conv2_pooled, [-1,24,24,F0]); print('x_2d3=',x_2d3.get_shape())
#conv2d
h_conv3=tf.nn.relu(conv2d(x_2d3, W_conv3) + b_conv3); print('h_conv3=',h_conv3.get_shape())
# 4th convolutional layer
W_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape())
b_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape())
#conv2d + max pool 4x4
h_conv4 = tf.nn.relu(conv2d(h_conv3,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape())
h_conv4_pooled = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4_pooled.get_shape())
h_conv4_pooled = max_pool_2x2(h_conv4_pooled); print('h_conv4_pooled=',h_conv4_pooled.get_shape())
#5th set of conv
W_conv5=weight_variable([K1,K1,F1,F1]); print('W_conv5=',W_conv5.get_shape())
b_conv5=bias_variable([F1]); print('b_conv5=',b_conv5.get_shape())
x_2d5 = tf.reshape(h_conv4_pooled, [-1,6,6,F1]); print('x_2d5=',x_2d5.get_shape())
#conv2d
h_conv5=tf.nn.relu(conv2d(x_2d5, W_conv5) + b_conv5); print('h_conv5=',h_conv5.get_shape())
# 6th convolutional layer
W_conv6=weight_variable([K1,K1,F1,F1]); print('W_conv6=',W_conv6.get_shape())
b_conv6=bias_variable([F1]); print('b_conv6=',b_conv6.get_shape())
b_conv6= tf.nn.dropout(b_conv6,keep_prob_input);
#conv2d + max pool 4x4
h_conv6 = tf.nn.relu(conv2d(h_conv5,W_conv6)+b_conv6); print('h_conv6=',h_conv6.get_shape())
h_conv6_pooled = max_pool_2x2(h_conv6); print('h_conv6_pooled=',h_conv6_pooled.get_shape())
# reshaping for fully connected
h_conv6_pooled_rs = tf.reshape(h_conv6, [batch_size,-1]); print('x2_rs',h_conv6_pooled_rs.get_shape());
W_norm6 = weight_variable([ 6*6*F1, nc]); print('W_norm6=',W_norm6.get_shape())
b_norm6 = bias_variable([nc]); print('b_conv6=',b_norm6.get_shape())
# fully connected layer
h_full6 = tf.matmul( h_conv6_pooled_rs, W_norm6 ); print('h_full6=',h_full6.get_shape())
h_full6 += b_norm6; print('h_full6=',h_full6.get_shape())
y = h_full6;
## Softmax
y = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
#train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss)
train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(20001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run one optimization step of the computational graph on the current training batch
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.5})
# Run CG for test set
if not i%100:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0})
print('test accuracy=',acc_test)
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Save the variables to disk.
save_path = saver.save(sess, "model_6layers.ckpt")
print("Model saved in file: %s" % save_path)
# calculating accuracy for each class separately for the test set
result_cnn = sess.run([y], feed_dict = {xin: test_data, keep_prob_input: 1.0})
#result = sess.run(y, feed_dict={xin: test_data, keep_prob_input: 1.0})
tset = test_labels.argmax(1);
result = np.asarray(result_cnn[:][0]).argmax(1);
for i in range (0,nc):
print('accuracy',str_emotions[i]+str(' '), '\t',ut.calc_partial_accuracy(tset, result, i))
faces, marked_img = ut.get_faces_from_img('diff_emotions.jpg');
#faces, marked_img = ut.get_faces_from_img('big_bang.png');
#faces, marked_img = ut.get_faces_from_img('camera');
# if some face was found in the image
if(len(faces)):
#creating the blank test vector
data_orig = np.zeros([n_train, 48,48])
#putting face data into the vector (only first few)
for i in range(0, len(faces)):
data_orig[i,:,:] = ut.contrast_stretch(faces[i,:,:]);
#preparing image and putting it into the batch
n = data_orig.shape[0];
data = np.zeros([n,48**2])
for i in range(n):
xx = data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1])
result = sess.run([y], feed_dict={xin: data, keep_prob_input: 1.0})
plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots
for i in range(0, len(faces)):
emotion_nr = np.argmax(result[0][i]);
plt_idx = (2*i)+1;
plt.subplot( 5, 2*len(faces)/5+1, plt_idx)
plt.imshow(np.reshape(data[i,:], (48,48)))
plt.axis('off')
plt.title(str_emotions[emotion_nr])
ax = plt.subplot(5, 2*len(faces)/5+1, plt_idx +1)
ax.bar(np.arange(nc) , result[0][i])
ax.set_xticklabels(str_emotions, rotation=45, rotation_mode="anchor")
ax.set_yticks([])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Exploration
Step2: Implementation
Step3: Question 1
Step4: Answer
Step5: The heatmap above shows the percentile ranks of samples' category spending. It further helps visualize the amount of spending in each category and understand the key categories of spending for each sample. It also confirms that the initial comparison with the descriptive statistics of the data is correct.
Step6: Question 2
Step7: Question 3
Step8: Answer
Step9: Observation
Step10: Implementation
Step11: Question 4
Step12: Question 5
Step13: Implementation
Step14: Observation
Step15: Question 6
Step16: Question 7
Step17: Implementation
Step18: Question 8
Step19: Answer
Step20: The heatmap above helps visualize the segments alongside the samples. It makes it clearer that sample 1 exhibits characteristics of segment 1, while samples 2 and 3 look like they belong to segment 0.
Step21: Answer
|
<ASSISTANT_TASK:>
Python Code:
# Show matplotlib plots inline (nicely formatted in the notebook)
%matplotlib inline
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import renders as rs
import seaborn as sns
from matplotlib import pylab as plt
from IPython.display import display # Allows the use of display() for DataFrames
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
# Display a description of the dataset
display(data.describe())
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [13,120,390]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
print "Mean Differences"
display(samples - np.round(data.mean()))
print "Median Differences"
display(samples - np.round(data.median()))
# look at percentile ranks
pcts = 100. * data.rank(axis=0, pct=True).iloc[indices].round(decimals=3)
# visualize percentiles with heatmap
sns.heatmap(pcts.reset_index(drop=True), annot=True, vmin=1, vmax=99, fmt='.1f', cmap='YlGnBu')
plt.title('Percentile ranks of\nsamples\' category spending')
plt.xticks(rotation=45, ha='center');
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop('Fresh',axis=1)
# TODO: Split the data into training and testing sets using the given feature as the target
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(new_data,data['Fresh'],test_size=0.25,random_state=42)
# TODO: Create a decision tree regressor and fit it to the training set
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(random_state=42)
# TODO: Report the score of the prediction using the testing set
regressor.fit(X_train,y_train)
score = regressor.score(X_test,y_test)
print score
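# Interpretation: an R^2 near zero (or negative) means 'Fresh' cannot be predicted from the other
# categories, so it carries information of its own and is relevant for identifying customer segments;
# a high R^2 would indicate the feature is largely redundant.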
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
import seaborn as sns
import pylab as plt
corr = data.corr()
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask, 1)] = True
with sns.axes_style("white"):
ax = sns.heatmap(corr, mask=mask, square=True, annot=True, cmap='RdBu_r')
plt.xticks(rotation=45, ha='center');
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
idx_reorder = ['Detergents_Paper', 'Grocery', 'Milk', 'Fresh', 'Frozen', 'Delicatessen']
axes = pd.scatter_matrix(log_data[idx_reorder], alpha = 0.3, figsize = (14,8), diagonal = 'kde')
corr = log_data[idx_reorder].corr().as_matrix()
for i, j in zip(*plt.np.triu_indices_from(axes, k=1)):
axes[i, j].annotate("%+.3f" %corr[i,j], (0.8, 0.2), xycoords='axes fraction', ha='center', va='center',color="red", fontsize=14)
# Display the log-transformed sample data
display(log_samples)
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature],q=25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature],q=75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3-Q1)*1.5
# Display the outliers
#print "Data points considered outliers for the feature '{}':".format(feature)
#display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# OPTIONAL: Select the indices for data points you wish to remove [66,95,75,109,128,142,187,218,338]
outliers = [val for sublist in
[log_data[~((log_data[feature] >= np.percentile(log_data[feature],q=25) -
((np.percentile(log_data[feature],q=75) -
np.percentile(log_data[feature],q=25))*1.5)) &
(log_data[feature] <= np.percentile(log_data[feature],q=75) +
((np.percentile(log_data[feature],q=75) -
np.percentile(log_data[feature],q=25))*1.5)))].index.values
for feature in log_data.keys()] for val in sublist]
outliers = list(set([x for x in outliers if outliers.count(x)>1]))
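# The two steps above flag, for every feature, the points lying outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR],
# then keep as outliers only the indices flagged for more than one feature.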
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
#visualizing the outliers removed with box-plots.
ax = sns.boxplot(data=log_data)
ax = sns.swarmplot(data=log_data.iloc[outliers], color="red", size=8)
# TODO: Apply PCA to the good data with the same number of dimensions as features
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(good_data)
# TODO: Apply a PCA transformation to the sample log-data
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = rs.pca_results(good_data, pca)
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
# TODO: Fit PCA to the good data using only two dimensions
pca = PCA(n_components=2)
pca.fit(good_data)
# TODO: Apply a PCA transformation the good data
reduced_data = pca.transform(good_data)
# TODO: Apply a PCA transformation to the sample log-data
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
#implement joint Grid to further understand the categories in the two dimensions
g = sns.JointGrid("Dimension 1", "Dimension 2", reduced_data, xlim=(-6,6), ylim=(-5,5))
g = g.plot_joint(sns.kdeplot, cmap="Blues_d")
g = g.plot_marginals(sns.kdeplot, shade=True)
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
# TODO: Apply your clustering algorithm of choice to the reduced data
from sklearn.mixture import GMM
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
for i in range (6,1,-1):
clusterer = GMM(n_components = i,random_state=42)
clusterer.fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data,preds)
print "clusters = {} : score = {:.3f}\n".format(i,score),
# Display the results of the clustering from implementation
rs.cluster_results(reduced_data, preds, centers, pca_samples)
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
print "Mean Difference"
display(true_centers - np.round(data.mean()))
print "Median Differences"
display((true_centers - np.round(data.median())))
print "N-stds"
display((true_centers - np.round(data.mean()))/data.std())
# add the true centers as rows to our original data
newdata = data.append(true_centers)
# show the percentiles of the centers
ctr_pcts = 100. * newdata.rank(axis=0, pct=True).loc[['Segment 0', 'Segment 1']].round(decimals=3)
#print ctr_pcts
# visualize percentiles with heatmap and compare to samples (as kept in `pcts` variable)
pcts = 100. * data.rank(axis=0, pct=True).iloc[indices].round(decimals=3)
_ = sns.heatmap(pcts.append(ctr_pcts), annot=True, cmap='Greens', fmt='.1f')
plt.xticks(rotation=45, ha='center')
plt.title('Percentile ranks of spending by\nsamples & segment centers');
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
# Display the clustering results based on 'Channel' data
rs.channel_results(reduced_data, outliers, pca_samples)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tools
Step2: Reading the instances
Step3: Statistical difficulty and average multiplicity
Step4: The dispersion of the pagination sizes could have been measured in several other ways, for instance with the standard deviation (below).
Step5: Number of instances by statistical difficulty
Step6: [Sec. 4.3] Predicting the statistical difficulty of a given instance
Step7: [Fig. 5] Number of instances by average multiplicity
Step8: [Sec. 4.4] Discussion
Step9: [Sec. 4.4.2] Comparison of the heuristic methods
Step10: [Fig. 7] Relative quality of the five main heuristics
Step11: Exact algorithms vs. heuristics
Step12: Hence, the optimal value found by CPLEX may be less than this one
Step13: It may happen that the upper bound found by CPLEX is less than the best page count found by the heuristics. In this case, we know that there exists a better pagination (although CPLEX cannot prove its optimality)
Step14: The column best gives the minimum pagination sizes found by the heuristics and CPLEX (including the upper bound)
Step15: Grouping GA vs. the other heuristics
|
<ASSISTANT_TASK:>
Python Code:
from collections import OrderedDict
INPUT_PATH = "gauss/"
(MIN_PREFIX, MAX_PREFIX) = ("C015", "C055") # for instance filenames
OUTPUT_PATH = "plots/"
WINDOW = 150 # size of the subsets of instances used as a moving window
SOLVER_NAMES = OrderedDict([
("GeneticGroup", "Grouping GA"),
("GeneticStandard", "Standard GA"),
("OverloadAndRemove", "Overload-and-Remove"),
("OverloadAndRemovePresort", "Overload-and-Remove (with presort)"),
("BestFusion", "Best Fusion"),
("FirstFit", "First Fit"),
])
EXCLUDED_SOLVER_NAMES = {"OverloadAndRemovePresort"} # excluded from certain plots
solvers = ["solvers" + name for name in SOLVER_NAMES.keys()]
times = ["times" + name for name in SOLVER_NAMES.keys()]
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import Locator
np.warnings.filterwarnings("ignore", category=RuntimeWarning)
np.warnings.filterwarnings("ignore", category=UserWarning)
!pip install seaborn
import seaborn as sns
sns.set_style("white")
sns.set_context("paper", font_scale=2)
sns.set_palette(sns.color_palette("Set1", 5))
def plot_linear_regression(x, y):
fit = np.polyfit(x, y, deg=1)
plt.plot(x, fit[0] * x + fit[1])
correlation = round(x.corr(y), 3)
print("Pearson:", correlation)
return correlation
!pip install pandas --upgrade
import os, json
df = []
indexes = []
for filename in os.listdir(INPUT_PATH):
if not filename.endswith("json") or not MIN_PREFIX <= filename <= MAX_PREFIX:
continue
with open(os.path.join(INPUT_PATH, filename)) as f:
instances = json.loads(f.read())
indexes.extend([(filename, discriminant) for discriminant in range(len(instances))])
for instance in instances:
for (k, v) in list(instance.items()):
if isinstance(v, dict): # flatten any sub-dict with dot notation
for (sub_key, sub_value) in v.items():
instance[k + sub_key] = sub_value
del instance[k]
df.extend(instances)
df = pd.DataFrame(df, index=pd.MultiIndex.from_tuples(indexes, names=("filename", "i")))
df["best"] = df[["pageCount", "cplexOpt", "cplexUB"]].min(axis = 1) # add a column for the best known pagination size
df["cardinality"] = df["tiles"].apply(lambda tiles: sum(len(tile) for tile in tiles))
df_sorted_by_multiplicity = df.sort_values(by="avgMultiplicity") # for use with a moving window
print(df.info())
df.describe()
print("There are a %s instances." % len(df))
x = df[solvers].mean(axis=1) - df["best"]
y = df["best"]
plt.xlabel("Statistical difficulty")
plt.ylabel("Best pagination size")
plt.scatter(x, y, marker="o", s=1)
_ = plot_linear_regression(x, y)
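# The x axis above is the "statistical difficulty" of an instance: the mean page count over the six
# heuristics minus the best known page count, i.e. how far the heuristics stray from the best result.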
x = df["avgMultiplicity"]
y = df[solvers].std(axis=1)
axes = plt.gca()
axes.set_xlim([0, 70])
plot_linear_regression(x, y)
plt.scatter(x, y, marker="o", s=1)
plt.xlabel("Average multiplicity")
plt.ylabel("Average standard deviation")
plt.grid()
plt.show()
result = df.groupby(round(2 * (df[solvers].mean(axis=1) - df["best"]))/2).size()
result.plot(kind="bar")
plt.yscale("symlog")
plt.xlabel("Statistical difficulty")
plt.ylabel("Number of instances (sym-log scale)")
plt.show()
print("Number of instances per statistical difficulty:\n", result)
print("Average statistical difficulty: %.02f" % (df[solvers].mean(axis=1) - df["best"]).mean())
print("Median statistical difficulty: %.02f" % (df[solvers].mean(axis=1) - df["best"]).median())
plt.figure(figsize=(10,5))
x = df["avgMultiplicity"]
y = df[solvers].mean(axis=1) - df["best"]
axes = plt.gca()
axes.set_xlim([0, 70])
axes.set_ylim([-1, 9.5])
plot_linear_regression(x, y)
plt.scatter(x, y, marker="o", s=1)
plt.xlabel("Average multiplicity")
plt.ylabel("Average range (statistical difficulty)")
plt.grid()
plt.savefig(os.path.join(OUTPUT_PATH, "difficulty_by_multiplicity.pdf"), bbox_inches='tight')
plt.figure(figsize=(20, 10))
df["bitSize"] = df["symbolCount"] * df["tileCount"]
for (i, column) in enumerate(["symbolCount", "bitSize", "tileCount", "cardinality"], 1):
plt.subplot(2, 2, i)
x = df[column]
y = df[solvers].mean(axis=1) - df["best"]
if i in [1, 3]:
plt.ylabel("Average range (statistical difficulty)")
plt.scatter(x, y, marker="o", s=1)
correlation = plot_linear_regression(x, y)
plt.xlabel("%s (r = %s)" % (column, correlation))
plt.show()
plt.figure(figsize=(10,6))
range_width = 2
ranges = np.arange(1, df["avgMultiplicity"].max() + range_width, range_width)
slices = pd.cut(df["avgMultiplicity"], ranges)
instances_per_slice = df.groupby(slices).size()
instances_per_slice.plot(kind="bar", width=0.9, color="#ffffbf")
cplex_instances = df[df["cplexOpt"].notnull() | df["cplexLB"].notnull() | df["cplexUB"].notnull()]
cplex_slices = pd.cut(cplex_instances["avgMultiplicity"], ranges)
cplex_instances.groupby(cplex_slices).size().plot(kind="bar", width=0.7, color='#abdda4')
cplex_solved_instances = df[df["cplexOpt"].notnull()]
cplex_solved_slices = pd.cut(cplex_solved_instances["avgMultiplicity"], ranges)
cplex_solved_instances.groupby(cplex_solved_slices).size().plot(kind="bar", width=0.5, color="#2b83ba")
plt.xlabel("Ranges of average multiplicity")
plt.ylabel("Number of instances (sym-log scale)")
plt.yscale('symlog')
axes = plt.gca()
axes.set_ylim(0, 3000)
plt.tick_params(axis='x', which='both', bottom='off', top='off')
axes.yaxis.grid(True)
plt.legend(["All instances", "Submitted to CPLEX", "Solved to optimality by CPLEX"])
plt.savefig(os.path.join(OUTPUT_PATH, "count_by_multiplicity.pdf"), bbox_inches='tight')
range_width = 1
ranges = np.arange(1, df["avgMultiplicity"].max() + range_width, range_width)
slices = pd.cut(df["avgMultiplicity"], ranges)
instances_per_slice = df.groupby(slices).size()
for start in (4, 23, 53):
n = instances_per_slice[range_width * (start - 1)]
print("There are %d instances whose average multiplicity lies between %s and %s." % (n, start, start + range_width))
(a, b) = (1, 9)
rate = 100.0 * sum(instances_per_slice[a-1:b-1]) / len(df)
print("%0.2f %% of the instances concentrate between average multiplicities %s and %s." % (rate, a, b))
cplex_instances = df[df["cplexOpt"].notnull() | df["cplexLB"].notnull() | df["cplexUB"].notnull()]
print("%s instances (%.2f %%) submitted to CPLEX." % (len(cplex_instances), 100.0 * len(cplex_instances)/len(df)))
print("CPLEX's success in less than one hour: %s instances (%.1f %%)." % (df["cplexOpt"].count(), 100.0 * df["cplexOpt"].count() / len(cplex_instances)))
for above in (13, 20):
cplex_instances_above = cplex_instances[df["avgMultiplicity"] > above]
print("CPLEX's success in less than one hour above an average multiplicity of %s: %.1f %%." % (above, 100.0 * cplex_instances_above["cplexOpt"].count() / len(cplex_instances_above)))
cplex_results = df[df["cplexOpt"].notnull() | df["cplexUB"].notnull()][["cplexOpt","cplexUB","pageCount"]]
print("All the %s instances for which CPLEX has found either a solution, either an upper bound:" % len(cplex_results))
cplex_results
x = pd.Series.rolling(df_sorted_by_multiplicity["avgMultiplicity"], WINDOW, center=True).mean()
plt.figure(figsize=(10,5))
axes = plt.gca()
axes.set_xlim([2, 52])
for time in times:
solver_name = time[len("times"):]
if solver_name in EXCLUDED_SOLVER_NAMES:
continue
y = pd.Series.rolling(df_sorted_by_multiplicity[time], WINDOW, center=True).mean()
plt.plot(x, y, label=SOLVER_NAMES[solver_name])
plt.yscale('log')
plt.xlabel("Average multiplicity (rolling mean on %s instances)" % WINDOW)
plt.ylabel("Execution time (seconds, log scale)")
plt.grid()
plt.savefig(os.path.join(OUTPUT_PATH, "speed_by_multiplicity.pdf"), bbox_inches='tight')
plt.legend(loc=7) # legend not plotted for the paper version
plt.show()
contents = [
df[times].min().map('{:,.2f}'.format),
df[times].max().map('{:,.2f}'.format),
df[times].mean().map('{:,.2f}'.format),
df[times].std().map('{:,.2f}'.format)
]
digest = pd.DataFrame(contents, index = ["min", "max", "mean", "std"])
digest.columns = SOLVER_NAMES.values()
print("Basic aggregations on execution times (in seconds):")
digest
x = pd.Series.rolling(df_sorted_by_multiplicity["avgMultiplicity"], WINDOW, center=True).mean()
plt.figure(figsize=(10,7))
axes = plt.gca()
axes.set_xlim([2, 52])
axes.set_ylim([0.74, 1.01])
axes.spines['right'].set_visible(False)
axes.spines['top'].set_visible(False)
for solver in solvers:
solver_name = solver[len("solvers"):]
if solver_name in EXCLUDED_SOLVER_NAMES:
continue
ratio = df_sorted_by_multiplicity["best"] / df_sorted_by_multiplicity[solver]
y = pd.Series.rolling(ratio, WINDOW, center=True).mean()
plt.plot(x, y, label=SOLVER_NAMES[solver_name])
plt.xlabel("Average multiplicity (rolling mean on %s instances)" % WINDOW)
plt.ylabel("Average pagination size vs. best known result")
plt.grid()
# move the legend to an empty place
legend = plt.legend(loc=7)
plt.draw()
bb = legend.legendPatch.get_bbox().inverse_transformed(axes.transAxes)
bb.set_points([[bb.x0 - 0.02, bb.y0 + 0.2], [bb.x1 - 0.02, bb.y1 + 0.2]])
legend.set_bbox_to_anchor(bb)
plt.savefig(os.path.join(OUTPUT_PATH, "relative_size_by_multiplicity.pdf"), bbox_inches='tight')
assert len(df[df["pageCount"] != df[solvers].min(axis=1)]) == 0
suboptimal_instances_1 = df[df["cplexOpt"] < df["pageCount"]][["cplexOpt", "pageCount"] + solvers]
suboptimal_instances_1.columns = ["cplexOpt", "pageCount"] + list(SOLVER_NAMES.values())
print("The optimal solution is better than the best approximation for these %s instances:" % len(suboptimal_instances_1))
suboptimal_instances_1
suboptimal_instances_2 = df[df["cplexUB"] < df["pageCount"]][["cplexUB", "pageCount"] + solvers]
suboptimal_instances_2.columns = ["cplexOpt", "pageCount"] + list(SOLVER_NAMES.values())
print("For %s more instances, we know that the best approximation is not optimal:" % len(suboptimal_instances_2))
suboptimal_instances_2
df[df["best"] < df["pageCount"]][["best", "pageCount"]]
count = len(suboptimal_instances_1) + len(suboptimal_instances_2)
print("All in all, ILP improved on the heuristics in %s cases" % count, end=" ")
print("(%.02f %% of the %s selected instances)." % (100.0 * count / len(cplex_instances), len(cplex_instances)))
prefix = ["avgMultiplicity", "pageCount"]
columns = [
"solversGeneticGroup",
"solversGeneticStandard",
"solversOverloadAndRemove",
"solversOverloadAndRemovePresort"
]
bad_gga = df[df["pageCount"] < df["solversGeneticGroup"]][prefix + columns]
for column in columns[1:]:
bad_gga[column] = bad_gga[column][bad_gga[column] < bad_gga["solversGeneticGroup"]]
bad_gga.columns = prefix + [SOLVER_NAMES[column[len("solvers"):]] for column in columns]
print("In %.02f %% of the cases," % (100.0 - 100.0 * len(bad_gga) / len(df)),)
print("Grouping GA was the best heuristics, except on these %s instances" % len(bad_gga), end=" ")
print("(greater values erased for clarity, sorted by increasing average multiplicity).")
bad_gga.sort_values(by="avgMultiplicity").fillna("")
for column in bad_gga.columns[len(prefix) + 1:]:
count = bad_gga[column].count()
print("%s produced a better pagination than Grouping GA on %s instances (%.03f %%)." % (column, count, (100.0 * count / len(df))))
<END_TASK>
|