| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, 67–124k chars, nullable) | cleaned_prompt (string, 168–30.3k chars, nullable) |
|---|---|---|
4,400
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import pandas_datareader.data as web  # pandas.io.data was removed from pandas; pandas_datareader replaces it
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
file_path = "/Users/Jennifer/Desktop/Freshmen Spring Semester/Data_Bootcamp/PROJECT/Obesity Project/basicstats.xls"
df = pd.read_excel(file_path)
df1 = df.set_index("Region")
df2 = df1.rename(columns ={"Underweight (less than 5th percentile) %":"Under_Weight", "Healthy weight (5th to 84th percentile) %": "Healthy_Weight", "Overweight (85th to 94th percentile) %":"Over_weight","Obese (95th percentile or above) %": "Obese"})
df2.head(5)
df2["Healthy_Weight"].mean()
file = "/Users/Jennifer/Desktop/Freshmen Spring Semester/Data_Bootcamp/PROJECT/Obesity Project/Incomestats.xls"
healthy = pd.read_excel(file)
healthy = healthy.rename(columns ={"State":"Region"})
healthy = healthy.set_index("Region")
healthy.head(5)
newfile= "/Users/Jennifer/Desktop/Freshmen Spring Semester/Data_Bootcamp/PROJECT/Obesity Project/Overweight.xls"
over= pd.read_excel(newfile)
over= over.set_index("Region")
over.head(5)
files= "/Users/Jennifer/Desktop/Freshmen Spring Semester/Data_Bootcamp/PROJECT/Obesity Project/Obese.xls"
obese= pd.read_excel(files)
obese=obese.set_index("Region")
obese.head(5)
fig, ax = plt.subplots(figsize=(10,5))
df2.plot(ax=ax)
ax.set_ylabel('Percent of Children', fontsize=12)
ax.set_xlabel('')
ax.set_title("Children's Weight Status", fontsize=16, loc='center')
fig, ax = plt.subplots(figsize=(10,5))
df3= df2.T
df3.plot(ax=ax,
legend=False)
ax.set_ylabel('Percent of Children', fontsize=12)
ax.set_xlabel('')
ax.set_title("Children's Weight Status", fontsize=16, loc='center')
df3["Mississippi"].plot(linewidth=7)
df3["Colorado"].plot(linewidth=7)
fig, ax=plt.subplots()
obese.plot(ax=ax,kind='hist', bins=50, figsize=(10,10))
ax.set_title("Income:Obesity", fontsize=16)
ax.set_ylabel("Number of States", fontsize=13)
ax.set_xlabel("Percentage of Obsese Population", fontsize=13)
fig,ax=plt.subplots()
obese["0 - 99% FPL %"].plot()
obese["400% FPL or higher %"].plot()
df2["Obese"].plot()
ax.legend(["Low Income", "High Income", "All Incomes"])
ax.set_title("% Obese According to Income in each State", fontsize=15)
ax.set_ylabel("Percentage of Children Obese", fontsize=13)
ax.set_xlabel(" ")
foodfile= "/Users/Jennifer/Desktop/HealthyNot.xls"
food=pd.read_excel(foodfile)
food = food.rename(columns={"2002.1":"2002", "2012.1":"2012"})
food
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Data Set
Step2: On average, 63.86% of children are in the range considered to be a healthy weight.
Step3: The following chart shows the percentage of children who are overweight according to their family's income level.
Step4: The following chart shows the percentage of children who are obese according to their family's income level.
Step5: The following graph shows children's weight status across all states. We can see that the majority of children in the US are at a healthy weight; few children are underweight, and more are either overweight or obese.
Step6: The following graph shows the percentage of children in each state based on their health status. We can see that a majority of the children are of a healthy weight status nationwide.
Step7: Colorado, represented by the turquoise line, has one of the highest percentages of healthy-weight children. On the other hand, Mississippi, represented by the red line, has the lowest percentage of healthy-weight children and the highest percentage of obese children.
Step8: The following graph further supports the correlation between income and obesity. The middle line shows the total percentage of obesity in each state across all income levels; the line representing low income lies entirely above it, and the line representing high income lies entirely below it.
Step9: One reason that I think contributes to the higher percentage of obesity among lower-income families is the rising price of healthier foods. A study at the University of Cambridge compared food costs in 2002 and 2012; I found it at University of Cambridge Research.
|
4,401
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import math
import cvxpy
def get_markowitz_weights(mu, Sigma, gamma=1, max_position=1.0, max_leverage=1.0, short=False):
w = cvxpy.Variable(len(Sigma))
g = cvxpy.Parameter(sign='positive')
L = cvxpy.Parameter()
g.value = gamma
L.value = max_leverage
try:
ret = mu.T*w
except ValueError:
ret = mu*w
risk = cvxpy.quad_form(w, Sigma)
objective = cvxpy.Maximize(ret - g*risk)
constraints = [
cvxpy.abs(w) <= max_position,  # cvxpy constraints must be non-strict inequalities
cvxpy.norm(w, 1) <= L, # Make it so we don't have to invest everything
]
if not short:
constraints.append(w >= 0) # Force all positive weights
prob = cvxpy.Problem(
objective,
constraints
)
result = prob.solve()
return w.value
universes = 1000
evens = 19
total = 38
payout = 100
rounds = 1
results = np.zeros(universes)
#Your code goes here
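# A possible sketch (an assumption about the intended solution, not the official
# answer): treat evens/total as the win probability of an even-money bet of size
# `payout`, and record the net result of `rounds` such bets in each universe.
def simulate_even_bets(universes, evens, total, payout, rounds):
    p_win = evens / float(total)
    wins = np.random.binomial(rounds, p_win, size=universes)
    return wins * payout - (rounds - wins) * payout

results = simulate_even_bets(universes, evens, total, payout, rounds)
results.mean(), results.std()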
universes = 1000
evens = 19
total = 38
payout = 1
rounds = 100
results = np.zeros(universes)
#Your code goes here
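# Same sketch as above (still an assumption), now with 100 bets of $1 instead of
# a single $100 bet; the standard deviation of the outcome should shrink roughly
# by a factor of sqrt(rounds), illustrating the benefit of spreading the bets.
results = simulate_even_bets(universes, evens, total, payout, rounds)
results.mean(), results.std()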
time_start = '2015-01-01'
time_halfway = '2015-07-01'
time_end = '2016-01-01'
AMZN_r = get_pricing('AMZN', fields='price', start_date=time_start, end_date=time_end).pct_change()[1:]
X = np.linspace(0, len(AMZN_r), len(AMZN_r))
#Your code goes here
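# One plausible reading of this exercise (an assumption): examine the mean and
# volatility of holding the single asset, and plot its daily returns over time.
plt.plot(X, AMZN_r)
plt.axhline(AMZN_r.mean(), color='r')
plt.xlabel('Day')
plt.ylabel('AMZN daily returns')
AMZN_r.mean(), AMZN_r.std()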
symbol_list = ['BEN', 'SYMC', 'IP', 'SWKS', 'IVZ', 'MJN', 'WMB', 'LB', 'TWX', 'NFX', 'PFE', 'LLY', 'HP', 'JPM', 'CXO', 'TJX', 'CAG', 'BBT', 'ATVI', 'NFLX']
prices_df = get_pricing(symbol_list, fields=['price']
, start_date=time_start, end_date=time_end)['price']
prices_df.columns = map(lambda x: x.symbol, prices_df.columns)
eweights_df = len(symbol_list) * [float(1)/len(symbol_list)]
returns_df = prices_df.pct_change(1)[1:]
#Your code goes here
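# A possible sketch (an assumption): daily returns of the equally weighted
# portfolio, whose volatility can be compared with that of the single asset above.
eq_port_returns = returns_df.fillna(0).dot(eweights_df)
eq_port_returns.mean(), eq_port_returns.std()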
#Pipeline Setup
from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import CustomFactor
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters import QTradableStocksUS
from time import time
universe = QTradableStocksUS()
pipe = Pipeline(columns = {'Market Cap' : morningstar.valuation.market_cap.latest},
screen=universe
)
start_timer = time()
results = run_pipeline(pipe, time_start, time_end)
end_timer = time()
results.fillna(value=0);
print "Time to run pipeline %.2f secs" % (end_timer - start_timer)
# This is important as sometimes the first data returned won't be on the specified start date
first_trading_day = results.index.levels[0][1]
market_cap = results.loc[first_trading_day]['Market Cap']
market_cap.index = [x.symbol for x in market_cap.index]#pd.MultiIndex.from_tuples([(x[0], x[1].symbol) for x in market_cap.index])
mcs = market_cap # pd.DataFrame(market_cap.loc[(first_trading_day,)].loc[symbol_list]).transpose()
mweights = (mcs[symbol_list]/sum(mcs[symbol_list])).transpose()
#Your code goes here
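# A possible sketch (an assumption): daily returns of the market-cap weighted portfolio.
mw_port_returns = returns_df[symbol_list].fillna(0).dot(mweights)
mw_port_returns.mean(), mw_port_returns.std()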
mu = returns_df[symbol_list].\
loc[:time_halfway].fillna(0).mean().as_matrix()
sigma = returns_df[symbol_list].\
loc[:time_halfway].fillna(0).cov().as_matrix()
mkweights_df = get_markowitz_weights(mu, sigma)
#Your code goes here
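# A possible sketch (an assumption): evaluate the Markowitz weights, estimated on
# the first half of the period, against the second half of the returns.
mk_weights = np.squeeze(np.asarray(mkweights_df))
mk_port_returns = returns_df[symbol_list].loc[time_halfway:].fillna(0).dot(mk_weights)
mk_port_returns.mean(), mk_port_returns.std()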
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper Functions
Step2: Exercise 1
Step3: b. $1 Bets
Step4: Exercise 2
Step5: b. Equally Weighted Portfolio
Step6: c. Market Weighted Portfolio
Step7: d. Markowitz Portfolio
|
4,402
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
class Grid(object):
def find_path(self, matrix):
# Implement your solution here
pass
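# A possible solution sketch (an assumption, not the notebook's own code): walk
# from (0, 0) to the bottom-right corner moving only down or right, memoizing
# cells that cannot reach the start; it should satisfy the tests below.
class Grid(object):
    def find_path(self, matrix):
        if matrix is None or not matrix or not matrix[0]:
            return None
        path = []
        if self._find_path(matrix, len(matrix) - 1, len(matrix[0]) - 1, path, set()):
            return path
        return None

    def _find_path(self, matrix, row, col, path, failed):
        if row < 0 or col < 0 or not matrix[row][col] or (row, col) in failed:
            return False
        if (row == 0 and col == 0) or \
           self._find_path(matrix, row - 1, col, path, failed) or \
           self._find_path(matrix, row, col - 1, path, failed):
            path.append((row, col))
            return True
        failed.add((row, col))
        return False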
%%writefile missao3.py
from nose.tools import assert_equal
class TestGridPath(object):
def test_grid_path(self):
grid = Grid()
assert_equal(grid.find_path(None), None)
assert_equal(grid.find_path([[]]), None)
max_rows = 8
max_cols = 4
matrix = [[1] * max_cols for _ in range(max_rows)]
matrix[1][1] = 0
matrix[2][2] = 0
matrix[3][0] = 0
matrix[4][2] = 0
matrix[5][3] = 0
matrix[6][1] = 0
matrix[6][3] = 0
matrix[7][1] = 0
result = grid.find_path(matrix)
expected = [(0, 0), (1, 0), (2, 0),
(2, 1), (3, 1), (4, 1),
(5, 1), (5, 2), (6, 2),
(7, 2), (7, 3)]
assert_equal(result, expected)
matrix[7][2] = 0
result = grid.find_path(matrix)
assert_equal(result, None)
print('Your solution ran successfully! Congratulations!')
def main():
test = TestGridPath()
test.test_grid_path()
if __name__ == '__main__':
main()
%run -i missao3.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Missão
Step2: Teste da Solução
|
4,403
|
<ASSISTANT_TASK:>
Python Code:
import scipy as sp
import scipy.stats as stats
import matplotlib.pyplot as plt
from numpy.random import normal
%pylab inline
def h(x, w): return w[1] * x + w[0]
def quadratic_loss(y, hx):
return (y - hx)**2
def error(h, X, y):
err = 0
for xi, yi in zip(X, y):
err += quadratic_loss(yi, h(xi))
return 0.5 * err
def f(x): return 3 * x + 2
xs = sp.linspace(0, 10)
plt.plot(xs, f(xs));
X = linspace(0, 10)
y = f(X) + 2 * stats.norm.rvs(scale=3, size=50)
X
len(_)
y
plt.plot(xs, f(xs), '--')
plt.scatter(X, y)
plt.show()
def h1(x): return h(x, [0,1])
def h2(x): return h(x, [0,2])
weights = [[0,1], [0,2], [1,2]]
plt.plot(xs, f(xs), '--')
plt.scatter(X, y)
plt.plot(xs, h1(xs), 'r', label='h1')
plt.plot(xs, h2(xs), 'g', label='h2')
plt.legend();
error(h1, X, y)
error(h2, X, y)
N = len(X)
x_mean = sp.mean(X)
y_mean = sp.mean(y)
w1 = (np.dot(X, y) - N * x_mean * y_mean) / (sum(X**2) - N * (x_mean**2))
w0 = sp.mean(y) - w1 * sp.mean(X)
print w1, w0
def h_best(x): return h(x, [w0,w1])
plt.plot(xs, f(xs), '--')
plt.scatter(X, y)
plt.plot(xs, h_best(xs), 'r');
error(h_best, X, y)
from SU import PolyRegression
X1 = X.reshape((50,1))
h2 = PolyRegression(2).fit(X1, y)
h4 = PolyRegression(4).fit(X1, y)
plt.plot(xs, f(xs), '--')
plt.scatter(X, y)
plt.plot(X1, h2.predict(X1), 'r');
plt.plot(X1, h4.predict(X1), 'g');
error(h2, X, y)
error(h4, X, y)
def g(x): return x**3 - 10 * x**2 + 2 * x - 2
xs = sp.linspace(0, 10)
plt.plot(xs, g(xs));
X = sp.linspace(0,10)
y = g(X) + 5 * stats.norm.rvs(scale=3, size=50)
plt.plot(xs, g(xs), '--')
plt.scatter(X, y)
plt.show()
plt.plot(xs, g(xs), '--')
plt.scatter(X, y)
X1 = X.reshape((50,1))
for degree in range(1, 8):
h = PolyRegression(degree).fit(X1, y)
plt.plot(X1, h.predict(X1), label="d=%d" % degree);
print "error(h%d) = %.2f" % (degree, error(h, X, y))
plt.legend()
plt.show()
XY = np.column_stack((X1, y))
np.random.shuffle(XY)
X_train, y_train = XY[:30,0:1], XY[:30,1]
X_test, y_test = XY[30:,0:1], XY[30:,1]
len(X_train), len(X_test)
plt.plot(xs, g(xs), '--')
plt.scatter(X_train, y_train, c='b')
plt.scatter(X_test, y_test, c='r');
plt.plot(xs, g(xs), '--')
plt.scatter(X_train, y_train, c='b')
plt.scatter(X_test, y_test, c='r');
for degree in range(1, 8):
h = PolyRegression(degree).fit(X_train, y_train)
plt.plot(X1, h.predict(X1), label="d=%d" % degree);
print "train_error(h%d) = %.2f; test_error(h%d) = %.2f" % (degree, error(h, X_train, y_train), degree, error(h, X_test, y_test))
plt.legend()
plt.show()
train_errors = []
test_errors = []
degrees = range(1,8)
for degree in degrees:
h = PolyRegression(degree).fit(X_train, y_train)
train_error = error(h, X_train, y_train)
test_error = error(h, X_test, y_test)
train_errors.append(train_error)
test_errors.append(test_error)
plt.plot(list(degrees), train_errors, label="train_error")
plt.plot(list(degrees), test_errors, label="test_error")
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: (2) The loss function (and the corresponding error function)
Step3: The function that generates the data (and that we actually want to learn)
Step4: The training set $\mathcal{D}=(\mathbf{X},\mathbf{y})$ is obtained from $f(x)$, with added noise
Step5: Two hypotheses from our model
Step6: Empirical errors of the hypotheses on the set $\mathcal{D}$
Step7: (3) The optimization procedure
Step8: In the example above we worked with a first-degree model
Step9: We can expect that the following holds
Step10: The model corresponding to the true function that generated the data is $h_3$, i.e. the optimal hyperparameter is $d=3$
|
4,404
|
<ASSISTANT_TASK:>
Python Code:
import sys, os
sys.path.insert(0, "/Users/kaestner/git/lib/lib")
sys.path.insert(0, "/Users/kaestner/git/scripts/python/")
if 'LD_LIBRARY_PATH' not in os.environ:
os.environ['LD_LIBRARY_PATH'] = '/Users/kaestner/git/lib/lib'
# os.execv(sys.argv[0], sys.argv)
import numpy as np
import muhrectomo as mt
import matplotlib.pyplot as plt
import amglib.readers as io
import amglib.imageutils as amg
import importlib
importlib.reload(mt)
importlib.reload(io)
importlib.reload(amg)
recon = mt.Reconstructor(mt.bpMultiProj)
print("Created reconstructor :",recon.name())
Nproj = 10
proj = np.ones([Nproj,256,256])
args = {"angles" : np.linspace(0,180,num=Nproj),
"weights" : np.ones(Nproj)}
recon.configure({ "center" : 50,
"resolution" : 0.135
})
recon.process(proj,args)
vol = recon.volume()
plt.imshow(vol[0])
ob = io.readImages('/data/P08062_wood/raw_CCD/ob_{0:04d}.fits',1,5,averageStack=True).mean(axis=0)
dc = io.readImages('/data/P08062_wood/raw_CCD/dark_{0:04d}.fits',1,5,averageStack=True).mean(axis=0)
proj = io.readImages('/data/P08062_wood/raw_CCD/wood_{0:04d}.fits',1,626) # This takes a while
fig,ax = plt.subplots(1,3,figsize=[15,7])
ax[0].imshow(dc)
ax[1].imshow(ob)
ax[2].imshow(proj[0])
nproj = amg.normalizeImage(img=proj, ob=ob, dc=dc, doseROI=[100,250,200,300])
cproj = nproj[:,50:920,250:850]
plt.imshow(cproj[0])
del nproj
Nproj = cproj.shape[0]
# Information per projection
args = {"angles" : np.linspace(0,360,num=Nproj),
"weights" : np.ones(Nproj)/Nproj}
# Geometry information
recon.configure({ "center" : 295,
"resolution" : 0.05
})
recon.process(cproj[:,500:600,:],args) # Reconstruct a part of the slices (32 slices here)
vol = recon.volume() # Retrieve the reconstructed volume
plt.imshow(vol[0])
import ipyvolume as ipv
ipv.quickvolshow(vol)
vol.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a reconstructor object
Step2: Reconstruction workflow
Step3: The wood data
Step4: Preprocessing
Step5: Prepare and run the back-projection
|
4,405
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import numpy as np
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
from sklearn.cluster import KMeans
from pprint import pprint
TITANIC_TRAIN = 'train.csv'
TITANIC_TEST = 'test.csv'
# t_df refers to titanic_dataframe
t_df = pandas.read_csv(TITANIC_TRAIN, header=0)
t_df.drop(['Name', 'Ticket', 'Cabin', 'Embarked', 'Sex'], axis=1, inplace=True)
t_df.info()
t_df.head(1)
t_df.Age.fillna(np.mean(t_df.Age), inplace=True)
t_df.info()
def precision(tp, fp):
"""Determine the precision of the algorithm."""
return tp / (tp + fp)
def recall(tp, fn):
"""Determine the recall of the algorithm."""
return tp / (tp + fn)
def f1_score(tp, fn, fp):
"""Return the F1 score of the algorithm."""
pre = precision(tp, fp)
rec = recall(tp, fn)
return (2 * ((pre * rec) / (pre + rec)))
train, test = train_test_split(t_df, test_size = 0.2)
y = np.array(train['Survived'])
x = np.array(train[['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']])
train_fares = []
for i in range(len(x)):
train_fares.append(x[i][-1])
k = 2
kmeans = KMeans(n_clusters=k)
results = kmeans.fit_predict(x)
tp = 0
fp = 0
fn = 0
one_fare = []
zero_fare = []
for i in range(len(results)):
if results[i] == 1:
one_fare.append(train_fares[i])
elif results[i] == 0:
zero_fare.append(train_fares[i])
one_mean_fare = np.mean(one_fare)
print("Mean Fare of Group One: {}".format(one_mean_fare))
zero_mean_fare = np.mean(zero_fare)
print("Mean Fare of Group Zero: {}".format(zero_mean_fare))
if one_mean_fare > zero_mean_fare:
for i in range(len(results)):
diff = y[i] - results[i]
if diff == 1:
fp += 1
elif diff == 0:
tp += 1
else:
fn += 1
else:
for i in range(len(results)):
diff = y[i] - results[i]
if diff == 1:
fn += 1
elif diff == 0:
tp += 1
else:
fp += 1
print("True Positives: " + str(tp))
print("False Positives: " + str(fp))
print("False Negative: " + str(fn))
f1 = f1_score(tp, fn, fp)
print("F1 Score: " + str(f1))
test_df = pandas.read_csv(TITANIC_TEST, header=0)
test_df.drop(['Name', 'Ticket', 'Cabin', 'Embarked', 'Sex'], axis=1, inplace=True)
test_df.Age.fillna(np.mean(test_df.Age), inplace=True)
test_df.Fare.fillna(np.mean(test_df.Fare), inplace=True)
x = np.array(test_df[['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']])
kmeans = KMeans(n_clusters=k)
results = kmeans.fit_predict(x)
s1 = pandas.Series(np.array(test_df.PassengerId), name='PassengerId')
s2 = pandas.Series(results, name='Survived')
kaggle_result = pandas.concat([s1,s2], axis=1)
kaggle_result.to_csv('titanic_day2.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selection of Features
Step2: Cleaning Data
Step6: Experiment Heuristics (Design)
Step7: Representation
Step8: Experiment
Step9: The K-Means split the passengers into two groups, 0 and 1, but it is not clear which of these represents Survived and which Non-Survived. The assumption is made that whichever group has the higher mean fare is the survival group. Depending on which group is the survival group, the True Positive/False Positive calculations are slightly different.
Step10: Conclusions
|
4,406
|
<ASSISTANT_TASK:>
Python Code:
# from sklearn.datasets import fetch_20newsgroups
from sklearn.datasets import load_files
# categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
# all_of_it = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)
all_of_it = load_files("D:\kaam\AdditionalParsed", shuffle=True, random_state = None)
total = len(all_of_it.data)
num = int(0.8*total)
print("No. of Training data: " , num)
print("No. of Testing data: " , total - num)
train_data = all_of_it.data[:num]
test_data = all_of_it.data[num:]
all_of_it.target_names
print(dir(all_of_it))
print(all_of_it.target_names)
# print(type(all_of_it.description))
print(len(all_of_it.data))
print(all_of_it.target[:10])
all_of_it.filenames[:5]
print(all_of_it.data[0])
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X = count_vect.fit_transform(train_data)
print(X.shape)
count_vect.vocabulary_
X.toarray()
from sklearn.feature_extraction.text import TfidfTransformer
tf = TfidfTransformer(sublinear_tf=True)
X_train_tf = tf.fit_transform(X)
X_train_tf.shape
X_train_tf.toarray()
# docs_test = ["God is great", "Retina scan gives early diagnosis about diabetes"]
# X_test = count_vect.transform(docs_test)
# X_tf_test = tf.transform(X_test)
X_test = count_vect.transform(test_data)
X_test_tf = tf.transform(X_test)
X_test_tf.shape
# from sklearn.naive_bayes import MultinomialNB
# clf = MultinomialNB()
# clf.fit(X_train_tf,all_of_it.target[:num])
from sklearn import svm
clf = svm.SVC(decision_function_shape="ovo", C = 10000.0, kernel='rbf',gamma = 0.6)
clf.fit(X_train_tf, all_of_it.target[:num])
# from sklearn.externals import joblib
# modelfile = "20-news-svm.sav"
# joblib.dump(clf,modelfile)
# loadedmodel = joblib.load(modelfile)
print(clf.score(X_test_tf, all_of_it.target[num:]))
# predict = clf.predict(X_test_tf)
# for text, category in zip(docs_test,predict):
# print(text + "\tbelongs to:\t" + all_of_it.target_names[category])
# all_of_it.target_names[int(clf.predict(tf.fit_transform(count_vect.transform(f.read()))))]
HOME_DIR = "D:\kaam\AdditionalParsedTest"
import os
print("File:\tClassification:")
for home,subdir,files in os.walk(HOME_DIR):
for file in files:
with open(os.path.join(HOME_DIR, file)) as f:
print(file + "\t" + all_of_it.target_names[int(clf.predict(tf.transform(count_vect.transform([f.read()]))))])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dividing the data into training and test sets in an 80-20 ratio (roughly).
Step2: Some details about the dataset
Step3: What the files look like
Step4: Sample document
Step5: CountVectorizer converts text documents to a matrix representation of token counts.
Step6: You can take a look at the dimensions of the matrix and the vocabulary that it has created
|
4,407
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# here the usual imports. If any of the imports fails,
# make sure that pynoddy is installed
# properly, ideally with 'python setup.py develop'
# or 'python setup.py install'
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
sys.path.append(repo_path)
import pynoddy
import pynoddy.history
import pynoddy.experiment
import importlib
importlib.reload(pynoddy.experiment)
rcParams.update({'font.size': 15})
# From notebook 4/ Traning Set example 1:
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 3,
'layer_names' : ['layer 1', 'layer 2', 'layer 3'],
'layer_thickness' : [1500, 500, 1500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (4000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.change_cube_size(100)
ue.plot_section('y')
ue.freeze()
ue.set_random_seed(12345)
ue.info(events_only = True)
ev2 = ue.events[2]
ev2.properties
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 300.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
resolution = 100
ue.change_cube_size(resolution)
tmp = ue.get_section('y')
prob_2 = np.zeros_like(tmp.block[:,:,:])
n_draws = 10
for i in range(n_draws):
ue.random_draw()
tmp = ue.get_section('y', resolution = resolution)
prob_2 += (tmp.block[:,:,:] == 2)
# Normalise
prob_2 = prob_2 / float(n_draws)
fig = plt.figure(figsize = (12,8))
ax = fig.add_subplot(111)
ax.imshow(prob_2.transpose()[:,0,:],
origin = 'lower',  # 'lower left' is not a valid matplotlib origin value
interpolation = 'none')
plt.title("Estimated probability of unit 4")
plt.xlabel("x (E-W)")
plt.ylabel("z")
ue.random_draw()
s1 = ue.get_section('y')
s1.block.shape
s1.block[np.where(s1.block == 3)] = 1
s1.plot_section('y', cmap='Greys')
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 8
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
ue.plot_section('y')
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_featuers: number of "features" -> gray values in image
n_features = 5
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n] = np.random.choice(vals)
f1.shape
plt.imshow(f1.T, origin='lower', cmap='Greys', interpolation='nearest')
# blur image
from scipy import ndimage
f2 = ndimage.filters.gaussian_filter(f1, 1, mode='nearest')
plt.imshow(f2.T, origin='lower', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
plt.imshow(f2.T, origin='lower', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
# back to before: re-initialise model:
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 18
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
n_train = 10000
F_train = np.empty((n_train, 28*28))
ue.change_cube_size(100)
for i in range(n_train):
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_featuers: number of "features" -> gray values in image
n_features = 4
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n+1] = np.random.choice(vals)
f1 = f1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
F_train[i] = f2.flatten().T
plt.imshow(f2, origin='lower', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
import pickle
f = open("f_train_normal.pkl", 'wb')
pickle.dump(F_train, f)
# back to before: re-initialise model:
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 18
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : -500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': -100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
n_train = 10000
F_train_rev = np.empty((n_train, 28*28))
ue.change_cube_size(100)
for i in range(n_train):
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_features: number of "features" -> gray values in image
n_features = 4
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n+1] = np.random.choice(vals)
f1 = f1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
F_train_rev[i] = f2.flatten().T
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train_rev[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
pickle.dump(F_train_rev, open("f_train_reverse.pkl", 'wb'))
l1 = np.empty_like(s1.block[:,0,:])
n_layers = 18
for i in range(l1.shape[0]):
l1[:,i] = i
l1_ori = np.floor(l1*n_layers/l1.shape[0])
F_train_line = np.empty((n_train, 28*28))
for i in range(n_train):
n_features = 4
vals = np.random.randint(0,255,size=n_features)
l1 = l1_ori.copy()
for n in range(n_layers):
l1[l1 == n+1] = np.random.choice(vals)
f1 = l1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
F_train_line[i] = f2.flatten().T
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train_line[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
pickle.dump(F_train_line, open("f_train_line.pkl", 'wb'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initiate experiment with this input file
Step2: Before we start to draw random realisations of the model, we should first store the base state of the model for later reference. This is simply possible with the freeze() method, which stores the current state of the model as the "base state".
Step3: We now initialise the random generator. We can directly assign a random seed to simplify reproducibility (note that this is not essential, as it would be for the definition in a script function).
Step4: The next step is to define probability distributions to the relevant event parameters. Let's first look at the different events
Step5: Next, we define the probability distributions for the uncertain input parameters
Step6: This example shows how the base module for reproducible experiments with kinematics can be used. For further specification, child classes of Experiment can be defined, and we show examples of this type of extension in the next sections.
Step7: Idea
Step8: All in one function
Step9: Generate training set for normal faults
Step10: Generate reverse faults
Step11: Generate simple layer structure
|
4,408
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
print(fit.summary())
print(fit_heuristic.summary())
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
print(fit.summary())
pred = fit.get_prediction(start='2014', end='2020')
df = pred.summary_frame(alpha=0.05)
df
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)
df["mean"].plot(label='mean prediction')
df["pi_lower"].plot(linestyle='--', color='tab:blue', label='95% interval')
df["pi_upper"].plot(linestyle='--', color='tab:blue', label='_')
pred.endog.plot(label='data')
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple exponential smoothing
Step2: The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package fpp2 (companion package to prior version [1]).
Step3: By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.
Step4: The fitted parameters and some other measures are shown using fit.summary(). Here we can see that the log-likelihood of the model using fitted initial states is a bit lower than the one using a heuristic for the initial states.
Step5: Holt-Winters' seasonal method
Step6: Predictions
Step7: In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the get_prediction method.
|
4,409
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
from matplotlib import pylab as plt
from numpy import sin, cos, pi, matrix, random, linalg, asarray
from scipy.linalg import pinv
from __future__ import division
from math import atan2
from IPython import display
from ipywidgets import interact, fixed
def trans(x, y, a):
'''create a 2D transformation'''
s = sin(a)
c = cos(a)
return matrix([[c, -s, x],
[s, c, y],
[0, 0, 1]])
def from_trans(m):
'''get x, y, theta from transform matrix'''
return [m[0, -1], m[1, -1], atan2(m[1, 0], m[0, 0])]
trans(0, 0, 0)
l = [0, 3, 2, 1]
#l = [0, 3, 2, 1, 1]
#l = [0, 3, 2, 1, 1, 1]
#l = [1] * 30
N = len(l) - 1 # number of links
max_len = sum(l)
a = random.random_sample(N) # angles of joints
T0 = trans(0, 0, 0) # base
def forward_kinematics(T0, l, a):
T = [T0]
for i in range(len(a)):
Ti = T[-1] * trans(l[i], 0, a[i])
T.append(Ti)
Te = T[-1] * trans(l[-1], 0, 0) # end effector
T.append(Te)
return T
def show_robot_arm(T):
plt.cla()
x = [Ti[0,-1] for Ti in T]
y = [Ti[1,-1] for Ti in T]
plt.plot(x, y, '-or', linewidth=5, markersize=10)
plt.plot(x[-1], y[-1], 'og', linewidth=5, markersize=10)
plt.xlim([-max_len, max_len])
plt.ylim([-max_len, max_len])
ax = plt.axes()
ax.set_aspect('equal')
t = atan2(T[-1][1, 0], T[-1][0,0])
ax.annotate('[%.2f,%.2f,%.2f]' % (x[-1], y[-1], t), xy=(x[-1], y[-1]), xytext=(x[-1], y[-1] + 0.5))
plt.show()
return ax
theta = random.random(N) * 1e-5
lambda_ = 1
max_step = 0.1
def inverse_kinematics(x_e, y_e, theta_e, theta):
target = matrix([[x_e, y_e, theta_e]]).T
for i in range(1000):
Ts = forward_kinematics(T0, l, theta)
Te = matrix([from_trans(Ts[-1])]).T
e = target - Te
e[e > max_step] = max_step
e[e < -max_step] = -max_step
T = matrix([from_trans(i) for i in Ts[1:-1]]).T
J = Te - T
dT = Te - T
J[0, :] = -dT[1, :] # x
J[1, :] = dT[0, :] # y
J[-1, :] = 1 # angular
d_theta = lambda_ * pinv(J) * e
theta += asarray(d_theta.T)[0]
if linalg.norm(d_theta) < 1e-4:
break
return theta
T = forward_kinematics(T0, l, theta)
show_robot_arm(T)
Te = matrix([from_trans(T[-1])])
@interact(x_e=(0, max_len, 0.01), y_e=(-max_len, max_len, 0.01), theta_e=(-pi, pi, 0.01), theta=fixed(theta))
def set_end_effector(x_e=Te[0,0], y_e=Te[0,1], theta_e=Te[0,2], theta=theta):
theta = inverse_kinematics(x_e, y_e, theta_e, theta)
T = forward_kinematics(T0, l, theta)
show_robot_arm(T)
return theta
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Coordinate Transformation
Step2: Parameters of robot arm
Step3: Forward Kinematics
Step4: Inverse Kinematics
|
4,410
|
<ASSISTANT_TASK:>
Python Code:
%%bash
cd /tmp
rm -rf playground
git clone https://github.com/crystalzhaizhai/playground.git
%%bash
cd /tmp/playground
git pull origin mybranch1
ls
%%bash
cd /tmp/playground
git status
%%bash
cd /tmp/playground
git reset --hard origin/master
ls
%%bash
cd /tmp/playground
git status
%%bash
cd /tmp/playground
cat .git/config
%%bash
cd /tmp/playground
git remote add course https://github.com/IACS-CS-207/playground.git
cat .git/config
%%bash
cd /tmp/playground
git fetch course master
%%bash
cd /tmp/playground
git checkout course/master -- README.md
cat README.md
%%bash
cd /tmp/playground
git add .
git commit -m "playgroundchange" -a
git status
git push
with open("../../lectures/L4/languages.txt","r") as f:
primary_course=f.read().split()
from collections import Counter
course_count=Counter(primary_course)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x_coords=np.arange(len(course_count))
freqs = list(course_count.values())  # materialise the dict view so numpy/matplotlib handle it correctly
total = np.sum(freqs)
plt.xticks(x_coords,course_count.keys())
plt.bar(x_coords,freqs)
def kinetics(p,T, R=8.314):
import numpy as np
if len(p)<3:
print("Error! Less than 3 parameters")
return()
try:
k=p[0]*(T**p[1])*np.exp(-p[2]/(R*T))
return k
except ZeroDivisionError:
print("Error! Divided by 0")
return()
kinetics([1,2],0)
TT=np.arange(1,5000)
plt.plot(TT,kinetics([3,6,2],TT),'r')
plt.plot(TT,kinetics([4,5,6],TT),'g')
plt.plot(TT,kinetics([6,5,4],TT),'b')
plt.legend(["A,b,E=[3,6,2]","A,b,E=[4,5,6]","A,b,E=[6,5,4]"])
plt.xlabel("T")
plt.ylabel("k")
plt.title("kinetics")
%%bash
git add "HW3_final.ipynb"
git commit -m "HW3" -a
git status
git remote
git push origin master
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 2
Step2: Problem 3
Step3: Problem 4
Step4: Problem 5
|
4,411
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter() # bag of words here
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)}## create the word-to-index dictionary here
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
if word in word2idx:
idx = word2idx[word]
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
Y.values[[0,1]]
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 10000])
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
sentence = 'terrible ugly'
test_sentence(sentence)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
4,412
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import keras
import numpy as np
t = np.arange(50).reshape(1, -1)
x = np.sin(2*np.pi/50*t)
print(x.shape)
plot(t[0], x[0]);
from keras.models import Sequential
from keras.layers import containers
from keras.layers.core import Dense, AutoEncoder
encoder = containers.Sequential([Dense(25, input_dim=50), Dense(12)])
decoder = containers.Sequential([Dense(25, input_dim=12), Dense(50)])
model = Sequential()
model.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=True))
model.compile(loss='mean_squared_error', optimizer='sgd')
# prediction with initial weight should be random
plot(model.predict(x)[0]);
# train the model and store the loss values as function of time
from loss_history import LossHistory
loss_history = LossHistory()
model.fit(x, x, nb_epoch=500, batch_size=1, callbacks=[loss_history])
plot(loss_history.losses);
plot(log10(loss_history.losses));
plot(model.predict(x)[0])
plot(x[0]);
x_noised = x + 0.2 * np.random.random(len(x[0]))
plot(x_noised[0], label='input')
plot(model.predict(x_noised)[0], label='predicted')
legend();
x_shifted = np.cos(2*np.pi/50*t)
plot(x_shifted[0], label='input')
plot(model.predict(x_shifted)[0], label='predicted')
legend();
x_scaled = 0.2 * x
plot(x_scaled[0], label='input')
plot(model.predict(x_scaled)[0], label='predicted')
legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Input signal. Single training example.
Step2: Simple autoencoder of four layers
Step3: The model fits the data quite nicely.
Step4: The model is able to predict on noise-corrupted data.
Step5: However, the model is not able to predict a sinusoid with a different phase.
Step6: The model is able to deal with a scaled sinusoid, but the farther the amplitude is from the original, the noisier the reconstruction.
|
4,413
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import tensorflow as tf
from ppo.history import *
from ppo.models import *
from ppo.trainer import Trainer
from unityagents import *
### General parameters
max_steps = 5e5 # Set maximum number of steps to run environment.
run_path = "ppo" # The sub-directory name for model and summary statistics
load_model = False # Whether to load a saved model.
train_model = True # Whether to train the model.
summary_freq = 10000 # Frequency at which to save training statistics.
save_freq = 50000 # Frequency at which to save model.
env_name = "environment" # Name of the training environment file.
### Algorithm-specific parameters for tuning
gamma = 0.99 # Reward discount rate.
lambd = 0.95 # Lambda parameter for GAE.
time_horizon = 2048 # How many steps to collect per agent before adding to buffer.
beta = 1e-3 # Strength of entropy regularization
num_epoch = 5 # Number of gradient descent steps per batch of experiences.
epsilon = 0.2 # Acceptable threshold around ratio of old and new policy probabilities.
buffer_size = 2048 # How large the experience buffer should be before gradient descent.
learning_rate = 3e-4 # Model learning rate.
hidden_units = 64 # Number of units in hidden layer.
batch_size = 64 # How many experiences per gradient descent update step.
env = UnityEnvironment(file_name=env_name)
print(str(env))
brain_name = env.brain_names[0]
tf.reset_default_graph()
# Create the Tensorflow model graph
ppo_model = create_agent_model(env, lr=learning_rate,
h_size=hidden_units, epsilon=epsilon,
beta=beta, max_step=max_steps)
is_continuous = (env.brains[brain_name].action_space_type == "continuous")
use_observations = (env.brains[brain_name].number_observations > 0)
use_states = (env.brains[brain_name].state_space_size > 0)
model_path = './models/{}'.format(run_path)
summary_path = './summaries/{}'.format(run_path)
if not os.path.exists(model_path):
os.makedirs(model_path)
if not os.path.exists(summary_path):
os.makedirs(summary_path)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
# Instantiate model parameters
if load_model:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(model_path)
saver.restore(sess, ckpt.model_checkpoint_path)
else:
sess.run(init)
steps = sess.run(ppo_model.global_step)
summary_writer = tf.summary.FileWriter(summary_path)
info = env.reset(train_mode=train_model)[brain_name]
trainer = Trainer(ppo_model, sess, info, is_continuous, use_observations, use_states)
while steps <= max_steps:
if env.global_done:
info = env.reset(train_mode=train_model)[brain_name]
# Decide and take an action
new_info = trainer.take_action(info, env, brain_name)
info = new_info
trainer.process_experiences(info, time_horizon, gamma, lambd)
if len(trainer.training_buffer['actions']) > buffer_size and train_model:
# Perform gradient descent with experience buffer
trainer.update_model(batch_size, num_epoch)
if steps % summary_freq == 0 and steps != 0 and train_model:
# Write training statistics to tensorboard.
trainer.write_summary(summary_writer, steps)
if steps % save_freq == 0 and steps != 0 and train_model:
# Save Tensorflow model
save_model(sess, model_path=model_path, steps=steps, saver=saver)
steps += 1
sess.run(ppo_model.increment_step)
# Final save Tensorflow model
if steps != 0 and train_model:
save_model(sess, model_path=model_path, steps=steps, saver=saver)
env.close()
export_graph(model_path, env_name)
export_graph(model_path, env_name)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hyperparameters
Step2: Load the environment
Step3: Train the Agent(s)
Step4: Export the trained Tensorflow graph
|
4,414
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from actuariat_python.data import population_france_year
population = population_france_year()
df = population
df.head(n=3)
hommes = df["hommes"]
femmes = df["femmes"]
somme = hommes - femmes
from matplotlib import pyplot as plt
from numpy import arange
fig, ax = plt.subplots(figsize=(8,8))
ValH = ax.barh(arange(len(hommes)), hommes, 1.0, label="Hommes",
color='b', linewidth=0, align='center')
ValF = ax.barh(arange(len(femmes)), -femmes, 1.0, label="Femmes",
color='r', linewidth=0, align='center')
diff, = ax.plot(somme, arange(len(femmes)), 'y', linewidth=2)
ax.set_title("Pyramide des âges")
ax.set_ylabel("Ages")
ax.set_xlabel("Habitants")
ax.set_ylim([0, 110])
ax.legend((ValH[0], ValF[0], diff), ('Hommes', 'Femmes', 'différence'));
from actuariat_python.plots import plot_population_pyramid
plot_population_pyramid(df["hommes"], df["femmes"], figsize=(8, 4));
from actuariat_python.data import table_mortalite_france_00_02
df=table_mortalite_france_00_02()
import pandas
pandas.concat([df.head(n=3), df.tail(n=3)])
import numpy
hf = df[["Homme", "Femme"]].values
hf = numpy.vstack([hf, numpy.zeros((8, 2))])
hf.shape
nb = hf.shape[0]
esp = numpy.zeros ((nb,2))
for t in range(0,nb):
for i in (0,1):
if hf[t,i] == 0:
esp[t,i] = 0
else:
somme = 0.0
for d in range(1,nb-t):
if hf[t+d,i] > 0:
somme += d * (hf[t+d,i] - hf[t+d+1,i]) / hf[t,i]
esp[t,i] = somme
esp[:1]
h = plt.plot(esp)
plt.legend(h, ["Homme", "Femme"])
plt.title("Espérance de vie");
# to be continued
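# A possible sketch of the faster O(n) version the notebook hints at here (an
# assumption, not the original code): the telescoping identity
#   sum_d d * (S_{t+d} - S_{t+d+1}) = sum_{d>=1} S_{t+d}
# means the life expectancy at age t is simply sum(hf[t+1:]) / hf[t].
cum = numpy.cumsum(hf[::-1], axis=0)[::-1]   # cum[t] = sum of hf[t:], per column
tail = cum - hf                              # sum of hf[t+1:], per column
esp_fast = numpy.zeros_like(hf)
mask = hf > 0
esp_fast[mask] = tail[mask] / hf[mask]
# esp_fast should match esp computed with the double loop above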
mortalite = (hf[:-1] - hf[1:]) / hf[:-1]
mortalite = numpy.nan_to_num(mortalite) # divisions by zero become nan; replace them with 0
mortalite = numpy.vstack([mortalite, numpy.zeros((1, 2))])
m = mortalite
pop = population[["hommes","femmes"]].values
pop = numpy.vstack( [pop, numpy.zeros((m.shape[0] - pop.shape[0], 2))])
pop0 = pop.copy()
pop.shape
pop_next = pop * (1-m)
pop_next = numpy.vstack([numpy.zeros((1, 2)), pop_next[:-1]])
pop_next[:5]
pop[:5]
from actuariat_python.plots import plot_population_pyramid
plot_population_pyramid(pop_next[:, 0], pop_next[:, 1]);
def iteration(pop, mortalite):
pop_next = pop * (1-mortalite)
pop_next = numpy.vstack([numpy.zeros((1, 2)),
pop_next[:-1]]) # no births
return pop_next
popt = pop
for year in range(2020, 2051):
popt = iteration(popt, mortalite)
plot_population_pyramid(popt[:,0], popt[:,1], title="Pyramide des âges en 2050");
ratio = pop[0, 0] / (pop[0, 1] + pop[0, 0])
ratio
from actuariat_python.data import fecondite_france
df = fecondite_france()
df.head()
from matplotlib import pyplot as plt
df.plot(x="age", y=["2005", "2015"]);
ages = pandas.DataFrame(dict(age=range(0,120)))
merge = ages.merge(df, left_on="age", right_on="age", how="outer")
fecondite = merge.fillna(0.0)
fecondite[13:17]
mat_fec = fecondite[["2015"]].values / 100 # les chiffres sont pour 100 femmes
mat_fec.shape
mat_fec.sum()
if mat_fec.shape[0] < pop.shape[0]:
zeros = numpy.zeros((pop.shape[0] - mat_fec.shape[0], mat_fec.shape[1]))
mat_fec = numpy.vstack([mat_fec, zeros])
mat_fec.sum()
def naissances(pop, fec):
# on suppose que pop est une matrice avec deux colonnes homme, femme
# et que fec est une matrice avec une colonne fécondité
n = pop[:, 1] * fec[:, 0]
return n.sum()
nais = naissances(pop, mat_fec)
nais
def iteration(pop, mortalite, fec, ratio):
pop_next = pop * (1 - mortalite)
nais = naissances(pop, fec)
row = numpy.array([[nais * ratio, nais * (1 - ratio)]])
pop_next = numpy.vstack([row, pop_next[:-1]]) # shift ages by one year; newborns fill the first row
return pop_next
popt = pop
for year in range(2020, 2101):
popt = iteration(popt, m, mat_fec, ratio)
if year == 2050:
popt2050 = popt.copy()
if year == 2100:
popt2100 = popt.copy()
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
plot_population_pyramid(popt2050[:, 0], popt2050[:, 1], ax=ax[0],
title="Pyramide des âges en 2050")
plot_population_pyramid(popt2100[:, 0], popt2100[:, 1], ax=ax[1],
title="Pyramide des âges en 2100");
total = [[2020, pop[:,0].sum(),pop[:,1].sum()]]
popt = pop
for year in range(2020, 2101):
popt = iteration(popt, m, mat_fec, ratio)
total.append([year, popt[:,0].sum(),popt[:,1].sum()])
plot_population_pyramid(popt[:, 0], popt[:, 1], title="Pyramide des âges en 2101");
df = pandas.DataFrame(data=total, columns=["année","hommes","femmes"])
df.plot(x="année", y=["hommes", "femmes"], title="projection population française");
from matplotlib import pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14,6))
plot_population_pyramid(popt[:,0], popt[:,1], title="Pyramide des âges en 2050", ax=ax[0])
df.plot(x="année", y=["hommes", "femmes"], title="projection population française", ax=ax[1]);
from tqdm import tqdm
evol = []
popt = pop0.copy()
age_etude = 23
pivot = list(range(60, 68))
for year in tqdm(range(2020, 2101)):
popt = iteration(popt, m, mat_fec, ratio)
jeune = popt[:age_etude + 1].sum()
row = dict(year=year)
for p in pivot:
actif = popt[age_etude + 1:p].sum()
retraite = popt[p:].sum()
rt = actif / retraite
row['r%d' % p] = rt
evol.append(row)
df_evol = pandas.DataFrame(evol)
df_evol.head()
ax = df_evol.plot(x="year", y=["r%d" % p for p in pivot], figsize=(10, 4))
years = df_evol.year
ax.plot(years, [1.7 for _ in years], 'b--')
ax.set_title("Ratio actifs / retraités en fonction de l'âge pivot");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Here I reuse the code shown on Damien Vergnaud's Homepage, adapting it slightly with the matplotlib functions through the pyplot interface, then I add the difference by age. One often starts from the gallery to see whether a graph, or just part of one, is similar to what we want to obtain.
Step3: The same plot, using the function provided in the actuariat_python module.
Step4: Exercise 2
Step5: Let $P_t$ be the population of age $t$. The probability of dying at date $t+d$ when aged $t$ is the probability of staying alive until age $t+d$ and then dying within the following year.
Step6: Finally, we draw the result with matplotlib.
Step7: The computation implemented above is not the most efficient. It uses two nested loops with an overall cost of $O(n^2)$ and, above all, it repeats the same computations several times. To bring it down to a linear cost $O(n)$, one has to look at the cumulative sums of the survivor counts.
Step8: numpy and pandas share several functions for traversing data. There is also the DataFrame.cumsum function.
Step9: The population was obtained in exercise 1; we convert it into a numpy object.
Step10: Then we compute the population in 2020.
Step11: Exercise 4
Step12: Exercise 5
Step13: Slightly more boys than girls are born each year.
Step14: We convert these data into a numpy matrix with 120 rows, like the previous ones, using the fillna and merge methods.
Step15: If the pop matrix has more rows than the mat_fec matrix, the latter must be padded with as many zero rows as the difference.
Step16: We now need to write a function that computes the births for the following year.
Step17: And we reuse the iteration function and the code from the previous exercise.
Step18: We go further and store the total population in a vector.
Step19: The following code combines the two graphs on the same row with the subplots function.
Step20: Retirement
|
4,415
|
<ASSISTANT_TASK:>
Python Code:
## EXAMPLE: Get all images from a single experiment (here xp_id 156).
xp_11_images = all_data_images.filter(xp_id=156)
## EXAMPLE: Get all images from CJRs 140, 158, and 161.
selected_cjrs_images = all_data_images.filter(cjr_id__in=[140,158,161])
## EXAMPLE: Get all images from experiments 11 and 94.
selected_xps_images = all_data_images.filter(xp_id__in=[11,94])
## EXAMPLE: Get every 5th image from a set of experiments.
xp_94_images = all_data_images.filter(xp_id__in=[156,157,158,159,162])
every_13th = list(xp_94_images)[::5]
print "{} / {} = {}".format(len(every_13th),
xp_94_images.count(),
float(xp_94_images.count()) / len(every_13th))
## This can take some time.
prioritize_images(every_13th, priority=100) # very low priority
#prioritize_images(every_13th) # very low priority
## EXAMPLE: Find out how many images are currently prioritized.
print dm.PriorityManualImage.objects.count()
## EXAMPLE: Find out which CJRs contain prioritized images.
cjr_list = [x.image.cjr_id for x in dm.PriorityManualImage.objects.all()]
print set(cjr_list)
## EXAMPLE: Find out the proportion of images for experiment 70 that are tagged.
images_in_xp_70 = dm.Image.objects.filter(xp_id=70).count()
tags_for_xp_70 = dm.ManualTag.objects.filter(image__xp_id=70).count()
print "{} / {} = {}".format(tags_for_xp_70, images_in_xp_70, float(tags_for_xp_70) / images_in_xp_70)
### WARNING ###
### THIS WILL DELETE ALL OF YOUR PRIORITIES ###
### WARNING ###
dm.PriorityManualImage.objects.all().delete()
## EXAMPLE: Delete the priorities for experiment 11, if any.
dm.PriorityManualImage.objects.filter(image__xp_id=11).delete()
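## Editor's sketch (an assumption, since prioritize_images is defined elsewhere): one way such a
## helper could look, assuming dm.PriorityManualImage has an `image` foreign key and a `priority` field.
def prioritize_images_sketch(images, priority=0):
    for img in images:
        dm.PriorityManualImage.objects.get_or_create(image=img, defaults={'priority': priority})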
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Store Priorities - DON'T FORGET TO DO THIS - This is what actually queues the images to be tagged
Step2: This can take some time.
Step3: Clear Current Priorities
|
4,416
|
<ASSISTANT_TASK:>
Python Code:
def create_matrix(size):
mat = np.zeros((size, size))
for i in range(size):
for j in range (size):
mat[i, j] = i * j
return mat
create_matrix(4)
mat = create_matrix(20)
plt.imshow(mat)
plt.colorbar() # Adds a colorbar to the plot to aid in interpretation.
plt.xlabel("x")
plt.ylabel("y")
plt.title("Matrix Plot")
mat = create_matrix(20)
plt.imshow(mat, interpolation="spline16")
plt.colorbar()
plt.xlabel("x")
plt.ylabel("y")
plt.title("Matrix Plot")
mat = create_matrix(60)
plt.imshow(mat)
plt.colorbar()
plt.xlabel("x")
plt.ylabel("y")
plt.title("Denser Matrix Plot")
import matplotlib.cm as cm
plt.imshow(mat, cmap=cm.Reds)
plt.colorbar()
plt.xlabel("x")
plt.ylabel("y")
plt.title("Switching Color Scale ")
plt.imshow(mat, cmap=cm.winter_r)
plt.colorbar()
plt.xlabel("x")
plt.ylabel("y")
plt.title("Switching Color Scale ")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the plot above each cell of the matrix corresponds to one of the coloured grids, with the colour indicating the cell value.
Step2: It's possible to smooth the plot by utilizing interpolation. This isn't something that I would recommend though, as it hides the structure of your data. Note however that this USED to be the default behaviour of the imshow function in earlier versions of Matplotlib.
Step3: The color scale used to represent the data can also be modified using the cmap keyword argument.
Step4: There are lots of different colormaps to choose from.
|
4,417
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
ndarray = np.array(['a','b','c','d'])
serie = pd.Series(ndarray)
print(serie)
dog_data=[
['Pedro','Doberman',3],\
['Clementine','Golden Retriever',8],\
['Norah','Great Dane',6],\
['Mabel','Austrailian Shepherd',1],\
['Bear','Maltese',4],\
['Bill','Great Dane',10]
]
dog_df=pd.DataFrame(dog_data,columns=['name','breed','age'])
dog_df
print(type(dog_df['age'].iloc[0]))
dog_df.head()
dog_df.tail(3)
dog_df.shape
len(dog_df)
dog_df.columns
dog_df.dtypes
dog_df.values
dog_df.describe()
dog_df['breed'].value_counts()
dog_df[['name','age']]
dog_df.iloc[2:4]
dog_df.iloc[1:4, 0:2]
dog_df[dog_df['breed'].isin(['Great Dane', 'Maltese'])]
dog_df[dog_df['name']=='Norah']
dog_df[(dog_df['name']=='Bill') & (dog_df['breed']=='Great Dane')]
dog_df[dog_df['age']<5]
dog_df[dog_df['breed'].str.contains('G')]
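# Editor's addition for the "Sorting" step in the description below, which the original cells do
# not illustrate: DataFrame.sort_values orders rows by one or more columns.
dog_df.sort_values(by='age', ascending=False)
dog_df.sort_values(by=['breed', 'age'])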
owner_data=[['Bilbo','Pedro'],['Gandalf','Bear'],['Sam','Bill']]
owner_df=pd.DataFrame(owner_data,columns=['owner_name','dog_name'])
df=pd.merge(owner_df,dog_df,left_on='dog_name',right_on='name',how='inner')
df
inner_df = owner_df.merge(dog_df, left_on='dog_name', right_on='name', how='inner')
inner_df
inner_df=inner_df.drop(['name'],axis=1)
inner_df
left_df = owner_df.merge(dog_df, left_on='dog_name', right_on='name', how='left')
left_df
right_df = owner_df.merge(dog_df, left_on='dog_name', right_on='name', how='right')
right_df
outer_df = owner_df.merge(dog_df, left_on='dog_name', right_on='name', how='outer')
outer_df
df=df.drop(['name'],axis=1)
df
import matplotlib
# Will allow us to embed images in the notebook
%matplotlib inline
plot_df = pd.DataFrame({
'col1': [1, 3, 2, 4],
'col2': [3, 6, 5, 1],
'col3': [4, 7, 6, 2],
})
plot_df.plot()
plot_df.plot(kind='box')
plot_df.plot(kind='bar')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pandas is well suited for many different kinds of data
Step2: Create a data frame
Step3: Previewing the data frame
Step4: DataFrame.tail(n=5)
Step5: DataFrame.shape
Step6: DataFrame.columns
Step7: DataFrame.dtypes
Step8: DataFrame.values
Step9: DataFrame.describe(percentiles=None, include=None, exclude=None)
Step10: Series.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=True)
Step11: Sorting
Step12: DataFrame.iloc
Step13: Combining data frames
Step14: DataFrame.merge(right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)
Step15: More details on merge parameters
Step16: Left Merge
Step17: Right Merge
Step18: Outer Merge
Step19: Dropping Columns
Step20: Basic plotting
Step21: Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
Step22: matplotlib.pyplot.plot(*args, scalex=True, scaley=True, data=None, **kwargs)
|
4,418
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'A': ['Good &amp; bad', 'BB', 'CC', 'DD', 'Good &amp; bad'], 'B': range(5), 'C': ['Good &amp; bad'] * 5})
def g(df):
return df.replace('&amp;','&', regex=True)
df = g(df.copy())
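# Editor's check (addition): the HTML entity should now be gone from column A.
print(df['A'].tolist())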
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
4,419
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
4,420
|
<ASSISTANT_TASK:>
Python Code:
# 1. Input model parameters
parameters = pd.Series()
parameters['rhoa'] = .9
parameters['sigma'] = 0.001
print(parameters)
# 2. Define a function that evaluates the equilibrium conditions
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Exogenous tfp
tfp_proc = p.rhoa*np.log(cur.a) - np.log(fwd.a)
# Stack equilibrium conditions into a numpy array
return np.array([
tfp_proc
])
# 3. Initialize the model
model = ls.model(equations = equilibrium_equations,
nstates=1,
varNames=['a'],
shockNames=['eA'],
parameters = parameters)
# 4. Have linearsolve compute the steady state numerically
guess = [1]
model.compute_ss(guess)
print(model.ss)
# 5. Find the log-linear approximation around the non-stochastic steady state and solve
model.approximate_and_solve()
# 6 (a) Compute impulse responses
model.impulse(T=41,t0=5,shock=None)
print(model.irs['eA'].head(10))
# 6 (b) Plot impulse responses
model.irs['eA'][['eA','a']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2)
# 6(c) Compute stochastic simulation
model.stoch_sim(seed=192,covMat= [[parameters['sigma']]])
print(model.simulated.head(10))
# 6(d) Plot stochastic simulation
model.simulated[['eA','a']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2)
# 1. Input model parameters
# 2. Define a function that evaluates the equilibrium conditions
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Production function
prod_fn = cur.a*cur.k**p.alpha - cur.y
# Capital evolution
# Exogenous tfp
tfp_proc = p.rhoa*np.log(cur.a) - np.log(fwd.a)
# Stack equilibrium conditions into a numpy array
return np.array([
tfp_proc
])
# 3. Initialize the model
model = ls.model(equations = equilibrium_equations,
nstates=,
varNames=[], # Any order as long as the state variables are named first
shockNames=[], # Name a shock for each state variable *even if there is no corresponding shock in the model*
parameters = parameters)
# 4. Have linearsolve compute the steady state numerically
# 5. Find the log-linear approximation around the non-stochastic steady state and solve
# Print the coefficient matrix P
# Print the coefficient matrix F
# 6 (a) Compute impulse responses and print the computed impulse responses
# 6(b) Plot the computed impulse responses to a TFP shock
# 6(c) Compute stochastic simulation and print the simulated values
# 6(d) Plot the computed stochastic simulation
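# Editor's sketch of one possible completion (an assumption, not the author's solution): a
# Solow-style closure with capital, which would also require parameters alpha, delta and s
# to be set in step 1 above.
def equilibrium_equations_sketch(variables_forward, variables_current, parameters):
    p = parameters
    fwd = variables_forward
    cur = variables_current
    prod_fn = cur.a*cur.k**p.alpha - cur.y                       # production function
    capital_evolution = p.s*cur.y + (1-p.delta)*cur.k - fwd.k    # capital accumulation
    tfp_proc = p.rhoa*np.log(cur.a) - np.log(fwd.a)              # exogenous TFP
    return np.array([prod_fn, capital_evolution, tfp_proc])
# model = ls.model(equations=equilibrium_equations_sketch,
#                  nstates=2,                        # a and k are the state variables
#                  varNames=['a', 'k', 'y'],         # state variables named first
#                  shockNames=['eA', 'eK'],          # one shock name per state variable
#                  parameters=parameters)
# model.compute_ss([1, 1, 1]); model.approximate_and_solve()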
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 2
Step2: The previous step constructs a log-linear approximation of the model and then solves for the endogenous variables as functions of the state variables and exogenous shocks only
|
4,421
|
<ASSISTANT_TASK:>
Python Code:
# Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the CSV file with the sample data
datos = pd.read_csv('1119703.CSV')
%pylab inline
# Store in a list the columns of the file we will work with
columns = ['Diametro X', 'RPM TRAC']
# Show a summary of the data obtained
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,2.7)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
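# Editor's note: pd.rolling_mean / pd.rolling_std and DataFrame.ix used above were removed in
# later pandas releases; the modern equivalents are the .rolling() accessor and .loc, e.g.
# rolling_mean = ratio.rolling(50).mean()
# rolling_std = ratio.rolling(50).std()
# datos.loc[:, "Diametro X":"Diametro Y"]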
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
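# Editor's addition: share of samples that fall outside the quality limits defined above.
print("Out-of-spec samples: {:.2%}".format(len(data_violations) / len(datos)))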
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We plot both diameters and the puller speed on the same chart
Step2: In the boxplot we can see that most of the data lie above the mean (first quartile). We will try to lower that percentage. The first approach will be to apply larger increments when raising the speed in the stretches where the diameter is between $1.80mm$ and $1.75 mm$ (case 5): we will apply increments of $d_v2$ instead of $d_v1$
Step3: Data filtering
Step4: X/Y plot
Step5: We analyse the ratio data
Step6: Quality limits
|
4,422
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] =='POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] +=1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] +=1
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term, cnt in list(total_counts.most_common()):
if(cnt > 10):
pos_neg_ratio = positive_counts[term]/float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word, ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1/(ratio+0.01)))
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# TODO: Convert ratios to logs
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
from IPython.display import Image
Image(filename='sentiment_network_2.png')
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
def update_input_layer(review):
Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] +=1
update_input_layer(reviews[0])
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
# TODO: Your code here
if label == "POSITIVE":
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = None
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = None
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
pass
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
pass
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
pass
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
pass
def train(self, training_reviews, training_labels):
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
pass
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
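# Editor's sketch of the key change (an assumption about the intended solution): in
# update_input_layer, store word presence instead of word counts, e.g.
#     self.layer_0 *= 0
#     for word in review.split(" "):
#         if word in self.word2index:
#             self.layer_0[0][self.word2index[word]] = 1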
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100, normed=True)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100, normed=True)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Step12: End of Project 1.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finall have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
Step36: Project 4
Step37: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step38: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step39: End of Project 4.
Step40: Project 5
Step41: Run the following cell to recreate the network and train it once again.
Step42: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step43: End of Project 5.
Step44: Project 6
Step45: Run the following cell to train your network with a small polarity cutoff.
Step46: And run the following cell to test it's performance. It should be
Step47: Run the following cell to train your network with a much larger polarity cutoff.
Step48: And run the following cell to test it's performance.
Step49: End of Project 6.
|
4,423
|
<ASSISTANT_TASK:>
Python Code:
from sympy import isprime
print(isprime.__doc__[:180])
first_number = 6_00_00_00_00
last_number = 7_99_99_99_99
# quick test
#last_number = first_number + 20
all_numbers = range(first_number, last_number + 1)
def count_prime_numbers_in_range(some_range):
count = 0
for number in some_range:
if isprime(number):
count += 1
return count
count = count_prime_numbers_in_range(all_numbers)
print(f"Pour des numéros de téléphones, nombres entre {first_number} et {last_number} (inclus), il y a {count} nombres premiers.")
total_number = len(all_numbers)
print(f"Pour des numéros de téléphones, nombres entre {first_number} et {last_number} (inclus), il y a {count/total_number:%} nombres premiers.")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Answer
Step2: Conclusion
Step3: And so we can compute the share of prime numbers among French mobile phone numbers.
|
4,424
|
<ASSISTANT_TASK:>
Python Code:
def fix_status(current_value):
if current_value == -2: return 'no_consumption'
elif current_value == -1: return 'paid_full'
elif current_value == 0: return 'revolving'
elif current_value in [1,2]: return 'delay_2_mths'
elif current_value in [3,4,5,6,7,8,9]: return 'delay_3+_mths'
else: return 'other'
for column_name in df.columns:
if column_name.startswith('status'):
df[column_name] = df[column_name].map(lambda x: fix_status(x)).astype(str)
df = pd.concat([df,pd.get_dummies(df['sex'], prefix='sex')],axis=1)
df.drop(['sex'],axis=1,inplace=True)
df = pd.concat([df,pd.get_dummies(df['education'], prefix='education')],axis=1)
df.drop(['education'],axis=1,inplace=True)
df = pd.concat([df,pd.get_dummies(df['marriage'], prefix='marriage')],axis=1)
df.drop(['marriage'],axis=1,inplace=True)
# also all status columns
for column_name in df.columns:
if column_name.startswith('status'):
df = pd.concat([df,pd.get_dummies(df[column_name], prefix=column_name)],axis=1)
df.drop([column_name],axis=1,inplace=True)
df.sample(10)
data = df.drop('default',axis=1)
target = df['default']
data.head()
X_train, X_test, y_train, y_test = train_test_split(
data.values,
target.values,
test_size=0.25)
clf = XGBClassifier()
clf.fit(X_train, y_train.ravel())
y_preds = clf.predict_proba(X_test)
# take the second column because the classifier outputs scores for
# the 0 class as well
preds = y_preds[:,1]
# fpr means false-positive-rate
# tpr means true-positive-rate
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
auc_score = metrics.auc(fpr, tpr)
plt.title('ROC Curve')
plt.plot(fpr, tpr, label='AUC = {:.3f}'.format(auc_score))
# it's helpful to add a diagonal to indicate where chance
# scores lie (i.e. just flipping a coin)
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(loc='lower right')
plt.show()
n_features=43
plt.clf()
d = dict(zip(data.columns, clf.feature_importances_))
d['marriage'] = 0
d['sex'] = 0
d['education'] = 0
for n in range(0,3):
d['marriage'] += d['marriage_{}'.format(n)]
del(d['marriage_{}'.format(n)])
for n in range(1,3):
d['sex'] += d['sex_{}'.format(n)]
del(d['sex_{}'.format(n)])
for n in range(0,7):
d['education'] += d['education_{}'.format(n)]
del(d['education_{}'.format(n)])
ss = sorted(d, key=d.get, reverse=True)
top_names = ss[0:n_features]
plt.title("Feature importances")
plt.bar(range(n_features), [d[i] for i in top_names], color="r", align="center")
plt.xlim(-1, n_features)
plt.xticks(range(n_features), top_names, rotation='vertical')
plt.yticks(np.arange(0, 0.12, 0.005))
plot_value_labels(plt.gca(),format='{:.3f}')
plt.gcf().set_size_inches(10,6)
plt.ylim(0.0,0.11)
plt.tight_layout()
plt.show()
df['age'].describe()
data = df[df['age']<=30].drop('default',axis=1)
target = df[df['age']<=30]['default']
X_train, X_test, y_train, y_test = train_test_split(
data.values,
target.values,
test_size=0.25)
clf = XGBClassifier()
clf.fit(X_train, y_train.ravel())
y_preds = clf.predict_proba(X_test)
# take the second column because the classifier outputs scores for
# the 0 class as well
preds = y_preds[:,1]
# fpr means false-positive-rate
# tpr means true-positive-rate
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
auc_score = metrics.auc(fpr, tpr)
plt.title('ROC Curve')
plt.plot(fpr, tpr, label='AUC = {:.3f}'.format(auc_score))
# it's helpful to add a diagonal to indicate where chance
# scores lie (i.e. just flipping a coin)
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(loc='lower right')
plt.show()
data = df[df['age'].between(31,50)].drop('default',axis=1)
target = df[df['age'].between(31,50)]['default']
X_train, X_test, y_train, y_test = train_test_split(
data.values,
target.values,
test_size=0.25)
clf = XGBClassifier()
clf.fit(X_train, y_train.ravel())
y_preds = clf.predict_proba(X_test)
# take the second column because the classifier outputs scores for
# the 0 class as well
preds = y_preds[:,1]
# fpr means false-positive-rate
# tpr means true-positive-rate
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
auc_score = metrics.auc(fpr, tpr)
plt.title('ROC Curve')
plt.plot(fpr, tpr, label='AUC = {:.3f}'.format(auc_score))
# it's helpful to add a diagonal to indicate where chance
# scores lie (i.e. just flipping a coin)
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(loc='lower right')
plt.show()
data = df[df['age'] > 50].drop('default',axis=1)
target = df[df['age'] > 50]['default']
X_train, X_test, y_train, y_test = train_test_split(
data.values,
target.values,
test_size=0.25)
clf = XGBClassifier()
clf.fit(X_train, y_train.ravel())
y_preds = clf.predict_proba(X_test)
# take the second column because the classifier outputs scores for
# the 0 class as well
preds = y_preds[:,1]
# fpr means false-positive-rate
# tpr means true-positive-rate
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
auc_score = metrics.auc(fpr, tpr)
plt.title('ROC Curve')
plt.plot(fpr, tpr, label='AUC = {:.3f}'.format(auc_score))
# it's helpful to add a diagonal to indicate where chance
# scores lie (i.e. just flipping a coin)
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(loc='lower right')
plt.show()
data = df[df['sex_1'] == 1].drop('default',axis=1)
target = df[df['sex_1'] == 1]['default']
X_train, X_test, y_train, y_test = train_test_split(
data.values,
target.values,
test_size=0.25)
clf = XGBClassifier()
clf.fit(X_train, y_train.ravel())
y_preds = clf.predict_proba(X_test)
# take the second column because the classifier outputs scores for
# the 0 class as well
preds = y_preds[:,1]
# fpr means false-positive-rate
# tpr means true-positive-rate
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
auc_score = metrics.auc(fpr, tpr)
plt.title('ROC Curve')
plt.plot(fpr, tpr, label='AUC = {:.3f}'.format(auc_score))
# it's helpful to add a diagonal to indicate where chance
# scores lie (i.e. just flipping a coin)
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(loc='lower right')
plt.show()
data = df[df['sex_2'] == 1].drop('default',axis=1)
target = df[df['sex_2'] == 1]['default']
X_train, X_test, y_train, y_test = train_test_split(
data.values,
target.values,
test_size=0.25)
clf = XGBClassifier()
clf.fit(X_train, y_train.ravel())
y_preds = clf.predict_proba(X_test)
# take the second column because the classifier outputs scores for
# the 0 class as well
preds = y_preds[:,1]
# fpr means false-positive-rate
# tpr means true-positive-rate
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
auc_score = metrics.auc(fpr, tpr)
plt.title('ROC Curve')
plt.plot(fpr, tpr, label='AUC = {:.3f}'.format(auc_score))
# it's helpful to add a diagonal to indicate where chance
# scores lie (i.e. just flipping a coin)
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(loc='lower right')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: one hot encoding where needed
Step2: can we do better by training a different model by subpopulation?
Step3: young people (age<=30)
Step4: solid gains here
Step5: older people (age > 50)
Step6: men only
Step7: women only
|
4,425
|
<ASSISTANT_TASK:>
Python Code:
# here we define a function that we can call to execute our simulation under
# a variety of different alternative scenarios
import scipy as sp
import numpy as np
import matplotlib.pyplot as pl
import pandas as pd
import shap
%config InlineBackend.figure_format = 'retina'
def run_credit_experiment(N, job_history_sex_impact=0, reported_income_sex_impact=0, income_sex_impact=0,
late_payments_sex_impact=0, default_rate_sex_impact=0,
include_brandx_purchase_score=False, include_sex=False):
np.random.seed(0)
sex = np.random.randint(0, 2, N) == 1 # randomly half men and half women
# four hypothetical causal factors influence customer quality
# they are all scaled to the same units between 0-1
income_stability = np.random.rand(N)
income_amount = np.random.rand(N)
if income_sex_impact > 0:
income_amount -= income_sex_impact/90000 * sex * np.random.rand(N)
income_amount -= income_amount.min()
income_amount /= income_amount.max()
spending_restraint = np.random.rand(N)
consistency = np.random.rand(N)
# intuitively this product says that high customer quality comes from simultaneously
# being strong in all factors
customer_quality = income_stability * income_amount * spending_restraint * consistency
# job history is a random function of the underlying income stability feature
job_history = np.maximum(
10 * income_stability + 2 * np.random.rand(N) - job_history_sex_impact * sex * np.random.rand(N)
, 0)
# reported income is a random function of the underlying income amount feature
reported_income = np.maximum(
10000 + 90000*income_amount + np.random.randn(N) * 10000 - \
reported_income_sex_impact * sex * np.random.rand(N)
, 0)
# credit inquiries is a random function of the underlying spending restraint and income amount features
credit_inquiries = np.round(6 * np.maximum(-spending_restraint + income_amount, 0)) + \
np.round(np.random.rand(N) > 0.1)
# credit inquiries is a random function of the underlying consistency and income stability features
late_payments = np.maximum(
np.round(3 * np.maximum((1-consistency) + 0.2 * (1-income_stability), 0)) + \
np.round(np.random.rand(N) > 0.1) - np.round(late_payments_sex_impact * sex * np.random.rand(N))
, 0)
# bundle everything into a data frame and define the labels based on the default rate and customer quality
X = pd.DataFrame({
"Job history": job_history,
"Reported income": reported_income,
"Credit inquiries": credit_inquiries,
"Late payments": late_payments
})
default_rate = 0.40 + sex * default_rate_sex_impact
y = customer_quality < np.percentile(customer_quality, default_rate * 100)
if include_brandx_purchase_score:
brandx_purchase_score = sex + 0.8 * np.random.randn(N)
X["Brand X purchase score"] = brandx_purchase_score
if include_sex:
X["Sex"] = sex + 0
# build model
import xgboost
model = xgboost.XGBClassifier(max_depth=1, n_estimators=500, subsample=0.5, learning_rate=0.05)
model.fit(X, y)
# build explanation
import shap
explainer = shap.TreeExplainer(model, shap.sample(X, 100))
shap_values = explainer.shap_values(X)
return shap_values, sex, X, explainer.expected_value
N = 10000
shap_values_A, sex_A, X_A, ev_A = run_credit_experiment(N)
model_outputs_A = ev_A + shap_values_A.sum(1)
glabel = "Demographic parity difference\nof model output for women vs. men"
xmin = -0.8
xmax = 0.8
shap.group_difference_plot(shap_values_A.sum(1), sex_A, xmin=xmin, xmax=xmax, xlabel=glabel)
slabel = "Demographic parity difference\nof SHAP values for women vs. men"
shap.group_difference_plot(shap_values_A, sex_A, X_A.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
shap_values_B, sex_B, X_B, ev_B = run_credit_experiment(N, reported_income_sex_impact=30000)
model_outputs_B = ev_B + shap_values_B.sum(1)
shap.group_difference_plot(shap_values_B.sum(1), sex_B, xmin=xmin, xmax=xmax, xlabel=glabel)
shap.group_difference_plot(shap_values_B, sex_B, X_B.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
shap_values_C, sex_C, X_C, ev_C = run_credit_experiment(N, late_payments_sex_impact=2)
model_outputs_C = ev_C + shap_values_C.sum(1)
shap.group_difference_plot(shap_values_C.sum(1), sex_C, xmin=xmin, xmax=xmax, xlabel=glabel)
shap.group_difference_plot(shap_values_C, sex_C, X_C.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
shap_values_D, sex_D, X_D, ev_D = run_credit_experiment(N, default_rate_sex_impact=-0.1) # 20% change
model_outputs_D = ev_D + shap_values_D.sum(1)
shap.group_difference_plot(shap_values_D.sum(1), sex_D, xmin=xmin, xmax=xmax, xlabel=glabel)
shap.group_difference_plot(shap_values_D, sex_D, X_D.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
shap_values_E, sex_E, X_E, ev_E = run_credit_experiment(
N, default_rate_sex_impact=-0.1, include_brandx_purchase_score=True
)
model_outputs_E = ev_E + shap_values_E.sum(1)
shap.group_difference_plot(shap_values_E.sum(1), sex_E, xmin=xmin, xmax=xmax, xlabel=glabel)
shap.group_difference_plot(shap_values_E, sex_E, X_E.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
shap_values_F, sex_F, X_F, ev_F = run_credit_experiment(
N, default_rate_sex_impact=-0.1, include_brandx_purchase_score=True,
job_history_sex_impact=2
)
model_outputs_F = ev_F + shap_values_F.sum(1)
shap.group_difference_plot(shap_values_F.sum(1), sex_F, xmin=xmin, xmax=xmax, xlabel=glabel)
shap.group_difference_plot(shap_values_F, sex_F, X_F.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
shap_values_G, sex_G, X_G, ev_G = run_credit_experiment(
N, default_rate_sex_impact=-0.1, include_brandx_purchase_score=True,
job_history_sex_impact=2, include_sex=True
)
model_outputs_G = ev_G + shap_values_G.sum(1)
shap.group_difference_plot(shap_values_G.sum(1), sex_G, xmin=xmin, xmax=xmax, xlabel=glabel)
shap.group_difference_plot(shap_values_G, sex_G, X_G.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scenario A
Step2: Now we can use SHAP to decompose the model output among each of the model's input features and then compute the demographic parity difference on the component attributed to each feature. As noted above, because the SHAP values sum up to the model's output, the sum of the demographic parity differences of the SHAP values for each feature sum up to the demographic parity difference of the whole model. This means that the sum of the bars below equals the bar above (the demographic parity difference of our baseline scenario model).
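(A sketch of that additivity check using the arrays computed above; it simply verifies that the per-feature group differences sum to the overall group difference in model output.)
group = sex_A   # boolean mask for one demographic group
per_feature_diff = shap_values_A[group].mean(0) - shap_values_A[~group].mean(0)
overall_diff = model_outputs_A[group].mean() - model_outputs_A[~group].mean()
print(per_feature_diff.sum(), overall_diff)   # the two numbers should agree up to floating-point error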
Step3: Scenario B
Step4: If this were a real application, this demographic parity difference might trigger an in-depth analysis of the model to determine what might be causing the disparity. While this investigation is challenging given just a single demographic parity difference value, it is much easier given the per-feature demographic parity decomposition based on SHAP. Using SHAP we can see there is a significant bias coming from the reported income feature that is increasing the risk of women disproportionately to men. This allows us to quickly identify which feature has the reporting bias that is causing our model to violate demographic parity
Step5: It is important to note at this point how our assumptions can impact the interpretation of SHAP fairness explanations. In our simulated scenario we know that women actually have identical income profiles to men, so when we see that the reported income feature is biased lower for women than for men, we know that has come from a bias in the measurement errors in the reported income feature. The best way to address this problem would be figure out how to debias the measurement errors in the reported income feature. Doing so would create a more accurate model that also has less demographic disparity. However, if we instead assume that women actually are making less money than men (and it is not just a reporting error), then we can't just "fix" the reported income feature. Instead we have to carefully consider how best to account for real differences in default risk between two protected groups. It is impossible to determine which of these two situations is happening using just the SHAP fairness explanation, since in both cases the reported income feature will be responsible for an observed disparity between the predicted risks of men and women.
Step6: And as we would hope, the SHAP explanations correctly highlight the late payments feature as the cause of the model's demographic parity difference, as well as the direction of the effect
Step7: Scenario D
Step8: We also see no evidence of any demographic parity differences in the SHAP explanations
Step9: Scenario E
Step10: When we explain the demographic parity difference with SHAP we see that, as expected, the brand X purchase score feature drives the difference. In this case it is not because we have a bias in how we measure the brand X purchase score feature, but rather because we have a bias in our training label that gets captured by any input features that are sufficiently correlated with sex
Step11: Scenario F
Step12: However, if we look at the SHAP explanation of the demographic parity difference we clearly see both (counteracting) biases
Step13: Identifying multiple potentially offsetting bias effects can be important since while on average there is no disparate impact on men or women, there is disparate impact on individuals. For example, in this simulation women who have not shopped at brand X will receive a lower credit score than they should have because of the bias present in job history reporting.
Step14: The SHAP explanation for scenario G shows that all of the demographic parity difference that used to be attached to the brand X purchase score feature in scenario F has now moved to the sex feature, while none of the demographic parity difference attached to the job history feature in scenario F has moved. This can be interpreted to mean that all of the disparity attributed to brand X purchase score in scenario F was due to label bias, while all of the disparity attributed to job history in scenario F was due to feature bias.
|
4,426
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def gen_complex_chirp(fs=44100, pad_frac=.01, time_s=1):
f0= -fs / (2. * (1 + pad_frac))
f1= fs / (2. *(1 + pad_frac))
t1 = time_s
beta = (f1 - f0) / float(t1)
t = np.arange(0, t1, t1/ float(fs))
return np.exp(2j * np.pi * (.5 * beta * (t ** 2) + f0 * t))
def gen_noisy_chirp():
#Add noise to make it a little less synthetic looking
chirp = gen_complex_chirp()
return chirp + .5 * (np.random.randn(*chirp.shape) + 1j * np.random.randn(*chirp.shape))
def plot_specgram(d, xlim=20000):
_ = plt.specgram(d, 512, sides="twosided", cmap="gray")
ax = plt.gca()
#Remove whitespace on plot
ax.set_xlim(0, xlim)
_ = ax.set_ylabel("Normalized complex frequency")
_ = ax.set_xlabel("Time (samples)")
chirp = gen_noisy_chirp()
plot_specgram(chirp)
from scipy.signal import firwin, freqz, hilbert
def plot_filter(f, cx=False):
w, h = freqz(f)
plt.plot(w / max(w), np.abs(h), color="steelblue")
if cx:
plt.plot(-w / max(w), np.abs(h), color="darkred")
ax = plt.gca()
ax.set_xlabel("Normalized frequency")
ax.set_ylabel("Gain")
#Make a filter with an arbitrary number of taps (50) multiplied by decimation factor (2)
#Cutoff is set to avoid aliasing, this will be further discussed
filt = firwin(2 * 50, 1 / (2 * 1.1))
plot_filter(filt)
plt.figure()
filtered_chirp = np.convolve(chirp, filt)
plot_specgram(filtered_chirp)
plot_filter(filt, cx=True)
plt.figure()
plot_specgram(filtered_chirp)
a = np.arange(12)
print("Full data:")
print(a)
print()
print("Decimated by 2:")
print(a[::2])
print("Alternate decimation by 2:")
print(a[1::2])
print()
print("Decimate by 3, along with alternate decimations:")
print(a[::3])
print(a[1::3])
print(a[2::3])
plot_specgram(filtered_chirp[::2], xlim=10000)
plot_specgram(filtered_chirp[1::2], xlim=10000)
plot_specgram(filtered_chirp[::4], xlim=5000)
from IPython.display import Image
Image(url='http://zone.ni.com/images/reference/en-XX/help/370051V-01/aliasing_effects.gif')
chirp = gen_noisy_chirp()
plot_specgram(chirp)
def polyphase_core(x, m, f):
#x = input data
#m = decimation rate
#f = filter
#Hack job - append zeros to match decimation rate
if x.shape[0] % m != 0:
x = np.append(x, np.zeros((m - x.shape[0] % m,)))
if f.shape[0] % m != 0:
f = np.append(f, np.zeros((m - f.shape[0] % m,)))
polyphase = p = np.zeros((m, (x.shape[0] + f.shape[0]) // m), dtype=x.dtype)  # integer division: np.zeros needs an int shape in Python 3
p[0, :-1] = np.convolve(x[::m], f[::m])
#Invert the x values when applying filters
for i in range(1, m):
p[i, 1:] = np.convolve(x[m - i::m], f[i::m])
return p
def polyphase_single_filter(x, m, f):
return np.sum(polyphase_core(x, m, f), axis=0)
filt = firwin(2 * 50, 1 / (2 * 1.1))
filtered_chirp = polyphase_single_filter(chirp, 2, filt)
plot_specgram(filtered_chirp, xlim=10000)
filt = firwin(10 * 50, 1 / (2 * 1.1))
%timeit -n 10 _ = np.convolve(chirp, filt)[::10]
%timeit -n 10 _ = polyphase_single_filter(chirp, 10, filt)
def polyphase_analysis(x, m, f):
return np.fft.ifft(polyphase_core(x, m, f), n=m, axis=0)
filt = firwin(4 * 50, 1 / (4 * 1.1))
an = polyphase_analysis(chirp, 4, filt)
plot_specgram(an[0, :], xlim=5000)
plot_specgram(an[1, :], xlim=5000)
plot_specgram(an[2, :], xlim=5000)
plot_specgram(an[3, :], xlim=5000)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filtering
Step2: We can see that the chirp has been filtered. Now you may be saying "I thought this was a low pass filter, but it took the center of the signal?!? WAT" - hold on! The designed filter is a lowpass filter, but in the complex domain it actually takes the middle of the signal, since the real part is only shown in the upper half of this graph. The full low-pass filter we designed is actually a bandpass filter in the complex domain - this is crucial for the operation of the polyphase filterbank, and will be utilized extensively.
Step3: Decimation
Step4: Now, what happens when we take a decimation by 2 of the chirp data? Sampling theory tells us that no frequency greater than $\frac{f_s}{2}$ can be represented by a signal sampled at frequency rate $f_s$. This means that decimating by 2 will affect any frequency greater than $\frac{f_s}{2\times2} = \frac{f_s}{4}$. Since our original signal went from $-1.0
Step5: We see that both decimations return a similar stream, and the useless data outside our filter has been eliminated (or so it seems). However, we also notice that the plot limit X axis had to be changed, because we have half of the samples as before in each stream, effectively halving the sampling rate. What happens if we decimate by more than 2?
Step6: What has happened to our beautiful chirp? It has been mangled and garbled by excessive decimation. This mangling is known as aliasing, and will be the topic of the next section. Aliasing is one of the most critical issues in DSP, and avoiding aliasing while downsampling, in a computationally efficient way, is the entire reason to use polyphase filtering.
Step7: Thinking about this further, you may realize that all signals have aliasing of some form or another. The key difference between the two plots is that one of them had an aliasing signal that was much, much weaker than the in band signal (the signal in $-0.5
Step8: Here we can see that the polyphase filtered output is identical to the earlier filter, then decimate result. We split the polyphase core computation in order to reuse it with the polyphase filterbank, shown later in this notebook. Though the code is a little convoluted, it should provide more computational savings as the decimation rate is increased. See below for an example with $\downarrow M = 10$.
Step9: An optimized polyphase filter should beat the optimized convolve-decimate form every time. Even with unoptimized polyphase filter code, we see decent improvement, though this isn't a true benchmark. Depending on the size of the filter, choosing the right convolution code can also be a huge optimization! See this link for more details.
|
4,427
|
<ASSISTANT_TASK:>
Python Code:
metaphors_url = 'http://metacorps.io/static/viomet-snapshot-project-df.csv'
project_df = get_project_data_frame(metaphors_url)
print(project_df.columns)
from viomet_9_10_17 import fit_all_networks
import pandas as pd
date_range = pd.date_range('2016-9-1', '2016-11-30', freq='D')
# uncomment below to run model fits; takes tens of seconds at least
fit_networks = fit_all_networks(project_df, date_range=date_range, iatv_corpus_name='Viomet Sep-Nov 2016')
print(fit_networks)
# set by_network=False to get the fit for all networks taken together
fit_sum = fit_all_networks(project_df, by_network=False, date_range=date_range, iatv_corpus_name='Viomet Sep-Nov 2016')
print(fit_sum)
from viomet_9_10_17 import by_network_frequency_figure
partition_infos = {network: fit_networks[network][0] for network in ['MSNBCW', 'CNNW', 'FOXNEWSW']}
by_network_frequency_figure(
project_df, date_range=date_range,
iatv_corpus_name='Viomet Sep-Nov 2016',
partition_infos=partition_infos,
save_path='Figures/model_fits.pdf'
)
from IPython.display import IFrame
IFrame("Figures/model_fits.pdf", width=600, height=450)
soa_dict = subject_object_analysis(
project_df, plot=True, save_dir=SAVE_DIR, font_scale=1.5
)
# check that the figures were saved to disk
os.listdir(SAVE_DIR)
from viomet_9_10_17 import by_facet_word
excited, ground = by_facet_word(
project_df, partition_infos, facet_words=['attack', 'beat', 'hit']
)
from IPython.display import display
print('Excited:')
display(excited)
print('\nGround:')
display(ground)
print('\nExcited - Ground:')
display(excited - ground)
IFrame('https://books.google.com/ngrams/graph?content=attack%2Chit%2Cbeat&year_start=2000&year_end=2016&corpus=17&smoothing=3&share=&direct_url=t1%3B%2Cattack%3B%2Cc0%3B.t1%3B%2Chit%3B%2Cc0%3B.t1%3B%2Cbeat%3B%2Cc0',
width=650, height=400)
from project.common import get_project_data_frame
metaphors_url = 'http://metacorps.io/static/data/viomet-2012-snapshot-project-df.csv'
project_df = get_project_data_frame(metaphors_url)
print(project_df.columns)
from viomet_9_10_17 import fit_all_networks
import pandas as pd
IATV_CORPUS_NAME = 'Viomet Sep-Nov 2012'
date_range = pd.date_range('2012-9-1', '2012-11-30', freq='D')
# uncomment below to run model fits; takes tens of seconds at least
fit_networks = fit_all_networks(project_df, date_range=date_range,
iatv_corpus_name=IATV_CORPUS_NAME)
from viomet_9_10_17 import by_network_frequency_figure
partition_infos = {network: fit_networks[network][0]
for network in ['MSNBCW', 'CNNW', 'FOXNEWSW']}
by_network_frequency_figure(
project_df, date_range=date_range,
iatv_corpus_name=IATV_CORPUS_NAME,
partition_infos=partition_infos,
save_path='Figures/model_fits_2012.pdf'
)
from IPython.display import IFrame
IFrame("Figures/model_fits_2012.pdf", width=600, height=450)
soa_dict = subject_object_analysis(
project_df, subj_obj=[
('Romney', 'Obama'),
('Obama', 'Romney'),
('Romney', None),
('Obama', None),
(None, 'Romney'),
(None, 'Obama')
],
date_range=date_range,
plot=True, save_dir=SAVE_DIR, font_scale=1.5
)
from viomet_9_10_17 import by_facet_word
excited, ground = by_facet_word(
project_df, partition_infos, facet_words=['attack', 'beat', 'hit']
)
from IPython.display import display
print('Excited:')
display(excited)
print('\nGround:')
display(ground)
print('\nExcited - Ground:')
display(excited - ground)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fitting excited state models to each network and all networks
Step2: Visualize model fits overlaid on timeseries data
Step3: Trump, Clinton as Subject and Object, and vice-versa
Step4: Violent phrase activating source domain
Step5: September to November 2012 and $Q$
Step6: From Google Ngram Viewer, we get that the frequency of attack, hit, and beat are .0067, .0062, and .0034 for their American English corpus in 2008. We can use this to compare frequencies of metaphor with attack, hit, and beat. We could also use the total instances identified through search in our corpus.
|
4,428
|
<ASSISTANT_TASK:>
Python Code:
data_in_shape = (3, 5, 2, 2)
L = ZeroPadding3D(padding=(1, 1, 1), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(260)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding3D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (3, 5, 2, 2)
L = ZeroPadding3D(padding=(1, 1, 1), data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(261)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding3D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (3, 2, 1, 4)
L = ZeroPadding3D(padding=(3, 2, 2), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(262)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding3D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (3, 2, 1, 4)
L = ZeroPadding3D(padding=(3, 2, 2), data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(263)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding3D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (3, 2, 1, 4)
L = ZeroPadding3D(padding=((1,2),(3,4),(3,1)), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(264)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding3D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (3, 2, 1, 4)
L = ZeroPadding3D(padding=2, data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(265)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding3D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
import os
filename = '../../../test/data/layers/convolutional/ZeroPadding3D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: [convolutional.ZeroPadding3D.1] padding (1,1,1) on 3x5x2x2 input, data_format='channels_first'
Step2: [convolutional.ZeroPadding3D.2] padding (3,2,2) on 3x2x1x4 input, data_format='channels_last'
Step3: [convolutional.ZeroPadding3D.3] padding (3,2,2) on 3x2x1x4 input, data_format='channels_first'
Step4: [convolutional.ZeroPadding3D.4] padding ((1,2),(3,4),(3,1)) on 3x2x1x4 input, data_format='channels_last'
Step5: [convolutional.ZeroPadding3D.5] padding 2 on 3x2x1x4 input, data_format='channels_last'
Step6: export for Keras.js tests
|
4,429
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# get titanic training file as a DataFrame
titanic = pd.read_csv("../datasets/titanic_train.csv")
titanic.shape
# preview the data
titanic.head()
titanic.describe()
titanic.info()
ports = pd.get_dummies(titanic.Embarked , prefix='Embarked')
ports.head()
titanic = titanic.join(ports)
titanic.drop(['Embarked'], axis=1, inplace=True) # then drop the original column
titanic.Sex = titanic.Sex.map({'male':0, 'female':1})
y = titanic.Survived.copy() # copy “y” column values out
X = titanic.drop(['Survived'], axis=1) # then, drop y column
X.drop(['Cabin'], axis=1, inplace=True)
X.drop(['Ticket'], axis=1, inplace=True)
X.drop(['Name'], axis=1, inplace=True)
X.drop(['PassengerId'], axis=1, inplace=True)
X.info()
X.isnull().values.any()
#X[pd.isnull(X).any(axis=1)]
X.Age.fillna(X.Age.mean(), inplace=True) # replace NaN with average age
X.isnull().values.any()
from sklearn.model_selection import train_test_split
# 80 % go into the training test, 20% in the validation test
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=7)
def simple_heuristic(titanicDF):
'''
predict whether or not the passengers survived or perished.
Here's the algorithm, predict the passenger survived:
1) If the passenger is female or
2) if his socioeconomic status is high AND if the passenger is under 18
'''
predictions = [] # a list
for passenger_index, passenger in titanicDF.iterrows():
if passenger['Sex'] == 1:
# female
predictions.append(1) # survived
elif passenger['Age'] < 18 and passenger['Pclass'] == 1:
# male but minor and rich
predictions.append(1) # survived
else:
predictions.append(0) # everyone else perished
return predictions
simplePredictions = simple_heuristic(X_valid)
correct = sum(simplePredictions == y_valid)
print ("Baseline: ", correct/len(y_valid))
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
model.score(X_train, y_train)
model.score(X_valid, y_valid)
model.intercept_ # the fitted intercept
model.coef_ # the fitted coefficients
titanic.corr()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variable Description
Step2: Not all features are numeric
Step3: 2. Process the Data
Step4: Now the feature Embarked (a category) has been transformed into 3 binary features, e.g. Embarked_C = 1 if the passenger embarked in Cherbourg, 0 otherwise.
Step5: Transform the gender feature
Step6: Extract the target variable
Step7: Drop not so important features
Step8: All features are now numeric, ready for regression.
Step9: True, there are missing values in the data (NaN) and a quick look at the data reveals that they are all in the Age feature.
Step10: Now all missing values have been removed.
Step11: 3. Modelling
Step12: Let's see how this simple algorithm will behave on the validation dataset and we will keep that number as our baseline
Step13: Baseline
Step14: 4. Evaluate the model
Step15: Two things
Step16: Which means that the formula is
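(The general form, as a sketch — the fitted numbers come from model.intercept_ and model.coef_ shown above.)
import numpy as np
# P(survived = 1 | x) = 1 / (1 + exp(-(intercept + coef · x)))
probabilities = 1 / (1 + np.exp(-(model.intercept_ + X_valid.values @ model.coef_.T)))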
|
4,430
|
<ASSISTANT_TASK:>
Python Code:
# Import the required packages
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import scipy
import math
import random
import string
random.seed(123)
# Display plots inline
%matplotlib inline
# Define plot's default figure size
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
#read the datasets
train = pd.read_csv("data/intro_to_ann.csv")
print (train.head())
X, y = np.array(train.ix[:,0:2]), np.array(train.ix[:,2])
print(X.shape, y.shape)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.BuGn)
# calculate a random number where: a <= rand < b
def rand(a, b):
return (b-a)*random.random() + a
# Make a matrix
def makeMatrix(I, J, fill=0.0):
return np.zeros([I,J])
# our sigmoid function
def sigmoid(x):
#return math.tanh(x)
return 1/(1+np.exp(-x))
# derivative of our sigmoid function, in terms of the output (i.e. y)
def dsigmoid(y):
return (y * (1- y))
class NN:
def __init__(self, ni, nh, no):
# number of input, hidden, and output nodes
self.ni = ni + 1 # +1 for bias node
self.nh = nh
self.no = no
# activations for nodes
self.ai = [1.0]*self.ni
self.ah = [1.0]*self.nh
self.ao = [1.0]*self.no
# create weights
self.wi = makeMatrix(self.ni, self.nh)
self.wo = makeMatrix(self.nh, self.no)
# set them to random vaules
for i in range(self.ni):
for j in range(self.nh):
self.wi[i][j] = rand(-0.2, 0.2)
for j in range(self.nh):
for k in range(self.no):
self.wo[j][k] = rand(-2.0, 2.0)
# last change in weights for momentum
self.ci = makeMatrix(self.ni, self.nh)
self.co = makeMatrix(self.nh, self.no)
class NN:
def __init__(self, ni, nh, no):
# number of input, hidden, and output nodes
self.ni = ni + 1 # +1 for bias node
self.nh = nh
self.no = no
# activations for nodes
self.ai = [1.0]*self.ni
self.ah = [1.0]*self.nh
self.ao = [1.0]*self.no
# create weights
self.wi = makeMatrix(self.ni, self.nh)
self.wo = makeMatrix(self.nh, self.no)
# set them to random vaules
for i in range(self.ni):
for j in range(self.nh):
self.wi[i][j] = rand(-0.2, 0.2)
for j in range(self.nh):
for k in range(self.no):
self.wo[j][k] = rand(-2.0, 2.0)
# last change in weights for momentum
self.ci = makeMatrix(self.ni, self.nh)
self.co = makeMatrix(self.nh, self.no)
def backPropagate(self, targets, N, M):
if len(targets) != self.no:
print(targets)
raise ValueError('wrong number of target values')
# calculate error terms for output
#output_deltas = [0.0] * self.no
output_deltas = np.zeros(self.no)
for k in range(self.no):
error = targets[k]-self.ao[k]
output_deltas[k] = dsigmoid(self.ao[k]) * error
# calculate error terms for hidden
#hidden_deltas = [0.0] * self.nh
hidden_deltas = np.zeros(self.nh)
for j in range(self.nh):
error = 0.0
for k in range(self.no):
error = error + output_deltas[k]*self.wo[j][k]
hidden_deltas[j] = dsigmoid(self.ah[j]) * error
# update output weights
for j in range(self.nh):
for k in range(self.no):
change = output_deltas[k]*self.ah[j]
self.wo[j][k] = self.wo[j][k] + N*change + M*self.co[j][k]
self.co[j][k] = change
#print N*change, M*self.co[j][k]
# update input weights
for i in range(self.ni):
for j in range(self.nh):
change = hidden_deltas[j]*self.ai[i]
self.wi[i][j] = self.wi[i][j] + N*change + M*self.ci[i][j]
self.ci[i][j] = change
# calculate error
error = 0.0
for k in range(len(targets)):
error = error + 0.5*(targets[k]-self.ao[k])**2
return error
def test(self, patterns):
self.predict = np.empty([len(patterns), self.no])
for i, p in enumerate(patterns):
self.predict[i] = self.activate(p)
#self.predict[i] = self.activate(p[0])
def weights(self):
print('Input weights:')
for i in range(self.ni):
print(self.wi[i])
print('Output weights:')
for j in range(self.nh):
print(self.wo[j])
def activate(self, inputs):
if len(inputs) != self.ni-1:
print(inputs)
raise ValueError('wrong number of inputs')
# input activations
for i in range(self.ni-1):
#self.ai[i] = sigmoid(inputs[i])
self.ai[i] = inputs[i]
# hidden activations
for j in range(self.nh):
sum = 0.0
for i in range(self.ni):
sum = sum + self.ai[i] * self.wi[i][j]
self.ah[j] = sigmoid(sum)
# output activations
for k in range(self.no):
sum = 0.0
for j in range(self.nh):
sum = sum + self.ah[j] * self.wo[j][k]
self.ao[k] = sigmoid(sum)
return self.ao[:]
def train(self, patterns, iterations=1000, N=0.5, M=0.1):
# N: learning rate
# M: momentum factor
patterns = list(patterns)
for i in range(iterations):
error1 = 0.0
#j = 0
for p in patterns:
inputs = p[0]
targets = p[1]
self.activate(inputs)
error1 = error1 + self.backPropagate([targets], N, M)
#j= j+1
#print (j)
#self.weights()
#if i % 5 == 0:
print('error in iteration %d : %-.5f' % (i,error1))
#print('Final training error: %-.5f' % error1)
# Helper function to plot a decision boundary.
# This generates the contour plot to show the decision boundary visually
def plot_decision_boundary(nn_model):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole gid
nn_model.test(np.c_[xx.ravel(), yy.ravel()])
Z = nn_model.predict
Z[Z>=0.5] = 1
Z[Z<0.5] = 0
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], s=40, c=y, cmap=plt.cm.BuGn)
n = NN(2, 4, 1)
print (n.weights())
print ("prediction")
print ("y=1 --- yhat=",n.activate([2.067788, 0.258133]))
print ("y=1 --- yhat=",n.activate([0.993994, 0.258133]))
print ("y=0 --- yhat=",n.activate([-0.690315, 0.749921]))
print ("y=0 --- yhat=",n.activate([1.023582, 0.529003]))
print ("y=1 --- yhat=",n.activate([0.700747, -0.496724]))
%timeit -n 1 -r 1 n.train(zip(X,y), iterations=1000)
plot_decision_boundary(n)
plt.title("Our next model with 4 hidden units")
print (n.weights())
print ("prediction")
print ("y=1 --- yhat=",n.activate([2.067788, 0.258133]))
print ("y=1 --- yhat=",n.activate([0.993994, 0.258133]))
print ("y=0 --- yhat=",n.activate([-0.690315, 0.749921]))
print ("y=0 --- yhat=",n.activate([1.023582, 0.529003]))
print ("y=1 --- yhat=",n.activate([0.700747, -0.496724]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start building our NN's building blocks.
Step2: Our NN class
Step3: Let's visualize and observe the resultset
Step4: Create Neural networks with 1 hidden layer.
Step5: Data Set
Step6: Train the Neural Networks = estimate the weights w while minimizing the error
Step7: Data Set
|
4,431
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Hari Bharadwaj <hari@nmr.mgh.harvard.edu>
# Denis Engemann <denis.engemann@gmail.com>
# Chris Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD-3-Clause
import numpy as np
from matplotlib import pyplot as plt
from mne import create_info, EpochsArray
from mne.baseline import rescale
from mne.time_frequency import (tfr_multitaper, tfr_stockwell, tfr_morlet,
tfr_array_morlet)
from mne.viz import centers_to_edges
print(__doc__)
sfreq = 1000.0
ch_names = ['SIM0001', 'SIM0002']
ch_types = ['grad', 'grad']
info = create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)
n_times = 1024 # Just over 1 second epochs
n_epochs = 40
seed = 42
rng = np.random.RandomState(seed)
noise = rng.randn(n_epochs, len(ch_names), n_times)
# Add a 50 Hz sinusoidal burst to the noise and ramp it.
t = np.arange(n_times, dtype=np.float64) / sfreq
signal = np.sin(np.pi * 2. * 50. * t) # 50 Hz sinusoid signal
signal[np.logical_or(t < 0.45, t > 0.55)] = 0. # Hard windowing
on_time = np.logical_and(t >= 0.45, t <= 0.55)
signal[on_time] *= np.hanning(on_time.sum()) # Ramping
data = noise + signal
reject = dict(grad=4000)
events = np.empty((n_epochs, 3), dtype=int)
first_event_sample = 100
event_id = dict(sin50hz=1)
for k in range(n_epochs):
events[k, :] = first_event_sample + k * n_times, 0, event_id['sin50hz']
epochs = EpochsArray(data=data, info=info, events=events, event_id=event_id,
reject=reject)
epochs.average().plot()
freqs = np.arange(5., 100., 3.)
vmin, vmax = -3., 3. # Define our color limits.
n_cycles = freqs / 2.
time_bandwidth = 2.0 # Least possible frequency-smoothing (1 taper)
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Least smoothing, most variance')
n_cycles = freqs # Increase time-window length to 1 second.
time_bandwidth = 4.0 # Same frequency-smoothing as (1) 3 tapers.
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Less frequency smoothing, more time smoothing')
n_cycles = freqs / 2.
time_bandwidth = 8.0 # Same time-smoothing as (1), 7 tapers.
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Less time smoothing, more frequency smoothing')
fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
fmin, fmax = freqs[[0, -1]]
for width, ax in zip((0.2, .7, 3.0), axs):
power = tfr_stockwell(epochs, fmin=fmin, fmax=fmax, width=width)
power.plot([0], baseline=(0., 0.1), mode='mean', axes=ax, show=False,
colorbar=False)
ax.set_title('Sim: Using S transform, width = {:0.1f}'.format(width))
plt.tight_layout()
fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
all_n_cycles = [1, 3, freqs / 2.]
for n_cycles, ax in zip(all_n_cycles, axs):
power = tfr_morlet(epochs, freqs=freqs,
n_cycles=n_cycles, return_itc=False)
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
axes=ax, show=False, colorbar=False)
n_cycles = 'scaled by freqs' if not isinstance(n_cycles, int) else n_cycles
ax.set_title('Sim: Using Morlet wavelet, n_cycles = %s' % n_cycles)
plt.tight_layout()
n_cycles = freqs / 2.
power = tfr_morlet(epochs, freqs=freqs,
n_cycles=n_cycles, return_itc=False, average=False)
print(type(power))
avgpower = power.average()
avgpower.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Using Morlet wavelets and EpochsTFR', show=False)
power = tfr_array_morlet(epochs.get_data(), sfreq=epochs.info['sfreq'],
freqs=freqs, n_cycles=n_cycles,
output='avg_power')
# Baseline the output
rescale(power, epochs.times, (0., 0.1), mode='mean', copy=False)
fig, ax = plt.subplots()
x, y = centers_to_edges(epochs.times * 1000, freqs)
mesh = ax.pcolormesh(x, y, power[0], cmap='RdBu_r', vmin=vmin, vmax=vmax)
ax.set_title('TFR calculated on a numpy array')
ax.set(ylim=freqs[[0, -1]], xlabel='Time (ms)')
fig.colorbar(mesh)
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulate data
Step2: Calculate a time-frequency representation (TFR)
Step3: (1) Least smoothing (most variance/background fluctuations).
Step4: (2) Less frequency smoothing, more time smoothing.
Step5: (3) Less time smoothing, more frequency smoothing.
Step6: Stockwell (S) transform
Step7: Morlet Wavelets
Step8: Calculating a TFR without averaging over epochs
Step9: Operating on arrays
|
4,432
|
<ASSISTANT_TASK:>
Python Code:
# download image from github: -q quiet mode; -N overwrite on the next download
!wget -q -N https://github.com/robertoalotufo/ia898/raw/830a0f5f6e6a1ddd459127631bf9c0c750bf1f58/data/cameraman.tif
!wget -q -N https://github.com/robertoalotufo/ia898/raw/830a0f5f6e6a1ddd459127631bf9c0c750bf1f58/data/keyb.tif
!wget -q -N https://github.com/robertoalotufo/ia898/raw/830a0f5f6e6a1ddd459127631bf9c0c750bf1f58/figures/versicolor.jpg
!wget -q -N https://github.com/robertoalotufo/rnap/raw/master/figures/Iris_versicolor.jpg
from PIL import Image
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
a = np.array(Image.open('cameraman.tif'))[100:100+15,50:50+20]
print("a = \n", a)
print('a.max()=', a.max())
print('a.mean()=', a.mean())
print('a.sum()=', a.sum())
print('a =\n',a)
print('a.shape = ', a.shape)
print('a.max(axis=0) = ', a.max(0))
print('a.max(axis=0).shape = ', a.max(0).shape)
print('a.max(axis=1) = ', a.max(1))
print('a.max(axis=1).shape = ', a.max(1).shape)
print('a =\n',a)
print('a.mean(axis=0) = ', a.mean(0))
print('a.min(axis=0) = ', a.min(0))
print('a.sum(axis=0) = ', a.sum(0))
f = mpimg.imread('keyb.tif')
plt.figure(0)
plt.imshow(f,cmap='gray'); plt.title('f: shape(%d,%d)' % (f.shape[0], f.shape[1]))
hmean = f.mean(axis=0)
plt.figure(1)
plt.plot(hmean,color='r')
plt.ylabel('intensity')
plt.xlabel('column')
plt.title('f.mean(0), mean value of the columns');
f = mpimg.imread('keyb.tif')
plt.figure(0)
plt.imshow(f,cmap='gray'); plt.title('f: shape(%d,%d)' % (f.shape[0], f.shape[1]))
hmin = f.min(axis=0)
plt.figure(1)
plt.plot(hmin,color='r')
plt.ylabel('intensity')
plt.xlabel('column')
plt.title('f.min(0), minimum value of each column');
from PIL import Image
img_pil = Image.open('Iris_versicolor.jpg')
img_pil
img = np.array(img_pil)
img.shape, img.min(), img.max()
img_c = img.transpose(2,0,1)
img_c.shape
# Show the R, G, B channels as individual monochrome images
# Compute the mean value of each of the RGB channels
# Compute the mean image of the 3 channels -> the result is a 2-dimensional grayscale image (165,220)
#
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introdução ao NumPy - Redução de eixo
Step2: A título de curiosidade, em processamento paralelo, fazer este tipo de operação, que acumula um
Step3: Note que a matriz possui 2 dimensões denominados eixos 0 e 1
Step4: Redução no eixo com aplicações em imagens
Step5: Veja o exemplo similar, mas agora calculando os valores mínimos das colunas.
Step6: Veja agora uma imagem colorida com 3 dimensões
Step7: A imagem possui 3 dimensões
|
4,433
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import matplotlib.pyplot as plt
import numpy as np
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
zip_dir = tf.keras.utils.get_file('cats_and_dogs_filterted.zip', origin=_URL, extract=True)
zip_dir_base = os.path.dirname(zip_dir)
!find $zip_dir_base -type d -print
base_dir = os.path.join(os.path.dirname(zip_dir), 'cats_and_dogs_filtered')
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
BATCH_SIZE = 100 # Number of training examples to process before updating our models variables
IMG_SHAPE = 150 # Our training data consists of images with width of 150 pixels and height of 150 pixels
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
train_data_gen = train_image_generator.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=BATCH_SIZE,
directory=validation_dir,
shuffle=False,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
sample_training_images, _ = next(train_data_gen)
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip(images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5]) # Plot images 0-4
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(2)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
EPOCHS = 100
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
epochs=EPOCHS,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(BATCH_SIZE)))
)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.savefig('./foo.png')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dogs vs Cats Image Classification Without Image Augmentation
Step2: Data Loading
Step3: The dataset we have downloaded has the following directory structure.
Step4: We'll now assign variables with the proper file path for the training and validation sets.
Step5: Understanding our data
Step6: Setting Model Parameters
Step7: Data Preparation
Step8: After defining the generators for the training and validation images, the flow_from_directory method will load the images from disk, apply rescaling, and resize them in a single line of code.
Step9: Visualizing Training images
Step10: The next function returns a batch from the dataset. One batch is a tuple of (many images, many labels). For right now, we're discarding the labels because we just want to look at the images.
Step11: Model Creation
Step12: Compile the model
Step13: Model Summary
Step14: Train the model
Step15: Visualizing results of the training
|
4,434
|
<ASSISTANT_TASK:>
Python Code:
from Bio import SeqIO
counter = 0
for seq in SeqIO.parse('../data/proteome.faa', 'fasta'):
counter += 1
counter
%matplotlib inline
import matplotlib.pyplot as plt
sizes = []
for seq in SeqIO.parse('../data/proteome.faa', 'fasta'):
sizes.append(len(seq))
plt.hist(sizes, bins=100)
plt.xlabel('protein size')
plt.ylabel('count');
counter = 0
for seq in SeqIO.parse('../data/ecoli.gbk', 'genbank'):
for feat in seq.features:
if feat.type == 'CDS':
counter += 1
counter
from Bio import Phylo
tree = Phylo.read('../data/tree.nwk', 'newick')
distances = []
for node in tree.get_terminals():
distances.append(tree.distance(tree.root, node))
sum(distances)/float(len(distances))
import networkx as nx
graph = nx.read_gml('../data/yeast.gml')
plt.hist(nx.degree(graph).values(), bins=20)
plt.xlabel('degree')
plt.ylabel('count');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Can you plot the distribution of protein sizes in the data/proteome.faa file?
Step2: Can you count the number of CDS sequences in the data/ecoli.gbk file?
Step3: Can you compute the average root-to-tip distance in the data/tree.nwk file?
Step4: Networkx
|
4,435
|
<ASSISTANT_TASK:>
Python Code:
import absl
import os
import tempfile
import time
import pandas as pd
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
import tfx
from pprint import pprint
from tensorflow_metadata.proto.v0 import schema_pb2, statistics_pb2, anomalies_pb2
from tensorflow_transform.tf_metadata import schema_utils
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import ExampleValidator
from tfx.components import InfraValidator
from tfx.components import Pusher
from tfx.components import ResolverNode
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components import Tuner
from tfx.dsl.components.base import executor_spec
from tfx.components.common_nodes.importer_node import ImporterNode
from tfx.components.trainer import executor as trainer_executor
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import example_gen_pb2
from tfx.proto import infra_validator_pb2
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.proto.evaluator_pb2 import SingleSlicingSpec
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import HyperParameters
from tfx.types.standard_artifacts import ModelBlessing
from tfx.types.standard_artifacts import InfraBlessing
print("Tensorflow Version:", tf.__version__)
print("TFX Version:", tfx.__version__)
print("TFDV Version:", tfdv.__version__)
print("TFMA Version:", tfma.VERSION_STRING)
absl.logging.set_verbosity(absl.logging.INFO)
os.environ['PATH'] += os.pathsep + '/home/jupyter/.local/bin'
ARTIFACT_STORE = os.path.join(os.sep, 'home', 'jupyter', 'artifact-store')
SERVING_MODEL_DIR=os.path.join(os.sep, 'home', 'jupyter', 'serving_model')
DATA_ROOT = './data'
DATA_ROOT = f'{ARTIFACT_STORE}/data'
!mkdir -p $DATA_ROOT
data = pd.read_csv('./data/titles_sample.csv')
data.head()
LABEL_MAPPING = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
data['source'] = data['source'].apply(lambda label: LABEL_MAPPING[label])
data.head()
data.to_csv(f'{DATA_ROOT}/dataset.csv', index=None)
!head $DATA_ROOT/*.csv
PIPELINE_NAME = 'tfx-title-classifier'
PIPELINE_ROOT = os.path.join(ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S"))
os.makedirs(PIPELINE_ROOT, exist_ok=True)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=None)
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(splits=[
example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=4),
example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1)
]))
example_gen = tfx.components.CsvExampleGen(
input_base=DATA_ROOT,
output_config=output_config)
context.run(example_gen)
examples_uri = example_gen.outputs['examples'].get()[0].uri
tfrecord_filenames = [
os.path.join(examples_uri, 'train', name)
for name in os.listdir(os.path.join(examples_uri, 'train'))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField('bytes_list'):
value = feature.bytes_list.value
if feature.HasField('float_list'):
value = feature.float_list.value
if feature.HasField('int64_list'):
value = feature.int64_list.value
print('{}: {}'.format(name, value))
print('******')
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
context.show(statistics_gen.outputs['statistics'])
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
schema_proto_path = '{}/{}'.format(schema_gen.outputs['schema'].get()[0].uri, 'schema.pbtxt')
schema = tfdv.load_schema_text(schema_proto_path)
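# (Sketch) At this point the loaded schema could be curated before it is re-exported below.
# For example, one might declare the integer label 'source' as categorical with a fixed range;
# the exact call below is illustrative and not part of the original lab code:
# tfdv.set_domain(schema, 'source',
#                 schema_pb2.IntDomain(name='source', min=0, max=2, is_categorical=True))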
schema_dir = os.path.join(ARTIFACT_STORE, 'schema')
tf.io.gfile.makedirs(schema_dir)
schema_file = os.path.join(schema_dir, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
schema_importer = ImporterNode(
instance_name='Schema_Importer',
source_uri=schema_dir,
artifact_type=tfx.types.standard_artifacts.Schema,
reimport=False)
context.run(schema_importer)
context.show(schema_importer.outputs['result'])
example_validator = ExampleValidator(
instance_name="Data_Validation",
statistics=statistics_gen.outputs['statistics'],
schema=schema_importer.outputs['result'])
context.run(example_validator)
train_uri = example_validator.outputs['anomalies'].get()[0].uri
train_anomalies_filename = os.path.join(train_uri, "train/anomalies.pbtxt")
!cat $train_anomalies_filename
context.show(example_validator.outputs['output'])
%%writefile config.py
FEATURE_KEY = 'title'
LABEL_KEY = 'source'
N_CLASSES = 3
HUB_URL = "https://tfhub.dev/google/nnlm-en-dim50/2"
HUB_DIM = 50
N_NEURONS = 16
TRAIN_BATCH_SIZE = 5
EVAL_BATCH_SIZE = 5
MODEL_NAME = 'tfx_title_classifier'
def transformed_name(key):
return key + '_xf'
%%writefile preprocessing.py
import tensorflow as tf
from config import (
LABEL_KEY,
N_CLASSES,
FEATURE_KEY,
transformed_name
)
def _fill_in_missing(x):
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
def preprocessing_fn(inputs):
features = _fill_in_missing(inputs[FEATURE_KEY])
labels = _fill_in_missing(inputs[LABEL_KEY])
return {
transformed_name(FEATURE_KEY): features,
transformed_name(LABEL_KEY): labels
}
TRANSFORM_MODULE = 'preprocessing.py'
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_importer.outputs['result'],
module_file=TRANSFORM_MODULE)
context.run(transform)
os.listdir(transform.outputs['transform_graph'].get()[0].uri)
os.listdir(transform.outputs['transformed_examples'].get()[0].uri)
transform_uri = transform.outputs['transformed_examples'].get()[0].uri
tfrecord_filenames = [
os.path.join(transform_uri, 'train', name)
for name in os.listdir(os.path.join(transform_uri, 'train'))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(4):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField('bytes_list'):
value = feature.bytes_list.value
if feature.HasField('float_list'):
value = feature.float_list.value
if feature.HasField('int64_list'):
value = feature.int64_list.value
print('{}: {}'.format(name, value))
print('******')
%%writefile model.py
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.keras.callbacks import TensorBoard
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tfx_bsl.tfxio import dataset_options
from config import (
HUB_URL,
HUB_DIM,
N_NEURONS,
N_CLASSES,
LABEL_KEY,
TRAIN_BATCH_SIZE,
EVAL_BATCH_SIZE,
MODEL_NAME,
transformed_name
)
def _get_serve_tf_examples_fn(model, tf_transform_output):
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
        """Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern, data_accessor, tf_transform_output, batch_size=200):
return data_accessor.tf_dataset_factory(
file_pattern,
dataset_options.TensorFlowDatasetOptions(
batch_size=batch_size,
label_key=transformed_name(LABEL_KEY)),
tf_transform_output.transformed_metadata.schema
)
def _load_hub_module_layer():
hub_module = KerasLayer(
HUB_URL, output_shape=[HUB_DIM],
input_shape=[], dtype=tf.string, trainable=True)
return hub_module
def _build_keras_model():
hub_module = _load_hub_module_layer()
model = Sequential([
hub_module,
Dense(N_NEURONS, activation='relu'),
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
)
return model
def run_fn(fn_args):
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
tf_transform_output, TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
tf_transform_output, EVAL_BATCH_SIZE)
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = _build_keras_model()
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=fn_args.model_run_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
TRAINER_MODULE_FILE = 'model.py'
trainer = Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(trainer_executor.GenericExecutor),
module_file=TRAINER_MODULE_FILE,
transformed_examples=transform.outputs.transformed_examples,
schema=schema_importer.outputs.result,
transform_graph=transform.outputs.transform_graph,
train_args=trainer_pb2.TrainArgs(splits=['train'], num_steps=20),
eval_args=trainer_pb2.EvalArgs(splits=['eval'], num_steps=5))
context.run(trainer)
logs_path = trainer.outputs['model_run'].get()[0].uri
print(logs_path)
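# (Sketch) The training logs under `logs_path` could be inspected with TensorBoard, either via the
# notebook extension or by uploading them to TensorBoard.dev; the commands below are illustrative only:
# %load_ext tensorboard
# %tensorboard --logdir {logs_path}
# !tensorboard dev upload --logdir {logs_path}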
model_resolver = ResolverNode(
instance_name='latest_blessed_model_resolver',
resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=Channel(type=Model),
model_blessing=Channel(type=ModelBlessing))
context.run(model_resolver)
accuracy_threshold = tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.30},
upper_bound={'value': 0.99})
)
metrics_specs = tfma.MetricsSpec(
metrics = [
tfma.MetricConfig(class_name='SparseCategoricalAccuracy',
threshold=accuracy_threshold),
tfma.MetricConfig(class_name='ExampleCount')])
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(label_key='source')
],
metrics_specs=[metrics_specs]
)
eval_config
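# (Sketch) Slice-level metrics could also be requested by adding slicing_specs, e.g. an overall
# slice plus one slice per 'source' value; this variant is illustrative and not used below:
# eval_config_with_slices = tfma.EvalConfig(
#     model_specs=[tfma.ModelSpec(label_key='source')],
#     metrics_specs=[metrics_specs],
#     slicing_specs=[tfma.SlicingSpec(), tfma.SlicingSpec(feature_keys=['source'])])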
model_analyzer = Evaluator(
examples=example_gen.outputs.examples,
model=trainer.outputs.model,
baseline_model=model_resolver.outputs.model,
eval_config=eval_config
)
context.run(model_analyzer, enable_cache=False)
model_blessing_uri = model_analyzer.outputs.blessing.get()[0].uri
!ls -l {model_blessing_uri}
trainer.outputs['model']
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=model_analyzer.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=SERVING_MODEL_DIR)))
context.run(pusher)
pusher.outputs
# Set `PATH` to include a directory containing `saved_model_cli`.
PATH=%env PATH
%env PATH=/opt/conda/envs/tfx/bin:{PATH}
latest_pushed_model = os.path.join(SERVING_MODEL_DIR, max(os.listdir(SERVING_MODEL_DIR)))
!saved_model_cli show --dir {latest_pushed_model} --all
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.
Step3: Configure lab settings
Step4: Preparing the dataset
Step5: Interactive Context
Step6: Ingesting data using ExampleGen
Step7: Examine the ingested data
Step8: Generating statistics using StatisticsGen
Step9: Visualize statistics
Step10: Infering data schema using SchemaGen
Step11: Visualize the inferred schema
Step12: Updating the auto-generated schema
Step13: Modify the schema
Step14: Importing the updated schema using ImporterNode
Step15: Visualize the imported schema
Step16: Validating data with ExampleValidator
Step17: Examine the output of ExampleValidator
Step18: Visualize validation results
Step19: In our case no anomalies were detected in the eval split.
Step20: Configure and run the Transform component.
Step21: Examine the Transform component's outputs
Step22: And the transform.examples artifact
Step24: Train your TensorFlow model with the Trainer component
Step25: Create and run the Trainer component
Step26: Analyzing training runs with TensorBoard
Step27: Upload the logs and start TensorBoard.dev
Step28: Configure evaluation metrics and slices.
Step29: Check the model performance validation status
Step30: Deploying models with Pusher
Step31: Examine the output of Pusher
|
4,436
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for idx, row in reviews.iterrows():
for word in str(row.values).split(' '):
total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
del total_counts
word2idx = {}
for i, word in enumerate(vocab):
word2idx[word] = i
vocab_size = len(vocab)
del vocab
def text_to_vector(text):
word2vec = np.zeros((vocab_size))
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is not None:
word2vec[idx] += 1
return word2vec
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), vocab_size), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
del word_vectors
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, vocab_size]) # Input layer
net = tflearn.fully_connected(net, 50, activation='ReLU') # Hidden layer
net = tflearn.fully_connected(net, 10, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax') # Output layer
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
text = "This movie is so bad. It was awful and the worst"
positive_prob = model.predict([text_to_vector(text.lower())])[0][1]
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Initializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
4,437
|
<ASSISTANT_TASK:>
Python Code:
import os
import mne
from mne.preprocessing import (ICA, create_eog_epochs, create_ecg_epochs,
corrmap)
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60.)
# pick some channels that clearly show heartbeats and blinks
regexp = r'(MEG [12][45][123]1|EEG 00.)'
artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
eog_evoked = create_eog_epochs(raw).average()
eog_evoked.apply_baseline(baseline=(None, -0.2))
eog_evoked.plot_joint()
ecg_evoked = create_ecg_epochs(raw).average()
ecg_evoked.apply_baseline(baseline=(None, -0.2))
ecg_evoked.plot_joint()
filt_raw = raw.copy()
filt_raw.load_data().filter(l_freq=1., h_freq=None)
ica = ICA(n_components=15, random_state=97)
ica.fit(filt_raw)
raw.load_data()
ica.plot_sources(raw)
ica.plot_components()
# blinks
ica.plot_overlay(raw, exclude=[0], picks='eeg')
# heartbeats
ica.plot_overlay(raw, exclude=[1], picks='mag')
ica.plot_properties(raw, picks=[0, 1])
ica.exclude = [0, 1] # indices chosen based on various plots above
# ica.apply() changes the Raw object in-place, so let's make a copy first:
reconst_raw = raw.copy()
ica.apply(reconst_raw)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
reconst_raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
del reconst_raw
ica.exclude = []
# find which ICs match the EOG pattern
eog_indices, eog_scores = ica.find_bads_eog(raw)
ica.exclude = eog_indices
# barplot of ICA component "EOG match" scores
ica.plot_scores(eog_scores)
# plot diagnostics
ica.plot_properties(raw, picks=eog_indices)
# plot ICs applied to raw data, with EOG matches highlighted
ica.plot_sources(raw)
# plot ICs applied to the averaged EOG epochs, with EOG matches highlighted
ica.plot_sources(eog_evoked)
ica.exclude = []
# find which ICs match the ECG pattern
ecg_indices, ecg_scores = ica.find_bads_ecg(raw, method='correlation')
ica.exclude = ecg_indices
# barplot of ICA component "ECG match" scores
ica.plot_scores(ecg_scores)
# plot diagnostics
ica.plot_properties(raw, picks=ecg_indices)
# plot ICs applied to raw data, with ECG matches highlighted
ica.plot_sources(raw)
# plot ICs applied to the averaged ECG epochs, with ECG matches highlighted
ica.plot_sources(ecg_evoked)
# refit the ICA with 30 components this time
new_ica = ICA(n_components=30, random_state=97)
new_ica.fit(filt_raw)
# find which ICs match the ECG pattern
ecg_indices, ecg_scores = new_ica.find_bads_ecg(raw, method='correlation')
new_ica.exclude = ecg_indices
# barplot of ICA component "ECG match" scores
new_ica.plot_scores(ecg_scores)
# plot diagnostics
new_ica.plot_properties(raw, picks=ecg_indices)
# plot ICs applied to raw data, with ECG matches highlighted
new_ica.plot_sources(raw)
# plot ICs applied to the averaged ECG epochs, with ECG matches highlighted
new_ica.plot_sources(ecg_evoked)
# clean up memory before moving on
del raw, filt_raw, ica, new_ica
mapping = {
'Fc5.': 'FC5', 'Fc3.': 'FC3', 'Fc1.': 'FC1', 'Fcz.': 'FCz', 'Fc2.': 'FC2',
'Fc4.': 'FC4', 'Fc6.': 'FC6', 'C5..': 'C5', 'C3..': 'C3', 'C1..': 'C1',
'Cz..': 'Cz', 'C2..': 'C2', 'C4..': 'C4', 'C6..': 'C6', 'Cp5.': 'CP5',
'Cp3.': 'CP3', 'Cp1.': 'CP1', 'Cpz.': 'CPz', 'Cp2.': 'CP2', 'Cp4.': 'CP4',
'Cp6.': 'CP6', 'Fp1.': 'Fp1', 'Fpz.': 'Fpz', 'Fp2.': 'Fp2', 'Af7.': 'AF7',
'Af3.': 'AF3', 'Afz.': 'AFz', 'Af4.': 'AF4', 'Af8.': 'AF8', 'F7..': 'F7',
'F5..': 'F5', 'F3..': 'F3', 'F1..': 'F1', 'Fz..': 'Fz', 'F2..': 'F2',
'F4..': 'F4', 'F6..': 'F6', 'F8..': 'F8', 'Ft7.': 'FT7', 'Ft8.': 'FT8',
'T7..': 'T7', 'T8..': 'T8', 'T9..': 'T9', 'T10.': 'T10', 'Tp7.': 'TP7',
'Tp8.': 'TP8', 'P7..': 'P7', 'P5..': 'P5', 'P3..': 'P3', 'P1..': 'P1',
'Pz..': 'Pz', 'P2..': 'P2', 'P4..': 'P4', 'P6..': 'P6', 'P8..': 'P8',
'Po7.': 'PO7', 'Po3.': 'PO3', 'Poz.': 'POz', 'Po4.': 'PO4', 'Po8.': 'PO8',
'O1..': 'O1', 'Oz..': 'Oz', 'O2..': 'O2', 'Iz..': 'Iz'
}
raws = list()
icas = list()
for subj in range(4):
# EEGBCI subjects are 1-indexed; run 3 is a left/right hand movement task
fname = mne.datasets.eegbci.load_data(subj + 1, runs=[3])[0]
raw = mne.io.read_raw_edf(fname)
# remove trailing `.` from channel names so we can set montage
raw.rename_channels(mapping)
raw.set_montage('standard_1005')
# fit ICA
ica = ICA(n_components=30, random_state=97)
ica.fit(raw)
raws.append(raw)
icas.append(ica)
# use the first subject as template; use Fpz as proxy for EOG
raw = raws[0]
ica = icas[0]
eog_inds, eog_scores = ica.find_bads_eog(raw, ch_name='Fpz')
corrmap(icas, template=(0, eog_inds[0]))
for index, (ica, raw) in enumerate(zip(icas, raws)):
fig = ica.plot_sources(raw)
fig.suptitle('Subject {}'.format(index))
corrmap(icas, template=(0, eog_inds[0]), threshold=0.9)
corrmap(icas, template=(0, eog_inds[0]), threshold=0.9, label='blink',
plot=False)
print([ica.labels_ for ica in icas])
icas[3].plot_components(picks=icas[3].labels_['blink'])
icas[3].exclude = icas[3].labels_['blink']
icas[3].plot_sources(raws[3])
template_eog_component = icas[0].get_components()[:, eog_inds[0]]
corrmap(icas, template=template_eog_component, threshold=0.9)
print(template_eog_component)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div class="alert alert-info"><h4>Note</h4><p>Before applying ICA (or any artifact repair strategy), be sure to observe
Step2: We can get a summary of how the ocular artifact manifests across each channel
Step3: Now we'll do the same for the heartbeat artifacts, using
Step4: Filtering to remove slow drifts
Step5: Fitting and plotting the ICA solution
Step6: Some optional parameters that we could have passed to the
Step7: Here we can pretty clearly see that the first component (ICA000) captures
Step8: <div class="alert alert-info"><h4>Note</h4><p>
Step9: We can also plot some diagnostics of each IC using
Step10: In the remaining sections, we'll look at different ways of choosing which ICs
Step11: Now that the exclusions have been set, we can reconstruct the sensor signals
Step12: Using an EOG channel to select ICA components
Step13: Note that above we used
Step14: The last of these plots is especially useful
Step15: Much better! Now we've captured both ICs that are reflecting the heartbeat
Step16: Selecting ICA components using template matching
Step17: Now let's run
Step18: The first figure shows the template map, while the second figure shows all
Step19: Notice that subject 1 does seem to have an IC that looks like it reflects
Step20: Now we get the message At least 1 IC detected for each subject (which is
Step21: Notice that the first subject has 3 different labels for the IC at index 0
Step22: As a final note, it is possible to extract ICs numerically using the
|
4,438
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib import style
import matplotlib.pyplot as plt
style.use('ggplot')
def diurnal_tide(t, K1amp, K1phase, O1amp, O1phase, randamp):
out = K1amp * np.sin(2 * np.pi * t / 23.9344 - K1phase)
out += O1amp * np.sin(2 * np.pi * t / 25.8194 - O1phase)
out += randamp * np.random.randn(len(t))
return out
def semidiurnal_tide(t, M2amp, M2phase, S2amp, S2phase, randamp):
out = M2amp * np.sin(2 * np.pi * t / 12.42 - M2phase)
out += S2amp * np.sin(2 * np.pi * t / 12.0 - S2phase)
out += randamp * np.random.randn(len(t))
return out
t = np.arange(600)
u = semidiurnal_tide(t, 2, 0, 1, 0, 0.2)
v = semidiurnal_tide(t, 1.2, np.pi / 2, 0.6, np.pi / 2, 0.2)
fig, ax = plt.subplots(figsize=(9, 3.75))
t = t/24
kw = dict(alpha=0.5, linewidth=2)
ax.plot(t, u, label='U', **kw)
ax.plot(t, v, label='V', **kw)
ax.legend(loc='lower right')
ax.set_xlabel('Days')
ax.set_ylabel('m s$^{-1}$')
ax.set_title('Typical tidal velocity record')
fig.savefig("fake_semi-diurnal_tides.svg", bbox_inches='tight')
t = np.arange(600)
u = diurnal_tide(t, 2, 0, 1, 0, 0.2)
v = diurnal_tide(t, 1.2, np.pi / 2, 0.6, np.pi / 2, 0.2)
fig, ax = plt.subplots(figsize=(9, 3.75))
t = t/24
kw = dict(alpha=0.5, linewidth=2)
ax.plot(t, u, label='U', **kw)
ax.plot(t, v, label='V', **kw)
ax.legend(loc='lower right')
ax.set_xlabel('Days')
ax.set_ylabel('m s$^{-1}$')
ax.set_title('Typical tidal velocity record')
fig.savefig("fake_diurnal_tides.svg", bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Semi-diurnal
Step2: Diurnal
|
4,439
|
<ASSISTANT_TASK:>
Python Code:
reactions = [
# (coeff, r_stoich, net_stoich)
('k1', {'A': 1}, {'B': 1, 'A': -1}),
('k2', {'B': 1, 'C': 1}, {'A': 1, 'B': -1}),
('k3', {'B': 2}, {'B': -1, 'C': 1})
]
names = 'A B C'.split()
%load_ext scipy2017codegen.exercise
%exercise exercise_symbolic.py
sym.init_printing()
ydot, y, k = mk_exprs_symbs(reactions, names)
ydot
t = sym.symbols('t') # not used in this case.
f = sym.lambdify((y, t) + k, ydot)
import numpy as np
from scipy.integrate import odeint
tout = np.logspace(-6, 6)
k_vals = (0.04, 1e4, 3e7) # from the literature
y0 = [1, 0, 0]
yout, info = odeint(f, y0, tout, k_vals, full_output=True)
import matplotlib.pyplot as plt
%matplotlib inline
plt.loglog(tout, yout)
plt.legend(names)
print("The Jacobian was evaluated %d times." % info['nje'][-1])
%exercise exercise_lambdify_jac.py
yout, info = odeint(f, y0, tout, k_vals, full_output=True, Dfun=J_cb)
plt.loglog(tout, yout)
plt.legend(names)
print("The Jacobian was evaluated %d times." % info['nje'][-1])
%timeit odeint(f, y0, tout, k_vals)
%timeit odeint(f, y0, tout, k_vals, Dfun=J_cb)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Use either the %exercise or %load magic to get the exercise / solution respectively
Step3: To complete the above exercise you may want to remember what the law of mass action looks like from the previous notebook (it is restated briefly after this list)
Step4: $\dot{\mathbf{y}}$ now represent our ODE system, where $\mathbf{y}$ is our state vector (concentrations). We will need a callback to evaluate $\dot{\mathbf{y}}$ when we integrate this ODE system numerically (using scipy.integrate.odeint). As we have seen SymPy can provide us with this callback
Step5: If we look closer at the info-dictionary we will see that odeint (or rather LSODA which is the unerlying package) switched method from an explicit Adams method to an implicit Backward Differentiation Formula (BDF). It is common for chemical kinetics problems that the problem becomes stiff.
Step6: We see that the solver needed to evaluate the Jacobian fewer times (due to it being essentially exact this time around). For larger systems the impact of an analytic Jacobian is often even greater (being the difference between a failed and successful integration).
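As a reminder for Step 3: the law of mass action gives each reaction a rate r_j = k_j * prod_i y_i^(s_ij), where s_ij is the reactant stoichiometry of species i in reaction j (the r_stoich dictionaries above), and every species then evolves as dy_i/dt = sum_j n_ij * r_j, where n_ij is the net stoichiometry (the net_stoich dictionaries); this is the system that the helper loaded from exercise_symbolic.py assembles into ydot.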
|
4,440
|
<ASSISTANT_TASK:>
Python Code:
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# these data already have an EEG average reference
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.pick_types(meg=False, eeg=True, eog=True)
print(raw.info)
raw.set_channel_types(mapping={'EOG 061': 'eeg'})
print(raw.info)
raw.rename_channels(mapping={'EOG 061': 'EOG'})
raw.set_channel_types(mapping={'EOG': 'eog'})
print(raw.info['chs'][0]['loc'])
raw.plot_sensors()
raw.plot_sensors('3d') # in 3D
montage = mne.channels.read_montage('standard_1020')
print(montage)
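# (Sketch, left commented out) A montage would normally be attached with raw.set_montage(montage);
# it is not applied here because this sample recording uses generic channel names ('EEG 001', ...)
# that do not match the 10-20 labels contained in this standard montage.
# raw.set_montage(montage)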
raw_no_ref, _ = mne.set_eeg_reference(raw, [])
reject = dict(eeg=180e-6, eog=150e-6)
event_id, tmin, tmax = {'left/auditory': 1}, -0.2, 0.5
events = mne.read_events(event_fname)
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
reject=reject)
evoked_no_ref = mne.Epochs(raw_no_ref, **epochs_params).average()
del raw_no_ref # save memory
title = 'EEG Original reference'
evoked_no_ref.plot(titles=dict(eeg=title), time_unit='s')
evoked_no_ref.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')
raw.del_proj()
raw_car, _ = mne.set_eeg_reference(raw, 'average', projection=True)
evoked_car = mne.Epochs(raw_car, **epochs_params).average()
del raw_car # save memory
title = 'EEG Average reference'
evoked_car.plot(titles=dict(eeg=title), time_unit='s')
evoked_car.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')
raw_custom, _ = mne.set_eeg_reference(raw, ['EEG 001', 'EEG 002'])
evoked_custom = mne.Epochs(raw_custom, **epochs_params).average()
del raw_custom # save memory
title = 'EEG Custom reference'
evoked_custom.plot(titles=dict(eeg=title), time_unit='s')
evoked_custom.plot_topomap(times=[0.1], size=3., title=title, time_unit='s')
event_id = {'left/auditory': 1, 'right/auditory': 2,
'left/visual': 3, 'right/visual': 4}
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
reject=reject)
epochs = mne.Epochs(raw, **epochs_params)
print(epochs)
left, right = epochs["left"].average(), epochs["right"].average()
# create and plot difference ERP
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
mne.combine_evoked([left, -right], weights='equal').plot_joint(**joint_kwargs)
aud_l = epochs["auditory", "left"].average()
aud_r = epochs["auditory", "right"].average()
vis_l = epochs["visual", "left"].average()
vis_r = epochs["visual", "right"].average()
all_evokeds = [aud_l, aud_r, vis_l, vis_r]
print(all_evokeds)
all_evokeds = [epochs[cond].average() for cond in sorted(event_id.keys())]
print(all_evokeds)
# Then, we construct and plot an unweighted average of left vs. right trials
# this way, too:
mne.combine_evoked(
all_evokeds, weights=(0.25, -0.25, 0.25, -0.25)).plot_joint(**joint_kwargs)
# If they are stored in a list, they can be easily averaged, for example,
# for a grand average across subjects (or conditions).
grand_average = mne.grand_average(all_evokeds)
mne.write_evokeds('/tmp/tmp-ave.fif', all_evokeds)
# If Evokeds objects are stored in a dictionary, they can be retrieved by name.
all_evokeds = dict((cond, epochs[cond].average()) for cond in event_id)
print(all_evokeds['left/auditory'])
# Besides for explicit access, this can be used for example to set titles.
for cond in all_evokeds:
all_evokeds[cond].plot_joint(title=cond, **joint_kwargs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup for reading the raw data
Step2: Let's restrict the data to the EEG channels
Step3: By looking at the measurement info you will see that we now have only EEG and EOG channels left.
Step4: In practice it's quite common to have some EEG channels that are actually EOG channels; the channel type can be changed with set_channel_types.
Step5: And to change the name of the EOG channel
Step6: Let's reset the EOG channel back to EOG type.
Step7: The EEG channels in the sample dataset already have locations.
Step8: And it's actually possible to plot the channel locations using
Step9: Setting EEG montage
Step10: To apply a montage on your data use the set_montage method.
Step11: We next define Epochs and compute an ERP for the left auditory condition.
Step12: Average reference
Step13: Custom reference
Step14: Evoked arithmetics
Step15: Next, we create averages of stimulation-left vs stimulation-right trials.
Step16: This is an equal-weighting difference. If you have imbalanced trial numbers,
Step17: This can be simplified with a Python list comprehension
Step18: Often, it makes sense to store Evoked objects in a dictionary or a list -
|
4,441
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
from __future__ import division
import numpy as np
from sympy import symbols, sin, cos, pi, simplify
from math import radians as d2r
from math import degrees as r2d
from math import atan2, sqrt, acos, fabs
t1, t2, t3 = symbols('t1 t2 t3')
Lc, Lf, Lt = symbols('Lc Lf Lt')
class DH(object):
def __init__(self):
pass
def fk(self, params):
t = np.eye(4)
for p in params:
t = t.dot(self.makeT(*p))
return t
def makeT(self, a, alpha, d, theta):
return np.array([ # classic DH
[cos(theta), -sin(theta) * cos(alpha), sin(theta) * sin(alpha), cos(theta) * a],
[sin(theta), cos(theta) * cos(alpha), -cos(theta) * sin(alpha), sin(theta) * a],
[ 0, sin(alpha), cos(alpha), d],
[ 0, 0, 0, 1]
])
def eval(f, inputs):
h = []
for i in range(0, 3):
tmp = (f[i,3]).subs(inputs)
h.append(tmp.evalf())
return h
# a, alpha, d, theta
params = [
[Lc, pi/2, 0, t1],
[Lf, 0, 0, t2],
[Lt, 0, 0, t3]
]
dh = DH()
t = dh.fk(params)
t = eval(t,[])
for i in t:
print(simplify(i))
def fk(t1, t2, t3, Lc, Lf, Lt):
t1 = d2r(t1)
t2 = d2r(t2)
t3 = d2r(t3)
return np.array([
(Lc + Lf*cos(t2) + Lt*cos(t2 + t3))*cos(t1),
(Lc + Lf*cos(t2) + Lt*cos(t2 + t3))*sin(t1),
Lf*sin(t2) + 1.0*Lt*sin(t2 + t3)
])
def ik(x, y, z, Lc, Lf, Lt):
t1 = atan2(y, x)
f = sqrt(x**2 + y**2) - Lc
# depending on z, you have to do a different atan
if z < 0.0:
b1 = atan2(f, fabs(z))
else:
b1 = atan2(z, f)
d = sqrt(f**2 + z**2)
b2 = acos((Lf**2 + d**2 - Lt**2) / (2.0 * Lf * d))
t2 = b1 + b2
t3 = acos((Lf**2 + Lt**2 - d**2) / (2.0 * Lf * Lt))
t3 -= pi # fix to align fk and ik frames
if z < 0.0:
t3 -= pi/2
return [r2d(t1), r2d(t2),r2d(t3)]
pts = fk(0.0,45.,-60., 10., 40., 100.)
# print(type(pts[1]))
print('pts: {:.3f} {} {:.3f}'.format(*pts))
angles = ik(pts[0], pts[1], pts[2], 10, 40, 100)
print('angles: {:.1f} {:.1f} {:.1f}'.format(*angles))
# from random import uniform
# for i in range(10):
# a=uniform(-90,90)
# b=uniform(0,90)
# c=uniform(-180,0)
# pts = fk(a,b,c, 10, 40, 100)
# angles = ik(pts[0], pts[1], pts[2], 10, 40, 100)
# if angles:
# if a-angles[0]>0.1 or b-angles[1]>0.1 or c-angles[2]>0.1:
# print('in: {:.1f} {:.1f} {:.1f} out: {:.1f} {:.1f} {:.1f}'.format(a,b,c, *angles))
# def printError(pts, pts2, angles, angles2):
# print('****************************************************')
# print('angles (orig):', angles)
# print('angles2 from ik(pts): {:.2f} {:.2f} {:.2f}'.format(*angles2))
# print('pts from fk(orig): {:.2f} {:.2f} {:.2f}'.format(*pts))
# print('pts2 from fk(angle2): {:.2f} {:.2f} {:.2f}'.format(*pts2))
# print('diff [deg]: {:.2f}'.format(np.linalg.norm(np.array(angles) - np.array(angles2))))
# print('diff [mm]: {:.2f}'.format(np.linalg.norm(pts - pts2)))
# print('\nExiting\n')
# print('****************************************************')
# def fk_ik():
# cox = 10
# fem = 40
# tib = 100
# for a in range(-45, 45, 5):
# for b in range(0,90,5):
# for g in range(-90,0,5):
# print('------------------------------------------------')
# a1 = [a,b,g]
# pts = fk(a1[0],a1[1],a1[2], cox, fem, tib)
# a2 = ik(pts[0], pts[1], pts[2], cox, fem, tib)
# pts2 = fk(a2[0], a2[1], a2[2], cox, fem, tib)
# print('points:', pts)
# # print(type(pts))
# # print(pts - pts2)
# # print(np.linalg.norm(pts - pts2))
# # print(a1)
# # print(type(a1))
# # print(a2)
# # print(type(a2))
# angle_error = np.linalg.norm(np.array(a1) - np.array(a2))
# pos_error = np.linalg.norm(pts - pts2)
# # print(angle_error, pos_error)
# if angle_error > 0.0001:
# print('Angle Error')
# printError(pts, pts2, angles, angles2)
# exit()
# elif pos_error > 0.0001:
# print('Position Error')
# printError(pts, pts2, angles, angles2)
# exit()
# else:
# print('Angle: {:.1f} {:.1f} {:.1f}'.format(angles[0], angles[1], angles[2]))
# print('Pos: {:.1f} {:.1f} {:.1f}'.format(pts[0], pts[1], pts[2]))
# print('Error(deg,mm): {:.2f} {:.2f}\n'.format(angle_error, pos_error))
# fk_ik()
def calcTorque(foot, Lc, Lf, Lt):
    """
             s     s
        +-----+--------+   CM
        |              |
        |              v
        |
    torque balance: s + s = r W/3
    the servos are at location s, the center mass is at CM, the weight is W
    and r is the distance from the foot to CM. The shoulder servo works perpendicular
    to this plane and doesn't help to lift.
    """
# servo = 0.15 # TG9e servo torque in Nm
servo = 0.39 # XL-320 servo torque in Nm
x = 0.001*sqrt(foot[0]**2 + foot[1]**2) # convert to mm
r = .144/2 + x
# 2*servo = r F
F = 2.0*servo/r # 2 servos per leg lifting
return 3.0*F # 3 legs are lifting at all times
Lc = 40
Lf = 50
Lt = 100
robot_weight = 560
foot = fk(0,0,-45, Lc, Lf, Lt)
print('Foot location {:.3f} {} {:.3f}'.format(*foot))
N = calcTorque(foot, Lc, Lf, Lt)
print('This can lift {:.3f} N'.format(N))
# why the fuck do people confuse mass and weight!!! weight is N and mass is kg
# the the stupid scale I used was in grams for weight ... idiots!!!
G = 1000*N/9.81
print('This can lift {:.3f} gram force'.format(G)) # convert N to grams force
print('The robot is: {} gram force'.format(robot_weight))
print('Factor of safety: {:.2f}'.format(G/robot_weight))
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt
import numpy as np
# fig = plt.figure()
# ax = fig.gca(projection='3d')
x = np.arange(0, 45, 1.0)
y = np.arange(-135, -90, 1.0)
# X, Y = np.meshgrid(X, Y)
# x = range(0, 25)
# y = range(-115, -90)
Z = []
for femur in x:
z=[]
for tibia in y:
# print('{} {}'.format(femur, tibia))
        pos = fk(0, femur, tibia, Lc, Lf, Lt)  # foot position for these joint angles
N = calcTorque(pos, Lc, Lf, Lt)
G = 1000*N/9.81
z.append(G/robot_weight)
Z.append(z)
X, Y = np.meshgrid(x, y)
# ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
# linewidth=0, antialiased=False)
# ax.plot_surface(X, Y, Z);
plt.figure()
plt.contour(X,Y,Z)
import matplotlib.pyplot as plt
plt.subplot(2,2,1);
plt.plot(px)
plt.ylabel('x')
plt.subplot(2,2,2)
plt.plot(py)
plt.ylabel('y')
plt.subplot(2,2,3);
plt.plot(pz)
plt.ylabel('z')
plt.subplot(2,2,4);
plt.plot(px,py)
plt.ylabel('y')
plt.xlabel('x');
# X = np.arange(-5, 5, 1)
# Y = np.arange(-5, 5, 1)
# X, Y = np.meshgrid(X, Y)
# Z = X*Y**2
print(min(Z))
print(max(Z))
# print(Z.shape)
print(Z)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following class follows the traditional (classic) DH convention, where each link of the leg is described by one row of parameters.
Step2: The parameters are, in order, a (link length), alpha (link twist), d (link offset) and theta (joint angle), matching the [a, alpha, d, theta] rows passed to fk.
Step3: Inverse Kinematics
Step5: Loading
|
4,442
|
<ASSISTANT_TASK:>
Python Code:
week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
for weekday in week:
print("Today is ",weekday)
for i in range(len(week)):
print("This is the value of the index, ", i)
weekday = week[i] #once we have the index we can obtain the corresponding list element
print("Today is ",weekday)
for i, weekday in enumerate(week):
print("This is the index ",i)
print("This is the element ",weekday)
week_set = set(week)
for weekday in week_set: #Remark that order is arbitrary!
print("Today is ",weekday)
#Try this and read the error:
for i in range(len(week_set)):
print("This is the value of the index, ", i)
weekday = week_set[i]
print("Today is ",weekday)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following for structure cycles over the elements of the list
Step2: Alternatively we loop over the indices of the list
Step3: Third possibility, use enumerate to generate a sequence with pairs of indices and elements
Step4: Iterating over elements is more general than iterating based on indices. The reason is that several collections do not have a predefined order and cannot be accessed via indices. As a simple example, if we convert the previous list into a set, we can still iterate over its elements, but we cannot loop over the indices.
|
4,443
|
<ASSISTANT_TASK:>
Python Code:
import toytree
import toyplot
import numpy as np
## A tree with edge lengths
newick = "((apple:2,orange:4):2,(((tomato:2,eggplant:1):2,pepper:3):1,tomatillo:2):1);"
tre = toytree.tree(newick)
## show tip labels
tre.draw();
## hide tip labels
tre.draw(tip_labels=False);
## enter a new list of names
tipnames = ["a", "b", "c", "d", "e", "f"]
tre.draw(tip_labels=tipnames);
## get list of existing names and modify it
modnames = ["tip - " + i for i in tre.get_tip_labels()]
tre.draw(tip_labels=modnames);
## you can use HTML tags to further style the text
modnames = ["<b>{}</b>".format(i) if 'tom' in i else i for i in tre.get_tip_labels()]
tre.draw(tip_labels=modnames);
## set a single tip labels color
tre.draw(tip_labels_colors="darkcyan");
## use a list of colors to assign different values to tips
colorlist = ["darkcyan" if "tom" in t else "darkorange" for t in tre.get_tip_labels()]
tre.draw(tip_labels_colors=colorlist);
## default tree
tre.draw();
## with edge lengths
tre.draw(use_edge_lengths=False);
## with edge lengths and aligned tips
tre.draw(tip_labels_align=True);
## style the edges and alignment-edges
tre.draw(
use_edge_lengths=True,
tip_labels_align=True,
edge_style={"stroke": "darkcyan"},
edge_align_style={"stroke": "darkorange"},
);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hide/Show tip labels
Step2: Modify tip labels
Step3: Color tip labels
Step4: Aligning tip labels
Step5: Styling edges on aligned tip trees
|
4,444
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
train_data,test_data = sales.random_split(.8,seed=0)
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
# if we want to multiply every price by 0.5 it's a simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
def simple_linear_regression(input_feature, output):
    # compute the mean of input_feature and output
    input_mean = input_feature.mean()
    output_mean = output.mean()
    # compute the mean of the product of the output and the input_feature
    product_mean = (input_feature * output).mean()
    # compute the mean of the squared value of the input_feature
    squared_mean = (input_feature * input_feature).mean()
    # use the closed-form least-squares formula for the slope
    slope = (product_mean - input_mean * output_mean) / (squared_mean - input_mean * input_mean)
    # use the formula for the intercept
    intercept = output_mean - slope * input_mean
return (intercept, slope)
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
def get_regression_predictions(input_feature, intercept, slope):
    # calculate the predicted values:
    predicted_values = intercept + slope * input_feature
    return predicted_values
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
    # First get the predictions
    predictions = get_regression_predictions(input_feature, intercept, slope)
    # then compute the residuals (since we are squaring it doesn't matter which order you subtract)
    residuals = output - predictions
    # square the residuals and add them up
    RSS = (residuals * residuals).sum()
return(RSS)
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
def inverse_regression_predictions(output, intercept, slope):
    # solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
    estimated_feature = (output - intercept) / slope
    return estimated_feature
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
# Compute RSS when using bedrooms on TEST data:
# Compute RSS when using squarefeet on TEST data:
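# One possible completion of the three quiz cells above (a sketch that simply reuses the
# estimators defined earlier in this notebook; the variable names below are illustrative):
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print "Bedrooms model -- Intercept: " + str(bedrooms_intercept) + ", Slope: " + str(bedrooms_slope)
rss_bedrooms_on_test = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
print 'The RSS of predicting Prices based on Bedrooms (TEST data) is : ' + str(rss_bedrooms_on_test)
rss_sqft_on_test = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet (TEST data) is : ' + str(rss_sqft_on_test)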
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load house sales data
Step2: Split data into training and testing
Step3: Useful SFrame summary functions
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
Step8: Predicting Values
Step9: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Step10: Residual Sum of Squares
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Step13: Predict the squarefeet given price
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Step15: New Model
Step16: Test your Linear Regression Algorithm
|
4,445
|
<ASSISTANT_TASK:>
Python Code:
# uncomment the following line to install/upgrade the PixieDust library
# ! pip install pixiedust --user --upgrade
import pixiedust
from pixiedust.display.app import *
@PixieApp
class HelloWorldPixieApp:
@route()
def main(self):
        return """
        <input pd_options="clicked=true" type="button" value="Click Me">
        """
@route(clicked="true")
def _clicked(self):
        return """
        <input pd_options="clicked=false" type="button" value="You Clicked, Now Go back">
        """
#run the app
HelloWorldPixieApp().run(runInDialog='false')
from pixiedust.display.app import *
@PixieApp
class HelloWorldPixieAppWithData:
@route()
def main(self):
        return """
        <div class="row">
            <div class="col-sm-2">
                <input pd_options="handlerId=dataframe"
                    pd_entity
                    pd_target="target{{prefix}}"
                    type="button" value="Preview Data">
            </div>
            <div class="col-sm-10" id="target{{prefix}}"/>
        </div>
        """
#Create dataframe
df = SQLContext(sc).createDataFrame(
[(2010, 'Camping Equipment', 3, 200),(2010, 'Camping Equipment', 10, 200),(2010, 'Golf Equipment', 1, 240),
(2010, 'Mountaineering Equipment', 1, 348),(2010, 'Outdoor Protection',2,200),(2010, 'Personal Accessories', 2, 200),
(2011, 'Camping Equipment', 4, 489),(2011, 'Golf Equipment', 5, 234),(2011, 'Mountaineering Equipment',2, 123),
(2011, 'Outdoor Protection', 4, 654),(2011, 'Personal Accessories', 2, 234),(2012, 'Camping Equipment', 5, 876),
(2012, 'Golf Equipment', 5, 200),(2012, 'Mountaineering Equipment', 3, 156),(2012, 'Outdoor Protection', 5, 200),
(2012, 'Personal Accessories', 3, 345),(2013, 'Camping Equipment', 8, 987),(2013, 'Golf Equipment', 5, 434),
(2013, 'Mountaineering Equipment', 3, 278),(2013, 'Outdoor Protection', 8, 134),(2013,'Personal Accessories',4, 200)],
["year","zone","unique_customers", "revenue"])
#run the app
HelloWorldPixieAppWithData().run(df, runInDialog='false')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Hello World
Step5: <hr>
|
4,446
|
<ASSISTANT_TASK:>
Python Code:
# Import some libraries
import numpy as np
import math
from test_helper import Test
# Define data file
ratingsFilename = 'u.data'
# Read data with spark
rawRatings = sc.textFile(ratingsFilename)
# Check file format
print rawRatings.take(10)
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
def format_ratings(line):
    """Parse a line in the ratings dataset
    Args:
        line (str): a line in the ratings dataset in the form of UserID \t MovieID \t Rating \t Timestamp \n
    Returns:
        tuple: (UserID, MovieID, Rating)
    """
# Divide each line with the character '\t'
    items = line.split('\t')
    # Get UserID and convert it to int
    user_id = int(items[0])
    # Get ItemID and convert it to int
    item_id = int(items[1])
    # Get Rating and convert it to float
    rating_id = float(items[2])
    # Return UserID, ItemID and Rating.
    return (user_id, item_id, rating_id)
###########################################################
# TEST CELL
###########################################################
check_line = u'196\t242\t3\t881250949'
check_tuple = format_ratings(check_line)
Test.assertEquals(check_tuple, (196, 242, 3), 'incorrect result: data are incorrectly formatted')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Convert each line of rawRatings
ratingsRDD = rawRatings.map(format_ratings)
# Show the output
print ratingsRDD.take(10)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(ratingsRDD.first(), (196, 242, 3), 'incorrect result: data are incorrectly formatted')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
trainingRDD, testRDD = ratingsRDD.randomSplit([0.75, 0.25], seed=0L)  # 75/25 train/test split
print 'Training: %s, test: %s\n' % (trainingRDD.count(), testRDD.count())
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(trainingRDD.count(), 75008, 'incorrect result: number of training ratings is incorrect')
Test.assertEquals(testRDD.count(), 24992, 'incorrect result: number of test ratings is incorrect')
Test.assertEquals(trainingRDD.first(), (186, 302, 3.0), 'incorrect result: the values of the training RDD are incorrect')
Test.assertEquals(testRDD.first(), (196, 242, 3.0), 'incorrect result: the values of the testing RDD are incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
def getAverages(IDandRatingsTuple):
    """Calculate average rating
    Args:
        IDandRatingsTuple: a single tuple of (ID_user, (Rating1, Rating2, Rating3, ...))
    Returns:
        tuple: a tuple of (ID_user, averageRating)
    """
    id_user = IDandRatingsTuple[0]
    ratings = list(IDandRatingsTuple[1])
    mean_value = float(sum(ratings)) / len(ratings)
    return (id_user, mean_value)
###########################################################
# TEST CELL
###########################################################
check_ratings = (0, iter([2, 5, 3, 1, 2]))
check_output = getAverages(check_ratings)
Test.assertEquals(check_output, (0, 2.6), 'incorrect result: check_output is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# From ratingsRDD with tuples of (UserID, MovieID, Rating) create an RDD with tuples of
# the (UserID, Rating), i.e, remove the MovieID field.
RDD_users_ratings = trainingRDD.map(lambda x: (x[0], x[2]))
# From the RDD of (UserID, Rating) create an RDD with tuples of
# (UserID, iterable of Ratings for that UserID), where iterable of Ratings for that UserID has
# all the rated items of UserID. Review groupByKey() method of RDD elements.
RDD_users_allratings = RDD_users_ratings.# FILL IN
# Using getAverages(), compute the average rating of each user.
RDD_users_mean = RDD_users_allratings.# FILL IN
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(RDD_users_ratings.first(), (186, 3.0), 'incorrect result: RDD_users_ratings is incorrect')
Test.assertEquals(list(RDD_users_allratings.first()[1])[:5], [4.0, 5.0, 4.0, 3.0, 3.0], 'incorrect result: RDD_users_allratings is incorrect')
Test.assertEquals(np.round(RDD_users_mean.first()[1],2), 3.69, 'incorrect result: RDD_users_mean is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Create a new RDD, RDD_test_ids, consisting of (UserID, MovieID) pairs
# that you extract from testRDD. That is, remove the filed Rating from testRDD.
RDD_test_ids = testRDD.# FILL IN
# Using the user_id as key, join RDD_test_ids with RDD_users_mean.
# Review the method leftOuterJoin() of RRD elements.
RDD_test_ids_mean = RDD_test_ids.# FILL IN
# Note that the resulting RRD provided by leftOuterJoin() method has the format
# (Iduser, (IdItem, PredRating)). Remap it to create a RDD with tuples (Iduser, IdItem, PredRating)
RDD_pred_mean = RDD_test_ids_mean.# FILL IN
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(RDD_test_ids.first(), (196, 242), 'incorrect result: RDD_test_ids is incorrect')
Test.assertEquals(RDD_test_ids_mean.first(), (512, (23, 4.294117647058823)), 'incorrect result: RDD_test_ids_mean is incorrect')
Test.assertEquals(RDD_pred_mean.first(), (512, 23, 4.294117647058823), 'incorrect result: RDD_pred_mean is incorrect')
def get_RMSE(predictedRDD, actualRDD):
    """Compute the root mean squared error between two RDDs with the predicted and actual ratings
    Args:
        predictedRDD: predicted ratings for each movie and each user where each entry is in the form
            (UserID, MovieID, Rating)
        actualRDD: actual ratings where each entry is in the form (UserID, MovieID, Rating)
    Returns:
        RMSE (float): computed RMSE value
    """
# Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating)
predictedReformattedRDD = predictedRDD.map(lambda x: ((x[0],x[1]),x[2]))
# Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating)
actualReformattedRDD = actualRDD.map(lambda x: ((x[0],x[1]),x[2]))
# Compute the squared error for each matching entry (i.e., the same (User ID, Movie ID) in each
# RDD) in the reformatted RDDs using RDD transformtions - do not use collect()
squaredErrorsRDD = (predictedReformattedRDD.join(actualReformattedRDD).map(lambda x: pow(x[1][0]-x[1][1],2)))
# Compute the total squared error - do not use collect()
totalError = squaredErrorsRDD.reduce(lambda a,b: a+b)
# Count the number of entries for which you computed the total squared error
numRatings = squaredErrorsRDD.count()
# Using the total squared error and the number of entries, compute the RSME
return math.sqrt(float(totalError)/numRatings )
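# Quick sanity check of get_RMSE on a tiny hand-made example (these toy RDDs
# are illustrative only, not part of the lab data): the squared errors are
# (5-3)^2 = 4 and (3-2)^2 = 1, so the RMSE is sqrt(5/2) ~ 1.58.
check_Predicted_rmse = sc.parallelize([(0, 0, 5), (0, 1, 3)])
check_Actual_rmse = sc.parallelize([(0, 0, 3), (0, 1, 2)])
print get_RMSE(check_Predicted_rmse, check_Actual_rmse)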
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Create a function to compute the MAE error
def get_MAE(predictedRDD, actualRDD):
    """Compute the mean absolute error between predicted and actual ratings
    Args:
        predictedRDD: predicted ratings for each movie and each user where each entry is in the form
            (UserID, MovieID, Rating)
        actualRDD: actual ratings where each entry is in the form (UserID, MovieID, Rating)
    Returns:
        MAE (float): computed MAE value
    """
# Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating)
predictedReformattedRDD = # FILL IN
# Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating)
actualReformattedRDD = # FILL IN
# Compute the mean absolute error for each matching entry (i.e., the same (User ID, Movie ID) in each
# RDD) in the reformatted RDDs using RDD transformtions - do not use collect()
AbsoluteErrorsRDD = # FILL IN
# Compute the total absolute error - do not use collect()
totalError = # FILL IN
# Count the number of entries for which you computed the total absolute error
numRatings = # FILL IN
# Using the total squared error and the number of entries, compute the MAE
return # FILL IN
###########################################################
# TEST CELL
###########################################################
check_Predicted = sc.parallelize([(0, 0, 5), (0, 1, 3)])
check_Actual = sc.parallelize([(0, 0, 3), (0, 1, 2)])
Test.assertEquals(get_MAE(check_Predicted, check_Actual), 1.5, 'incorrect result: function get_MAE() is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Compute the MAE error for each mean based baseline
MAE_mean = # FILL IN
# Compute the RMSE error for each mean based baseline
RMSE_mean = # FILL IN
print 'Mean model ... MAE: %2.2f , RMSE: %2.2f ' % (MAE_mean, RMSE_mean)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(MAE_mean,2), 0.83, 'incorrect result: MAE value of mean recommeder is incorrect')
Test.assertEquals(np.round(RMSE_mean,2), 1.04, 'incorrect result: RMSE value of mean recommeder is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from pyspark.mllib.recommendation import ALS
# Define parameters
n_latent_factors = 5
numIterations = 15
# Train the model (set seed=0L)
sc.setCheckpointDir('checkpoint/')
model = # FILL IN , seed=0L)
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Create a new RDD, RDD_test_ids, consisting of (UserID, MovieID) pairs
# that you extract from testRDD. That is, remove the filed Rating from testRDD.
RDD_test_ids = # FILL IN
# Estimate their ratings with model.predictAll( )
predictions = # FILL IN
# Print the first 10 predictions
predictions.take(10)
###########################################################
# TEST CELL
###########################################################
check_predictions = predictions.filter(lambda x: (x[0]==621) & (x[1]==68)).first()
Test.assertEquals(np.round(check_predictions[2],1), 3.7, 'incorrect result: predicted value is incorrect')
check_predictions = predictions.filter(lambda x: (x[0]==880) & (x[1]==8)).first()
Test.assertEquals(np.round(check_predictions[2],1), 4, 'incorrect result: predicted value is incorrect')
x = predictions.first()
print 'User ID: ' + str(x[0])
print 'Item ID: ' + str(x[1])
print 'Predicted rating: ' + str(x[2])
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
user_id = 10
# Select the outputs of the user_id=10 (hint: filter method)
predictions_userid = # FILL IN
# Sort the outputs according to rating field (hint: sortBy method)
predictions_userid_sorted = # FILL IN
predictions_userid_sorted.take(5)
###########################################################
# TEST CELL
###########################################################
check_output = predictions_userid_sorted.map(lambda x:x[1]).take(5)
Test.assertEquals(check_output, [483, 127, 174, 701, 185], 'incorrect result: recommeded items are incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Compute the MAE error
MAE_als = # FILL IN
# Compute the RMSE error
RMSE_als = # FILL IN
print 'ALS model ... MAE: %2.2f , RMSE: %2.2f ' % (MAE_als, RMSE_als)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(MAE_als,2), 0.77, 'incorrect result: MAE value of ALS recommeder is incorrect')
Test.assertEquals(np.round(RMSE_als,2), 1.01, 'incorrect result: RMSE value of ALS recommeder is incorrect')
def compute_Pearson_correlation(ratings_u1, ratings_u2, n_items_th = 1):
    """Calculate the Pearson correlation coefficient between two users
    Args:
        ratings_u1: tuple (IdUser1, pyspark iterable of (item, rating)) with all the ratings of user 1
        ratings_u2: tuple (IdUser2, pyspark iterable of (item, rating)) with all the ratings of user 2
        n_items_th: number of common items that both users have to have rated to compute their similarity.
            If the users have fewer than n_items_th common rated items, their similarity is set to zero.
            By default, n_items_th is set to 1.
    Returns:
        corr_value: correlation coefficient
    """
# Get the items and values rated by user 1
[items_u1, values_u1] = zip(*list(ratings_u1[1]))
# Get the items and values rated by user 2
[items_u2, values_u2] = zip(*list(ratings_u2[1]))
# Get the set of items rated by both users and their values
r_u1 = [values_u1[i] for i, item in enumerate(items_u1) if item in items_u2]
r_u2 = [values_u2[i] for i, item in enumerate(items_u2) if item in items_u1]
if len(r_u1)>= n_items_th: # If the are common rated items...
# Compute the means of the user ratings
m_1 = np.mean(np.array(values_u1))
m_2 = np.mean(np.array(values_u2))
# Remove their means
        r_u1 = np.array(r_u1) - m_1
        r_u2 = np.array(r_u2) - m_2
# Compute the correlation coefficient
corr_value = np.dot(r_u1,r_u2.T)/(np.sqrt(np.dot(r_u1,r_u1.T))*np.sqrt(np.dot(r_u2,r_u2.T)))
# Remove useless dimensions
corr_value =np.squeeze(corr_value)
else: # Else correlation is 0
corr_value = 0
# Checking that the correlation is not NaN (this would happen if the denominatior is 0),
# in this case, set the correlation coefficient to 0
if math.isnan(corr_value):
corr_value = 0
return corr_value
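# Small illustrative check of compute_Pearson_correlation on hand-made data
# (toy tuples, not taken from the MovieLens set). Each argument mimics one
# element of the grouped RDD: (userID, iterable of (itemID, rating) pairs).
toy_u1 = (1, iter([(10, 4.0), (20, 3.0), (30, 5.0)]))
toy_u2 = (2, iter([(10, 5.0), (20, 2.0), (40, 4.0)]))
# The users share items 10 and 20, so with n_items_th=2 a correlation is
# computed (roughly 0.78 for these values).
print compute_Pearson_correlation(toy_u1, toy_u2, n_items_th=2)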
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# 1. From trainingRDD, create an RDD where each element is (userID, (ItemID, rating)), i.e,
# the userID is the key and the pair (ItemID, rating) is the value.
RDD_users_as_key = #FILL IN
# 2. Group the elements of RDD_users_as_key by key (see groupByKey() method)
# Each element of this new RDD is (userID, spark-iterable), where the spark iterable has
# a list with all the rated items elements (ItemID, rating)
RDD_users_ratings = #FILL IN
# 3. Extract the spark-iterable element with all the ratings of users 1 and 2
id_u1 = 1
ratings_u1 = #FILL IN
id_u2 = 2
ratings_u2 = #FILL IN
# 4. Compute its similarity
n_items_th = 4
similarity = compute_Pearson_correlation(ratings_u1, ratings_u2, n_items_th)
print similarity
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(similarity,2), 0.80, 'incorrect result: similarity value is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# 1. From trainingRDD, create an new RDD with elements (userID, spark-iterable), where
# spark iterable has a list [(ItemID, rating), ...] with all the items rated by UserID
# (see previous section)
RDD_users_ratings = # FILL IN
# 2. Create all the combinations of pairs with the users (see cartesian method of RDD elements)
# Note that cartesian returns an RDD with elements ((id_1, iterable_ratings 1), (id_2, iterable_ratings 2))
pairs_users = # FILL IN
# 3. Compute correlation values with the function compute_Pearson_correlation()
n_items_th = 4
correlation_values = # FILL IN
# 4. Select correlation values larger than the similarity threshold (filter method)
sim_th = 0.2
correlation_values_sel = # FILL IN
# 5. Let's reorganize each element of the RDD to get user 1 as key and the tuple
# (user2, similarity) as value
all_correlations_with_userid = # FILL IN
# 6. Group the elements of all_correlations_with_userid by key (groupByKey() method)
# Each element of this new RDD is (userID, spark-iterable), where the spark iterable has
# a list with all the similar users (UserID, similarity)
RDD_sim_users = # FILL IN
RDD_sim_users.cache()
###########################################################
# TEST CELL
###########################################################
id_user = 1
sim_user1 = RDD_sim_users.filter(lambda x : x[0]==id_user).first()
sim_check = sc.parallelize(list(sim_user1[1]))
Test.assertEquals(np.round(sim_check.filter(lambda x: x[0] == 22).first()[1],2), 0.34, 'incorrect result: similarity value is incorrect')
Test.assertEquals(np.round(sim_check.filter(lambda x: x[0] == 120).first()[1],2), 0.37, 'incorrect result: similarity value is incorrect')
def compute_predictions(med_user, list_sim, list_ratings):
    """Estimate the rating that a user u would assign to an item i
    Args:
        med_user: average rating of the user u
        list_sim: list of tuples (id_user, similarity) with the users who are
            similar to the user u and their similarity values
        list_ratings: list of tuples (id_user, rating) with the ratings that the remaining
            users have already assigned to the item i. Note that the rating values are normalized
            (the average rating of the corresponding user has been previously subtracted so that
            this function implements the above expression)
    Returns:
        pred_value: estimated rating for the user u to the item i
    """
if (list_sim is not None) & (list_ratings is not None):
dict1 = dict(list_sim)
dict2 = dict(list_ratings)
list_intersect = [(k, dict1[k], dict2[k]) for k in sorted(dict1) if k in dict2]
# We have build a list with: (user_id_similar, sim_value, rating_user_sim)
if len(list_intersect)>0:
aux = [(sim*rat, sim) for (id_user, sim, rat) in list_intersect]
numerator, denominator = zip(*aux)
pred_value = med_user + sum(numerator)/sum(denominator)
else:
pred_value = med_user
else:
pred_value = med_user
return pred_value
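# Illustrative check of compute_predictions with hand-made values (not taken
# from the lab data). Given the code above, the estimate is
#   pred(u, i) = mean(u) + sum_v sim(u, v) * r_norm(v, i) / sum_v sim(u, v)
# where r_norm is the mean-centred rating passed in list_ratings.
toy_mean = 3.5
toy_sims = [(2, 0.8), (3, 0.2)]      # (similar user, similarity)
toy_ratings = [(2, 1.0), (3, -0.5)]  # (user, mean-centred rating of item i)
# Expected: 3.5 + (0.8*1.0 + 0.2*(-0.5)) / (0.8 + 0.2) = 4.2
print compute_predictions(toy_mean, toy_sims, toy_ratings)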
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# 1. From trainingRDD create a new RDD with the fields (user, rating), and convert it to
# (user, list_ratings). Hint: GroupByKey()
RDD_users_ratings = # FILL IN
# Convert this RDD (user, list_ratings) -> (user, mean_user). Use getAverages() function
RDD_users_mean = # FILL IN
###########################################################
# TEST CELL
###########################################################
id_user = 1
mean_user1 = RDD_users_mean.filter(lambda x : x[0]==id_user).first()
Test.assertEquals(np.round(mean_user1[1],2), 3.6, 'incorrect result: mean rating value is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# 2.1. Create an RDD with training ratings subtracting the users' mean
# Create an RDD with elements (user, (item, rating))
trainingRDD_aux = # FILL IN
# Combine it with the users_mean -> (user, ((item, rating), user_mean))
# Hint: leftOuterJoin()
trainingRDD_mean = # FILL IN
# Create a new RDD subtracting the mean of each rating and reorganize it -> (user, item, rating_norm)
trainingRDD_norm = # FILL IN
# 2.2. Create an RDD with normalized training ratings with the form (item, list((user, rating)))
RDD_ratings_item = # FILL IN
###########################################################
# TEST CELL
###########################################################
id_item = 22
ratings_item = RDD_ratings_item.filter(lambda x : x[0]==id_item).first()
ratings_check = sc.parallelize(list(ratings_item[1]))
Test.assertEquals(np.round(ratings_check.filter(lambda x: x[0] == 608).first()[1],2), 0.26, 'incorrect result: rating value is incorrect')
Test.assertEquals(np.round(ratings_check.filter(lambda x: x[0] == 184).first()[1],2), -0.66, 'incorrect result: rating value is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# 3.1 Create an input RDD, testForPredictingRDD, consisting of (UserID, MovieID) pairs
# that you extract from testRDD (i.e., remove the field rating)
RDD_test_ids = # FILL IN
# 3.2 Combine RDD_test_ids with RDD_users_mean to create an RDD (user, (item, mean_user))
# Hint: leftOuterJoin()
RDD_test_ids_mean = # FILL IN
# 3.3 Combine RDD_test_ids_mean with RDD_sim_users to create an RDD with elements
# (user, ((item, mean_user), list_sim_user)). Hint: leftOuterJoin()
# Next, reformat it to obtain elements (item, (user, mean_user, list_sim_user))
RDD_test_ids_sim = # FILL IN
# 3.4 Combine RDD_test_ids_sim with RDD_ratings_item to create an RDD with elements
# (item, ((user, mean_user , list_sim_user), list_item_rating)). Hint: leftOuterJoin()
# Next, reformat it to obatian elements ((user, item), mean_user, list_sim_user, list_item_rating)
RDD_test_ids_sim_rat = # FILL IN
###########################################################
# TEST CELL
###########################################################
check_out = RDD_test_ids_sim_rat.filter(lambda x: x[0]==(218, 516)).first()
Test.assertEquals(np.round(check_out[1],2), 3.62, 'incorrect result: mean value of the RDD is incorrect')
sim_check = sc.parallelize(list(check_out[2]))
Test.assertEquals(np.round(sim_check.filter(lambda x: x[0] == 24).first()[1],2), 0.31, 'incorrect result: similarity value is incorrect')
rating_check = sc.parallelize(list(check_out[3]))
Test.assertEquals(np.round(rating_check.filter(lambda x: x[0] == 308).first()[1],2), 0.23, 'incorrect result: rating value is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# For each element of RDD_test_ids_sim_rat call to compute_predictions and create a new RDD
# with elements ((user, item), predicted value)
RDD_outputs = # FILL IN
RDD_predictions = # FILL IN
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(RDD_predictions.filter(lambda x: (x[0], x[1]) == (840, 516)).first()[2],2), 4.8, 'incorrect result: predicted value is incorrect')
Test.assertEquals(np.round(RDD_predictions.filter(lambda x: (x[0], x[1]) == (174, 1032)).first()[2],2), 3.28, 'incorrect result: predicted value is incorrect')
Test.assertEquals(np.round(RDD_predictions.filter(lambda x: (x[0], x[1]) == (896, 12)).first()[2],2), 3.83, 'incorrect result: predicted value is incorrect')
Test.assertEquals(np.round(RDD_predictions.filter(lambda x: (x[0], x[1]) == (59, 528)).first()[2],2), 4.18, 'incorrect result: predicted value is incorrect')
# Compute the error MAE
MAE = get_MAE(RDD_predictions, testRDD)
# Compute the error RMSE
RMSE = get_RMSE(RDD_predictions, testRDD)
print 'User based model ... MAE: %2.2f , RMSE: %2.2f ' % (MAE, RMSE)
###########################################################
# TEST CELL
###########################################################
Test.assertEquals(np.round(MAE,2), 0.80, 'incorrect result: MAE value is incorrect')
Test.assertEquals(np.round(RMSE,2), 1.02, 'incorrect result: RMSE value is incorrect')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Formatting the data
Step3: 2. Format your data
Step4: Creating training and test rating matrices
Step6: Baseline recommender
Step7: 2. Compute the average rating of each user
Step8: 3. Make new predictions
Step11: 4. Performance evaluation
Step12: Now, let's evaluate the performance of the mean based baseline.
Step13: Alternative Least Squares algorithm
Step14: Computing predictions
Step15: Note that, although each element of the RDD predictions is an object, you can extract the UserID, ItemID and predicted rating by accessing its first, second, and third element, respectively. See the example in the next cell...
Step16: Advance work
Step17: Performance evaluation
Step19: Advance work
Step20: Now, complete the next cell to be able to evaluate the function compute_Pearson_correlation( ).
Step21: Once we can compute similarities between two users, let's compute, for each user, its distance to all the remaining users. The output of this cell will be an RDD of similarities where each element is (UserID, spark-iterable), where the spark-iterable is an iterable list with pairs of (UserID, similarity).
Step23: Step 2. Making predictions
Step24: To obtain the predicted outputs for the test data and evaluate the performance of the user based recommender, we need to compute all the needed input arguments. Follow the steps of the next sections to obtain them.
Step25: 2. Create a list of ratings
Step26: 3. Combine previous RDDs
Step27: 4. Compute predictions
Step28: 5. Evaluate performance
|
4,447
|
<ASSISTANT_TASK:>
Python Code:
from pygoose import *
import os
import sys
from scipy.sparse import csr_matrix, dok_matrix
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_distances, euclidean_distances, manhattan_distances
project = kg.Project.discover()
feature_list_id = 'magic_cooccurrence_matrix'
NUM_SVD_COMPONENTS = 150
NUM_SVD_ITERATIONS = 30
RANDOM_SEED = 42
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('')
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('')
df_all_pairs = pd.concat([df_train[['question1', 'question2']], df_test[['question1', 'question2']]])
df_unique_texts = pd.DataFrame(np.unique(df_all_pairs.values.ravel()), columns=['question'])
question_ids = pd.Series(df_unique_texts.index.values, index=df_unique_texts['question'].values).to_dict()
df_all_pairs['q1_id'] = df_all_pairs['question1'].map(question_ids)
df_all_pairs['q2_id'] = df_all_pairs['question2'].map(question_ids)
saved_matrix_path = project.trained_model_dir + f'{feature_list_id}_sparse_csr.pickle'
if os.path.exists(saved_matrix_path):
cooccurrence_csr = kg.io.load(saved_matrix_path)
else:
cooccurrence = dok_matrix((len(question_ids), len(question_ids)), dtype='b')
for i, row in progressbar(df_all_pairs.iterrows(), total=len(df_all_pairs), file=sys.stdout):
cooccurrence[row['q1_id'], row['q2_id']] = 1
cooccurrence[row['q2_id'], row['q1_id']] = 1
cooccurrence_csr = cooccurrence.tocsr(copy=True)
kg.io.save(cooccurrence_csr, saved_matrix_path)
id_pairs = df_all_pairs[['q1_id', 'q2_id']].values
def get_raw_matrix_distances(id_pair):
q1_row = cooccurrence_csr[id_pair[0]]
q2_row = cooccurrence_csr[id_pair[1]]
return [
cosine_distances(q1_row, q2_row)[0][0],
euclidean_distances(q1_row, q2_row)[0][0],
]
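# Quick illustrative check on a toy matrix (not the real co-occurrence data):
# identical rows have cosine distance ~0, disjoint rows have cosine distance ~1.
_toy = csr_matrix([[0, 1, 1], [0, 1, 1], [1, 0, 0]], dtype='b')
print(cosine_distances(_toy[0], _toy[1])[0][0])  # ~0.0
print(cosine_distances(_toy[0], _toy[2])[0][0])  # ~1.0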
distances_raw = kg.jobs.map_batch_parallel(
id_pairs,
item_mapper=get_raw_matrix_distances,
batch_size=1000,
)
svd = TruncatedSVD(
n_components=NUM_SVD_COMPONENTS,
n_iter=NUM_SVD_ITERATIONS,
random_state=RANDOM_SEED,
)
X_svd = svd.fit_transform(cooccurrence_csr)
plt.plot(np.cumsum(svd.explained_variance_ratio_))
def get_svd_matrix_distances(id_pair):
q1_row = X_svd[id_pair[0]].reshape(1, -1)
q2_row = X_svd[id_pair[1]].reshape(1, -1)
return [
cosine_distances(q1_row, q2_row)[0][0],
euclidean_distances(q1_row, q2_row)[0][0],
manhattan_distances(q1_row, q2_row)[0][0],
]
distances_svd = kg.jobs.map_batch_parallel(
id_pairs,
item_mapper=get_svd_matrix_distances,
batch_size=1000,
)
distances = np.hstack([distances_raw, distances_svd])
X_train = distances[:len(df_train)]
X_test = distances[len(df_train):]
print('X_train:', X_train.shape)
print('X_test: ', X_test.shape)
feature_names = [
'magic_comatrix_cosine',
'magic_comatrix_euclidean',
'magic_comatrix_svd_cosine',
'magic_comatrix_svd_euclidean',
'magic_comatrix_svd_manhattan',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Config
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Number of SVD components.
Step4: Make subsequent runs reproducible.
Step5: Read data
Step6: Build features
Step7: Build cooccurrence matrix
Step8: Extract distance features from the raw matrix
Step9: SVD the matrix and calculate some distances there
Step10: Build final features
Step11: Save features
|
4,448
|
<ASSISTANT_TASK:>
Python Code:
# We need libffi-dev to launch the Dataflow pipeline.
!apt-get -qq install libffi-dev
# Clone the python-docs-samples respository.
!git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
# Navigate to the sample code directory.
%cd python-docs-samples/people-and-planet-ai/image-classification
# Install the sample requirements.
!pip install --quiet -U pip
!pip install -r requirements.txt
# After restarting the runtime, navigate back to the sample code directory.
%cd python-docs-samples/people-and-planet-ai/image-classification
google_cloud_project = "" #@param {type:"string"}
cloud_storage_bucket = "" #@param {type:"string"}
bigquery_dataset = "samples" #@param {type:"string"}
# Validate inputs.
if not google_cloud_project:
raise ValueError('Please provide your google_cloud_project')
if not cloud_storage_bucket:
raise ValueError('Please provide your cloud_storage_bucket')
# Authenticate to use the Google Cloud resources.
try:
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
except ModuleNotFoundError:
    import os
    service_account_file = os.environ.get('GOOGLE_APPLICATION_CREDENTIALS')
    if service_account_file is None:
        raise ValueError('Please set your GOOGLE_APPLICATION_CREDENTIALS environment variable to your service account JSON file path.')
    print(f"GOOGLE_APPLICATION_CREDENTIALS: {service_account_file}")
%env GOOGLE_CLOUD_PROJECT={google_cloud_project}
cloud_storage_directory = "samples/wildlife-insights" #@param {type:"string"}
bigquery_table = "wildlife_images_metadata" #@param {type:"string"}
ai_platform_name_prefix = "wildlife_classifier" #@param {type:"string"}
region = "us-central1" #@param {type:"string"}
# [One time only] Create the images metadata table.
!python create_images_metadata_table.py \
--bigquery-dataset "{bigquery_dataset}" \
--bigquery-table "{bigquery_table}" \
--runner "DataflowRunner" \
--job_name "wildlife-images-metadata-`date +%Y%m%d-%H%M%S`" \
--project "{google_cloud_project}" \
--temp_location "gs://{cloud_storage_bucket}/{cloud_storage_directory}/temp" \
--region "{region}" \
--worker_machine_type "n1-standard-2"
min_images_per_class = 50 #@param {type:"integer"}
max_images_per_class = 100 #@param {type:"integer"}
# Create a balanced dataset and signal AI Platform to train a model.
!python train_model.py \
--cloud-storage-path "gs://{cloud_storage_bucket}/{cloud_storage_directory}" \
--bigquery-dataset "{bigquery_dataset}" \
--bigquery-table "{bigquery_table}" \
--ai-platform-name-prefix "{ai_platform_name_prefix}" \
--min-images-per-class "{min_images_per_class}" \
--max-images-per-class "{max_images_per_class}" \
--runner "DataflowRunner" \
--job_name "wildlife-train-model-`date +%Y%m%d-%H%M%S`" \
--project "{google_cloud_project}" \
--temp_location "gs://{cloud_storage_bucket}/{cloud_storage_directory}/temp" \
--requirements_file "requirements.txt" \
--region "{region}"
# First we need the model path, we can get it with gcloud.
# 💡 If you get an error here, please make sure your model has finished training.
cmd_output = !gcloud beta ai models list \
--project {google_cloud_project} \
--region {region} \
--filter "display_name:{ai_platform_name_prefix}*" \
--format "table[no-heading](display_name,name)" 2>/dev/null
models = sorted([line.split() for line in cmd_output])
model_path = models[0][1]
print(f"model_path: {model_path}")
# Create an endpoint and deploy the model to it.
!python deploy_model.py \
--project {google_cloud_project} \
--region {region} \
--model-path {model_path} \
--model-endpoint-name {ai_platform_name_prefix}
import io
import requests
from PIL import Image
from IPython.display import display
from google.cloud import bigquery
def display_image(image_file, width=400):
base_url = 'https://lilablobssc.blob.core.windows.net/wcs-unzipped'
image_bytes = requests.get(f"{base_url}/{image_file}").content
if b'<Error>' in image_bytes:
raise ValueError(f"Error requesting image: {base_url}/{image_file}\n{image_bytes.decode('utf-8')}")
image = Image.open(io.BytesIO(image_bytes))
display(image.resize((int(width), int(width / image.size[0] * image.size[1]))))
def display_samples_for_category(category, num_samples=3, width=400):
client = bigquery.Client()
    query_job = client.query(f"""
        SELECT file_name
        FROM `{google_cloud_project}.{bigquery_dataset}.{bigquery_table}`
        WHERE category = '{category}'
        LIMIT {num_samples}
    """)
for row in query_job:
image_file = row['file_name']
print(f"{category}: {image_file}")
display_image(image_file, width)
# First we need the endpoint ID, we can get it with gcloud.
stdout = !gcloud beta ai endpoints list \
--project {google_cloud_project} \
--region {region} \
--filter "display_name={ai_platform_name_prefix}" \
--format "table[no-heading](ENDPOINT_ID)" 2>/dev/null
model_endpoint_id = stdout[0]
print(f"model_endpoint_id: {model_endpoint_id}")
def predict(image_file):
# First, display the image, and then we run our prediction script.
display_image(image_file)
!python predict.py \
--project "{google_cloud_project}" \
--region "{region}" \
--model-endpoint-id "{model_endpoint_id}" \
--image-file "{image_file}"
# Species: dicerorhinus sumatrensis
predict('animals/0325/1529.jpg')
# Species: didelphis imperfecta
predict('animals/0667/1214.jpg')
# Species: tapirus indicus
predict('animals/0036/0072.jpg')
# Species: leopardus wiedii
predict('animals/0000/1705.jpg')
# Species: hemigalus derbyanus
predict('animals/0036/0566.jpg')
# Species: dasypus novemcinctus
predict('animals/0000/0425.jpg')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 🛎️ [DON’T PANIC] It’s safe to ignore the warnings.
Step2: ✏️ Entering project details
Step3: Click the run button ▶️ for the cells above.
Step4: 🛎 Make sure the region matches the region you chose for your Cloud Storage bucket.
Step5: <button>
Step6: <button>
Step8: <button>
Step9: Now run the following cell to get the model endpoint ID, which we need to make predictions.
Step10: <button>
|
4,449
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
A = np.array([[1,2,3]]); print(A)
B = np.array([[1,2,3],[4,5,6],[7,8,9]]); print(B)
C = np.zeros((2,1)); print(C)
D = np.ones((1,3)); print(D)
E = np.random.randn(3,3); print(E)
print(B)
B[0] #first row
B[:,0] #first column
B[0,0]
B[2,2]
B[:,0]
print(B.dtype)
print(B.shape)
print(B.ndim)
print(B.size)
B.tolist()
B.T
B.reshape(1,9)
B.flatten()
A.nonzero()
A
A.min()
A.max()
A.sum()
A.mean()
A.var()
A.std()
A
A*A
A**A
A/A
A-A
A+A
A<B
A==B
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Basic Arrays
Step2: Array Indexing
Step3: Array Attributes
Step4: Array Methods
Step5: Array Calculations
Step6: Array Arithmetic
|
4,450
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
from scipy import integrate,stats
def bekkers(x, a, m, d):
p = a*np.exp((-1*(x**(1/3) - m)**2)/(2*d**2))*x**(-2/3)
return(p)
range_start = 1
range_end = 10
estimated_a, estimated_m, estimated_d = 1,1,1
sample_data = [1.5,1.6,1.8,2.1,2.2,3.3,4,6,8,9]
def bekkers_cdf(x,a,m,d,range_start,range_end):
values = []
for value in x:
integral = integrate.quad(lambda k: bekkers(k,a,m,d),range_start,value)[0]
normalized = integral/integrate.quad(lambda k: bekkers(k,a,m,d),range_start,range_end)[0]
values.append(normalized)
return np.array(values)
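# Quick illustrative check (not part of the original analysis): the numerically
# normalized CDF should be ~0 at range_start and ~1 at range_end.
check_cdf = bekkers_cdf(np.array([range_start, range_end]), estimated_a, estimated_m, estimated_d, range_start, range_end)
print(check_cdf)  # expected roughly [0., 1.]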
s, p_value = stats.kstest(sample_data, lambda x: bekkers_cdf(x, estimated_a, estimated_m, estimated_d, range_start,range_end))
if p_value >= 0.05:
result = False
else:
result = True
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
4,451
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import requests
from urllib.parse import quote
from artist_api import *
artists_df = pd.read_csv('artists.dat', sep='\t', header=0, index_col=0, skipinitialspace=True)
artists_df.head()
artists_df['mbid'] = artists_df.apply(parse_artists, axis=1)
artists_df_mbid = pd.read_csv('artist_mbid_codes.csv', sep='\t', header=0, index_col=0, skipinitialspace=True)
artists_df_mbid.head()
print("The number of artists without an mbid code :",\
len(artists_df_mbid[artists_df_mbid.mbid=='notfound']) + len(artists_df_mbid[artists_df_mbid.mbid.isnull()]))
print("The total number of artists we have :",len(artists_df))
artists_df_mbid = artists_df_mbid[~((artists_df_mbid.mbid =='notfound'))]
artists_df_mbid = artists_df_mbid[~(artists_df_mbid.mbid.isnull())]
len(artists_df_mbid)
artists_df_mbid['similar_artists'] = artists_df_mbid.apply(similar_artists, axis=1)
columns=['ArtistID','Artist','Similar_artists','Weight']
artist_artist = pd.DataFrame(columns=columns)
for index, row in artists_df_mbid.iterrows():
# main artist id
artist_id = index
# main artist name
artist_name = row['name']
# list of dictionaries of similar artists information
similar_dict = row['similar_artists']
# the number of similar artists needed
range_used = 10
try:
# Incase list contained less than 10 artists, set the range to that number
if (len(similar_dict)<10):
range_used = len(similar_dict)
except:
continue
for i in range(range_used):
# Appending the information obtained to a new dataframe
org_artist_name = artist_name
similar_artist_name = similar_dict[i]['name']
similar_artist_match = similar_dict[i]['match']
artist_artist = artist_artist.append({'ArtistID':artist_id,'Artist': artist_name,'Similar_artists':similar_artist_name,'Weight':similar_artist_match}, ignore_index=True)
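# (Assumption: this is how the file reloaded in the next cell was produced.)
# Persist the artist-artist edge list so it can be read back later without
# re-querying the Last.fm API.
artist_artist.to_csv('artist_artist_connections.csv', sep='\t')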
artist_artist_connections = pd.read_csv('artist_artist_connections.csv', sep='\t', header=0, index_col=0, skipinitialspace=True)
artist_artist_connections.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The artist dataset contains ids, Artist names, Artist url, and Artist pictureURL.
Step2: In this notebook we extract the unique mbid code identifiers of each artist using the api.
Step3: As mentioned above, there are cases where no mbid code is returned for an artist, so we filter the dataframe to keep only the artists whose mbid code was fetched.
Step4: Now that we have the codes for the artists, we can extract all the similar artists to them.
Step5: For each artist, 100 similar artists are returned. To decrease the complexity of the problem and reduce the computational time, we only extracted the top 10 similar artists and saved them into a dataframe that contains the artist, artist id, the top 10 similar artists, and their corresponding weights.
Step6: Sample output
|
4,452
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.cross_validation import train_test_split
X, y, true_coefficient = make_regression(n_samples=80, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
print(X_train.shape)
print(y_train.shape)
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
from plots import plot_linear_svc_regularization
plot_linear_svc_regularization()
from sklearn.datasets import make_blobs
plt.figure()
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y)
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8)
y % 2
# %load solutions/linear_models.py
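# A minimal sketch of one possible solution (assumption: the exercise asks you
# to fit a linear classifier on the binary labels y % 2 and compare models;
# the actual reference solution lives in solutions/linear_models.py).
from sklearn.linear_model import LogisticRegression
y_binary = y % 2
logreg = LogisticRegression().fit(X, y_binary)
print("LogisticRegression accuracy: %f" % logreg.score(X, y_binary))
linear_svc = LinearSVC().fit(X, y_binary)
print("LinearSVC accuracy: %f" % linear_svc.score(X, y_binary))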
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Regression
Step2: Ridge Regression (L2 penalty)
Step3: Lasso (L1 penalty)
Step4: Linear models for classification
Step5: Multi-Class linear classification
Step6: Exercises
|
4,453
|
<ASSISTANT_TASK:>
Python Code:
import scipy.optimize
import numpy as np
np.random.seed(42)
a = np.random.rand(3,5)
x_true = np.array([10, 13, 5, 8, 40])
y = a.dot(x_true ** 2)
x0 = np.array([2, 3, 1, 4, 20])
def residual_ans(x, a, y):
s = ((y - a.dot(x**2))**2).sum()
return s
out = scipy.optimize.minimize(residual_ans, x0=x0, args=(a, y), method= 'L-BFGS-B').x
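# Quick sanity check (illustrative, not part of the original task): the
# residual at the returned optimum should be close to zero if L-BFGS-B converged.
print(residual_ans(out, a, y))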
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
4,454
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras import layers
from keras import optimizers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
    """
    Implementation of the HappyModel.
    Arguments:
    input_shape -- shape of the images of the dataset
    Returns:
    model -- a Model() instance in Keras
    """
### START CODE HERE ###
# Feel free to use the suggested outline in the text above to get started, and run through the whole
# exercise (including the later portions of this notebook) once. The come back also try out other
# network architectures as well.
# Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
X_input = Input(input_shape)
# CONV -> BN -> RELU Block applied to X
X = ZeroPadding2D((1, 1))(X_input)
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
X = MaxPooling2D((2, 2), name='max_pool0')(X)
# CONV -> BN -> RELU Block applied to X
X = ZeroPadding2D((1, 1))(X)
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv1')(X)
X = BatchNormalization(axis = 3, name = 'bn1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((2, 2), name='max_pool1')(X)
# CONV -> BN -> RELU Block applied to X
X = ZeroPadding2D((1, 1))(X)
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv2')(X)
X = BatchNormalization(axis = 3, name = 'bn2')(X)
X = Activation('relu')(X)
X = MaxPooling2D((2, 2), name='max_pool2')(X)
# FC
X = Flatten()(X)
X = Dense(1024, kernel_initializer='he_normal', activation='relu', name='fc1')(X)
X = Dense(1, kernel_initializer='he_normal', activation='sigmoid', name='fc2')(X)
model = Model(inputs=X_input, outputs=X, name='HappyModel')
### END CODE HERE ###
return model
### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train.shape[1:])
### END CODE HERE ###
### START CODE HERE ### (1 line)
happyModel.compile(optimizer="adam", loss="binary_crossentropy", metrics = ["accuracy"])
### END CODE HERE ###
### START CODE HERE ### (1 line)
happyModel.fit(x=X_train, y=Y_train, epochs=20, batch_size=32, validation_data=(X_test, Y_test))
### END CODE HERE ###
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x=X_test, y=Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step3: Details of the "Happy" dataset
Step4: You have now built a function to describe your model. To train and test this model, there are four steps in Keras
Step5: Exercise
Step6: Exercise
Step7: Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
Step8: If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets. To pass this assignment, you have to get at least 75% accuracy.
Step9: 5 - Other useful functions in Keras (Optional)
|
4,455
|
<ASSISTANT_TASK:>
Python Code:
# Install tflearn
import os
os.system("sudo pip install tflearn")
import numpy as np
import pandas as pd
import copy
from matplotlib import pyplot as plt
%matplotlib inline
# Temporarily load from np arrays
chi_photos_np = np.load('chi_photos_np_0.03_compress.npy')
lars_photos_np = np.load('lars_photos_np_0.03_compress.npy')
# View shape of numpy array
chi_photos_np.shape
# Set width var
width = chi_photos_np.shape[-1]
width
# Try out scaler on a manually set data (min of 0, max of 255)
from sklearn.preprocessing import MinMaxScaler
# Set test data list to train on (min of 0, max of 255)
test_list = np.array([0, 255]).reshape(-1, 1)
test_list
# Initialize scaler
scaler = MinMaxScaler()
# Fit test list
scaler.fit(test_list)
chi_photos_np.reshape(-1, width, width, 1).shape
# Reshape to prepare for scaler
chi_photos_np_flat = chi_photos_np.reshape(1, -1)
chi_photos_np_flat[:10]
# Scale
chi_photos_np_scaled = scaler.transform(chi_photos_np_flat)
chi_photos_np_scaled[:10]
# Reshape to prepare for scaler
lars_photos_np_flat = lars_photos_np.reshape(1, -1)
lars_photos_np_scaled = scaler.transform(lars_photos_np_flat)
# Reshape
chi_photos_reshaped = chi_photos_np_scaled.reshape(-1, width, width, 1)
lars_photos_reshaped = lars_photos_np_scaled.reshape(-1, width, width, 1)
print('{} has shape: {}'. format('chi_photos_reshaped', chi_photos_reshaped.shape))
print('{} has shape: {}'. format('lars_photos_reshaped', lars_photos_reshaped.shape))
# Create copy of chi's photos to start populating x_input
x_input = copy.deepcopy(chi_photos_reshaped)
print('{} has shape: {}'. format('x_input', x_input.shape))
# Concatentate lars' photos to existing x_input
x_input = np.append(x_input, lars_photos_reshaped, axis = 0)
print('{} has shape: {}'. format('x_input', x_input.shape))
# Create label arrays
y_chi = np.array([[1, 0] for i in chi_photos_reshaped])
y_lars = np.array([[0, 1] for i in lars_photos_reshaped])
print('{} has shape: {}'. format('y_chi', y_chi.shape))
print('{} has shape: {}'. format('y_lars', y_lars.shape))
# Preview the first few elements
y_chi[:5]
y_lars[:5]
# Create copy of chi's labels to start populating y_input
y_input = copy.deepcopy(y_chi)
print('{} has shape: {}'. format('y_input', y_input.shape))
# Concatentate lars' labels to existing y_input
y_input = np.append(y_input, y_lars, axis = 0)
print('{} has shape: {}'. format('y_input', y_input.shape))
# TFlearn libraries
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
# sentdex's code to build the neural net using tflearn
# Input layer --> conv layer w/ max pooling --> conv layer w/ max pooling --> fully connected layer --> output layer
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 10, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 3, 10, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
# Import library
from sklearn.model_selection import train_test_split
print(x_input.shape)
print(y_input.shape)
# Perform train test split
x_train, x_test, y_train, y_test = train_test_split(x_input, y_input, test_size = 0.1, stratify = y_input)
x_train = np.array(x_train, dtype = np.float64)
x_test = np.array(x_test, dtype = np.float64)
y_train = np.array(y_train, dtype = np.float64)
y_test = np.array(y_test, dtype = np.float64)
# Train with data
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 10,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
import six
def display_convolutions(model, layer, padding=4, filename=''):
if isinstance(layer, six.string_types):
vars = tflearn.get_layer_variables_by_name(layer)
variable = vars[0]
else:
variable = layer.W
data = model.get_weights(variable)
# N is the total number of convolutions
N = data.shape[2] * data.shape[3]
print('There are {} filters in {}'.format(N, layer))
# Ensure the resulting image is square
filters_per_row = int(np.ceil(np.sqrt(N)))
# Assume the filters are square
filter_size = data.shape[0]
# Size of the result image including padding
result_size = filters_per_row * (filter_size + padding) - padding
# Initialize result image to all zeros
result = np.zeros((result_size, result_size))
# Tile the filters into the result image
filter_x = 0
filter_y = 0
for n in range(data.shape[3]):
for c in range(data.shape[2]):
if filter_x == filters_per_row:
filter_y += 1
filter_x = 0
for i in range(filter_size):
for j in range(filter_size):
result[filter_y * (filter_size + padding) + i, filter_x * (filter_size + padding) + j] = \
data[i, j, c, n]
filter_x += 1
# Normalize image to 0-1
min = result.min()
max = result.max()
result = (result - min) / (max - min)
# Plot figure
plt.figure(figsize=(10, 10))
plt.axis('off')
plt.imshow(result, cmap='gray', interpolation='nearest')
# Save plot if filename is set
if filename != '':
plt.savefig(filename, bbox_inches='tight', pad_inches=0)
plt.show()
# Display first convolutional layer filters
display_convolutions(model, 'conv_1')
# Display second convolutional layer filters
display_convolutions(model, 'conv_2')
import tensorflow as tf
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 3, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 3, 3, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 6,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
display_convolutions(model, 'conv_2')
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 6,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
display_convolutions(model, 'conv_2')
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_3')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_3')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 10,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
display_convolutions(model, 'conv_2')
display_convolutions(model, 'conv_3')
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.001, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 100,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 9, 30, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.001, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 100,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Feature Building
Step2: Scaling Inputs
Step3: Reshaping 3D Array To 4D Array
Step4: Putting It All Together
Step5: Preparing Labels
Step6: CNN #1 - Less Convolutional Filters
Step7: Train Test Split
Step8: Training
Step9: View Convolutional Filters
Step10: Hmm... they could potentially represent some part of someone's face, but it's still a bit too pixelated to tell... What if I go with a smaller filter size?
Step11: First of all, I had to import tensorflow and add in the `with tf.Graph().as_default():` block (visible in the code above) so that each model gets built in its own fresh graph.
Step12: Interesting, it still doesn't quite make sense to me, but it's starting to take some type of shape in the second-level filters. In that bottom-right one in the second-layer filters, I can almost even see a face if I squint... Or maybe that's just my hopes and dreams deceiving my eyes right before me. I'm not even sure if we'd be able to really make any sense of the second-layer filters because they would be filtering on the activations / outputs of the first-layer filters. Only the first-layer filter acts on the original image itself. It's plausible that one of these may have the silhouette of a person because my filter size is the size of the entire image. There theoretically aren't even any strides happening here.
Step13: My first thought... how come the third convolution layer only has 9 filters as well? Each output from the second layer (9 outputs) should be convolved with the 3 third-layer convolution filters, creating 27 outputs, no? That seems to be what happened between convolutional layers 1 and 2, but I'm not sure why the third layer doesn't follow this logic... I guess that's what I get for using a method I'm not too familiar with, on a Python package I'm not too familiar with, while viewing the filters through copied-and-pasted code I'm not too familiar with. A quick way to check the actual filter shapes is sketched after this list.
Step14: At about 0.15s per epoch, plus overhead time, this took us around 3 mins to train. Not too shabby! The top two filters don't look too smooth to me, but the filter on the bottom, call me crazy, kind of has an outline of somebody with long hair. Not quite as crisp as the mock-up I drew
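To settle the filter-count question above, the shape of each convolutional layer's weight tensor can be printed directly. This is only a sketch: it assumes the tflearn `model` object built above, and the layer names 'conv_2' and 'conv_3' are assumptions (only 'conv_1' appears in the code shown here):
import tflearn
for layer_name in ['conv_1', 'conv_2', 'conv_3']:
    layer_vars = tflearn.variables.get_layer_variables_by_name(layer_name)  # empty list if no such layer
    if layer_vars:
        weights = model.get_weights(layer_vars[0])
        # tflearn conv_2d weights are shaped (filter_height, filter_width, input_channels, n_filters)
        print(layer_name, weights.shape)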
|
4,456
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import imageio
import pandas as pd
import seaborn as sns
sns.set(style='ticks')
sys.path.append('../scripts/')
import bicorr as bicorr
import bicorr_e as bicorr_e
import bicorr_plot as bicorr_plot
import bicorr_sums as bicorr_sums
import bicorr_math as bicorr_math
%load_ext autoreload
%autoreload 2
det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv')
det_df.head()
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists()
dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
num_fissions = 2194651200.00
e_min = 0.62
e_max = 12
singles_hist_e_n, e_bin_edges, dict_det_to_index, dict_index_to_det = bicorr_e.load_singles_hist_both(filepath = '../analysis/Cf072115_to_Cf072215b/datap/',plot_flag=True, show_flag=True)
bicorr_plot.plot_singles_hist_e_n(singles_hist_e_n, e_bin_edges, show_flag=False, clear_flag=False)
for e in [e_min, e_max]:
plt.axvline(e,c='r')
plt.show()
singles_hist_e_n.shape
bhm_e, e_bin_edges, note = bicorr_e.load_bhm_e('../analysis/Cf072115_to_Cf072215b/datap')
bhm_e.shape
bhp_e = np.zeros((len(det_df),len(e_bin_edges)-1,len(e_bin_edges)-1))
bhp_e.shape
for index in det_df.index.values: # index is same as in `bhm`
bhp_e[index,:,:] = bicorr_e.build_bhp_e(bhm_e,e_bin_edges,pair_is=[index])[0]
bicorr_plot.bhp_e_plot(np.sum(bhp_e,axis=0),e_bin_edges, show_flag=True)
det_df.head()
det_df = bicorr_sums.init_det_df_sums(det_df)
det_df.head()
singles_e_df = bicorr_sums.init_singles_e_df(dict_index_to_det)
singles_e_df.head()
bhp_e.shape
det_df, energies_real = bicorr_sums.fill_det_df_doubles_e_sums(det_df, bhp_e, e_bin_edges, e_min, e_max, True)
det_df.head()
bicorr_plot.counts_vs_angle_all(det_df, save_flag=False)
singles_e_df.head()
bicorr_plot.Sd_vs_ch_all(singles_e_df, save_flag=False)
det_df = bicorr_sums.fill_det_df_singles_sums(det_df, singles_e_df)
det_df.head()
plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(det_df['d1'],det_df['d2'],s=13,marker='s',
edgecolor = 'none', c=det_df['Cd'],cmap='viridis')
ax.set_xlabel('d1 channel')
ax.set_ylabel('d2 channel')
ax.set_title('Doubles counts')
cbar = plt.colorbar(sc,fraction=0.043,pad=0.1)
cbar.set_label('Doubles counts')
plt.show()
plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(det_df['d1'],det_df['d2'],s=13,marker='s',
edgecolor = 'none', c=det_df['Sd1'],cmap='viridis')
ax.set_xlabel('d1 channel')
ax.set_ylabel('d2 channel')
ax.set_title('D1 singles counts')
cbar = plt.colorbar(sc,fraction=0.043,pad=0.1)
cbar.set_label('D1 singles counts')
plt.show()
plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(det_df['d1'],det_df['d2'],s=13,marker='s',
edgecolor = 'none', c=det_df['Sd2'],cmap='viridis')
ax.set_xlabel('d1 channel')
ax.set_ylabel('d2 channel')
ax.set_title('D2 singles counts')
cbar = plt.colorbar(sc,fraction=0.043,pad=0.1)
cbar.set_label('D2 Singles counts')
plt.show()
det_df = bicorr_sums.calc_det_df_W(det_df)
det_df.head()
plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(det_df['d1'],det_df['d2'],s=13,marker='s',
edgecolor = 'none', c=det_df['W'],cmap='viridis')
ax.set_xlabel('d1 channel')
ax.set_ylabel('d2 channel')
ax.set_title('W')
cbar = plt.colorbar(sc,fraction=0.043,pad=0.1)
cbar.set_label('W')
plt.show()
chIgnore = [1,17,33]
det_df_ignore = det_df[~det_df['d1'].isin(chIgnore) & ~det_df['d2'].isin(chIgnore)]
bicorr_plot.W_vs_angle_all(det_df_ignore, save_flag=False)
bicorr_plot.W_vs_angle_all?
angle_bin_edges = np.arange(8,190,10)
print(angle_bin_edges)
by_angle_df = bicorr_sums.condense_det_df_by_angle(det_df_ignore, angle_bin_edges)
by_angle_df.head()
bicorr_plot.W_vs_angle(det_df_ignore, by_angle_df, save_flag=False)
singles_e_df.to_csv('singles_e_df_filled.csv')
det_df.to_csv(r'det_df_e_filled.csv')
by_angle_df.to_csv(r'by_angle_e_df.csv')
det_df_filled = pd.read_csv(r'det_df_e_filled.csv',index_col=0)
det_df_filled.head()
chIgnore = [1,17,33]
det_df_ignore = det_df_filled[~det_df_filled['d1'].isin(chIgnore) & ~det_df_filled['d2'].isin(chIgnore)]
det_df_ignore.head()
singles_e_df_filled = pd.read_csv(r'singles_e_df_filled.csv',index_col=0)
singles_e_df_filled.head()
by_angle_e_df = pd.read_csv(r'by_angle_e_df.csv',index_col=0)
by_angle_e_df.head()
bicorr_plot.W_vs_angle(det_df_ignore, by_angle_e_df, save_flag=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load some data
Step2: Specify energy range
Step3: singles_hist_e_n.npz
Step4: Load bhp_nn_e for all pairs
Step5: Set up det_df columns and singles_df
Step6: Calculate and fill doubles sums
Step7: Calculate singles sums
Step8: Calculate W values
Step9: Condense to angle bin
Step10: Plot it
Step11: Save to disk
Step12: Reload
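The three scatter/colorbar cells above repeat the same plotting pattern; a small helper, sketched here under the assumption that `det_df` has been filled as above, reduces each of them to one call:
import matplotlib.pyplot as plt
def scatter_pairs(det_df, column, label):
    # scatter counts over (d1, d2) detector-channel pairs, colored by the chosen column
    plt.figure(figsize=(4,4))
    ax = plt.gca()
    sc = ax.scatter(det_df['d1'], det_df['d2'], s=13, marker='s',
                    edgecolor='none', c=det_df[column], cmap='viridis')
    ax.set_xlabel('d1 channel')
    ax.set_ylabel('d2 channel')
    ax.set_title(label)
    cbar = plt.colorbar(sc, fraction=0.043, pad=0.1)
    cbar.set_label(label)
    plt.show()
# e.g. scatter_pairs(det_df, 'Cd', 'Doubles counts') or scatter_pairs(det_df, 'W', 'W')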
|
4,457
|
<ASSISTANT_TASK:>
Python Code:
import nltk
import pandas as pd
import numpy as np
data = pd.read_csv("original_train_data.csv", header = None,delimiter = "\t", quoting=3,names = ["Polarity","TextFeed"])
#Data Visualization
data.head()
data_positive = data.loc[data["Polarity"]==1]
data_negative = data.loc[data["Polarity"]==0]
anomaly_data = pd.concat([data_negative.sample(n=10),data_positive,data_negative.sample(n=10)])
anomaly_data.Polarity.value_counts()
#Number of words per sentence
print ("No of words for sentence in train data",np.mean([len(s.split(" ")) for s in anomaly_data.TextFeed]))
import re
from sklearn.feature_extraction.text import CountVectorizer
nltk.download()
from nltk.stem.porter import PorterStemmer
''' this code is taken from
http://www.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html
'''
# a stemmer widely used
stemmer = PorterStemmer()
def stem_tokens(tokens, stemmer):
stemmed = []
for item in tokens:
stemmed.append(stemmer.stem(item))
return stemmed
def tokenize(text):
# remove non letters
text = re.sub("[^a-zA-Z]", " ", text)
# tokenize
tokens = nltk.word_tokenize(text)
# stem
stems = stem_tokens(tokens, stemmer)
return stems
#Max_Features selected as 90 - can be changed for the better trade-off
vector_data = CountVectorizer(
analyzer = 'word',
tokenizer = tokenize,
lowercase = True,
stop_words = 'english',
max_features = 90
)
#using only the "Text Feed" column to build the features
features = vector_data.fit_transform(anomaly_data.TextFeed.tolist())
#converting the data into the array
features = features.toarray()
features.shape
#printing the words in the vocabulary
vocab = vector_data.get_feature_names()
print (vocab)
# Sum up the counts of each vocabulary word
dist = np.sum(features, axis=0)
# For each, print the vocabulary word and the number of times it
# appears in the data set
a = zip(vocab,dist)
print (list(a))
from sklearn.cross_validation import train_test_split
#80:20 ratio
X_train, X_test, y_train, y_test = train_test_split(
features,
anomaly_data.Polarity,
train_size=0.80,
random_state=1234)
print ("Training data - positive and negative values")
print (pd.value_counts(pd.Series(y_train)))
print ("Testing data - positive and negative values")
print (pd.value_counts(pd.Series(y_test)))
from sklearn.svm import SVC
clf = SVC()
clf.fit(X=X_train,y=y_train)
wclf = SVC(class_weight={0: 20})
wclf.fit(X=X_train,y=y_train)
y_pred = clf.predict(X_test)
y_pred_weighted = wclf.predict(X_test)
from sklearn.metrics import classification_report
print ("Basic SVM metrics")
print(classification_report(y_test, y_pred))
print ("Weighted SVM metrics")
print(classification_report(y_test, y_pred_weighted))
from sklearn.metrics import confusion_matrix
print ("Basic SVM Confusion Matrix")
print (confusion_matrix(y_test, y_pred))
print ("Weighted SVM Confusion Matrix")
print (confusion_matrix(y_test, y_pred_weighted))
tn, fp, fn, tp = confusion_matrix(y_test, y_pred_weighted).ravel()
(tn, fp, fn, tp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Preparation
Step2: Data pre-processing - text analytics to create a corpus
Step3: The below implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.
Step4: Fit_Transform
Step5: Train-Test Split
Step6: A text polarity depends on what words appear in that text, discarding any grammar or word order but keeping multiplicity.
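A tiny illustration of that bag-of-words idea, using made-up sentences rather than the review data (the example sentences and the `demo` vectorizer are hypothetical, not part of the notebook above):
from sklearn.feature_extraction.text import CountVectorizer
demo = CountVectorizer()
bow = demo.fit_transform(["good good movie", "movie good", "bad movie"])
print(demo.get_feature_names())  # vocabulary, e.g. ['bad', 'good', 'movie']
print(bow.toarray())             # counts keep multiplicity; word order is discarded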
|
4,458
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('ner_dataset.csv.gz', compression='gzip', encoding='ISO-8859-1')
df.info()
df.T
df = df.fillna(method='ffill')
df.info()
df.T
df['Sentence #'].nunique(), df.Word.nunique(), df.POS.nunique(), df.Tag.nunique()
df.Tag.value_counts()
def word2features(sent, i):
word = sent[i][0]
postag = sent[i][1]
features = {
'bias': 1.0,
'word.lower()': word.lower(),
'word[-3:]': word[-3:],
'word[-2:]': word[-2:],
'word.isupper()': word.isupper(),
'word.istitle()': word.istitle(),
'word.isdigit()': word.isdigit(),
'postag': postag,
'postag[:2]': postag[:2],
}
if i > 0:
word1 = sent[i-1][0]
postag1 = sent[i-1][1]
features.update({
'-1:word.lower()': word1.lower(),
'-1:word.istitle()': word1.istitle(),
'-1:word.isupper()': word1.isupper(),
'-1:postag': postag1,
'-1:postag[:2]': postag1[:2],
})
else:
features['BOS'] = True
if i < len(sent)-1:
word1 = sent[i+1][0]
postag1 = sent[i+1][1]
features.update({
'+1:word.lower()': word1.lower(),
'+1:word.istitle()': word1.istitle(),
'+1:word.isupper()': word1.isupper(),
'+1:postag': postag1,
'+1:postag[:2]': postag1[:2],
})
else:
features['EOS'] = True
return features
def sent2features(sent):
return [word2features(sent, i) for i in range(len(sent))]
def sent2labels(sent):
return [label for token, postag, label in sent]
agg_func = lambda s: [(w, p, t) for w, p, t in zip(s['Word'].values.tolist(),
s['POS'].values.tolist(),
s['Tag'].values.tolist())]
grouped_df = df.groupby('Sentence #').apply(agg_func)
print(grouped_df[grouped_df.index == 'Sentence: 1'].values)
grouped_df.shape
sentences = [s for s in grouped_df]
sentences[0]
sent2features(sentences[0][5:7])
sent2labels(sentences[0][5:7])
from sklearn.model_selection import train_test_split
import numpy as np
X = np.array([sent2features(s) for s in sentences])
y = np.array([sent2labels(s) for s in sentences])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
X_train.shape, X_test.shape
!pip install sklearn-crfsuite
import sklearn_crfsuite
crf = sklearn_crfsuite.CRF(algorithm='lbfgs',
c1=0.1,
c2=0.1,
max_iterations=100,
all_possible_transitions=True,
verbose=True)
crf.fit(X_train, y_train)
from sklearn.externals import joblib
#joblib.dump(crf, 'ner_model.pkl')
crf = joblib.load('ner_model.pkl')
y_pred = crf.predict(X_test)
print(y_pred[0])
print(y_test[0])
from sklearn_crfsuite import metrics as crf_metrics
labels = list(crf.classes_)
labels.remove('O')
print(crf_metrics.flat_classification_report(y_test, y_pred, labels=labels))
import re
text = """Three more countries have joined an “international grand committee” of parliaments, adding to calls for
Facebook’s boss, Mark Zuckerberg, to give evidence on misinformation to the coalition. Brazil, Latvia and Singapore
bring the total to eight different parliaments across the world, with plans to send representatives to London on 27
November with the intention of hearing from Zuckerberg. Since the Cambridge Analytica scandal broke, the Facebook chief
has only appeared in front of two legislatures: the American Senate and House of Representatives, and the European parliament.
Facebook has consistently rebuffed attempts from others, including the UK and Canadian parliaments, to hear from Zuckerberg.
He added that an article in the New York Times on Thursday, in which the paper alleged a pattern of behaviour from Facebook
to “delay, deny and deflect” negative news stories, “raises further questions about how recent data breaches were allegedly
dealt with within Facebook.”"""
text = re.sub(r'\n', '', text)
text
import spacy
from spacy import displacy
nlp = spacy.load('en')
text_nlp = nlp(text)
displacy.render(text_nlp, style='ent', jupyter=True)
import nltk
text_tokens = nltk.word_tokenize(text)
text_pos = nltk.pos_tag(text_tokens)
text_pos[:10]
features = [sent2features(text_pos)]
features[0][0]
labels = crf.predict(features)
doc_labels = labels[0]
doc_labels[10:20]
text_ner = [(token, tag) for token, tag in zip(text_tokens, doc_labels)]
print(text_ner)
named_entities = []
temp_entity_name = ''
temp_named_entity = None
for term, tag in text_ner:
if tag != 'O':
temp_entity_name = ' '.join([temp_entity_name, term]).strip()
temp_named_entity = (temp_entity_name, tag)
else:
if temp_named_entity:
named_entities.append(temp_named_entity)
temp_entity_name = ''
temp_named_entity = None
import pandas as pd
pd.DataFrame(named_entities, columns=['Entity', 'Tag'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have 47959 sentences that contain 35178 unique words.
Step2: Conditional Random Fields
Step3: Prepare Train and Test Datasets
Step4: Building Models with sklearn-crfsuite
Step5: Train the model!
Step6: Use the following to load our pre-trained model if training above takes a lot of time
Step7: Model Evaluation
Step9: We have intentionally left out the 'O' (Other) tag to understand the performance of the model on the remaining tags. The above evaluation statistics showcase a model which seems to have learnt the transitions quite well, giving us an overall F1-score of 85%! A one-line weighted-F1 check is sketched after this list.
Step10: NER Tagging with SpaCy
Step11: Pipeline Step 1
Step12: Pipeline Step 2
Step13: Pipeline Step 3
Step14: Pipeline Step 4
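The weighted-F1 figure mentioned in Step 9 can be reproduced in one line with sklearn-crfsuite's metrics module; this sketch assumes the y_test, y_pred, and `labels` list (with 'O' removed) defined above:
from sklearn_crfsuite import metrics as crf_metrics
weighted_f1 = crf_metrics.flat_f1_score(y_test, y_pred, average='weighted', labels=labels)
print('Weighted F1 (excluding O): %.3f' % weighted_f1)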
|
4,459
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import IFrame
IFrame('http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', width=300, height=200)
# import load_iris function from datasets module
from sklearn.datasets import load_iris
# save "bunch" object containing iris dataset and its attributes
iris = load_iris()
type(iris)
# print the iris data
print(iris.data)
# print the names of the four features
print(iris.feature_names)
# print integers representing the species of each observation
print(iris.target)
# print the encoding scheme for species: 0 = setosa, 1 = versicolor, 2 = virginica
print(iris.target_names)
# check the types of the features and response
print(type(iris.data))
print(type(iris.target))
# check the shape of the features (first dimension = number of observations, second dimensions = number of features)
print(iris.data.shape)
# check the shape of the response (single dimension matching the number of observations)
print(iris.target.shape)
# store feature matrix in "X"
X = iris.data
# store response vector in "y"
y = iris.target
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Machine learning on the iris dataset
Step2: Machine learning terminology
Step3: Each value we are predicting is the response (also known as the target, outcome, or label)
Step4: Resources
|
4,460
|
<ASSISTANT_TASK:>
Python Code:
from bedrock.client.client import BedrockAPI
import requests
import pandas
import pprint
SERVER = "http://localhost:81/"
api = BedrockAPI(SERVER)
resp = api.ingest("opals.spreadsheet.Spreadsheet.Spreadsheet")
if resp.json():
print("Spreadsheet Opal Installed!")
else:
print("Spreadsheet Opal Not Installed!")
resp = api.analytic('opals.logit2.Logit2.Logit2')
if resp.json():
print("Logit2 Opal Installed!")
else:
print("Logit2 Opal Not Installed!")
resp = api.analytic('opals.select-from-dataframe.SelectByCondition.SelectByCondition')
if resp.json():
print("Select-from-dataframe Opal Installed!")
else:
print("Select-from-dataframe Opal Not Installed!")
resp = api.analytic('opals.summarize.Summarize.Summarize')
if resp.json():
print("Summarize Opal Installed!")
else:
print("Summarize Opal Not Installed!")
filepath = 'Rand2011PNAS_cooperation_data.csv'
datafile = pandas.read_csv('Rand2011PNAS_cooperation_data.csv')
datafile.head(10)
ingest_id = 'opals.spreadsheet.Spreadsheet.Spreadsheet'
resp = api.put_source('Rand2011', ingest_id, 'default', {'file': open(filepath, "rb")})
if resp.status_code == 201:
source_id = resp.json()['src_id']
print('Source {0} successfully uploaded'.format(filepath))
else:
try:
print("Error in Upload: {}".format(resp.json()['msg']))
except Exception:
pass
try:
source_id = resp.json()['src_id']
print("Using existing source. If this is not the desired behavior, upload with a different name.")
except Exception:
print("No existing source id provided")
available_sources = api.list("dataloader", "sources").json()
s = next(filter(lambda source: source['src_id'] == source_id, available_sources),'None')
if s != 'None':
pp = pprint.PrettyPrinter()
pp.pprint(s)
else:
print("Could not find source")
resp = api.create_matrix(source_id, 'rand_mtx')
mtx = resp[0]
matrix_id = mtx['id']
print(mtx)
resp
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = []
summary_mtx = api.run_analytic(analytic_id, mtx, 'rand_mtx_summary', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(matrix_id, summary_mtx['id'], 'matrix.csv')
output
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = [
{"attrname":"colname","value":"round_num"},
{"attrname":"comparator","value":"=="},
{"attrname":"value","value":"1"}
]
filtered_mtx = api.run_analytic(analytic_id, mtx, 'rand_round1_only', input_data=inputData, parameter_data=paramsData)
filtered_mtx
output = api.download_results_matrix('rand_mtx', 'rand_round1_only', 'matrix.csv', remote_header_file='features.txt')
output
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ condition"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, mtx, 'rand_logit2_step3', input_data=inputData, parameter_data=paramsData)
result_mtx
coef_table = api.download_results_matrix('rand_mtx', 'rand_logit2_step3', 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = [
{"attrname":"groupby","value":"condition,round_num"},
{"attrname":"columns","value":"decision0d1c"}
]
base_mtx = api.get_matrix_metadata('Rand2011','rand_mtx')
summary_mtx = api.run_analytic(analytic_id, base_mtx,'summarize_grouped', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(base_mtx['id'], summary_mtx['id'], 'matrix.csv')
output
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ round_num"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_logit2_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"colname","value":"num_neighbors"},
{"attrname":"comparator","value":">"},
{"attrname":"value","value":"0"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_has_neighbors', input_data=inputData, parameter_data=paramsData)
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"groupby","value":"condition,round_num"},
{"attrname":"columns","value":"decision0d1c"}
]
summary_mtx = api.run_analytic(analytic_id, filtered_mtx,'summarize_grouped', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(base_mtx['id'], summary_mtx['id'], 'matrix.csv')
output
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ round_num"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step2', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ fluid_dummy*round_num"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_logit2_step4', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"colname","value":"round_num"},
{"attrname":"comparator","value":">="},
{"attrname":"value","value":"7"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_round7', input_data=inputData, parameter_data=paramsData)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ condition"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step5', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(fluid_dummy)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step6', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(condition, Treatment(reference='Random'))"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_logit2_step7', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
pandas.set_option('display.max_colwidth', -1)
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(condition, Treatment(reference='Static'))"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_logit2_step8', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
pandas.set_option('display.max_colwidth', -1)
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(condition, Treatment(reference='Random'))"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step9', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(condition, Treatment(reference='Static'))"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step10', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"colname","value":"condition"},
{"attrname":"comparator","value":"=="},
{"attrname":"value","value":"Fluid"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_fluid_only', input_data=inputData, parameter_data=paramsData)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(num_neighbors)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step11', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test Connection to Bedrock Server
Step2: Check for Spreadsheet Opal
Step3: Check for logit2 Opal
Step4: Check for select-from-dataframe Opal
Step5: Check for summarize Opal
Step6: Step 2
Step7: Now Upload the source file to the Bedrock Server
Step8: Check available data sources for the CSV file
Step9: Create a Bedrock Matrix from the CSV Source
Step10: Look at basic statistics on the source data
Step11: Step 3
Step12: Check that Matrix is filtered
Step13: Step 4
Step14: Visualize the output of the analysis
Step15: Analysis
Step16: Compare round_num effect on decision
Step17: Consider only num_neighbors > 0
Step18: Summarize on filtered matrix
Step19: Compare round_num effect on decision only when there are neighbors
Step20: Compare effect of round_num and Fluid
Step21: Condition effect on decision for Round >= 7
Step22: Fluid Effect on decision for Round >= 7
Step23: Relevel on Random and Compare condition effect on decision
Step24: Relevel on Static and Compare condition effect on decision
Step25: Relevel on Random and round_num >= 7
Step26: Relevel on Static and round_num >= 7
Step27: Subset on Fluid Condition and look at effect of num_neighbors on decision
|
4,461
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
sys.path.insert(0, '/usr/hdp/2.6.0.3-8/spark2/python')
sys.path.insert(0, '/usr/hdp/2.6.0.3-8/spark2/python/lib/py4j-0.10.4-src.zip')
os.environ['SPARK_HOME'] = '/usr/hdp/2.6.0.3-8/spark2/'
os.environ['SPARK_CONF_DIR'] = '/etc/hadoop/synced_conf/spark2/'
os.environ['PYSPARK_PYTHON'] = '/software/anaconda3/4.2.0/bin/python'
import pyspark
conf = pyspark.SparkConf()
conf.setMaster("yarn")
conf.set("spark.driver.memory","4g")
conf.set("spark.executor.memory","60g")
conf.set("spark.num.executors","3")
conf.set("spark.executor.cores","12")
sc = pyspark.SparkContext(conf=conf)
sqlContext = pyspark.SQLContext(sc)
sqlContext
airlines = sqlContext.read.format("com.databricks.spark.csv")\
.option("header", "true")\
.option("inferschema", "true")\
.load("/repository/airlines/data/")\
.cache()
%%time
airlines.count()
%%time
airlines.count()
airlines.printSchema()
airlines.registerTempTable("airlines")
uniqueAirline = sqlContext.sql("SELECT DISTINCT UniqueCarrier \
FROM airlines")
uniqueAirline.show()
%%time
carrierFlightCount = sqlContext.sql("SELECT UniqueCarrier, COUNT(UniqueCarrier) AS FlightCount \
FROM airlines GROUP BY UniqueCarrier")
carrierFlightCount.show()
carriers = sqlContext.read.format("com.databricks.spark.csv")\
.option("header", "true")\
.option("inferschema", "true")\
.load("/repository/airlines/metadata/carriers.csv")\
.cache()
carriers.registerTempTable("carriers")
carriers.printSchema()
%%time
carrierFlightCountFullName = sqlContext.sql("SELECT c.Description, a.UniqueCarrier, COUNT(a.UniqueCarrier) AS FlightCount \
FROM airlines AS a \
INNER JOIN carriers AS c \
ON c.Code = a.UniqueCarrier \
GROUP BY a.UniqueCarrier, c.Description \
ORDER BY a.UniqueCarrier")
carrierFlightCountFullName.show()
%%time
avgDepartureDelay = sqlContext.sql("SELECT FIRST(c.Description), FIRST(a.UniqueCarrier), AVG(a.DepDelay) AS AvgDepDelay \
FROM airlines AS a \
INNER JOIN carriers AS c \
ON c.Code = a.UniqueCarrier \
GROUP BY a.UniqueCarrier \
ORDER BY a.UniqueCarrier")
avgDepartureDelay.show()
airlines.unpersist()
sc.stop()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Airlines Data
Step2: You can interact with a DataFrame via SQLContext using SQL statements by registering the DataFrame as a table, as sketched after this list
Step3: How many unique airlines are there?
Step4: Calculate how many flights completed by each carrier over time
Step5: How do you display full carrier names?
Step6: What is the average departure delay time for each airline?
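A minimal sketch of the register-then-query pattern from Step 2, assuming the `sqlContext` and `airlines` DataFrame built above:
airlines.registerTempTable('airlines')                       # expose the DataFrame to SQL
sqlContext.sql('SELECT COUNT(*) AS n FROM airlines').show()  # then query it like any table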
|
4,462
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import random
G = nx.Graph()
G.add_edge(0,5)
n = 15
labels={0:"0",5:"5"}
for i in range(0,30):
a,b = random.randint(0,n),random.randint(0,n)
G.add_edge(b,a)
labels[a]=str(a)
labels[b]=str(b)
pos=nx.spring_layout(G)
nx.draw_networkx_labels(G,pos,labels)
nx.draw(G,pos,node_color='#A0CBE2')
def enumerate_arcs(G):
for i,j in G.edges():
yield i,j
yield j,i
d = { 0:0 }
maj = 1
while maj > 0:
maj = 0
for i,j in enumerate_arcs(G):
if i in d:
x = d[i] + 1
if j in d:
if d[j] > x :
d[j] = x
maj += 1
else:
d[j] = x
maj += 1
labels={ g:"d({0})={1}".format(g,d[g]) for g in G.nodes() if g in d }
plt.figure(figsize=(12,4))
nx.draw_networkx_labels(G,pos,labels,font_size=10)
nx.draw(G,pos,node_color='#A0CBE2',node_size=2000)
import pandas
df = pandas.DataFrame( [ _ for _ in enumerate_arcs(G) ], columns=["v1","v2"])
df.head(n=3)
from pyensae.sql import Database
import os
dbfile = "graph_arcs.db3"
if os.path.exists(dbfile):
os.remove(dbfile) # on supprime la précédente base
db = Database.fill_sql_table(df, dbfile, "arc")
db.close()
def nolog (*l,**p):pass
db = Database(dbfile, LOG = nolog)
db.connect()
sql = """
SELECT A.v1,A.v2,B.v2 AS v3
FROM arc AS A INNER JOIN arc AS B
ON A.v2 == B.v1"""
db.execute_view(sql)[:5]
sql = """
SELECT A.v1,B.v2 AS v2 FROM arc AS A INNER JOIN arc AS B ON A.v2 == B.v1
UNION ALL
SELECT v1,v2 FROM arc
UNION ALL
SELECT v1,v1 FROM arc"""
db.execute_view(sql)[:5]
sql = """
SELECT v1,v2,MIN(d) AS d FROM (
SELECT A.v1,B.v2 AS v2, 2 AS d FROM arc AS A INNER JOIN arc AS B ON A.v2 == B.v1
UNION ALL
SELECT v1,v2, 1 AS d FROM arc
UNION ALL
SELECT v1,v1, 0 AS d FROM arc
)
GROUP BY v1,v2"""
db.execute_view(sql)[:5]
db.execute ("""
CREATE VIEW iter1 AS
SELECT v1,v2,MIN(d) AS d FROM (
SELECT A.v1,B.v2 AS v2, 2 AS d FROM arc AS A INNER JOIN arc AS B ON A.v2 == B.v1
UNION ALL
SELECT v1,v2, 1 AS d FROM arc
UNION ALL
SELECT v1,v1, 0 AS d FROM arc
)
GROUP BY v1,v2
""")
sql = """
SELECT v1,v2,MIN(d) AS d FROM (
SELECT A.v1,B.v2 AS v2, 2 AS d FROM iter1 AS A INNER JOIN arc AS B ON A.v2 == B.v1
UNION ALL
SELECT v1,v2, d FROM iter1
)
GROUP BY v1,v2
HAVING v1 == 0"""
db.execute_view(sql)
db.execute ("""CREATE VIEW iter2 AS
SELECT v1,v2,MIN(d) AS d FROM (
SELECT A.v1,B.v2 AS v2, A.d+1 AS d FROM iter1 AS A INNER JOIN arc AS B ON A.v2 == B.v1
UNION ALL
SELECT v1,v2, d FROM iter1
)
GROUP BY v1,v2 ;""")
db.execute ("""CREATE VIEW iter3 AS
SELECT v1,v2,MIN(d) AS d FROM (
SELECT A.v1,B.v2 AS v2, A.d+1 AS d FROM iter2 AS A INNER JOIN arc AS B ON A.v2 == B.v1
UNION ALL
SELECT v1,v2, d FROM iter2
)
GROUP BY v1,v2 ;""")
db.execute ("""CREATE VIEW iter4 AS
SELECT v1,v2,MIN(d) AS d FROM (
SELECT A.v1,B.v2 AS v2, A.d+1 AS d FROM iter3 AS A INNER JOIN arc AS B ON A.v2 == B.v1
UNION ALL
SELECT v1,v2, d FROM iter3
)
GROUP BY v1,v2 ;""")
sql = """
SELECT v1,v2,MIN(d) AS d FROM (
SELECT A.v1,B.v2 AS v2, A.d+1 AS d FROM iter4 AS A INNER JOIN arc AS B ON A.v2 == B.v1
UNION ALL
SELECT v1,v2, d FROM iter4
)
GROUP BY v1,v2
HAVING v1 == 0"""
db.execute_view(sql)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Suppose we have a graph $G(V,E)$ for which we want to determine the distance from every node to one particular node of the graph. Computing this distance is no problem for a small graph, but for a large graph it can turn out to be much harder. The point of this exercise is to show that Map Reduce concepts are not well suited to this task. For a large graph (several tens of millions of nodes), the computation is either too long to run on a single machine, or that machine simply cannot hold the whole graph. The computation then has to be spread over several machines. But let's start first with a simple, symmetric graph
Step2: For example, nodes 0 and 5 are connected by an edge, so their distance is 1. But not all nodes are directly connected to one another; sometimes you have to follow two or three edges to go from node 0 to another node. We compute this minimal distance $d(v,v')$ where $v$ and $v'$ are two nodes of the graph. For example, let's start from node 0 and explore the graph step by step. At the beginning, all we know is
Step3: Each value indicates the distance from a node to node 0. In this graph, at most three edges are needed to go from node 0 to any other node.
Step5: On Windows, you can inspect the database with the SQLiteSpy program. All that remains is to write the algorithm in SQL, following the one already written in Python. The for statement is replaced by a SELECT statement, the statement if j in d
Step7: And the following query contains all pairs of nodes $v1,v2$ such that $d(v1,v2) \leq 2$.
Step9: And now we add the distance and then keep the minimal distance for each pair of nodes
Step11: The maximal distance found with the first algorithm is greater than 3, so one more iteration is needed. Before that, we create a VIEW to avoid copying too much SQL code
Step13: The next iteration is almost identical to the previous one. We replace one of the arc tables by iter1 and then keep only the distances relative to node 0
Step18: We add more iterations
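Rather than writing each new iterN view by hand as in the cells above, the iteration can be generated in a loop. This is only a sketch, assuming the same `db` object and the `arc` table and `iter1` view created above (and that the iter2-iter4 views have not been created yet):
def add_iteration(db, k):
    # create view iter{k+1} from iter{k}, mirroring the hand-written views above
    sql = """CREATE VIEW iter{nxt} AS
             SELECT v1, v2, MIN(d) AS d FROM (
                 SELECT A.v1, B.v2 AS v2, A.d+1 AS d
                 FROM iter{cur} AS A INNER JOIN arc AS B ON A.v2 == B.v1
                 UNION ALL
                 SELECT v1, v2, d FROM iter{cur}
             )
             GROUP BY v1, v2 ;""".format(cur=k, nxt=k + 1)
    db.execute(sql)

for k in range(1, 4):  # builds iter2, iter3, iter4, as in the hand-written cells
    add_iteration(db, k)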
|
4,463
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,matplotlib,theano,keras
from IPython.display import Image
%matplotlib inline
import theano
from theano import tensor as T
import numpy as np
# define expression
# which can be visualized as a graph
x1 = T.scalar()
w1 = T.scalar()
w0 = T.scalar()
z1 = w1 * x1 + w0
# compile
net_input = theano.function(inputs=[w1, x1, w0], outputs=z1)
# execute
answer = net_input(2.0, 1.0, 0.5)
print(answer)
answer
# define
b = T.scalar('b')
x = T.vector('x')
W = T.matrix('W')
y = x.dot(W.transpose())
z = W.dot(x) + b
# similar to python function
# theano function can return multiple outputs
f = theano.function(inputs = [x, W, b], outputs = [y, z])
output_y, output_z = f([1, 2], [[3, 4], [5, 6]], 1)
# output_y, output_z = f([[1, 2]], [[3, 4]], 1) # won't work as x is a vector not matrix
# output_y, output_z = f([1, 2], [3, 4], 1) # won't work as W is a matrix not vector
# output_y, output_z = f([1, 2], [[3, 4]], [1]) # won't work as b is a scalar not a vector/matrix
print(output_y)
print(output_z)
# quadratic polynomial root example
# ax^2 + bx + c = 0
a = T.scalar('a')
b = T.scalar('b')
c = T.scalar('c')
core = b*b - 4*a*c
root_p = (-b + np.sqrt(core))/(2*a)
root_m = (-b - np.sqrt(core))/(2*a)
# compile
f = theano.function(inputs = [a, b, c], outputs = [root_p, root_m])
# run
polys = [[1, 2, 1],
[1, -7, 12],
[1, 0, 1]
]
for poly in polys:
a, b, c = poly
root1, root2 = f(a, b, c)
print(root1, root2)
# default configuration
print(theano.config.floatX)
# we can change it like this
theano.config.floatX = 'float32'
print(theano.config.floatX)
print(theano.config.device)
import numpy as np
# define
# if you are running Theano on 64 bit mode,
# you need to use dmatrix instead of fmatrix
x = T.matrix(name='x') # tensor with arbitrary shape
x_sum = T.sum(x, axis=0)
# compile
calc_sum = theano.function(inputs=[x], outputs=x_sum)
# execute (Python list)
ary = [[1, 2, 3], [1, 2, 3]]
print('Column sum:', calc_sum(ary))
# execute (NumPy array)
ary = np.array(ary, dtype=theano.config.floatX)
print('Column sum:', calc_sum(ary))
# name can help debug
y = T.matrix(name='hello')
z = T.matrix()
print(y) # will print out variable name
print(z) # will print out variable type
print(y.type()) # will print out type
# explicit type specification
wf = T.fmatrix(name='wfmatrix')
wd = T.dmatrix(name='wdmatrix')
print(wf.type())
print(wd.type())
# initialize
x = T.matrix(name='x')
b = theano.shared(np.asarray([[1]], dtype=theano.config.floatX), name='b')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
# w = w + 1.0 # this will cause error
z = x.dot(w.T) + b
update = [[w, w + 1.0]] # update w after each function call
# compile
f = theano.function(inputs=[x],
updates=update,
outputs=z)
# won't compile as shared variable cannot be used as input
# g = theano.function(inputs=[x, b], outputs = z)
# execute
x_data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
for i in range(5):
print('z%d:' % i, f(x_data))
# define
num_samples = 10
samples = np.asarray([i for i in range(num_samples)],
dtype=theano.config.floatX)
# samples = theano.shared(samples)
x = T.lscalar(name='index')
#y = theano.shared(np.asscalar(np.array([1], dtype=theano.config.floatX)))
y = T.vector(name='samples')
w = theano.shared(np.asscalar(np.array([0], dtype=theano.config.floatX)))
z = y[x]*w
# compile
f = theano.function(inputs = [x],
updates = [[w, w+1]],
givens = {y: samples},
outputs = z)
# run
for i in range(np.prod(samples.shape)):
print(f(i))
# initialize
x_data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
x = T.matrix(name='hi')
w = theano.shared(np.asarray([[0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=theano.config.floatX))
# an input variable can be given
b_data = np.array([[-1, 0, 1]], dtype=theano.config.floatX)
b = T.matrix(name='bias')
# a shared variable can be given
c_data = np.array([[4, 5, 6]], dtype=theano.config.floatX)
c = theano.shared(np.asarray([[0]], dtype=theano.config.floatX))
z = x.dot(w.T) + b + c
updates = [[w, w + 1.0]]
givens = {b: b_data, c: c_data}
# compile
net_input = theano.function(inputs=[x],
updates=updates,
givens=givens,
outputs=z)
# execute
for i in range(5):
print('z:', net_input(x_data))
import numpy as np
X_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],
[5.0], [6.0], [7.0], [8.0], [9.0]],
dtype=theano.config.floatX)
y_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0,
6.3, 6.6, 7.4, 8.0, 9.0],
dtype=theano.config.floatX)
import theano
from theano import tensor as T
import numpy as np
def train_linreg(X_train, y_train, eta, epochs):
costs = []
# Initialize arrays
eta0 = T.scalar('eta0') # learning rate
y = T.vector(name='y')
X = T.matrix(name='X')
w = theano.shared(np.zeros(
shape=(X_train.shape[1] + 1),
dtype=theano.config.floatX),
name='w')
# calculate cost
y_pred = T.dot(X, w[1:]) + w[0]
errors = y - y_pred
cost = T.sum(T.pow(errors, 2))
# perform gradient update
gradient = T.grad(cost, wrt=w) # symbolic differentialtion
update = [(w, w - eta0 * gradient)]
# compile model
train = theano.function(inputs=[eta0],
outputs=cost,
updates=update,
givens={X: X_train,
y: y_train})
for _ in range(epochs):
# since eta is input
# we can gradually change the learning rate
costs.append(train(eta))
return costs, w
import matplotlib.pyplot as plt
costs, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)
plt.plot(range(1, len(costs)+1), costs)
plt.tight_layout()
plt.xlabel('Epoch')
plt.ylabel('Cost')
plt.tight_layout()
# plt.savefig('./figures/cost_convergence.png', dpi=300)
plt.show()
def predict_linreg(X, w):
Xt = T.matrix(name='X')
y_pred = T.dot(Xt, w[1:]) + w[0]
predict = theano.function(inputs=[Xt], givens={w: w}, outputs=y_pred)
return predict(X)
plt.scatter(X_train, y_train, marker='s', s=50)
plt.plot(range(X_train.shape[0]),
predict_linreg(X_train, w),
color='gray',
marker='o',
markersize=4,
linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
# plt.savefig('./figures/linreg.png', dpi=300)
plt.show()
# note that first element (X[0] = 1) to denote bias unit
X = np.array([[1, 1.4, 1.5]])
w = np.array([0.0, 0.2, 0.4])
def net_input(X, w):
z = X.dot(w)
return z
def logistic(z):
return 1.0 / (1.0 + np.exp(-z))
def logistic_activation(X, w):
z = net_input(X, w)
return np.asscalar(logistic(z))
print('P(y=1|x) = %.3f' % logistic_activation(X, w))
# W : array, shape = [n_output_units, n_hidden_units+1]
# Weight matrix for hidden layer -> output layer.
# note that first column (A[:][0] = 1) are the bias units
W = np.array([[1.1, 1.2, 1.3, 0.5],
[0.1, 0.2, 0.4, 0.1],
[0.2, 0.5, 2.1, 1.9]])
# A : array, shape = [n_hidden+1, n_samples]
# Activation of hidden layer.
# note that first element (A[0][0] = 1) is for the bias units
A = np.array([[1.0],
[0.1],
[0.3],
[0.7]])
# Z : array, shape = [n_output_units, n_samples]
# Net input of output layer.
Z = W.dot(A)
y_probas = logistic(Z)
print('Probabilities:\n', y_probas)
y_class = np.argmax(Z, axis=0)
print('predicted class label: %d' % y_class[0])
def softmax(z):
return np.exp(z) / np.sum(np.exp(z))
def softmax_activation(X, w):
z = net_input(X, w)
return softmax(z)
y_probas = softmax(Z) # same Z computed above
print('Probabilities:\n', y_probas)
y_probas.sum()
y_class = np.argmax(Z, axis=0)
y_class
def tanh(z):
e_p = np.exp(z)
e_m = np.exp(-z)
return (e_p - e_m) / (e_p + e_m)
import matplotlib.pyplot as plt
z = np.arange(-5, 5, 0.005)
log_act = logistic(z)
tanh_act = tanh(z)
# alternatives:
# from scipy.special import expit
# log_act = expit(z)
# tanh_act = np.tanh(z)
plt.ylim([-1.5, 1.5])
plt.xlabel('net input $z$')
plt.ylabel('activation $\phi(z)$')
plt.axhline(1, color='black', linestyle='--')
plt.axhline(0.5, color='black', linestyle='--')
plt.axhline(0, color='black', linestyle='--')
plt.axhline(-1, color='black', linestyle='--')
plt.plot(z, tanh_act,
linewidth=2,
color='black',
label='tanh')
plt.plot(z, log_act,
linewidth=2,
color='lightgreen',
label='logistic')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/activation.png', dpi=300)
plt.show()
import os
import os.path
import struct
import gzip
import numpy as np
def open_mnist(full_path):
if full_path.find('.gz') >= 0:
return gzip.open(full_path, 'rb')
else:
return open(full_path, 'rb')
def pick_mnist(path, name, exts):
for ext in exts:
full_path = os.path.join(path, name + ext)
if os.path.isfile(full_path):
return full_path
# none of the exts options works
return None
def load_mnist(path, kind='train', exts=['', '.gz']):
    """Load MNIST data from `path`"""
labels_path = pick_mnist(path, kind + '-labels-idx1-ubyte', exts)
images_path = pick_mnist(path, kind + '-images-idx3-ubyte', exts)
with open_mnist(labels_path) as lbpath:
magic, n = struct.unpack('>II', lbpath.read(8))
if(magic != 2049):
raise IOError(str(magic) + ' != ' + str(2049))
# np.fromfile does not work with gzip open
# http://stackoverflow.com/questions/15966335/efficient-numpy-fromfile-on-zipped-files
# labels = np.fromfile(lbpath, dtype=np.uint8)
content = lbpath.read()
labels = np.frombuffer(content, dtype=np.uint8)
if(len(labels) != n):
raise IOError(str(len(labels)) + ' != ' + str(n))
with open_mnist(images_path) as imgpath:
magic, num, rows, cols = struct.unpack(">IIII", imgpath.read(16))
if(magic != 2051):
raise IOError(str(magic) + ' != ' + str(2051))
# images = np.fromfile(imgpath, dtype=np.uint8).reshape(num, rows*cols)
content = imgpath.read()
images = np.frombuffer(content, dtype=np.uint8).reshape(num, rows*cols)
if(num != len(labels)):
raise IOError(str(num) + ' != ' + str(len(labels)))
return images, labels
mnist_data_folder = os.path.join('..', 'datasets', 'mnist')
exts = ['', '.gz'] # for already gunzipped files and not yet gzipped files
X_train, y_train = load_mnist(mnist_data_folder, kind='train', exts=exts)
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist(mnist_data_folder, kind='t10k', exts=exts)
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
import theano
theano.config.floatX = 'float32'
X_train = X_train.astype(theano.config.floatX)
X_test = X_test.astype(theano.config.floatX)
from keras.utils import np_utils
print('First 3 labels: ', y_train[:3])
y_train_ohe = np_utils.to_categorical(y_train)
print('\nFirst 3 labels (one-hot):\n', y_train_ohe[:3])
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD
np.random.seed(1)
model = Sequential()
model.add(Dense(input_dim=X_train.shape[1],
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(output_dim=y_train_ohe.shape[1],
init='uniform',
activation='softmax'))
sgd = SGD(lr=0.001, decay=1e-7, momentum=.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=["accuracy"])
model.fit(X_train, y_train_ohe,
nb_epoch=50,
batch_size=300,
verbose=1,
validation_split=0.1 # 10% of training data for validation per epoch
)
y_train_pred = model.predict_classes(X_train, verbose=0)
print('First 3 predictions: ', y_train_pred[:3])
train_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))
y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see
Step2: Building, compiling, and running expressions with Theano
Step3: Steps for using Theano
Step4: To change the float type globally, execute
Step5: You can run a Python script on CPU (e.g. for prototyping and debug) via
Step6: Memory management
Step7: given
Step8: Wrapping things up
Step9: Implement the training function
Step10: Plotting the sum of squared errors cost vs epochs.
Step11: Make prediction
Step12: Theano for neural networks
Step13: Multiple outputs
Step14: The outputs do not sum to 1 and thus are not probabilities.
Step15: Estimating probabilities in multi-class classification via the softmax function
Step16: The class probabilities sum to 1.
Step17: Broadening the output spectrum using a hyperbolic tangent
Step19: Different activation functions
Step20: Multi-layer Perceptron in Keras
Step21: One-hot encoding of the class variable
Step22: Implement a neural network
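Steps 4 and 5 above mention changing the float type globally and forcing CPU execution; the usual Theano mechanism is the THEANO_FLAGS environment variable, sketched here as standard Theano usage rather than commands taken from this notebook:
import os
# THEANO_FLAGS only takes effect if it is set before `import theano`
os.environ.setdefault('THEANO_FLAGS', 'device=cpu,floatX=float32')  # e.g. device=gpu on a CUDA machine (old-style flag)
import theano
print(theano.config.floatX, theano.config.device)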
|
4,464
|
<ASSISTANT_TASK:>
Python Code:
X = np.array([[-1.0, -1.0], [-1.2, -1.4], [1, -0.5], [-3.4, -2.2], [1.1, 1.2], [-2.1, -0.2]])
y = np.array([1, 1, 1, 2, 2, 2])
x_new = [0, 0]
plt.scatter(X[y==1, 0], X[y==1, 1], s=100, c='r')
plt.scatter(X[y==2, 0], X[y==2, 1], s=100, c='b')
plt.scatter(x_new[0], x_new[1], s=100, c='g')
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
clf1 = LogisticRegression(random_state=1)
clf2 = SVC(random_state=1, probability=True)
clf3 = GaussianNB()
eclf = VotingClassifier(estimators=[('lr', clf1), ('ksvc', clf2), ('gnb', clf3)], voting='soft', weights=[2, 1, 1])
probas = [c.fit(X, y).predict_proba([x_new]) for c in (clf1, clf2, clf3, eclf)]
class1_1 = [pr[0, 0] for pr in probas]
class2_1 = [pr[0, 1] for pr in probas]
ind = np.arange(4)
width=0.35 #bar width
p1 = plt.bar(ind, np.hstack(([class1_1[:-1], [0]])), width, align='center', color='green')
p2 = plt.bar(ind + width, np.hstack(([class2_1[:-1], [0]])), width, align='center', color='lightgreen')
p3 = plt.bar(ind, [0,0,0,class1_1[-1]], width, align='center', color='blue')
p4 = plt.bar(ind + width, [0, 0, 0, class2_1[-1]], width, align='center', color='steelblue')
plt.xticks(ind + 0.5 * width, ['LogisticRegression\nweight 2',
'Kernel SVC\nweight 1',
'GaussianNB\nweight 1',
'VotingClassifier'])
plt.ylim([0, 1.1])
plt.title('Class probabilities for sample 1 by different classifiers')
plt.legend([p1[0], p2[0]], ['class1', 'class2'], loc='upper left')
plt.show()
from itertools import product
x_min, x_max = -4, 2
y_min, y_max = -3, 2
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.025), np.arange(y_min, y_max, 0.025))
f, axarr = plt.subplots(2, 2)
for idx, clf, tt in zip(product([0, 1], [0, 1]),
[clf1, clf2, clf3, eclf],
['LogisticRegression', 'Kernel SVC', 'GaussianNB', 'VotingClassifier']):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.2, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].scatter(X[:, 0], X[:, 1], c=y, alpha=0.5, s=50, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].set_title(tt)
plt.tight_layout()
plt.show()
from itertools import product
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
iris = load_iris()
X, y = iris.data[:, [0, 2]], iris.target
model1 = DecisionTreeClassifier(max_depth=4).fit(X, y)
model2 = LogisticRegression().fit(X, y)
model3 = SVC(probability=True).fit(X, y)
model4 = VotingClassifier(estimators=[('dt', model1), ('lr', model2), ('svc', model3)],
voting='soft', weights=[1, 2, 3]).fit(X, y)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.025), np.arange(y_min, y_max, 0.025))
f, axarr = plt.subplots(2, 2)
for idx, clf, tt in zip(product([0, 1], [0, 1]),
[model1, model2, model3, model4],
['Decision Tree', 'Logistic Regression', 'Kernel SVM', 'Soft Voting']):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.2, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].set_title(tt)
plt.tight_layout()
plt.show()
sp.misc.comb(10, 2)
def total_error(p, N):
te = 0.0
for k in range(int(np.ceil(N/2)), N + 1):
te += sp.misc.comb(N, k) * p**k * (1-p)**(N-k)
return te
x = np.linspace(0, 1, 100)
plt.plot(x, x, 'g:', lw=3, label="individual model")
plt.plot(x, total_error(x, 10), 'b-', label="voting model (N=10)")
plt.plot(x, total_error(x, 100), 'r-', label="voting model (N=100)")
plt.xlabel("performance of individual model")
plt.ylabel("performance of voting model")
plt.legend(loc=0)
plt.show()
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
iris = load_iris()
X, y = iris.data[:, [0, 2]], iris.target
model1 = DecisionTreeClassifier().fit(X, y)
model2 = BaggingClassifier(DecisionTreeClassifier(), bootstrap_features=True, random_state=0).fit(X, y)  # the peculiar artifact disappears
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1))
plt.figure(figsize=(8,12))
plt.subplot(211)
Z1 = model1.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z1, alpha=0.6, cmap=mpl.cm.jet)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
plt.subplot(212)
Z2 = model2.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z2, alpha=0.6, cmap=mpl.cm.jet)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
plt.tight_layout()
plt.show()
from sklearn import clone
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
n_classes = 3
n_estimators = 30
plot_colors = "ryb"
cmap = plt.cm.RdYlBu
plot_step = 0.02
RANDOM_SEED = 13
models = [DecisionTreeClassifier(max_depth=4),
RandomForestClassifier(max_depth=4, n_estimators=n_estimators),
ExtraTreesClassifier(max_depth=4, n_estimators=n_estimators)]
plot_idx = 1
plt.figure(figsize=(12, 12))
for pair in ([0, 1], [0, 2], [2, 3]):
for model in models:
X = iris.data[:, pair]
y = iris.target
idx = np.arange(X.shape[0])
np.random.seed(RANDOM_SEED)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
clf = clone(model)
clf = model.fit(X, y)
plt.subplot(3, 3, plot_idx)
model_title = str(type(model)).split(".")[-1][:-2][:-len("Classifier")]
if plot_idx <= len(models):
plt.title(model_title)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
if isinstance(model, DecisionTreeClassifier):
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=cmap)
else:
estimator_alpha = 1.0 / len(model.estimators_)
for tree in model.estimators_:
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=cmap)
for i, c in zip(range(n_classes), plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=c, label=iris.target_names[i], cmap=cmap)
plot_idx += 1
plt.tight_layout()
plt.show()
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
X, y = make_classification(n_samples=1000, n_features=10, n_informative=3, n_redundant=0, n_repeated=0,
n_classes=2, random_state=0, shuffle=False)
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices], color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
from sklearn.datasets import fetch_olivetti_faces
from sklearn.ensemble import ExtraTreesClassifier
data = fetch_olivetti_faces()
X = data.images.reshape((len(data.images), -1))
y = data.target
mask = y < 5 # Limit to 5 classes
X = X[mask]
y = y[mask]
forest = ExtraTreesClassifier(n_estimators=1000, max_features=128, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
importances = importances.reshape(data.images[0].shape)
plt.figure(figsize=(8, 8))
plt.imshow(importances, cmap=plt.cm.bone_r)
plt.grid(False)
plt.title("Pixel importances with forests of trees")
plt.show()
from sklearn.datasets import fetch_olivetti_faces
from sklearn.utils.validation import check_random_state
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
data = fetch_olivetti_faces()
targets = data.target
data = data.images.reshape((len(data.images), -1))
train = data[targets < 30]
test = data[targets >= 30]
n_faces = 5
rng = check_random_state(4)
face_ids = rng.randint(test.shape[0], size=(n_faces, ))
test = test[face_ids, :]
n_pixels = data.shape[1]
X_train = train[:, :int(np.ceil(0.5 * n_pixels))] # Upper half of the faces
y_train = train[:, int(np.floor(0.5 * n_pixels)):] # Lower half of the faces
X_test = test[:, :int(np.ceil(0.5 * n_pixels))]
y_test = test[:, int(np.floor(0.5 * n_pixels)):]
ESTIMATORS = {
"Linear regression": LinearRegression(),
"Extra trees": ExtraTreesRegressor(n_estimators=10, max_features=32, random_state=0),
}
y_test_predict = dict()
for name, estimator in ESTIMATORS.items():
estimator.fit(X_train, y_train)
y_test_predict[name] = estimator.predict(X_test)
image_shape = (64, 64)
n_cols = 1 + len(ESTIMATORS)
plt.figure(figsize=(3*n_cols, 3*n_faces))
plt.suptitle("Face completion with multi-output estimators", size=16)
for i in range(n_faces):
true_face = np.hstack((X_test[i], y_test[i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1, title="true faces")
sub.axis("off")
sub.imshow(true_face.reshape(image_shape), cmap=plt.cm.gray, interpolation="nearest")
for j, est in enumerate(ESTIMATORS):
completed_face = np.hstack((X_test[i], y_test_predict[est][i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j, title=est)
sub.axis("off")
sub.imshow(completed_face.reshape(image_shape), cmap=plt.cm.gray, interpolation="nearest");
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles
# Construct dataset
X1, y1 = make_gaussian_quantiles(cov=2.,
n_samples=200, n_features=2,
n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5,
n_samples=300, n_features=2,
n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, - y2 + 1))
# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
algorithm="SAMME",
n_estimators=200)
bdt.fit(X, y)
plot_colors = "br"
plot_step = 0.02
class_names = "AB"
plt.figure(figsize=(12,6))
plt.subplot(121)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("tight")
for i, n, c in zip(range(2), class_names, plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1],
c=c, cmap=plt.cm.Paired,
label="Class %s" % n)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Decision Boundary')
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
plt.hist(twoclass_output[y == i],
bins=10,
range=plot_range,
facecolor=c,
label='Class %s' % n,
alpha=.5)
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc='upper right')
plt.ylabel('Samples')
plt.xlabel('Score')
plt.title('Decision Scores')
plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following experiment also confirms why a majority-voting model performs better than an individual model (a short VotingClassifier sketch follows this list).
Step2: Bagging
Step3: Random forest
Step4: One advantage of a random forest is that the importance of each independent variable (feature importance) can be computed.
Step5: Example
Step6: AdaBoost
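The following is a minimal added sketch (not part of the original notebook) of the same majority-voting idea from Step1, using scikit-learn's built-in VotingClassifier; the choice of base estimators is purely illustrative:
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
# hard voting: each fitted model casts one vote per sample and the majority label wins
voter = VotingClassifier(estimators=[("tree", DecisionTreeClassifier(max_depth=4)),
                                     ("logreg", LogisticRegression(max_iter=1000)),
                                     ("svc", SVC())],
                         voting="hard")
voter.fit(X, y)
print(voter.score(X, y))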
|
4,465
|
<ASSISTANT_TASK:>
Python Code:
import gmpy2
from gmpy2 import sqrt as rt2
from gmpy2 import mpfr
gmpy2.get_context().precision=200
root2 = rt2(mpfr(2))
root3 = rt2(mpfr(3))
root5 = rt2(mpfr(5))
ø = (root5 + 1)/2
ø_down = ø ** -1
ø_up = ø
E_vol = (15 * root2 * ø_down ** 3)/120 # a little more than 1/24, volume of T module
print(E_vol)
# Edges needed for Fum and Emod
e0 = Black_Yellow = root3 * ø_down
e1 = Black_Blue = mpfr(1) # radius of RT = 1 (same as unit-radius sphere)
e2 = Black_Orange = 1/(rt2(ø**2+1)/2)
e3 = Yellow_Blue = (3 - root5)/2
e4 = Blue_Orange = (ø**-1)*(1/rt2(ø**2+1))
e5 = Orange_Yellow = rt2(Yellow_Blue**2 - Blue_Orange**2)
e6 = Black_Red = rt2((5 - root5)/2)
e7 = Blue_Red = 1/ø
e8 = Red_Yellow = rt2(5 - 2 * root5)
#print(e3 ** 2 + e7 ** 2)
#print(e8 ** 2)
#assert e3 ** 2 + e7 ** 2 == e8 ** 2 # check
#assert e4 ** 2 + e5 ** 2 == e3 ** 2 # check
# not needed for this computation
e9 = Black_Green = 20/(5 * root2 * ø**2) # Sfactor
e10 = Purple_Green = ø ** -4
for e in range(11):
val = "e" + str(e)
length = eval(val)
print("Edge {:3} = {:40.37}".format(val, length))
import tetravolume as tv # has to be in your path, stored on Github with this JN
# D = 1 in this module, so final volume need to be divided by 8 to match R=1 (D=2)
# see Fig. 986.411A in Synergetics
Fum_vol = tv.Tetrahedron(e0,e1,e2,e3,e4,e5).ivm_volume()/8
E_vol = tv.Tetrahedron(e1,e0,e6,e3,e8,e7).ivm_volume()/8
print("Fum volume (in tetravolumes): {:40.38}".format( Fum_vol ))
print("E volume (in tetravolumes) : {:40.38}".format( E_vol ))
Fe = (ø**-7) * (rt2(2)/8)
Fi = (ø**-6) * (rt2(2)/8)
Fo = ((5-rt2(5))/5) * (ø**-4) * (rt2(2)/8)
Fum = (rt2(5)/5) * (ø**-4)*(rt2(2)/8)
Fe_Fi = (ø**-5) * (rt2(2)/8)
Fo_Fum = (ø**-4) * (rt2(2)/8)
print("Fe: {:40.38}".format(Fe))
print("Fi: {:40.38}".format(Fi))
print("Fo: {:40.38}".format(Fo))
print("Fum: {:40.38}".format(Fum))
print("E_vol: {:40.38}".format((Fe_Fi) + (Fo_Fum)))
print("E_vol: {:40.38}".format((ø**-3)*(rt2(2)/8)))
PD = 3 * root2 * (ø ** 2 + 1)
print(PD)
E = e = E_vol # shorthand (E3 = E * ø_up ** 3, e3 = E * ø_down ** 3, E = e)
e3 = e * ø_down ** 3
PD = 348 * E + 84 * e3
print(PD)
RT3 = 480 * E + 120 * e3 # e3 is e * ø_down ** 3 (e = E)
print(RT3)
E3 = E_vol * ø_up ** 3
Fum3 = Fum_vol * ø_up ** 3
print(E3)
print(Fum3)
print(RT3 - PD)
print(120 * Fum3)
from math import atan, sqrt as rt2, degrees
Ø = (1 + rt2(5))/2 # back to floating point
print(degrees(atan(Ø**-2)/2)) # 10.812316º
print(degrees(atan(Ø**-3))) # 13.282525º
print(degrees(atan(Ø**-2))) # 20.905157º
print(degrees(atan(Ø**-1))) # 31.717474º
print(degrees(atan(2*Ø**-2))) # 37.377368º
print(atan(Ø ** -1) + atan(Ø ** -3))
print(atan(1)) # arctan 1 = 45º
print(2 * atan(Ø**-1))
print(atan(2)) # 63.434948º
print(degrees(atan(2))) # 63.434948º
print( atan(Ø**-1) + 3 * atan(Ø**-3) )
print(atan(3)) # 71.565051º
print(degrees(atan(3))) # 71.565051º
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's import the tetravolume.py module, which in turn has dependencies, to get these volumes directly, based on edge lengths. I'll use the edges given in Fig. 986.411 of *Synergetics*, spoking out from the point C at the center of any RT diamond, and/or values computed by David Koski.
Step2: Let's start with a Pentagonal Dodecahedron and build it from Es + e3s.
Step3: RT3, on the other hand, has a volume we may express as
Step4: Recall RT3 is the Rhombic Triacontahedron we get by intersecting the two Platonic duals
Step5: As you can see, the relationship holds, though floating point numbers add some noise.
|
4,466
|
<ASSISTANT_TASK:>
Python Code:
data_sets = input_data.read_data_sets(FLAGS.train_dir, FLAGS.fake_data)
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
mnist.IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
with tf.name_scope('hidden1'):
weights = tf.Variable(
tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
name='weights')
biases = tf.Variable(tf.zeros([hidden1_units]),
name='biases')
hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
logits = tf.matmul(hidden2, weights) + biases
labels = tf.to_int64(labels)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits, name='xentropy')
loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
tf.summary.scalar('loss', loss)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)
with tf.Graph().as_default():
sess = tf.Session()
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
for step in xrange(FLAGS.max_steps):
sess.run(train_op)
images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
FLAGS.fake_data)
feed_dict = {
images_placeholder: images_feed,
labels_placeholder: labels_feed,
}
for step in xrange(FLAGS.max_steps):
feed_dict = fill_feed_dict(data_sets.train,
images_placeholder,
labels_placeholder)
_, loss_value = sess.run([train_op, loss],
feed_dict=feed_dict)
if step % 100 == 0:
print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration))
summary = tf.summary.merge_all()
summary_writer = tf.summary.FileWriter(FLAGS.train_dir, sess.graph)
summary_str = sess.run(summary, feed_dict=feed_dict)
summary_writer.add_summary(summary_str, step)
saver = tf.train.Saver()
saver.save(sess, FLAGS.train_dir, global_step=step)
saver.restore(sess, FLAGS.train_dir)
print('Training Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.train)
print('Validation Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.validation)
print('Test Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.test)
eval_correct = mnist.evaluation(logits, labels_placeholder)
eval_correct = tf.nn.in_top_k(logits, labels, 1)
for step in xrange(steps_per_epoch):
feed_dict = fill_feed_dict(data_set,
images_placeholder,
labels_placeholder)
true_count += sess.run(eval_correct, feed_dict=feed_dict)
precision = true_count / num_examples
print(' Num examples: %d Num correct: %d Precision @ 1: %0.04f' %
(num_examples, true_count, precision))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note: the fake_data flag is used for unit testing; readers can safely ignore it.
Step2: In later steps of the training loop, the full image and label datasets are sliced to match the batch_size set for each op, and the placeholder ops are filled to match that batch_size. The data is then passed to sess.run() through the feed_dict argument.
Step3: Within the defined scope, the weights and biases used by each layer are created as tf.Variable instances with their expected shapes:
Step4: For example, when these layers are created under the hidden1 scope, the unique name given to the weights variable will be "hidden1/weights".
Step5: Finally, the function returns the logits tensor containing the output.
Step6: Then tf.reduce_mean computes the mean cross entropy over the batch dimension (the first dimension) and uses that value as the total loss.
Step7: Finally, the function returns the tensor containing the loss value.
Step8: Next we instantiate a tf.train.GradientDescentOptimizer, which is responsible for applying gradient descent with the requested learning rate.
Step9: We then create a variable to hold the global training step and use the tf.train.Optimizer.minimize op to update the trainable weights and increment the global step. By convention this op is called train_op, and it is what a TensorFlow session must run to trigger one complete training step (see below).
Step10: Training the model
Step11: A tf.Graph instance is a set of ops that can be executed as a group. Most TensorFlow use cases need only the single default graph.
Step12: Alternatively, the Session can be created in a with block to limit its scope:
Step13: Calling Session with no arguments means the code attaches to (or creates, if none exists yet) the default local session.
Step14: The tf.Session.run method runs the complete subset of the graph corresponding to the ops passed as arguments. In this first call the init op only contains the variable initializers grouped by tf.group; the rest of the graph is not run here but in the training loop below.
Step15: The example in this tutorial is a bit more involved, however, because we must slice the input data at each step to match the placeholders created earlier.
Step16: We then build a Python dict keyed by the placeholders, with the corresponding input tensors as values.
Step17: This dict is passed to sess.run() as the feed_dict argument, supplying the input examples for this training step.
Step18: Because two values are fetched, sess.run() returns a tuple with two elements. Each fetched tensor corresponds to a numpy array in the returned tuple holding its value for this training step. Since train_op produces no output, its element in the tuple is None and is discarded; but if the model diverges during training the loss tensor may become NaN, so we fetch its value and log it.
Step19: Visualizing the training status
Step20: After the session is created, a tf.summary.FileWriter can be instantiated to write the event files that contain both the graph itself and the summary values.
Step21: Finally, each time the summary is evaluated the latest values are written to the event file by passing the output to the writer's add_summary() function.
Step22: Once the event files are being written, TensorBoard can be pointed at the training folder to inspect the summaries.
Step23: During the training loop, tf.train.Saver.save is called periodically to write a checkpoint file to the training folder containing the current values of all trainable variables.
Step24: This way we can later use tf.train.Saver.restore to reload the model parameters and continue training.
Step25: Evaluating the model
Step26: Note that in more involved use cases the data_sets.test set would be held out and only checked after extensive hyperparameter tuning. Since MNIST is a simple problem, however, we evaluate all the data here at once.
Step27: The evaluation() function builds a tf.nn.in_top_k op, which marks a model output as correct if the true label is among the K most likely predictions. Here K is set to 1, so a prediction only counts as correct if it equals the true label (a toy tf.nn.in_top_k illustration follows this list).
Step28: Evaluation output
Step29: The true_count variable accumulates all predictions that the in_top_k op judged correct. The precision is then simply the total number of correct predictions divided by the total number of examples.
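A toy illustration (added here, not part of the original tutorial) of the tf.nn.in_top_k behaviour described in Step27; the logits and labels are made-up values:
import tensorflow as tf
logits_toy = tf.constant([[0.1, 0.8, 0.1],
                          [0.3, 0.3, 0.4]])
labels_toy = tf.constant([1, 0])
# True wherever the true label is among the top-1 predictions
correct_toy = tf.nn.in_top_k(logits_toy, labels_toy, 1)
with tf.Session() as sess:
    print(sess.run(correct_toy))  # [ True False]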
|
4,467
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from time import time
from operator import itemgetter
from scipy.stats import randint as sp_randint
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
iris = load_digits() # get some data
X, y = iris.data, iris.target
clf = RandomForestClassifier(n_estimators=20) # build a classifier
def report(grid_scores, n_top=3):
# Utility function to report best scores
top_scores = sorted(grid_scores, key=itemgetter(1), reverse=True)[:n_top]
for i, score in enumerate(top_scores):
print("Model with rank: {0}".format(i + 1))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
score.mean_validation_score,
np.std(score.cv_validation_scores)))
print("Parameters: {0}".format(score.parameters))
print("")
# specify parameters and distributions to sample from
param_dist = {"max_depth": [3, None],
"max_features": sp_randint(1, 11),
"min_samples_split": sp_randint(1, 11),
"min_samples_leaf": sp_randint(1, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
# run randomized search
n_iter_search = 20
random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
n_iter=n_iter_search)
start = time()
random_search.fit(X, y)
finish = (time() - start)
print("RandomizedSearchCV took {time} seconds for {candidate} candidates"
" parameter settings.".format(time=finish, candidate=n_iter_search))
report(random_search.grid_scores_)
# use a full grid over all parameters
param_grid = {"max_depth": [3, None],
"max_features": [1, 3, 10],
"min_samples_split": [1, 3, 10],
"min_samples_leaf": [1, 3, 10],
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
# run grid search
grid_search = GridSearchCV(clf, param_grid=param_grid)
start = time()
grid_search.fit(X, y)
finish = (time() - start)
print("GridSearchCV took {time} seconds for {candidates} candidate parameter settings.".format(
time=finish, candidates=len(grid_search.grid_scores_)))
report(grid_search.grid_scores_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For this example, we'll load up the iris data set, an example data set from scikit-learn that has various measurements of different species of iris (the flower, not the eye thing).
Step2: Next, we initialize our classifier (a random forest in this case).
Step3: In order to run random search, we need to specify a distribution to sample from. We'll use sp_randint from the scipy.stats library which will return a random integer (a sketch of a continuous alternative appears after this list).
Step4: Finally, we'll run the random search over our random forest classifier
Step5: We'll now follow the same process for grid search, the only difference being that instead of sampling from a distribution, we'll specify an array of values to try.
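As an added aside (not in the original example), the distributions in Step3 do not have to be integer-valued: any scipy.stats frozen distribution with an rvs() method works, e.g. sampling max_features as a continuous fraction; this reuses clf, X, y, n_iter_search and report() from the code above:
from scipy.stats import uniform
param_dist_cont = {"max_depth": [3, None],
                   # uniform(loc, scale) samples from [0.1, 0.9); RandomForestClassifier accepts a float fraction here
                   "max_features": uniform(loc=0.1, scale=0.8)}
random_search_cont = RandomizedSearchCV(clf, param_distributions=param_dist_cont, n_iter=n_iter_search)
random_search_cont.fit(X, y)
report(random_search_cont.grid_scores_)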
|
4,468
|
<ASSISTANT_TASK:>
Python Code:
import steps.model as smodel
import steps.geom as stetmesh
import steps.utilities.meshio as smeshio
import steps.rng as srng
import steps.solver as solvmod
import pylab
import math
# Number of iterations; plotting dt; sim endtime:
NITER = 10
# The data collection time increment (s)
DT = 1.0
# The simulation endtime (s)
INT = 21.0
# Number of molecules injected in centre
NINJECT = 1000
# The diffusion constant for our diffusing species (m^2/s)
DCST = 0.08e-12
def gen_model():
mdl = smodel.Model()
X = smodel.Spec('X', mdl)
ssys = smodel.Surfsys('ssys', mdl)
diff_X = smodel.Diff('diffX', ssys, X, DCST)
return mdl
mesh = smeshio.loadMesh('meshes/coin_10r_1h_13861')[0]
ntets = mesh.countTets()
comp = stetmesh.TmComp('cyto', mesh, range(ntets))
alltris = mesh.getSurfTris()
# Sort patch triangles as those of positive z: A +ve x, B -ve x
patchA_tris = []
patchB_tris = []
patchA_bars = set()
patchB_bars = set()
for t in alltris:
vert0, vert1, vert2 = mesh.getTri(t)
if (mesh.getVertex(vert0)[2] > 0.0 \
and mesh.getVertex(vert1)[2] > 0.0 \
and mesh.getVertex(vert2)[2] > 0.0):
if mesh.getTriBarycenter(t)[0] > 0.0:
patchA_tris.append(t)
bar = mesh.getTriBars(t)
patchA_bars.add(bar[0])
patchA_bars.add(bar[1])
patchA_bars.add(bar[2])
else:
patchB_tris.append(t)
bar = mesh.getTriBars(t)
patchB_bars.add(bar[0])
patchB_bars.add(bar[1])
patchB_bars.add(bar[2])
# Create the patch
patchA = stetmesh.TmPatch('patchA', mesh, patchA_tris, icomp=comp)
patchA.addSurfsys('ssys')
patchB = stetmesh.TmPatch('patchB', mesh, patchB_tris, icomp=comp)
patchB.addSurfsys('ssys')
# Find the set of bars that connect the two patches as the intersecting bars of PatchA and PatchB
barsDB = patchA_bars.intersection(patchB_bars)
barsDB = list(barsDB)
# Create the surface diffusion boundary
diffb = stetmesh.SDiffBoundary('sdiffb', mesh, barsDB, [patchA, patchB])
# Find the central tri
ctetidx = mesh.findTetByPoint([0.0, 0.0, 0.5e-6])
ctet_trineighbs = mesh.getTetTriNeighb(ctetidx)
ctri_idx=-1
for t in ctet_trineighbs:
if t in patchA_tris+patchB_tris:
ctri_idx = t
cbaryc = mesh.getTriBarycenter(ctri_idx)
# Record the tri radii from centre and areas for patchA and patchB
trirads_A = pylab.zeros(len(patchA_tris))
trirads_B = pylab.zeros(len(patchB_tris))
triareas_A = pylab.zeros(len(patchA_tris))
triareas_B = pylab.zeros(len(patchB_tris))
for i in range(len(patchA_tris)):
baryc = mesh.getTriBarycenter(patchA_tris[i])
r2 = math.pow((baryc[0]-cbaryc[0]),2) + \
math.pow((baryc[1]-cbaryc[1]),2) + \
math.pow((baryc[2]-cbaryc[2]),2)
r = math.sqrt(r2)
# Convert to microns
trirads_A[i] = r*1.0e6
triareas_A[i] = mesh.getTriArea(patchA_tris[i])*1.0e12
for i in range(len(patchB_tris)):
baryc = mesh.getTriBarycenter(patchB_tris[i])
r2 = math.pow((baryc[0]-cbaryc[0]),2) + \
math.pow((baryc[1]-cbaryc[1]),2) + \
math.pow((baryc[2]-cbaryc[2]),2)
r = math.sqrt(r2)
# Convert to microns
trirads_B[i] = -r*1.0e6
triareas_B[i] = mesh.getTriArea(patchB_tris[i])*1.0e12
# Create the biochemical model
model = gen_model()
# Create rnadom number generator object
rng = srng.create('mt19937', 512)
rng.initialize(234)
# Create solver object
sim = solvmod.Tetexact(model, mesh, rng)
# Create the simulation data structures
tpnts = pylab.arange(0.0, INT, DT)
ntpnts = tpnts.shape[0]
res_A = pylab.zeros((NITER, ntpnts, len(patchA_tris)))
res_B = pylab.zeros((NITER, ntpnts, len(patchB_tris)))
# Run NITER number of iterations:
for j in range(NITER):
print("Running iteration", j)
sim.reset()
sim.setTriCount(ctri_idx, 'X', NINJECT)
sim.setSDiffBoundaryDiffusionActive('sdiffb', 'X', True)
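    # impose a 10x slower diffusion rate (0.008e-12 vs DCST = 0.08e-12) for 'X' crossing the boundary towards patchB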
sim.setSDiffBoundaryDcst('sdiffb', 'X', 0.008e-12 , 'patchB')
for i in range(ntpnts):
sim.run(tpnts[i])
for k in range(len(patchA_tris)):
res_A[j, i, k] = sim.getTriCount(patchA_tris[k], 'X')/ \
triareas_A[k]
for k in range(len(patchB_tris)):
res_B[j, i, k] = sim.getTriCount(patchB_tris[k], 'X')/ \
triareas_B[k]
res_A_mean = pylab.mean(res_A, axis = 0)
res_B_mean = pylab.mean(res_B, axis = 0)
def plotres(tidx):
if (tidx >= INT/DT):
print("Time index is out of range.")
return
pylab.plot(trirads_A, res_A_mean[tidx], 'bo', label='patchA')
pylab.plot(trirads_B, res_B_mean[tidx], 'ro', label='patchB')
pylab.xlabel('Radial distance ($\mu$m)')
pylab.ylabel('Concentration (/$\mu$m$^2$)')
t = tpnts[tidx]
pylab.xlim(-10,10)
pylab.ylim(0)
pylab.legend()
pylab.show()
pylab.figure(figsize=(14,7))
plotres(20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We set some simulation constants
Step2: Model specification
Step3: Geometry specification
Step4: Then we create a compartment comprising all mesh tetrahedrons
Step5: We go on to creating the surface steps.geom.TmPatch objects for this mesh-based simulation,
Step6: We create the two steps.geom.TmPatch objects. We call the Patch with +ve x (red in the figure above) 'PatchA'
Step7: Now we can create the steps.geom.SDiffBoundary object. Analogously to the Diffusion Boundary between Compartments
Step8: So the list barsDB contains the indices of the common bars to both patchA and patchB, which are the bars
Step9: The next part of the code follows similarly to in Surface Diffusion in recording the triangle radii and areas,
Step10: Simulation with Tetexact
Step11: We, similar to previous chapters, create arrays to help run the simulation and to record data, separated into patchA and patchB
Step12: And we are ready to run the simulation. The default behaviour of the Surface Diffusion Boundary is to be inactive, that is to
Step13: Take the mean of the results over the 10 iterations
Step14: Plotting simulation output
Step15: And a call to the function with the last 'timepoint' of 20, equivalent to 20 seconds
|
4,469
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import pandas as pd
import data_science.j_utils as j_utils
import data_science.lendingclub.dataprep_and_modeling.modeling_utils.data_prep_new as data_prep
import dir_constants as dc
from sklearn.externals import joblib
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import time
from sklearn.metrics import mean_squared_error
from tqdm import tqdm_notebook
import matplotlib.pyplot as plt
%matplotlib notebook
# from IPython.display import HTML
# HTML('''<script>
# code_show_err=false;
# function code_toggle_err() {
# if (code_show_err){
# $('div.output_stderr').hide();
# } else {
# $('div.output_stderr').show();
# }
# code_show_err = !code_show_err
# }
# $( document ).ready(code_toggle_err);
# </script>
# To toggle on/off output_stderr, click <a href="javascript:code_toggle_err()">here</a>.''')
platform = 'lendingclub'
use_cuda = True
dtype = torch.cuda.FloatTensor
save_path = "model_dump/nn_1_0_2/"
if not os.path.isdir(save_path):
os.mkdir(save_path)
store = pd.HDFStore(
dc.home_path+'/justin_tinkering/data_science/lendingclub/{0}_store.h5'.
format(platform),
append=True)
train = store['train_filtered_columns']
valid = store['validate_filtered_columns']
# columns = loan_info.columns.values
# # checking dtypes to see which columns need one hotting, and which need null or not
# to_one_hot = []
# to_null_or_not = []
# do_nothing = []
# for col in columns:
# if loan_info[col].dtypes == np.dtype('O'):
# # print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
# to_one_hot.append(col)
# elif len(loan_info[col].isnull().value_counts(dropna=False)) > 1:
# # print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
# to_null_or_not.append(col)
# else:
# # print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
# do_nothing.append(col)
# %load modeling_utils/data_prep_new.py
# %%writefile modeling_utils/data_prep_new.py
train_X, train_y, mean_series, std_dev_series = data_prep.process_data_train(
train, target='target_strict')
valid_X, valid_y, _, _ = data_prep.process_data_train(
valid, target='target_strict')
# class DfDataset(Dataset):
# '''Makes dataset from df of data and df of targets'''
# def __init__(self, data, targets):
# self.data = data
# self.targets = targets
# def __len__(self):
# return len(self.data)
# def __getitem__(self, idx):
# return self.data[idx,:], self.targets[idx,:]
# def get_dataset(data, targets):
# return DfDataset(data, targets)
# def get_loader(dataset, use_cuda, batch_size=6400, shuffle=True):
# return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, pin_memory=use_cuda)
train_dataset = j_utils.get_dataset(train_X.values, train_y.values)
train_loader = j_utils.get_loader(train_dataset, use_cuda, batch_size=2**13, shuffle=True)
valid_dataset = j_utils.get_dataset(valid_X.values, valid_y.values)
valid_loader = j_utils.get_loader(valid_dataset, use_cuda, batch_size=2**13, shuffle=True)
# %%writefile model_dump/nn_1_0_1/net_class.py
# import torch
# import torch.nn as nn
# import torch.nn.functional as F
# from torch.autograd import Variable
# import numpy as np
# dtype = torch.FloatTensor
# nn_input_dim = 223
# hly1_n = 300
# hly2_n = 400
# hly3_n = 300
# hly4_n = 200
# hly5_n = 100
# hly6_n = 100
# hly7_n = 100
# # hly8_n = 100
# nn_output_dim = 1
# class Net(nn.Module):
# def __init__(self):
# super(Net, self).__init__()
# self.hl1 = nn.Linear(nn_input_dim, hly1_n)
# self.hl2 = nn.Linear(hly1_n, hly2_n)
# self.hl3 = nn.Linear(hly2_n, hly3_n)
# self.hl4 = nn.Linear(hly3_n, hly4_n)
# self.hl5 = nn.Linear(hly4_n, hly5_n)
# self.hl6 = nn.Linear(hly5_n, hly6_n)
# self.hl7 = nn.Linear(hly6_n, hly7_n)
# # self.hl8 = nn.Linear(hly7_n, hly8_n)
# self.out = nn.Linear(hly7_n, nn_output_dim)
# def forward(self, x):
# x = F.leaky_relu(self.hl1(x))
# x = F.leaky_relu(self.hl2(x))
# x = F.leaky_relu(self.hl3(x))
# x = F.leaky_relu(self.hl4(x))
# x = F.leaky_relu(self.hl5(x))
# x = F.leaky_relu(self.hl6(x))
# x = F.leaky_relu(self.hl7(x))
# # x = F.leaky_relu(self.hl8(x))
# x = self.out(x)
# return x
# def torch_version(df_inputs, net):
# input = Variable(torch.from_numpy(df_inputs.values)).type(dtype)
# return np.round(net(input).data.cpu().numpy(),5).ravel()
nn_input_dim = 223
hly1_n = 300
hly2_n = 400
hly3_n = 300
hly4_n = 200
hly5_n = 100
hly6_n = 100
hly7_n = 100
# hly8_n = 100
n_classes = 2
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.hl1 = nn.Linear(nn_input_dim, hly1_n)
self.hl2 = nn.Linear(hly1_n, hly2_n)
self.hl3 = nn.Linear(hly2_n, hly3_n)
self.hl4 = nn.Linear(hly3_n, hly4_n)
self.hl5 = nn.Linear(hly4_n, hly5_n)
self.hl6 = nn.Linear(hly5_n, hly6_n)
self.hl7 = nn.Linear(hly6_n, hly7_n)
# self.hl8 = nn.Linear(hly7_n, hly8_n)
self.out = nn.Linear(hly7_n, n_classes)
self.dropout_9 = nn.Dropout(p=.9)
self.dropout_8 = nn.Dropout(p=.8)
self.dropout_7 = nn.Dropout(p=.7)
self.dropout_6 = nn.Dropout(p=.6)
self.dropout_5 = nn.Dropout(p=.5)
self.dropout_4 = nn.Dropout(p=.4)
self.dropout_3 = nn.Dropout(p=.3)
self.dropout_2 = nn.Dropout(p=.2)
self.dropout_1 = nn.Dropout(p=.1)
def forward(self, x):
x = F.leaky_relu(self.hl1(x.type(torch.cuda.FloatTensor)))
# x = self.dropout_8(x)
x = F.leaky_relu(self.hl2(x))
# x = self.dropout_7(x)
x = F.leaky_relu(self.hl3(x))
# x = self.dropout_5(x)
x = F.leaky_relu(self.hl4(x))
# x = self.dropout_5(x)
x = F.leaky_relu(self.hl5(x))
x = self.dropout_5(x)
x = F.leaky_relu(self.hl6(x))
x = self.dropout_5(x)
x = F.leaky_relu(self.hl7(x))
x = self.dropout_5(x)
# x = F.leaky_relu(self.hl8(x))
x = self.out(x)
return x
net = Net()
# params = list(net.parameters())
# setup for training
net.cuda()
criterion = nn.CrossEntropyLoss().cuda()
optimizer = optim.Adam(net.parameters(), lr=0.0001, weight_decay=0.00135)
n_epochs = 200
save_epoch = 20
savedir = save_path
patience_epoch = 5
pct_change = .005
decay_rate = .85
variance_pct_thrsh = .003
import importlib; importlib.reload(j_utils)
model_name = '1.0.2'
j_utils.train_model(net, model_name, train_loader, valid_loader, optimizer, criterion, n_epochs, save_epoch,
savedir, variance_pct_thrsh, patience_epoch, pct_change, decay_rate, verbose=False, g_epoch=1)
# np_hl1_weight = net.hl1.weight.data.numpy()
# np_hl1_bias = net.hl1.bias.data.numpy()
# np_hl2_weight = net.hl2.weight.data.numpy()
# np_hl2_bias = net.hl2.bias.data.numpy()
# np_out_weight = net.out.weight.data.numpy()
# np_out_bias = net.out.bias.data.numpy()
# def np_version(df_inputs):
# np_hl1_z = df_inputs.dot(np_hl1_weight.T) + np_hl1_bias
# np_hl1_a = np.maximum(.01*np_hl1_z, np_hl1_z)
# np_hl2_z = np_hl1_a.dot(np_hl2_weight.T) + np_hl2_bias
# np_hl2_a = np.maximum(.01*np_hl2_z, np_hl2_z)
# np_out = np_hl2_a.dot(np_out_weight.T) + np_out_bias
# return np_out
class FeedDataset(Dataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
return self.data.iloc[idx,:].values
def torch_version(df_inputs, net):
feed_dataset = FeedDataset(df_inputs)
    feed_loader = j_utils.get_loader(feed_dataset, batch_size=6400, shuffle=False, use_cuda=True)
all_results = []
for i, data in enumerate(feed_loader):
# wrap in Variable
inputs = data
inputs = Variable(inputs.cuda()).type(dtype)
# inputs = Variable(inputs.cuda()).type(dtype)
outputs = np.round(net(inputs).data.cpu().numpy(),5).ravel().tolist()
all_results += outputs
return all_results
#%timeit np_version(standardized)
# %timeit torch_version(train_X, net)
store.open()
test = store['test_filtered_columns']
train = store['train_filtered_columns']
loan_npv_rois = store['loan_npv_rois']
default_series = test['target_strict']
results = store['results']
store.close()
train_X, train_y = data_prep.process_data_test(train)
train_y = train_y['npv_roi_10'].values
test_X, test_y = data_prep.process_data_test(test)
test_y = test_y['npv_roi_10'].values
# regr = joblib.load('model_dump/model_0.2.1.pkl')
regr_version = '1.0.1'
test_yhat = torch_version(test_X, net)
train_yhat = torch_version(train_X, net)
test_mse = mean_squared_error(test_yhat,test_y)
train_mse = mean_squared_error(train_yhat,train_y)
test_mse
train_mse
def eval_models_net(trials, port_size, available_loans, net, regr_version, test, loan_npv_rois,
default_series):
results = {}
pct_default = {}
test_copy = test.copy()
for trial in tqdm_notebook(np.arange(trials)):
loan_ids = np.random.choice(
test_copy.index.values, available_loans, replace=False)
loans_to_pick_from = test_copy.loc[loan_ids, :]
scores = torch_version(loans_to_pick_from, net)
scores_series = pd.Series(dict(zip(loan_ids, scores)))
scores_series.sort_values(ascending=False, inplace=True)
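        # keep the 900 highest-scoring loans, i.e. the portfolio size used throughout this experiment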
picks = scores_series[:900].index.values
results[trial] = loan_npv_rois.loc[picks, :].mean().to_dict()
pct_default[trial] = (default_series.loc[picks].sum()) / port_size
pct_default_series = pd.Series(pct_default)
results_df = pd.DataFrame(results).T
results_df['pct_def'] = pct_default_series
return results_df
# as per done with baseline models, say 3000 loans available
# , pick 900 of them
trials = 20000
port_size = 900
available_loans = 3000
model_results = eval_models_net(trials, port_size, available_loans, net, regr_version, test_X, loan_npv_rois, default_series)
multi_index = []
for col in model_results.columns.values:
multi_index.append((str(col),regr_version))
append_results = model_results.copy()
append_results.columns = pd.MultiIndex.from_tuples(multi_index, names = ['discount_rate', 'model'])
multi_index_results = []
for col in results.columns.values:
multi_index_results.append((str(col[0]), col[1]))
results.columns = pd.MultiIndex.from_tuples(multi_index_results, names = ['discount_rate', 'model'])
full_results = results.join(append_results)
full_results.sort_index(axis=1, inplace=True)
full_results.describe()
store.open()
store['results'] = full_results
model_info = store['model_info']
store.close()
# dump the model
# joblib.dump(regr, 'model_dump/model_0.2.1.pkl')
joblib.dump((mean_series, std_dev_series), 'model_dump/mean_stddev.pkl')
test_mse
train_mse
now = time.strftime("%Y_%m_%d_%Hh_%Mm_%Ss")
# info to stick in detailed dataframe describing each model
model_info_dict = {'model_version': regr_version,
'target': 'npv_roi_10',
'weights': 'None',
'algo_model': 'feedforward NN',
'hyperparams': "nn_input_dim = 223, hly1_n = 300, hly2_n = 400, hly3_n = 300, hly4_n = 200, hly5_n = 100, hly6_n = 100, hly7_n = 100, nn_output_dim = 1, criterion = nn.MSELoss(),optimizer = optim.Adam(net.parameters(), lr=0.0001, weight_decay=0.00135), if epoch+1 % 100 == 0: optimizer.param_groups[0]['lr'] *= .97",
'cost_func': 'criterion = nn.MSELoss(),',
'useful_notes': 'test_mse: 0.0642635, train_mse: 0.061784, epoch_600',
'date': now}
model_info_df = pd.DataFrame(model_info_dict, index = [regr_version])
model_info.ix[regr_version,:] = model_info_df.values
model_info.sort_index(inplace=True)
model_info
store.open()
store.append(
'model_info',
model_info,
data_columns=True,
index=True,
append=False,
)
store.close()
train_preds = pd.Series(train_yhat)
test_preds = pd.Series(test_yhat)
train_preds.hist(bins=50)
test_preds.hist(bins=50)
train_preds.describe()
test_preds.describe()
train_preds.value_counts()
test_preds.value_counts()
# try:
# results = results.join(append_results)
# except ValueError:
# results.loc[:, (slice(None), slice('1.0.0','1.0.0'))] = append_results
# results.sort_index(axis=1, inplace = True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DO NOT FORGET TO DROP ISSUE_D AFTER PREPPING
Step2: Until I figure out a good imputation method (e.g. Bayesian PCA), just drop the columns that still contain nulls (a minimal pandas sketch follows this list)
Step3: instantiate network
Step4: get the weights and biases of the nn into np since at this size np is faster (correction, pytorch was faster)
Step5: check that they output the same and speedtest (pytorch was faster)
Step6: Examine performance on test set
Step7: Making model info and saving it
Step8: Examine scores distributions
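A minimal pandas sketch (added here, not from the original notebook) of the column-dropping mentioned in Step2, assuming `train` is the DataFrame pulled from the HDF store above:
# keep only the columns that contain no missing values
cols_without_nulls = train.columns[train.isnull().sum() == 0]
train_no_nulls = train[cols_without_nulls]
# equivalently: train_no_nulls = train.dropna(axis=1)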
|
4,470
|
<ASSISTANT_TASK:>
Python Code:
2 + 3
2*3
2**3
sin(pi)
from math import sin, pi
sin(pi)
a = 10
a
# WRITE YOUR CODE HERE
raise NotImplementedError
# WRITE YOUR CODE HERE
raise NotImplementedError
from nose.tools import assert_equal
assert_equal(_, c)
print("Sin errores")
A = [2, 4, 8, 10]
A
A*2
f = lambda x: x**2 + 1
f(2)
def g(x):
y = x**2 + 1
return y
g(2)
def cel_a_faren(grados_cel):
    # WRITE YOUR CODE HERE
raise NotImplementedError
return grados_faren
cel_a_faren(-1)
from nose.tools import assert_equal
assert_equal(cel_a_faren(10), 50)
assert_equal(cel_a_faren(50), 122)
print("Sin errores")
for dato in A:
print(dato*2)
B = []
for dato in A:
B.append(dato*2)
B
# WRITE YOUR CODE HERE
raise NotImplementedError
C
# WRITE YOUR CODE HERE
raise NotImplementedError
D
from numpy.testing import assert_array_equal
print("Sin errores")
from numpy import matrix
A = matrix([[1, 2], [3, 4]])
A
v1 = matrix([[1], [2]])
v1
# Depending on the Python version installed on your computer,
# this operation might not work; in that case you only need
# to change @ to *
A@v1
# The following line is not going to work. Why?
v1@A
from numpy import sin, cos, pi
τ = 2*pi
# WRITE YOUR CODE HERE
raise NotImplementedError
vec_rot
from numpy.testing import assert_array_equal
assert_array_equal(vec_rot, matrix([[2*(cos(τ/12)-sin(τ/12))], [2*(cos(τ/12)+sin(τ/12))]]))
print("Sin errores")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: However, no trigonometric functions are loaded by default. For those we have to import them from the math library.
Step2: Variables
Step3: Exercise
Step4: Run the test below to find out whether you have written the code correctly.
Step5: Lists
Step6: But if we try to multiply this data by a number, it will not behave the way we expect.
Step7: Functions
Step8: This line of code is equivalent to defining a mathematical function in the following way
Step9: The notation we just introduced is very useful for mathematical functions, but it forces us to think about definitions in a functional way, which is not always the right approach (especially in a language with an object-oriented programming paradigm).
Step10: With the same results
Step11: Exercise
Step12: And to test it, try converting some data
Step13: Control loops
Step14: or append it to a new list (a generic loop-vs-comprehension sketch follows this list)
Step15: Exercise
Step16: Run the tests below
Step17: Matrices
Step18: Exercise
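A generic added illustration (with made-up data, so it does not give away the exercises) of building a new list with a loop versus the equivalent list comprehension mentioned in Steps 13-14:
nums = [1, 2, 3, 4]
squares = []
for n in nums:
    squares.append(n**2)
# the same result in one line with a list comprehension
squares = [n**2 for n in nums]
print(squares)  # [1, 4, 9, 16]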
|
4,471
|
<ASSISTANT_TASK:>
Python Code:
!pip install xarray netCDF4 geopy
#setup widgets
import ipywidgets as widgets
w = widgets.Dropdown(
options=['Melbourne', 'Sydney', 'Canberra', 'Brisbane', 'Adelaide', 'Hobart', 'Perth', 'Darwin'],
description='Capital city:',
disabled=False,
)
arrYears = [str(i) for i in range(2000,2017)]
wYear = widgets.SelectionSlider(
options=arrYears,
value=arrYears[-1],
description='Select year:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
w
wYear
year = wYear.value
import xarray as xr
max_day_etot = "http://dapds00.nci.org.au/thredds/dodsC/ub8/au/OzWALD/daily/OzWALD.daily.ETtot.{}.nc".format(year)
max_day_etot
dataset = xr.open_dataset(max_day_etot)
dataset
dataset.ETtot
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent="CSIRO widget")
location = geolocator.geocode("{},australia".format(w.value))
print((location.latitude, location.longitude))
print(location.raw)
lon_st = location.longitude
lat_st = location.latitude
etot = dataset.ETtot.sel(longitude=lon_st, latitude=lat_st, method='nearest')
etot
p = etot.plot()
import pandas as pd
startdate = year + '-01-01'
enddate = year + '-06-30'
timerange = pd.date_range(startdate, enddate, freq='D')
timerange
six_month_ds = etot.sel({'time': timerange})
six_month_ds.plot()
# get Melbourne coordinates
geolocator = Nominatim(user_agent="CSIRO widget")
mel_location = geolocator.geocode("Melbourne,australia".format(w.value))
etot_mel = dataset.ETtot.sel(longitude=mel_location.longitude, latitude=mel_location.latitude, method='nearest')
# get sydney coordinates and data
syd_location = geolocator.geocode("Sydney,australia".format(w.value))
etot_syd = dataset.ETtot.sel(longitude=syd_location.longitude, latitude=syd_location.latitude, method='nearest')
etot_mel.plot()
etot_syd.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup widgets to select city and year
Step2: Setup xarray
Step3: Use geopy to get the lat-long coordinates for the selected city
Step4: Use city coordinates to find the nearest point in the grid for the data point
Step5: Plot using matplotlib
Step6: Use pandas to create a 6-month date range to filter the data for the dates selected
Step7: Compare Melbourne and Sydney
|
4,472
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import math
import random
from pycsa import CoupledAnnealer
try:
xrange
except NameError:
xrange = range
cities = {
'New York City': (40.72, 74.00),
'Los Angeles': (34.05, 118.25),
'Chicago': (41.88, 87.63),
'Houston': (29.77, 95.38),
'Phoenix': (33.45, 112.07),
'Philadelphia': (39.95, 75.17),
'San Antonio': (29.53, 98.47),
'Dallas': (32.78, 96.80),
'San Diego': (32.78, 117.15),
'San Jose': (37.30, 121.87),
'Detroit': (42.33, 83.05),
'San Francisco': (37.78, 122.42),
'Jacksonville': (30.32, 81.70),
'Indianapolis': (39.78, 86.15),
'Austin': (30.27, 97.77),
'Columbus': (39.98, 82.98),
'Fort Worth': (32.75, 97.33),
'Charlotte': (35.23, 80.85),
'Memphis': (35.12, 89.97),
'Baltimore': (39.28, 76.62)
}
def distance(a, b):
    """Helper function to calculate the distance between two
    latitude-longitude coordinates."""
R = 3963 # radius of Earth (miles)
lat1, lon1 = math.radians(a[0]), math.radians(a[1])
lat2, lon2 = math.radians(b[0]), math.radians(b[1])
return math.acos(math.sin(lat1) * math.sin(lat2) +
math.cos(lat1) * math.cos(lat2) *
math.cos(lon1 - lon2)) * R
# Create the distance matrix between the cities.
distance_matrix = {}
for ka, va in cities.items():
distance_matrix[ka] = {}
for kb, vb in cities.items():
if kb == ka:
distance_matrix[ka][kb] = 0.0
else:
distance_matrix[ka][kb] = distance(va, vb)
def probe(positions, tgen):
Swap two cities in the route.
Note that `tgen` (the generation temperature) is ignored here.
In general, you can use `tgen` to adjust the variance of
the probing jumps as the algorithm progress.
a = random.randint(0, len(positions) - 1)
b = random.randint(0, len(positions) - 1)
positions[a], positions[b] = positions[b], positions[a]
return positions
def target(positions):
Calculates the length of the route.
e = 0
for i in xrange(len(positions)):
e += distance_matrix[positions[i-1]][positions[i]]
return e
n_annealers = 10 # the number of coupled annealers
init_state = list(cities.keys())
random.shuffle(init_state)
# Initialize the CSA process.
annealer = CoupledAnnealer(
target,
probe,
initial_state=[init_state] * n_annealers,
steps=100, # You probably want to set this a lot higher, like 10,000
processes=1, # Only use more than 1 process if the target function is costly to compute
n_annealers=n_annealers,
tacc_initial=1000.0,
verbose=1)
# Beging the annealing.
annealer.anneal()
# Get the best result from all `n_annealers`.
energy, state = annealer.get_best()
# Slide the list of cities until NYC is first.
while state[0] != 'New York City':
state = state[1:] + state[:1]
print()
print("%i mile route:" % energy)
for city in state:
print("\t", city)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create a set of cities to use for TSP.
Step3: Now let's define the function to calculate distances between cities (a quick sanity check of this function follows this list)
Step6: Next we have to define the target_function, i.e. the cost function to be minimized, and the probe_function, which will randomly update the current state at each annealing process.
Step7: Okay let's give it a run!
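A quick added sanity check (not part of the original example) of the great-circle helper from Step3; New York to Los Angeles should come out at roughly 2,450 miles:
nyc_la = distance(cities['New York City'], cities['Los Angeles'])
print("NYC -> LA: %.0f miles" % nyc_la)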
|
4,473
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import faps as fp
import matplotlib.pylab as plt
import pandas as pd
from time import time, localtime, asctime
print("Created using FAPS version {}.".format(fp.__version__))
np.random.seed(37)
allele_freqs = np.random.uniform(0.2, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
family1 = fp.make_offspring(parents = adults, noffs=5)
family1.parents
family2 = fp.make_offspring(parents = adults, dam_list=[7,1,8,8,0], sire_list=[2,6,3,0,7])
family2.parents
family3 = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
family3.parents
family4 = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=[5,4,3,2])
family4.parents
np.random.seed(85)
allele_freqs = np.random.uniform(0.2, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
progeny = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
d, mu= 0.01, 0.0015 # values for dropout and error rate.
# add genotyping errors
adults_mu = adults.mutations(mu)
progeny_mu = progeny.mutations(mu)
# add dropouts (to the mutated data)
adults_mu = adults_mu.dropouts(d)
progeny_mu = progeny.dropouts(d)
print(adults.missing_data().mean())
print(adults_mu.missing_data().mean())
np.random.seed(85)
allele_freqs = np.random.uniform(0.4, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
progeny = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
mothers = adults.subset(progeny.mothers)
patlik = fp.paternity_array(progeny, mothers, adults, mu=0.0015, missing_parents=0.01, integration='partial')
sc = fp.sibship_clustering(patlik)
sc.accuracy(progeny, adults)
patlik.purge = 'adult_1'
patlik.missing_parents=0.5
sc = fp.sibship_clustering(patlik)
sc.accuracy(progeny, adults)
patlik.selfing_rate=0.5
sc = fp.sibship_clustering(patlik)
sc.accuracy(progeny, adults)
# Common simulation parameters
r = 10 # number of replicates
nloci = [30,40,50] # number of loci
allele_freqs = [0.25, 0.5] # draw allele frequencies
nadults = [100,250,500] # size of the adults population
mu = 0.0015 #genotype error rates
sires = 4
offspring = 5
np.random.seed(614)
eventab = fp.make_power(
replicates = r,
nloci = nloci,
allele_freqs = allele_freqs,
candidates = nadults,
sires = sires,
offspring = offspring,
missing_loci=0,
mu_real = mu,
unsampled_input=0.01
)
fp.make_power(r, nloci, allele_freqs, nadults, sires, offspring, 0,
mu_input= 0.003,
mu_real=0.0015,
unsampled_real=0.1,
unsampled_input = 0.05);
eventab, evenclusters = fp.make_power(
replicates = r,
nloci = nloci,
allele_freqs = allele_freqs,
candidates = nadults,
sires = sires,
offspring = offspring,
missing_loci=0,
mu_real = mu,
unsampled_input=0.01,
return_clusters=True,
verbose=False
)
even_famsizes = np.array([evenclusters[i].family_size() for i in range(len(evenclusters))])
plt.plot(even_famsizes.mean(0))
plt.show()
# Common simulation parameters
nreps = 10 # number of replicates
nloci = [50] # number of loci
allele_freqs = [0.1, 0.2, 0.3, 0.4, 0.5] # draw allele frequencies
nadults = [10, 100, 250, 500, 750, 1000] # size of the adults population
mu_list = [0.0015] #genotype error rates
nsims = nreps * len(nloci) * len(allele_freqs) * len(nadults) * len(mu_list) # total number of simulations to run
dt = np.zeros([nsims, 7]) # empty array to store data
t0 = time()
counter = 0
print("Beginning simulations on {}.".format(asctime(localtime(time()) )))
for r in range(nreps):
for l in range(len(nloci)):
for a in range(len(allele_freqs)):
for n in range(len(nadults)):
for m in range(len(mu_list)):
af = np.repeat(allele_freqs[a], nloci[l])
adults = fp.make_parents(nadults[n], af)
progeny = fp.make_offspring(adults, 100)
mi = progeny.parent_index('m', adults.names) # maternal index
mothers = adults.subset(mi)
patlik = fp.paternity_array(progeny, mothers, adults, mu_list[m], missing_parents=0.01)
# Find the rank of the missing term within the array.
rank = [np.where(np.sort(patlik.prob_array()[i]) == patlik.prob_array()[i,-1])[0][0] for i in range(progeny.size)]
rank = np.array(rank).mean() / nadults[n]
                    # get the posterior probability for the missing term.
                    prob_missing = np.exp(patlik.prob_array()[:, -1]).mean()
                    #export data
                    dt[counter] = np.array([r, nloci[l], allele_freqs[a], nadults[n], mu_list[m], rank, prob_missing])
# update counters
counter += 1
print("Completed in {} hours.".format(round((time() - t0)/3600,2)))
head = ['rep', 'nloci', 'allele_freqs', 'nadults', 'mu', 'rank', 'prob_missing']
dt = pd.DataFrame(dt, columns=head)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Before committing to the time and cost of genotyping samples for a paternity study, it is always sensible to run simulations to test the likely statistical power of your data set. This can help with important questions regarding study design, such as finding an appropriate balance between the number of families vs offspring per family, or identifying a minimum number of loci to type. Simulated data can also be useful in verifying the results of an analysis.
Step2: There are multiple ways to mate adults to generate offspring. If you supply a set of adults and an integer number of offspring, make_offspring mates adults at random.
Step3: You can also supply an explicit list of dams and sires, in which case the adults are paired in the order they appear in each list.
Step4: Usually we really want to simulate half sib arrays. This can be done using make_sibships, which mates a single mother to a set of males.
Step5: For uneven sibship sizes, give a list of sizes for each family of the same length as sires.
Step6: Adding errors
Step7: It is best to create the progeny before adding errors. Set the error rates and add errors at random.
Step8: mutations and dropouts make copies of the genotypeArray, so the original data remains unchanged. For example
Step9: Paternity and sibships
Step10: A very useful tool is the accuracy subfunction for sibshipCluster objects.
Step11: In this example, accuracy is high, but the probability of a missing sire is NaN because all the sires are present, and this number is calculated only for offspring whose sire was absent.
Step12: In contrast, imagine we had an idea that selfing was strong. How would this affect things?
Step13: The results are identical to the unmodified case; FAPS has correctly identified the partition structure in spite of the (incorrect) strong prior for high selfing.
Step14: For convenience, make_power provides a summary of the input parameters.
Step15: If you want to perform downstream analysis, you can tell make_power to also export each paternity_Array and/or sibshipCluster object. This is done by setting return_paternities and return_clusters to True. For example, this code pulls out the distribution of family sizes from each sibshipArray, and plots it.
Step16: Custom simulations
Step17: This cell simulates genotype data and clusters the offspring into full sibships.
|
4,474
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
%load_ext autoreload
%autoreload 2
fig, axarr = plt.subplots(ncols=2, nrows=2)
# plot the same signal scaled and shifted or both
axarr.flat[0].plot(np.random.rand(10))
axarr.flat[1].plot((np.random.rand(10)*5)-16)
axarr.flat[2].plot((np.random.rand(10))-16)
axarr.flat[3].plot((np.random.rand(10)*5))
from xarrayutils.plotting import same_y_range
fig, axarr = plt.subplots(ncols=2, nrows=2)
# plot the same signal scaled and shifted or both
axarr.flat[0].plot(np.random.rand(10))
axarr.flat[1].plot((np.random.rand(10)*5)-16)
axarr.flat[2].plot((np.random.rand(10))-16)
axarr.flat[3].plot((np.random.rand(10)*5))
same_y_range(axarr)
from xarrayutils.plotting import shaded_line_plot
# build test dataset with noisy members
x = np.linspace(0,2*np.pi, 10)
y = np.sin(x)
y_full = np.stack([y+np.random.rand(len(y))*2-0.5 for e in range(6)])
da = xr.DataArray(y_full, coords=[('member',range(6)),('time',x)])
da.plot(hue='member');
x = np.linspace(0,2*np.pi, 10)
y = np.sin(x+2)
y_full = np.stack([y+np.random.rand(len(y))*2-0.5 for e in range(6)])
da2 = xr.DataArray(y_full, coords=[('member',range(6)),('time',x)])
da.plot(hue='member')
da2.plot(hue='member');
da.plot(hue='member', color='C0')
da2.plot(hue='member', color='C1');
shaded_line_plot(da, 'member', color='C0');
shaded_line_plot(da2, 'member', color='C1');
shaded_line_plot(da, 'member', color='C0')
(da.mean('member') + da.std('member') / 2).plot(color='k', ls='--')
(da.mean('member') + da.std('member') * 3 / 2).plot(color='k', ls='-.')
shaded_line_plot(da, 'member', spreads=[2,3,5], alphas=[0.1, 0.3, 0.4], color='C0');
shaded_line_plot(da2, 'member',spreads=[2,3,5], alphas=[0.2, 0.5, 0.6], color='C1');
shaded_line_plot(da, 'member', spread_style='quantile', color='C0');
da.quantile(0.25,'member').plot(color='k', ls='--')
da.quantile(0.1,'member').plot(color='k', ls='--')
shaded_line_plot(da, 'member', spread_style='quantile', spreads=[0.5,1], color='C0');
shaded_line_plot(da2, 'member',spread_style='quantile', spreads=[0.5,1], color='C1');
woa_path = 'https://data.nodc.noaa.gov/thredds/dodsC/ncei/woa/oxygen/all/1.00/woa18_all_o00_01.nc'
woa = xr.open_dataset(woa_path, decode_times=False)
o2 = woa[['o_an']].squeeze(drop=True).o_an
o2.sel(lon=-180, method='nearest').plot(robust=True)
plt.gca().invert_yaxis()
from xarrayutils.plotting import linear_piecewise_scale
o2.sel(lon=-180, method='nearest').plot(robust=True)
ax = plt.gca()
ax.invert_yaxis()
linear_piecewise_scale(1000, 5)
#indicate the point between the different scalings
ax.axhline(1000, color='0.5', ls='--')
# Rearange the yticks
ax.set_yticks([0, 250, 500, 750, 1000, 3000, 5000]);
o2.sel(lon=-180, method='nearest').plot(robust=True)
ax = plt.gca()
ax.invert_yaxis()
linear_piecewise_scale(500, 20)
#indicate the point between the different scalings
ax.axhline(500, color='0.5', ls='--')
# Rearange the yticks
ax.set_yticks([0, 125, 250, 375, 500, 2000, 4000, 6000]);
o2.sel(lon=-180, method='nearest').plot(robust=True)
ax = plt.gca()
ax.invert_yaxis()
linear_piecewise_scale(1000, 2, scaled_half='lower')
#indicate the point between the different scalings
ax.axhline(1000, color='0.5', ls='--')
# Rearange the yticks
ax.set_yticks([0, 1000, 2000, 3000, 4000, 5000, 6000]);
o2.sel(lon=-180, method='nearest').plot(robust=True)
ax = plt.gca()
ax.invert_yaxis()
linear_piecewise_scale(0, 2, axis='x')
#indicate the point between the different scalings
ax.axvline(0, color='0.5', ls='--')
ax.get_xscale()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: These are hard to compare with regard to their amplitude.
Step2: Now we can clearly see the different amplitude.
Step3: That's pretty cool (xarray is generally awesome!), but what if we have several of these datasets (e.g. climate models with several members each)?
Step4: Ok that is not great. We can color each dataset with a different color...
Step5: But if you supply more models this ends up looking too busy. xarrayutils.plotting.shaded_line_plot gives a quick alternative to show the spread of the members with a line and shaded envelope
Step6: In the default setting, this plots the mean along the dimension member as a line and the ranges indicate 1 standard deviation (dark shading) and 3 standard deviations (light shading). The transparency and spread values can be customized.
Step7: Let's add shading for 2, 3, and 5 standard deviations with increasing alpha values (a higher alpha makes the shading more opaque).
Step8: Additionally shaded_line_plot offers a different mode to determine the spread, using quantiles.
Step9: The default spreads value is [0.5, 0.8], which means the inner shading indicates the 25th to 75th percentile range and the outer one the 10th to 90th percentile range. You can customize this just like before.
Step10: Here the lines indicate the 50th percentile (approximate median), and the shadings indicate the range between the 25th and 75th percentile (dark) and the full range between the 0th and 100th percentile (light).
Step11: You can see that there is a lot more structure to the oxygen field in the upper ~1000m, and less below. Yet the plot is visually dominated by the deep ocean. We can focus on the upper ocean by compressing the values below 1000 m using linear_piecewise_scale.
Step12: Now you can see the upper ocean structure more clearly, without completely cutting out the deep ocean. We can adjust the cut (the point of the y-axis where the different linear scales meet) and the scale (higher numbers mean a stronger compression of the deep ocean).
Step13: We could also compress the upper ocean using the scaled_half argument. Here upper means values larger than cut are compressed, and lower means values smaller than cut are compressed instead.
Step14: You can apply all of the above to the x-axis as well, using the axis keyword.
Step15: This would put the focus on the southern hemisphere.
|
4,475
|
<ASSISTANT_TASK:>
Python Code::
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
image = Image.open('image.jpg')
gray = np.mean(image, axis = 2)
h_x = [[1,0,-1], [2,0,-2], [1,0,-1]]
h_y = [[1,2,1], [0,0,0], [-1,-2,-1]]
g_x = convolve2d(gray, h_x)
g_y = convolve2d(gray, h_y)
a = np.square(g_x) + np.square(g_y)
G = np.sqrt(a)
plt.imshow(G, cmap='gray')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load an image, convert it to grayscale, convolve it with the horizontal and vertical Sobel kernels, and display the resulting gradient magnitude.
|
4,476
|
<ASSISTANT_TASK:>
Python Code:
import os
from os.path import isdir, join
from pathlib import Path
import pandas as pd
from tqdm import tqdm
# Math
import numpy as np
import scipy.stats
from scipy.fftpack import fft
from scipy import signal
from scipy.io import wavfile
import librosa
import librosa.display
from scipy import sparse, stats, spatial
import scipy.sparse.linalg
# Machine learning
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
import IPython.display as ipd
# Self_made functions
from main_pipeline import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (17, 5)
recompute = False
# Conditional recomputing : (WARNING : THIS TAKES MORE THAN 24H)
if recompute == True :
# Extracts and cuts the audio files from the folder to store it inside a set of pickles
main_train_audio_extraction()
# Computes the features from the previously extracted audio files. Save them into a single pickle.
main_train_audio_features()
N = 2
train_audio_path = '../Data/train/audio'
dirs = [f for f in os.listdir(train_audio_path) if isdir(join(train_audio_path, f))]
dirs.sort()
path = []
word = []
speaker = []
iteration = []
for direct in dirs:
if not direct.startswith('_'):
# Random selection of N files per folder
list_files = os.listdir(join(train_audio_path, direct))
wave_selected = list(np.random.choice([ f for f in list_files if f.endswith('.wav')],N,replace=False))
# Extraction of file informations for dataframe
word.extend(list(np.repeat(direct,N,axis=0)))
speaker.extend([wave_selected[f].split('.')[0].split('_')[0] for f in range(N) ])
iteration.extend([wave_selected[f].split('.')[0].split('_')[-1] for f in range(N) ])
path.extend([train_audio_path + '/' + direct + '/' + wave_selected[f] for f in range(N)])
# Creation of the Main Dataframe :
features_og = pd.DataFrame({('info','word',''): word,
('info','speaker',''): speaker,
('info','iteration',''): iteration,
('info','path',''): path})
index_og = [('info','word',''),('info','speaker',''),('info','iteration','')]
features_og.head()
word_1 = 1
word_2 = 59
def get_audio(filepath):
audio, sampling_rate = librosa.load(filepath, sr=None, mono=True)
return audio, sampling_rate
audio_1, sampling_rate_1 = get_audio(features_og[('info','path')].iloc[word_1])
audio_2, sampling_rate_2 = get_audio(features_og[('info','path')].iloc[word_2])
# normalize audio signals
audio_1 = audio_1/np.max(audio_1)
audio_2 = audio_2/np.max(audio_2)
# Look at the signal in the time domain
plt.plot(audio_1)
# successive plt.plot calls draw on the same axes (plt.hold is deprecated and not needed)
plt.plot(audio_2)
# Listen to the first word
ipd.Audio(data=audio_1, rate=sampling_rate_1)
# Listen to the first word
ipd.Audio(data=audio_2, rate=sampling_rate_1)
def find_lobes(Thresh, audio, shift = int(2048/16)):
    """Finds all energy lobes in an audio signal and returns their start and end indices. The parameter Thresh defines
    the sensitivity of the algorithm.
    """
# Compute rmse
audio = audio/np.max(audio)
rmse_audio = librosa.feature.rmse(audio, hop_length = 1, frame_length=int(shift*2)).reshape(-1,)
rmse_audio -= np.min(rmse_audio)
rmse_audio /= np.max(rmse_audio)
i_start = np.array([])
i_end = np.array([])
for i in range(len(rmse_audio)-1):
if (int(rmse_audio[i]>Thresh)-int(rmse_audio[i+1]>Thresh)) == -1:
i_start = np.append(i_start,i)
elif (int(rmse_audio[i]>Thresh)-int(rmse_audio[i+1]>Thresh)) == 1:
i_end = np.append(i_end,i)
if len(i_start) == 0:
i_start = np.append(i_start,0)
if len(i_end) == 0:
i_end = np.append(i_end,i)
if i_start[0]>i_end[0]:
i_start = np.append(np.array(0), i_start)
if i_start[-1]>i_end[-1]:
i_end = np.append(i_end,i)
return i_start, i_end, rmse_audio, shift
def cut_signal( audio, Thresh = 0.1, mode = 'proxy',reach = 2000, number_lobes = 2):
    """Extracts relevant parts of an audio signal.
    The Thresh input value defines the sensitivity of the cut; its value has to be positive.
    Two modes can be chosen:
        - proxy (default): Finds the main energy lobe of the signal and also adds lobes that are within reach.
          The reach parameter can be adjusted and has to be a positive value (default is 2000).
        - num_lobes: Finds the highest energy lobes of the signal. The parameter num_lobes (default value 2)
          defines how many of the largest lobes are being considered.
    """
i_start, i_end, rmse_audio, shift = find_lobes(Thresh, audio)
energy = np.array([])
for i in range(len(i_start)):
energy = np.append(energy,sum(rmse_audio[int(i_start[i]):int(i_end[i])]))
    if mode == 'num_lobes':
lobes = np.argsort(energy)[-number_lobes:]
start = np.min(i_start[lobes])
end = np.max(i_end[lobes])
    elif mode == 'proxy':
main_lobe = np.argsort(energy)[-1]
start = i_start[main_lobe]
end = i_end[main_lobe]
for i in range(main_lobe):
if (i_start[main_lobe]-i_end[i])<reach:
start = np.min((i_start[i],start))
for i in range(main_lobe,len(i_start)):
if (i_start[i]-i_end[main_lobe])<reach:
end = i_end[i]
else:
print('ERROR: mode not implemented.')
audio_cut = audio[int(np.max((0,int(start-shift-300)))):int(np.min((int(end)+300,len(audio))))]
return audio_cut
rmse_audio_1 = librosa.feature.rmse(audio_1, hop_length = 1, frame_length=int(2048/8)).reshape(-1,)
rmse_audio_1 -= np.min(rmse_audio_1)
rmse_audio_1 /= np.max(rmse_audio_1)
plt.plot(rmse_audio_1)
plt.grid()
plt.title('RMSE of Audio signal')
plt.xlabel('mffc sample')
plt.ylabel('rmse')
# successive plt.plot calls draw on the same axes (plt.hold is deprecated and not needed)
rmse_audio_2 = librosa.feature.rmse(audio_2, hop_length = 1, frame_length=int(2048/8)).reshape(-1,)
rmse_audio_2 -= np.min(rmse_audio_2)
rmse_audio_2 /= np.max(rmse_audio_2)
plt.plot(rmse_audio_2)
# Cutting above the threshold and keeping the main lobes :
audio_1_cut = cut_signal(audio_1)
audio_2_cut = cut_signal(audio_2)
# Display cut time signal
plt.plot(audio_1_cut)
# successive plt.plot calls draw on the same axes (plt.hold is deprecated and not needed)
plt.plot(audio_2_cut)
print('Cut Version 1 :')
ipd.Audio(data=audio_1_cut, rate=sampling_rate_1)
print('Cut Version 2 :')
ipd.Audio(data=audio_2_cut, rate=sampling_rate_2)
N_MFCCS = 10
#n_fft, hop_length
mfccs_1 = librosa.feature.mfcc(y=audio_1_cut,sr=sampling_rate_1, n_mfcc=N_MFCCS, n_fft = int(2048/2), hop_length = int(np.floor(len(audio_1_cut)/20)))
mfccs_2 = librosa.feature.mfcc(y=audio_2_cut,sr=sampling_rate_2, n_mfcc=N_MFCCS, n_fft = int(2048/2), hop_length = int(np.floor(len(audio_2_cut)/20)))
mfccs_1 = mfccs_1[:,:-1]
mfccs_2 = mfccs_2[:,:-1]
print(np.shape(mfccs_1))
print(np.shape(mfccs_2))
plt.figure(figsize=(10, 4))
librosa.display.specshow(mfccs_1, x_axis='time')
plt.colorbar()
plt.title('MFCC 1st Word')
plt.tight_layout()
plt.figure(figsize=(10, 4))
librosa.display.specshow(mfccs_2, x_axis='time')
plt.colorbar()
plt.title('MFCC 2nd Word')
plt.tight_layout()
# Load features
features_og = pd.read_pickle('./Features Data/cut_mfccs_all_raw_10_1028_20.pickle')
features_og.head()
# Build Label vector
# Define class name vector, the index will correspond to the class label
class_names = features_og['info']['word'].unique()
y = np.ones(len(features_og))
for i in range(0,len(class_names)):
y +=(features_og['info','word'] == class_names[i]) * i
# Plot the label vector
print('We have {} datapoints over the entire dataset.'.format(len(y)))
fix, axes = plt.subplots(1, 2, figsize=(17, 5))
axes[0].plot(y)
axes[0].grid()
axes[0].set_xlabel('datapoint n')
axes[0].set_ylabel('label yn')
# Plot distribution of classe
axes[1].hist(y,30)
axes[1].set_xlabel('class')
axes[1].set_ylabel('number of datapoints')
# Specify the number of datapoints that should be sampled in each class to build training and validation set
train_size = 160
valid_size = 1553
train_x = np.array([])
train_y = np.array([])
valid_x = np.array([])
valid_y = np.array([])
for i in range(len(class_names)):
class_index = np.where(y == (i+1))[0]
random_index = np.random.choice(range(len(class_index)), size=train_size+valid_size, replace=False)
train_x_class = class_index[random_index[:train_size]]
train_y_class = y[train_x_class]
train_x = np.append(train_x, train_x_class).astype(int)
train_y = np.append(train_y, train_y_class).astype(int)
valid_x_class = class_index[random_index[train_size:train_size+valid_size]]
valid_y_class = y[valid_x_class]
valid_x = np.append(valid_x, valid_x_class).astype(int)
valid_y = np.append(valid_y, valid_y_class).astype(int)
# Define batch size
batch_size = 200
# Choose datapoints from validation set at random to form a batch
potential_elements = np.array(list(enumerate(np.array(valid_x))))
indices = np.random.choice(potential_elements[:,0].reshape(-1,), batch_size, replace=False)
# The batch index_variable contains the indices of the batch datapoints inside the complete dataset
batch_index = potential_elements[:,1].reshape(-1,)[indices]
# Build data matrix and normalize features
X = pd.DataFrame(features_og['mfcc'], np.append(train_x, batch_index))
X -= X.mean(axis=0)
X /= X.std(axis=0)
print('The data matrix has {} datapoints.'.format(len(X)))
# Compute distances between all datapoints
distances = spatial.distance.squareform(spatial.distance.pdist(X,'cosine'))
n=distances.shape[0]
# Build weight matrix
kernel_width = distances.mean()
W = np.exp(np.divide(-np.square(distances),kernel_width**2))
# Make sure the diagonal is 0 for the weight matrix
np.fill_diagonal(W,0)
print('The weight matrix has a shape of {}.'.format(W.shape))
# Show the weight matrix
plt.matshow(W)
# compute laplacian
degrees = np.sum(W,axis=0)
laplacian = np.diag(degrees**-0.5) @ (np.diag(degrees) - W) @ np.diag(degrees**-0.5)
laplacian = sparse.csr_matrix(laplacian)
eigenvalues, eigenvectors = sparse.linalg.eigsh(A=laplacian,k=25,which='SM')
plt.plot(eigenvalues[1:], '.-', markersize=15);
plt.grid()
fix, axes = plt.subplots(5, 5, figsize=(17, 8))
for i in range(1,6):
for j in range(1,6):
a = eigenvectors[:,i]
b = eigenvectors[:,j]
labels = np.sign(a)
axes[i-1,j-1].scatter(a, b, c=labels, cmap='RdBu', alpha=0.5)
# Splitt Eigenvectors into train and validation parts
train_features = eigenvectors[:len(train_x),:]
valid_features = eigenvectors[len(train_x):,:]
def fit_and_test(clf, train_x, train_y, test_x, test_y):
clf.fit(train_x, train_y)
predict_y = clf.predict(test_x)
print('accuracy : ', np.sum(test_y==predict_y)/len(test_y))
return predict_y
clf = GaussianNB()
predict_y = fit_and_test(clf, train_features, train_y, valid_features, np.array(y[batch_index]))
clf = QuadraticDiscriminantAnalysis()
predict_y = fit_and_test(clf, train_features, train_y, valid_features, np.array(y[batch_index]))
def plot_confusion_matrix(test_y, predict_y, class_names):
conf_mat=confusion_matrix(test_y,predict_y)
plt.figure(figsize=(10,10))
plt.imshow(conf_mat/np.sum(conf_mat,axis=1),cmap=plt.cm.hot)
tick = np.arange(len(class_names))
plt.xticks(tick, class_names,rotation=90)
plt.yticks(tick, class_names)
plt.ylabel('ground truth')
plt.xlabel('prediction')
plt.title('Confusion matrix')
plt.colorbar()
plot_confusion_matrix(np.array(y[batch_index]), predict_y, class_names)
def adapt_labels(x_hat):
# Real accuracy considering only the main words :
class_names_list = ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
mask_names_main = [True if name in class_names_list else False for name in class_names]
index_names_main = [i for i in range(len(mask_names_main)) if mask_names_main[i] == True]
inverted_index_names = dict(zip(index_names_main,range(len(index_names_main))))
# Creating the label names :
class_names_main = class_names[mask_names_main].tolist()
class_names_main.extend(["unknown"])
# Adapting the labels in the test and prediction sets :
return np.array([inverted_index_names[int(x_hat[i])] if x_hat[i] in index_names_main else len(class_names_main)-1 for i in range(len(x_hat)) ]),class_names_main
valid_y_adapted, class_names_main = adapt_labels(np.array(y[batch_index]))
predict_y_adapted, class_names_main = adapt_labels(predict_y)
acc_adapted = np.sum(valid_y_adapted==predict_y_adapted)/len(valid_y_adapted)
print('accuracy for main words classification : ', acc_adapted)
plot_confusion_matrix(valid_y_adapted,predict_y_adapted, class_names_main)
# Sparsify using k- nearest neighbours and make sure it stays symmetric
NEIGHBORS = 120
# Make sure
for i in range(W.shape[0]):
idx = W[i,:].argsort()[:-NEIGHBORS]
W[i,idx] = 0
W[idx,i] = 0
plt.matshow(W)
# Build normalized Laplacian Matrix
D = np.sum(W,axis=0)
L = np.diag(D**-0.5) @ (np.diag(D) - W) @ np.diag(D**-0.5)
L = sparse.csr_matrix(L)
# Build one-hot encoded class matrix
Y_t = np.eye(len(class_names))[train_y - 1].T
print('The shape of the new label matrix Y is {}, its maximum value is {} and its minimum value is {}.'.format(np.shape(Y_t),np.min(Y_t),np.max(Y_t)))
# Create Mask Matrix
M = np.zeros((len(class_names), len(train_y) + batch_size))
M[:len(train_y),:len(train_y)] = 1
# Create extened label matrix and vector
Y = np.concatenate((Y_t, np.zeros((len(class_names), batch_size))), axis=1)
y_tv = np.concatenate((train_y,np.zeros((batch_size,)))) # y_tv corresponds to y in text
def solve(Y_compr, M, L, alpha, beta):
    """Solves the above defined optimization problem to find an estimated label vector."""
X = np.ones(Y_compr.shape)
for i in range(Y_compr.shape[0]):
Mask = np.diag(M[i,:])
y_i_compr = Y_compr[i,:]
X[i,:] = np.linalg.solve((Mask+alpha*L+beta),y_i_compr)
return X
# Solve for the matrix X
Y_hat = solve(Y, M, L,alpha = 1e-3, beta = 1e-7)
# Go from matrix X to estimated label vector x_hat
y_predict = np.argmax(Y_hat,axis = 0)+np.ones(Y_hat[0,:].shape)
# Adapt the labels, whee all words of the category "unknown" are unified
y_predict_adapted, class_names_main = adapt_labels(y_predict)
y_adapted, class_names_main = adapt_labels(np.array(y[batch_index]))
# Compute accuracy in predicting unknown labels
pred = np.sum(y_predict_adapted[-batch_size:]==y_adapted)/batch_size
print('The achieved accuracy clasifying the bacth of validation points using semi-supervised classification is {}.'.format(pred))
plot_confusion_matrix(y_adapted,y_predict_adapted[-batch_size:], class_names_main)
accuracy_mat = semisup_test_all_dataset(features_og, y, batch_size, NEIGHBORS, alpha = 1e-3, beta = 1e-7, iter_max=100, class_names = class_names)
# Display as boxplot
plt.boxplot(accuracy_mat.transpose(), labels = ['Spectral Clustering','Semi-Supervised Learning'])
plt.grid()
plt.title('Classification accuracy vs. classification method.')
plt.ylabel('Classification accuracy')
print('Using spectral clustering a mean accuracy of {}, with a variance of {} could be achieved.'.format(round(np.mean(accuracy_mat[0,:]),2),round(np.var(accuracy_mat[0,:]),4)))
print('Using semi-supervised classification a mean accuracy of {}, with a variance of {} could be achieved.'.format(round(np.mean(accuracy_mat[1,:]),2),round(np.var(accuracy_mat[1,:]),4)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recompute
Step2: Feature Extraction
Step3: Pipeline for a small number of audio files
Step4: After selecting 2 words we normalize their values to their maximum.
Step7: Next we define two auxiliary function that allow us to select main lobes of the signal and to keep only those lobes.
Step8: For the selection of the lobes we use the RMSE transformation. We next display the shape of those signals after this transformation
Step9: Next we apply our auxiliary function cut_signal to the 2 audio samples. As we can see, it efficiently removes the silence surrounding the main lobes.
Step10: Of the cut audio file we want now to compute our features the Mel-Frequency Cepstral Coefficients, short mfccs. For this, no matter the length of the audio file, we compute 20 mfcc vectors of dimension 10. This means we compoute a short-time Fourier transform at 20 equidistant time points inside the cut audio files and keep the lower 10 mfccs of the spectrum. Since the audio files are of different length after the cutting, we adjust the hop-length (length between two short-time Fourier analyses) for every audio file accordingly. this makes the resulting feature vectors comparable and adds a "time warping" effect which should make the feature more robust to slower/faster spoken words.
Step11: As we have already computed the features for the whole dataset, we load them directly from the following pickle file.
Step12: Classification Methods
Step13: In the above histogram we can see that the classes are not balanced inside the test set. However, for our testing we will choose a balanced training, as well as a balanced validation step. This corresponds to having an equal prior probability of occurrence between the different words we want to classify. Thus, in the next cell we choose at random $160$ datapoints per class to form our training set $S_t$ ($30 \times 160 = 4800$) and $1553$ datapoints per class to form the validation set $S_v$ ($1553 \times 30 = 46590$), which is the maximum amount of datapoints we can put into the validation set for it to still be balanced.
Step14: We will define the batch size, which defines how many validation samples are classified simultaneously. Then we choose at random 200 datapoints of the validation set $S_v$ to build said batch. Remark
Step15: Now we build our feature matrix $\mathbf{X}^{(N+K)\times D}$ by concatenating the feature vectors of all datapoints inside the training set $S_t$ and the batch datapoints. The features are then normalized by subtracting their mean, as well as dividing by the standard deviation. The feature normalizing step was found to have a very significant effect on the resulting classification accuracy.
Step16: Build Graph from Data Matrix
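For reference, restating what the code above computes: the pairwise cosine distances $d_{ij}$ between feature vectors are mapped through a Gaussian kernel,
$$W_{ij} = \exp\!\left(-\frac{d_{ij}^{2}}{\sigma^{2}}\right), \qquad \sigma = \operatorname{mean}(d), \qquad W_{ii} = 0,$$
which gives large weights to pairs of similar datapoints and numerically negligible weights to dissimilar ones.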
Step17: We can already see that there is a distinct square pattern inside the weight matrix. This points to a clustering inside a graph, achieved by good feature extraction (rows and columns are sorted by labels, except last 200). At this point we are ready to present the first classification method that was analyzed
Step18: We can now calculate the eigenvectors of the Laplacian matrix. These eigenvectors will be used as feature vectors for our classifier.
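In other words, if $U = [\mathbf{u}_1,\dots,\mathbf{u}_k]$ collects the eigenvectors belonging to the $k$ smallest eigenvalues of the Laplacian, row $j$ of $U$ serves as the $k$-dimensional spectral-embedding feature vector of node $j$; this is what feeding the eigenvectors to the classifiers below amounts to.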
Step19: In a next step we split the eigenvectors of the graph into two parts, one containing the nodes representing the training datapoints, one containing the nodes representing the validation datapoints.
Step20: A wide range of classifiers were tested on our input features. Remarkably, a very simple classifier such as the Gaussian Naive Bayes classifier produced far better results than more advanced techniques. This is mainly because the graph datapoints were generated using a gaussian kernel, and is therefore sensible to assume that our feature distribution will be gaussian as well. However, the best results were obtained using a Quadratic Discriminant Analysis classifier.
Step21: Once our test set has been classified we can visualize the effectiveness of our classification using a confusion matrix.
Step22: Finally we can focus on the core words that need to be classified and label the rest as 'unknown'.
Step23: In conclusion, we can say that, using spectral clustering, we were able to leverage the properties of graph theory to find relevant features in speech recognition. However, the accuracy achieved with our model is far too low for any practical applications. Moreover, this model does not benefit from sparsity, meaning that it will not be able to scale with large datasets.
Step24: We can see that the sparsified weight matrix is very focused on its diagonal. We will now build the normalized Laplacian, since it is the core graph feature we will use for semi-supervised classification. The normalized Laplacian is defined as $L = D^{-1/2}\,(D - W)\,D^{-1/2}$, where $D$ is the diagonal degree matrix of the graph.
Step25: For the semi-supervised classification approach, we now want to transform the label vector of our training data $\mathbf{y_t} \in {1,2,...,30}^{N}$ into a matrix $\mathbf{Y_t}\in {0,1}^{30\times N}$. Each row $i$ of the matrix $\mathbf{Y_t}$ contains an indicator vector $\mathbf{y_{t,i}}\in{0,1}^N$ for class $i$, which means it contains a vector which specifyies for each training node in the graph if it belongs to node $i$ or not.
Step26: In the next cell we extend our label matrix $\mathbf{Y_t}$, such that there are labels (not known yet) for the validation datapoints we want to classify. Thus we extend the rows of $\mathbf{Y}$ by $K$ zeros, since the last $K$ nodes in the weight matrix of the used graph correspond to the validation points. We also create the masking matrix $\mathbf{M}\in{0,1}^{30\times (N+K)}$, which specifies which of the entries in $\mathbf{Y}$ are known (training) and which are unknown (validation).
Step28: Now comes the main part of semi-supervised classification. The method relies on the fact that we have a clustered graph, which gives us similarity measures between all the considered datapoints. The above mentioned class indicator vectors $\mathbf{y_i}$ (rows of $\mathbf{Y}$) are considered to be smooth signals on the graph, which is why achieving a clustered graph with good feature extraction was important.<br>
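The 'above defined optimization problem' mentioned in the solve() docstring is not spelled out in this description; a standard formulation consistent with that routine (an assumption, not text from the original notebook) is, for each class indicator $\mathbf{y}_i$ and mask $\mathbf{M}_i = \mathrm{diag}(M_{i,:})$,
$$\hat{\mathbf{x}}_i = \arg\min_{\mathbf{x}}\; (\mathbf{x}-\mathbf{y}_i)^{T}\mathbf{M}_i(\mathbf{x}-\mathbf{y}_i) + \alpha\,\mathbf{x}^{T}\mathbf{L}\,\mathbf{x} + \beta\,\|\mathbf{x}\|_2^{2},$$
whose first-order condition is a linear system of the form $(\mathbf{M}_i + \alpha\mathbf{L} + \beta\mathbf{I})\,\mathbf{x} = \mathbf{M}_i\,\mathbf{y}_i$, matching the structure of the per-row np.linalg.solve call in the code above.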
Step29: Method Validation
|
4,477
|
<ASSISTANT_TASK:>
Python Code:
## You can use Python as a calculator:
5*7 #This is a comment and does not affect your code.
#You can have as many as you want.
#Comments help explain your code to others and yourself.
#No worries.
5+7
5-7
5/7
a = 10
b = 7
print(a)
print(b)
print(a*b , a+b, a/b)
a = 5
b = 7
print(a*b, a+b, a/b)
numList = [0,1,2,3,4,5,6,7,8,9]
print(numList)
L = len(numList)
print(L)
numList[3]
# your code here
x = numList[5]
print(x)
fibList = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
fibList[5]
addList = [1, 1, 5, 4, 6, 7, 3, 2, 8]
print(addList)
# Now let's add some new numbers to the list
addList = addList + [4, 3, 2, 6]
print(addList)
# Run this code
%matplotlib inline
# this "magic" command puts the plots right in the jupyter notebook
import matplotlib
# Run this code
import matplotlib.pyplot as plt
x = numList
y = numList
p = plt.plot(x, y)
# Clear the plotting field.
plt.clf() # No need to add anything inside these parentheses.
# First line
plt.plot(x, y, color='blue', linestyle='-', linewidth=1, label='num')
# Second line
z = fibList
# you can shorten the keywords like "color" to be just "c" for quicker typing
plt.plot(x, z, c='r', ls='--', lw=3, label='fib')
# add the labels and titles
plt.xlabel('x values')
plt.ylabel('y values')
plt.title('My First Plot')
plt.legend(loc='best')
#Would you like to save your plot? Uncomment the below line. Here, we use savefig('nameOffigure')
#It should save to the folder you are currently working out of.
#plt.savefig('MyFirstFigure.jpg')
# defining lists
list1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
list2 = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
plt.clf()
plt.plot(list1, list2, c='purple', ls='-.', lw=2, label='Sqr')
plt.xlabel('x values')
plt.ylabel('y values')
plt.title('Exercise 1')
plt.legend(loc='best')
plt.savefig('exercise1')
#Example conditional statements
x = 1
y = 2
x<y #x is less than y
#x is greater than y
x>y
#x is less-than or equal to y
x<=y
#x is greater-than or equal to y
x>=y
#Example of and operator
(1<2) and (2<3)
#Example of or operator
(1<2) or (2>3)
#Example of not operator
not(1<2)
x = 1
y = 2
if (x < y):
print("Yup, totally true!")
else:
print("Nope, completely wrong!")
x = 2
y = 1
if (x > y):
print("x is greater than y")
x = 2
y = 2
if (x == y):
print("x and y are equal")
if (x != y):
print("x and y are not equal")
if (x > y or x < y):
print("x and y are not equal (again!)")
x = 1
while (x <= 10):
print(x)
x = x + 1
x = 2
i = 0 #dummy variable
while (i<10):
x = 2*x
print(x)
i = i+1
myList = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# we want to end the loop at the end of the list i.e., the length of the list
end = len(myList)
# your code here
i = 0
while i < end:
num = myList[i]
print(num**2)
i = i + 1
twoList = [2, 5, 6, 2, 4, 1, 5, 7, 3, 2, 5, 2]
count = 0 # this variable will count up how many times the number 2 appears in the above list
end = len(twoList)
i = 0
while i < end:
if twoList[i] == 2:
count = count + 1
i = i + 1
print(count)
x = [True, True, False, False]
y = [True, False, True, False]
print('x and y')
i = 0
while i < len(x):
print (x[i] and y[i])
i = i+1
print('x or y')
i = 0
while i < len(y):
print (x[i] or y[i])
i = i+1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unfortunately, the output of your calculations won't be saved anywhere, so you can't use them later in your code.
Step2: You can also write over variables with new values, but your previous values will be gone.
Step3: Next, let's create a list of numbers. A list is a way to store items in a group.
Step4: How many elements or numbers does the list numList contain? Yes, this is easy to count now, but you will eventually work with lists that contains MANY items. To get the length of a list, use len().
Step5: You can also access particular elements in an array by indexing. The syntax for this is the following
Step6: How would you access the number 5 in numList?
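A quick added illustration of zero-based indexing (the numbers are arbitrary):
myList = [10, 20, 30, 40]
myList[0]   # first element  -> 10
myList[3]   # fourth element -> 40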
Step7: Let's try making more complicated list
Step8: Now let's say you create a list of numbers, but later on you want to add more numbers to the list. We can do that! We can quite literally add them to the list like so
Step9: See how the list changed to now include the new numbers?
Step10: When using modules (also sometimes called libraries or packages ) you can use a nickname through the as keyword so you don't have to type the long module name every time. For example, matplotlib.pyplot is typically shortened to plt like below.
Step11: Now let's do a quick simple plot using the list we defined earlier!
Step12: You can change a lot of attributes about plots, like the style of the line, the color, and the thickness of the line. You can add titles, axis labels, and legends. You can also put more than one line on the same plot. This link includes all the ways you can modify plots
Step13: EXERCISE 1
Step14: C. Logic, If/Else, and Loops
Step15: Think of the statement $x<y$ as asking the question "is x less than y?" If it is, then it returns True and if x is not less than y it returns False.
Step16: If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements.
Step17: Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover.
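A small added example of combining comparisons with the logical operators:
x = 1
y = 2
(x < y) and (y < 10)   # True and True  -> True
(x > y) or (y > 10)    # False or False -> False
not (x < y)            # not True       -> False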
Step18: The idea here is that Python checks to see if the statement (in this case "x < y") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False.
Step19: Here's a more complicated case. Here, we introduce some logic that helps you figure out if two objects are equal or not.
Step20: Loops
Step21: Note here that we tell Python to print the number x (x starts at 1) and then redefining x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output.
Step22: Now we want to combine lists with loops! You can use the dummy variable as a way to access a value in the list through its index. In exercise 1 we asked you to square the elements in a given list by hand, let's now do it by using a loop. The setup for the loop is provided below but try completing the code on your own!
Step23: Isn't that much easier than squaring everything by hand? Loops are your friends in programming and will make menial, reptitive tasks go by very quickly.
Step24: Notice how the indentation is set up. What happens if you indent the print statement? How about removing the indentation on the if statement? Play around with it so you get the hang of indentation in nested code.
|
4,478
|
<ASSISTANT_TASK:>
Python Code:
# install published dev version
# !pip install cirq~=0.4.0.dev
# install directly from HEAD:
!pip install git+https://github.com/quantumlib/Cirq.git@8c59dd97f8880ac5a70c39affa64d5024a2364d0
import cirq
import numpy as np
import matplotlib.pyplot as plt
print(cirq.google.Foxtail)
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = .25 # Put your own weight here.
angle = 2*np.pi*w
circuit = cirq.Circuit.from_ops(cirq.ControlledGate(cirq.Rx(angle)).on(a,b))
print(circuit)
circuit.to_unitary_matrix().round(2)
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = 0.25 # Put your own weight here.
angle = 2*np.pi*w
circuit = cirq.Circuit.from_ops([cirq.X(a),
cirq.ControlledGate(cirq.Rx(-angle)).on(a,b),
cirq.X(a)])
print(circuit)
circuit.to_unitary_matrix().round(2)
class ZXGate(cirq.ops.gate_features.TwoQubitGate):
    """ZXGate with variable weight."""
    def __init__(self, weight=1):
        """Initializes the ZX Gate up to phase.
        Args:
            weight: rotation angle, period 2
        """
self.weight = weight
def _decompose_(self, qubits):
a, b = qubits
## YOUR CODE HERE
# This lets the weight be a Symbol. Useful for paramterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
class ZXGate(cirq.ops.gate_features.TwoQubitGate):
    """ZXGate with variable weight."""
    def __init__(self, weight=1):
        """Initializes the ZX Gate up to phase.
        Args:
            weight: rotation angle, period 2
        """
self.weight = weight
def _decompose_(self, qubits):
a, b = qubits
yield cirq.ControlledGate(cirq.Rx(2*np.pi*self.weight)).on(a,b)
yield cirq.X(a)
yield cirq.ControlledGate(cirq.Rx(-2*np.pi*self.weight)).on(a,b)
yield cirq.X(a)
# This lets the weight be a Symbol. Useful for paramterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
class ZXGate(cirq.ops.eigen_gate.EigenGate,
cirq.ops.gate_features.TwoQubitGate):
    """ZXGate with variable weight."""
    def __init__(self, weight=1):
        """Initializes the ZX Gate up to phase.
        Args:
            weight: rotation angle, period 2
        """
self.weight = weight
super().__init__(exponent=weight) # Automatically handles weights other than 1
def _eigen_components(self):
return [
(1, np.array([[0.5, 0.5, 0, 0],
[ 0.5, 0.5, 0, 0],
[0, 0, 0.5, -0.5],
[0, 0, -0.5, 0.5]])),
(??, ??) # YOUR CODE HERE: phase and projector for the other eigenvalue
]
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
class ZXGate(cirq.ops.eigen_gate.EigenGate,
cirq.ops.gate_features.TwoQubitGate):
    """ZXGate with variable weight."""
    def __init__(self, weight=1):
        """Initializes the ZX Gate up to phase.
        Args:
            weight: rotation angle, period 2
        """
self.weight = weight
super().__init__(exponent=weight) # Automatically handles weights other than 1
def _eigen_components(self):
return [
(1, np.array([[0.5, 0.5, 0, 0],
[ 0.5, 0.5, 0, 0],
[0, 0, 0.5, -0.5],
[0, 0, -0.5, 0.5]])),
(-1, np.array([[0.5, -0.5, 0, 0],
[ -0.5, 0.5, 0, 0],
[0, 0, 0.5, 0.5],
[0, 0, 0.5, 0.5]]))
]
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = .15 # Put your own weight here. Try using a cirq.Symbol.
circuit = cirq.Circuit.from_ops(ZXGate(w).on(a,b))
print(circuit)
test_matrix = np.array([[np.cos(np.pi*w), 1j*np.sin(np.pi*w), 0, 0],
[1j*np.sin(np.pi*w), np.cos(np.pi*w), 0, 0],
[0, 0, np.cos(np.pi*w), -1j*np.sin(np.pi*w)],
[0, 0, -1j*np.sin(np.pi*w),np.cos(np.pi*w)]])
# Test for five digits of accuracy. Won't work with cirq.Symbol
assert (circuit.to_unitary_matrix().round(5) == test_matrix.round(5)).all()
# Total number of data qubits
INPUT_SIZE = 9
data_qubits = cirq.LineQubit.range(INPUT_SIZE)
readout = cirq.NamedQubit('r')
# Initialize parameters of the circuit
params = {'w': 0}
def ZX_layer():
    """Adds a ZX gate between each data qubit and the readout.
    All gates are given the same cirq.Symbol for a weight.
    """
for qubit in data_qubits:
yield ZXGate(cirq.Symbol('w')).on(qubit, readout)
qnn = cirq.Circuit()
qnn.append(???) # YOUR CODE HERE
qnn = cirq.Circuit()
qnn.append(ZX_layer())
qnn.append([cirq.S(readout)**-1, cirq.H(readout)]) # Basis transformation
print(qnn)
def readout_expectation(state):
    """Takes in a specification of a state as an array of 0s and 1s
    and returns the expectation value of Z on the readout qubit.
    Uses the XmonSimulator to calculate the wavefunction exactly.
    """
# A convenient representation of the state as an integer
state_num = int(np.sum(state*2**np.arange(len(state))))
resolver = cirq.ParamResolver(params)
simulator = cirq.Simulator()
# Specify an explicit qubit order so that we know which qubit is the readout
result = simulator.simulate(qnn, resolver, qubit_order=[readout]+data_qubits,
initial_state=state_num)
wf = result.final_state
    # Because we specified qubit order, the Z value of the readout is the most
# significant bit.
Z_readout = np.append(np.ones(2**INPUT_SIZE), -np.ones(2**INPUT_SIZE))
return np.sum(np.abs(wf)**2 * Z_readout)
def loss(states, labels):
loss=0
for state, label in zip(states,labels):
loss += 1 - label*readout_expectation(state)
return loss/(2*len(states))
def classification_error(states, labels):
error=0
for state,label in zip(states,labels):
error += 1 - label*np.sign(readout_expectation(state))
return error/(2*len(states))
def make_batch():
    """Generates a set of labels, then uses those labels to generate inputs.
    label = -1 corresponds to majority 0 in the state, label = +1 corresponds to
    majority 1.
    """
np.random.seed(0) # For consistency in demo
labels = (-1)**np.random.choice(2, size=100) # Smaller batch sizes will speed up computation
states = []
for label in labels:
states.append(np.random.choice(2, size=INPUT_SIZE, p=[0.5-label*0.2,0.5+label*0.2]))
return states, labels
states, labels = make_batch()
# Using cirq.Simulator with the EigenGate implementation of ZZ, this takes
# about 30s to run. Using the XmonSimulator took about 40 minutes the last
# time I tried it!
%%time
linspace = np.linspace(start=-1, stop=1, num=80)
train_losses = []
error_rates = []
for p in linspace:
params = {'w': p}
train_losses.append(loss(states, labels))
error_rates.append(classification_error(states, labels))
plt.plot(linspace, train_losses)
plt.xlabel('Weight')
plt.ylabel('Loss')
plt.title('Loss as a Function of Weight')
plt.show()
plt.plot(linspace, error_rates)
plt.xlabel('Weight')
plt.ylabel('Error Rate')
plt.title('Error Rate as a Function of Weight')
plt.show()
def stochastic_grad_loss():
    """Generates a new data point and computes the gradient of the loss
    using that data point.
    """
# Randomly generate the data point.
label = (-1)**np.random.choice(2)
state = np.random.choice(2, size=INPUT_SIZE, p=[0.5-label*0.2,0.5+label*0.2])
# Compute the gradient using finite difference
eps = 10**-5 # Discretization of gradient. Try different values.
params['w'] -= eps
loss1 = loss([state],[label])
params['w'] += 2*eps
grad = (loss([state],[label])-loss1)/(2*eps)
params['w'] -= eps # Reset the parameter value
return grad
eta = 10**-4 # Learning rate. Try different values.
params = {'w': 0} # Initialize weight. Try different values.
for i in range(201):
if not i%25:
print('Step: {} Loss: {}'.format(i, loss(states, labels)))
grad = stochastic_grad_loss()
params['w'] += -eta*grad
print('Final Weight: {}'.format(params['w']))
def readout_expectation_sample(state):
    """Takes in a specification of a state as an array of 0s and 1s
    and returns the expectation value of Z on the readout qubit.
    Uses the XmonSimulator to sample the final wavefunction.
    """
# We still need to resolve the parameters in the circuit.
resolver = cirq.ParamResolver(params)
# Make a copy of the QNN to avoid making changes to the global variable.
measurement_circuit = qnn.copy()
# Modify the measurement circuit to account for the desired input state.
# YOUR CODE HERE
# Add appropriate measurement gate(s) to the circuit.
# YOUR CODE HERE
simulator = cirq.google.XmonSimulator()
result = simulator.run(measurement_circuit, resolver, repetitions=10**6) # Try adjusting the repetitions
# Return the Z expectation value
return ((-1)**result.measurements['m']).mean()
def readout_expectation_sample(state):
    """Takes in a specification of a state as an array of 0s and 1s
    and returns the expectation value of Z on the readout qubit.
    Uses the XmonSimulator to sample the final wavefunction.
    """
# We still need to resolve the parameters in the circuit.
resolver = cirq.ParamResolver(params)
# Make a copy of the QNN to avoid making changes to the global variable.
measurement_circuit = qnn.copy()
# Modify the measurement circuit to account for the desired input state.
for i, qubit in enumerate(data_qubits):
if state[i]:
measurement_circuit.insert(0,cirq.X(qubit))
# Add appropriate measurement gate(s) to the circuit.
measurement_circuit.append(cirq.measure(readout, key='m'))
simulator = cirq.Simulator()
result = simulator.run(measurement_circuit, resolver, repetitions=10**6) # Try adjusting the repetitions
# Return the Z expectation value
return ((-1)**result.measurements['m']).mean()
state = [0,0,0,1,0,1,1,0,1] # Try different initial states.
params = {'w': 0.05} # Try different weights.
print("Exact expectation value: {}".format(readout_expectation(state)))
print("Estimates from sampling:")
for _ in range(5):
print(readout_expectation_sample(state))
print(cirq.google.Foxtail)
qnn_fox = cirq.Circuit()
w = 0.2 # Want an explicit numerical weight for later
for i in range(10):
qnn_fox.append([ZXGate(w).on(cirq.GridQubit(1,i), cirq.GridQubit(0,i)),
ZXGate(w).on(cirq.GridQubit(0,i+1), cirq.GridQubit(0,i)),
cirq.SWAP(cirq.GridQubit(0,i), cirq.GridQubit(0,i+1))])
qnn_fox.append(ZXGate(w).on(cirq.GridQubit(1,10), cirq.GridQubit(0,10)))
qnn_fox.append([(cirq.S**-1)(cirq.GridQubit(0,10)),cirq.H(cirq.GridQubit(0,10)),
cirq.measure(cirq.GridQubit(0,10))])
print(qnn_fox)
cirq.google.optimized_for_xmon(qnn_fox, new_device=cirq.google.Foxtail, allow_partial_czs=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To verify that Cirq is installed in your environment, try to import cirq and print out a diagram of the Foxtail device. It should produce a 2x11 grid of qubits.
Step2: The QNN Idea
Step3: Question
Step6: The Full $ZX$ Gate
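For reference, a closed form for the weighted gate, consistent with the test matrix in the code above: up to a possible global phase, the weight-$w$ gate is
$$ZX^{\,w} = e^{\,i\pi w\,(Z\otimes X)} = \cos(\pi w)\,I + i\,\sin(\pi w)\,(Z\otimes X),$$
where the first (control) qubit is read in the $Z$ basis and the second in the $X$ basis.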
Step9: Solution
Step12: EigenGate Implementation
Step15: Solution
Step16: Testing the Gate
Step17: We should also check that the matrix is what we expect
Step19: Create Circuit
Step20: Use this generator to create the QNN circuit. Don't forget to add the basis change for the readout qubit at the end!
Step21: Solution
Step22: View the Circuit
Step24: You can experiment with adding more layers of $ZX$ gates (or adding other kinds of transformations!) to your QNN, but we can use this simplest kind of circuit to analyze a simple toy problem, which is what we will do next.
Step25: Loss and Error
Step27: Generating Data
Step28: Training
Step30: Question
Step31: We can apply this function repeatedly to flow toward the minimum
Step33: Use Sampling Instead of Calculating from the Wavefunction
Step35: Solution
Step36: Comparison of Sampling with the Exact Wavefunction
Step37: As an exercise, try repeating some of the above calculations (e.g., the SGD optimization) using readout_expectation_sample in place of readout_expectation. How many repetitions should you use? How should the hyperparameters eps and eta be adjusted in response to the number of repetitions?
Step38: The qubits are arranged in two rows of eleven qubits each, and qubits can only communicate to their nearest neighbors along the horizontal and vertical connections. That does not mesh well with the QNN we designed, where all of the data qubits need to interact with the readout qubit.
Step39: As coded, this circuit still won't run on the Foxtail device. That's because the gates we've defined are not native gates. Cirq has a built-in method that will convert our gates to Xmon gates (which are native for the Foxtail device) and attempt to optimize the circuit by reducing the total number of gates
|
4,479
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image("images/monty.png")
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
# Defining the network structure
model = BayesianNetwork([("C", "H"), ("P", "H")])
# Defining the CPDs:
cpd_c = TabularCPD("C", 3, [[0.33], [0.33], [0.33]])
cpd_p = TabularCPD("P", 3, [[0.33], [0.33], [0.33]])
cpd_h = TabularCPD(
"H",
3,
[
[0, 0, 0, 0, 0.5, 1, 0, 1, 0.5],
[0.5, 0, 1, 0, 0, 0, 1, 0, 0.5],
[0.5, 1, 0, 1, 0.5, 0, 0, 0, 0],
],
evidence=["C", "P"],
evidence_card=[3, 3],
)
# Associating the CPDs with the network structure.
model.add_cpds(cpd_c, cpd_p, cpd_h)
# Some other methods
model.get_cpds()
# check_model check for the model structure and the associated CPD and returns True if everything is correct otherwise throws an exception
model.check_model()
# Infering the posterior probability
from pgmpy.inference import VariableElimination
infer = VariableElimination(model)
posterior_p = infer.query(["P"], evidence={"C": 0, "H": 2})
print(posterior_p)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: with the following CPDs
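A compact way to state the host CPD that cpd_h in the code above encodes (an interpretation of those numbers, not text from the original notebook): the host $H$ opens a door chosen uniformly at random among the doors that are neither the contestant's choice $C$ nor the prize door $P$,
$$P(H=h \mid C=c, P=p) = \begin{cases} 0 & h=c \text{ or } h=p,\\ 1/2 & c=p \text{ and } h\neq c,\\ 1 & c\neq p \text{ and } h\notin\{c,p\}.\end{cases}$$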
|
4,480
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
!gsutil cp gs://cloud-training-demos/taxifare/traffic/small/*.csv .
!ls -l *.csv
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat",\
"dropofflon","dropofflat","trips_last_5min"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7],[0]]
def read_dataset(csv_path):
def _parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename:tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = _parse_row)
return dataset
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
# 1. One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# 2. Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
def bucketize_fc(key,boundaries):
return tf.feature_column.bucketized_column(
source_column = tf.feature_column.numeric_column(key = key),
boundaries = boundaries)
fc_bucketized_plat = bucketize_fc("pickuplon",lonbuckets)
fc_bucketized_plon = bucketize_fc("pickuplat",latbuckets)
fc_bucketized_dlat = bucketize_fc("dropofflon",lonbuckets)
fc_bucketized_dlon = bucketize_fc("dropofflat",latbuckets)
# 3. Cross features to get combination of day and hour
fc_crossed_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)
return features
feature_cols = [
#1. Engineered using tf.feature_column module
tf.feature_column.indicator_column(categorical_column = fc_crossed_day_hr),
fc_bucketized_plat,
fc_bucketized_plon,
fc_bucketized_dlat,
fc_bucketized_dlon,
#2. Engineered in input functions
tf.feature_column.numeric_column(key = "latdiff"),
tf.feature_column.numeric_column(key = "londiff"),
tf.feature_column.numeric_column(key = "euclidean_dist"),
#3. Traffic proxy
tf.feature_column.numeric_column(key = "trips_last_5min",
normalizer_fn=lambda x: (x - 2070) / 616)
]
def serving_input_receiver_fn():
receiver_tensors = {
'dayofweek' : tf.placeholder(dtype = tf.int32, shape = [None]), # shape is vector to allow batch of requests
'hourofday' : tf.placeholder(dtype = tf.int32, shape = [None]),
'pickuplon' : tf.placeholder(dtype = tf.float32, shape = [None]),
'pickuplat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflon' : tf.placeholder(dtype = tf.float32, shape = [None]),
'trips_last_5min' : tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = add_engineered_features(receiver_tensors) # 'features' is what is passed on to the model
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors)
%%time
OUTDIR = "taxi_trained"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
model = tf.estimator.DNNRegressor(
hidden_units = [10,10], # specify neural architecture
feature_columns = feature_cols,
model_dir = OUTDIR,
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 200 # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn("./taxi-train.csv"),
max_steps = 5000)
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load raw data
Step2: Train and Evaluate input functions
Step3: Feature Engineering
Step4: Feature Engineering
Step5: Gather list of feature columns
Step6: Serving Input Receiver function
Step7: Train and Evaluate
|
4,481
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def lorentz_derivs(yvec, t, sigma, rho, beta):
    """Compute the derivatives for the Lorenz system at yvec(t)."""
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y-x)
dy = x*(rho-z)-y
dz = x*y-beta*z
return np.array([dx, dy,dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
        The array of time points used.
    """
t = np.linspace(0, max_time, int(250*max_time))
soln = odeint(lorentz_derivs,ic,t,args=(sigma,rho,beta))
return soln, t
assert True # leave this to grade solve_lorenz
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
        Parameters of the differential equation.
    """
np.random.seed(1)
icl = [np.random.randint(-15,15,N),np.random.randint(-15,15,N),np.random.randint(-15,15,N)]
colors = plt.cm.hot(np.linspace(0,1,N))
plt.figure(figsize = (11,8))
icx = list(icl[0])
icy = list(icl[1])
icz = list(icl[2])
solns = []
for i in range(N):
ic = [icx[i],icy[i],icz[i]]
solnsy,solnst= solve_lorentz(ic,max_time,sigma,rho,beta)
solns.append(solnsy)
for i in solns:
solnx = [a[0] for a in i]
solnz = [b[2] for b in i]
plt.plot(solnx,solnz,label = '%s,%s'%(ic[0],ic[2]))
plt.legend(loc = 4, title = 'Initial conditions(x,z)')
plt.title('Trajectories')
return solnsy
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
interact(plot_lorentz,max_time = (1,10,1),N = (1,50,1),sigma = (0.0,50.0,0.1),rho = (0.0,50.0,0.1),beta = fixed(8/3))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Lorenz system
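For reference, the Lorenz system implemented by lorentz_derivs in the code above is the standard one:
$$\dot{x} = \sigma\,(y-x), \qquad \dot{y} = x\,(\rho-z) - y, \qquad \dot{z} = x\,y - \beta\,z.$$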
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with
|
4,482
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import idx2numpy
import pyflann
import mnist
import matplotlib.pyplot as plt
train_image_labels = idx2numpy.convert_from_file('train-labels-idx1-ubyte')
train_images = idx2numpy.convert_from_file('train-images-idx3-ubyte')
test_image_labels = idx2numpy.convert_from_file('t10k-labels-idx1-ubyte')
test_images = idx2numpy.convert_from_file('t10k-images-idx3-ubyte')
flattened_train_images = train_images.reshape((train_images.shape[0],
train_images.shape[1]*train_images.shape[2]),
order='C')
flattened_test_images = test_images.reshape((test_images.shape[0],
test_images.shape[1]*test_images.shape[2]),
order='C')
pyflann.set_distance_type('minkowski', order=2)
flann = pyflann.FLANN()
index_params = flann.build_index(flattened_train_images, log_level='info',
algorithm='kdtree', trees=10)
index_params
fig = plt.figure(figsize=(10,5))
for i in range (8):
fig.add_subplot(2,4,i + 1)
mnist.plot_image(test_images[i], test_image_labels[i])
fig.suptitle("Sample Test MNIST Digits")
plt.show()
neighbor, dist = flann.nn_index(flattened_test_images[:1000], num_neighbors=1, checks=index_params['checks'])
test_image_predictions = train_image_labels[neighbor]
sum(test_image_predictions != test_image_labels[:1000])/1000
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first step is to build the index.
Step2: Let's take a look at our test data. Plotting routines can be found on GitHub.
Step3: Let us try and make predictions now.
|
4,483
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-veg', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
4,484
|
<ASSISTANT_TASK:>
Python Code:
from astropy import time
from poliastro.twobody.orbit import Orbit
from poliastro.bodies import Earth
from poliastro.frames import Planes
from poliastro.plotting import StaticOrbitPlotter
eros = Orbit.from_sbdb("Eros")
eros.plot(label="Eros");
ganymed = Orbit.from_sbdb("1036") # Ganymed IAU number
amor = Orbit.from_sbdb("2001221") # Amor SPK-ID
eros = Orbit.from_sbdb("2000433") # Eros SPK-ID
frame = StaticOrbitPlotter(plane=Planes.EARTH_ECLIPTIC)
frame.plot(ganymed, label="Ganymed")
frame.plot(amor, label="Amor")
frame.plot(eros, label="Eros");
try:
Orbit.from_sbdb("*alley")
except ValueError as err:
print(err)
eros.epoch.iso
epoch = time.Time(2458000.0, scale="tdb", format="jd")
eros_november = eros.propagate(epoch)
eros_november.epoch.iso
from poliastro.neos import dastcom5
atira = dastcom5.orbit_from_name("atira")[0] # NEO
wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO.
frame = StaticOrbitPlotter()
frame.plot(atira, label="Atira (NEO)")
frame.plot(wikipedia, label="Wikipedia (asteroid)");
halleys = dastcom5.orbit_from_name("1P")
frame = StaticOrbitPlotter()
frame.plot(halleys[0], label="Halley")
frame.plot(halleys[5], label="Halley")
frame.plot(halleys[10], label="Halley")
frame.plot(halleys[20], label="Halley")
frame.plot(halleys[-1], label="Halley");
ast_db = dastcom5.asteroid_db()
comet_db = dastcom5.comet_db()
ast_db.dtype.names[
:20
]  # There are more than 100, but that would be too many lines for this notebook :P
aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983
axis_condition = ast_db["A"] < 1.3
atiras = ast_db[aphelion_condition & axis_condition]
len(atiras)
from poliastro.bodies import Earth
frame = StaticOrbitPlotter()
frame.plot_body_orbit(Earth, time.Time.now().tdb)
for record in atiras["NO"]:
ss = dastcom5.orbit_from_record(record)
if ss.ecc < 1:
frame.plot(ss, color="#666666")
else:
print(f"Skipping hyperbolic orbit: {record}")
frame = StaticOrbitPlotter()
frame.plot_body_orbit(Earth, time.Time.now().tdb)
for i in range(len(atiras)):
record = atiras["NO"][i]
label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary
ss = dastcom5.orbit_from_record(record)
if ss.ecc < 1:
frame.plot(ss, label=label)
else:
print(f"Skipping hyperbolic orbit: {label}")
db = dastcom5.entire_db()
db.columns
db[
db.NAME == "Halley"
] # As you can see, Halley is the name of an asteroid too, did you know that?
aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983
axis_condition = db["A"] < 1.3
atiras = db[aphelion_condition & axis_condition]
len(atiras)
len(atiras[atiras.A < 0])
axis_condition = (db["A"] < 1.3) & (db["A"] > 0)
atiras = db[aphelion_condition & axis_condition]
len(atiras)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Small Body Database (SBDB)
Step2: You can also search by IAU number or SPK-ID (in that case there is also a faster neows.orbit_from_spk_id() function)
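A minimal sketch of that faster path (it assumes the neows module named above is importable and reuses the Eros SPK-ID from the code):
from poliastro.neos import neows
eros_fast = neows.orbit_from_spk_id("2000433")  # Eros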
Step3: You can use the wildcards from that browser
Step4: <div class="alert alert-info">Note that the epoch is provided by the service itself, so if you need the orbit at another epoch, you have to propagate it</div>
Step5: DASTCOM5 module
Step6: Keep in mind that this function returns a list of orbits matching your string. This is deliberate, given that some comets have several records in the database (one for each orbit determination in history), which allows plots like this one
Step7: While neows can only be used to get Orbit objects, dastcom5 can also provide asteroid and comet complete database.
Step8: <div class="alert alert-info">Asteroid and comet parameters are not exactly the same (although they are very close)</div>
Step9: The number of Atira NEOs we find using this method is
Step10: Which is consistent with the stats published by CNEOS
Step11: If we also needed the names of each asteroid, we could do
Step12: <div class="alert alert-info">We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted.</div>
Step13: Also, in this function, DASTCOM5 data (especially strings) is ready to use (decoded and improved strings, etc.)
Step14: Pandas offers many functionalities, and can also be used in the same way as the ast_db and comet_db functions
Step15: What? I said they can be used in the same way!
Step16: So, rewriting our condition
|
4,485
|
<ASSISTANT_TASK:>
Python Code:
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the import statements
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the image display
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this here to grade the quark table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
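A minimal sketch of the pattern (two illustrative rows only, not the full quark table):
from IPython.display import HTML
quark_table = """
<table>
  <tr><th>Name</th><th>Symbol</th><th>Charge</th></tr>
  <tr><td>up</td><td>$u$</td><td>$+\\frac{2}{3}e$</td></tr>
  <tr><td>down</td><td>$d$</td><td>$-\\frac{1}{3}e$</td></tr>
</table>
"""
HTML(quark_table)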
|
4,486
|
<ASSISTANT_TASK:>
Python Code:
# This tells matplotlib not to try opening a new window for each plot.
%matplotlib inline
# General libraries.
import re
import numpy as np
import matplotlib.pyplot as plt
# SK-learn libraries for learning.
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.grid_search import GridSearchCV
# SK-learn libraries for evaluation.
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import classification_report
# SK-learn library for importing the newsgroup data.
from sklearn.datasets import fetch_20newsgroups
# SK-learn libraries for feature extraction from text.
from sklearn.feature_extraction.text import *
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
newsgroups_train = fetch_20newsgroups(subset='train',
remove=('headers', 'footers', 'quotes'),
categories=categories)
newsgroups_test = fetch_20newsgroups(subset='test',
remove=('headers', 'footers', 'quotes'),
categories=categories)
num_test = len(newsgroups_test.target)
test_data, test_labels = newsgroups_test.data[num_test/2:], newsgroups_test.target[num_test/2:]
dev_data, dev_labels = newsgroups_test.data[:num_test/2], newsgroups_test.target[:num_test/2]
train_data, train_labels = newsgroups_train.data, newsgroups_train.target
print 'training label shape:', train_labels.shape
print 'test label shape:', test_labels.shape
print 'dev label shape:', dev_labels.shape
print 'labels names:', newsgroups_train.target_names
#def P1(num_examples=5):
### STUDENT START ###
### STUDENT END ###
#P1()
#def P2():
### STUDENT START ###
### STUDENT END ###
#P2()
#def P3():
### STUDENT START ###
### STUDENT END ###
#P3()
#def P4():
### STUDENT START ###
### STUDENT END ###
#P4()
#def empty_preprocessor(s):
# return s
#def better_preprocessor(s):
### STUDENT START ###
### STUDENT END ###
#def P5():
### STUDENT START ###
### STUDENT END ###
#P5()
#def P6():
# Keep this random seed here to make comparison easier.
#np.random.seed(0)
### STUDENT START ###
### STUDENT END ###
#P6()
#def P7():
### STUDENT START ###
## STUDENT END ###
#P7()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data, stripping out metadata so that we learn classifiers that only use textual features. By default, newsgroups data is split into train and test sets. We further split the test so we have a dev set. Note that we specify 4 categories to use for this project. If you remove the categories argument from the fetch function, you'll get all 20 categories.
Step2: (1) For each of the first 5 training examples, print the text of the message along with the label.
Step3: (2) Use CountVectorizer to turn the raw training text into feature vectors. You should use the fit_transform function, which makes 2 passes through the data
Step4: (3) Use the default CountVectorizer options and report the f1 score (use metrics.f1_score) for a k nearest neighbors classifier; find the optimal value for k. Also fit a Multinomial Naive Bayes model and find the optimal value for alpha. Finally, fit a logistic regression model and find the optimal value for the regularization strength C using l2 regularization. A few questions
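One possible shape for this experiment (a sketch, not the graded solution; it reuses train_data, dev_data and the label arrays defined above):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score
vec = CountVectorizer()
X_tr = vec.fit_transform(train_data)
X_dev = vec.transform(dev_data)
for alpha in [0.001, 0.01, 0.1, 1.0]:
    preds = MultinomialNB(alpha=alpha).fit(X_tr, train_labels).predict(X_dev)
    print('alpha=%g  f1=%.3f' % (alpha, f1_score(dev_labels, preds, average='weighted')))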
Step5: ANSWER
Step6: ANSWER
Step7: (6) The idea of regularization is to avoid learning very large weights (which are likely to fit the training data, but not generalize well) by adding a penalty to the total size of the learned weights. That is, logistic regression seeks the set of weights that minimizes errors in the training data AND has a small size. The default regularization, L2, computes this size as the sum of the squared weights (see P3, above). L1 regularization computes this size as the sum of the absolute values of the weights. The result is that whereas L2 regularization makes all the weights relatively small, L1 regularization drives lots of the weights to 0, effectively removing unimportant features.
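A sketch of the contrast, assuming the X_tr matrix from the sketch above (with the era's liblinear default solver, penalty='l1' is accepted directly):
from sklearn.linear_model import LogisticRegression
l1 = LogisticRegression(penalty='l1', C=1.0).fit(X_tr, train_labels)
l2 = LogisticRegression(penalty='l2', C=1.0).fit(X_tr, train_labels)
print('nonzero weights  l1: %d  l2: %d' % ((l1.coef_ != 0).sum(), (l2.coef_ != 0).sum()))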
Step8: (7) Use the TfidfVectorizer -- how is this different from the CountVectorizer? Train a logistic regression model with C=100.
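A sketch of the swap; unlike CountVectorizer, TfidfVectorizer reweights raw term counts by inverse document frequency:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
X_tr_tfidf = TfidfVectorizer().fit_transform(train_data)
clf = LogisticRegression(C=100).fit(X_tr_tfidf, train_labels)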
|
4,487
|
<ASSISTANT_TASK:>
Python Code:
from eden.io.gspan import load
pos_graphs = list(load('data/bursi.pos.gspan'))
neg_graphs = list(load('data/bursi.neg.gspan'))
graphs = pos_graphs + neg_graphs
y = [1]*len(pos_graphs) + [-1]*len(neg_graphs)
import numpy as np
y = np.array(y)
%%time
from eden.graph import vectorize
X = vectorize(graphs, complexity=2)
print 'Instances: %d Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
%%time
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
predictor = SGDClassifier(average=True, class_weight='balanced', shuffle=True, n_jobs=-1)
scores = cross_val_score(predictor, X, y, cv=10, scoring='roc_auc')
print('AUC ROC: %.4f +- %.4f' % (np.mean(scores),np.std(scores)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: EDeN exports a vectorize function that takes a list of graphs as input and returns a data matrix as output.
Step2: Several predictive algorithms from the scikit-learn library can process data in CSR (compressed sparse row) format.
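For example, the matrix can be checked and used without densifying (a small sketch reusing X from above):
from scipy.sparse import issparse
print('X is sparse: %s, format: %s' % (issparse(X), X.getformat()))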
|
4,488
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mfusg'
if platform.system() == 'Windows':
exe_name += '.exe'
mfexe = exe_name
modelpth = os.path.join('data')
modelname = 'zaidel'
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
# model dimensions
nlay, nrow, ncol = 1, 1, 200
delr = 50.
delc = 1.
# boundary heads
h1 = 23.
h2 = 5.
# cell centroid locations
x = np.arange(0., float(ncol)*delr, delr) + delr / 2.
# ibound
ibound = np.ones((nlay, nrow, ncol), dtype=int)
ibound[:, :, 0] = -1
ibound[:, :, -1] = -1
# bottom of the model
botm = 25 * np.ones((nlay + 1, nrow, ncol), dtype=float)
base = 20.
for j in range(ncol):
botm[1, :, j] = base
#if j > 0 and j % 40 == 0:
if j+1 in [40,80,120,160]:
base -= 5
# starting heads
strt = h1 * np.ones((nlay, nrow, ncol), dtype=float)
strt[:, :, -1] = h2
#make the flopy model
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth)
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol,
delr=delr, delc=delc,
top=botm[0, :, :], botm=botm[1:, :, :],
perlen=1, nstp=1, steady=True)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=strt)
lpf = flopy.modflow.ModflowLpf(mf, hk=0.0001, laytyp=4)
oc = flopy.modflow.ModflowOc(mf,
stress_period_data={(0,0): ['print budget', 'print head',
'save head', 'save budget']})
sms = flopy.modflow.ModflowSms(mf, nonlinmeth=1, linmeth=1,
numtrack=50, btol=1.1, breduc=0.70, reslim = 0.0,
theta=0.85, akappa=0.0001, gamma=0., amomentum=0.1,
iacl=2, norder=0, level=5, north=7, iredsys=0, rrctol=0.,
idroptol=1, epsrn=1.e-5,
mxiter=500, hclose=1.e-3, hiclose=1.e-3, iter1=50)
mf.write_input()
# remove any existing head files
try:
    os.remove(os.path.join(modelpth, '{0}.hds'.format(modelname)))
except:
pass
# run the model
mf.run_model()
# Create the mfusg headfile object
headfile = os.path.join(modelpth, '{0}.hds'.format(modelname))
headobj = flopy.utils.HeadFile(headfile)
times = headobj.get_times()
mfusghead = headobj.get_data(totim=times[-1])
fig = plt.figure(figsize=(8,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0.25, hspace=0.25)
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, mfusghead[0, 0, :], linewidth=0.75, color='blue', label='MODFLOW-USG')
ax.fill_between(x, y1=botm[1, 0, :], y2=-5, color='0.5', alpha=0.5)
leg = ax.legend(loc='upper right')
leg.draw_frame(False)
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Head, in m')
ax.set_ylim(-5,25);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model parameters
Step2: Create and run the MODFLOW-USG model
Step3: Read the simulated MODFLOW-USG model results
Step4: Plot MODFLOW-USG results
|
4,489
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sqlite3
%matplotlib inline
# Connect to the MIMIC database
conn = sqlite3.connect('data/mimicdata.sqlite')
# Create our test query
test_query = """
SELECT subject_id, hadm_id, admittime, dischtime, admission_type, diagnosis
FROM admissions
LIMIT 10;
"""
# Run the query and assign the results to a variable
test = pd.read_sql_query(test_query,conn)
# Display the first few rows
test.head()
query = """
SELECT de.icustay_id
, (strftime('%s',de.charttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS
, di.label
, de.value
, de.valuenum
, de.uom
FROM chartevents de
INNER join d_items di
ON de.itemid = di.itemid
INNER join icustays ie
ON de.icustay_id = ie.icustay_id
WHERE de.subject_id = 40036
ORDER BY charttime;
"""
ce = pd.read_sql_query(query,conn)
# Preview the data
# Use 'head' to limit the number of rows returned
ce.head()
# Select a single column
ce['LABEL']
# Select just the heart rate rows using an index
ce[ce.LABEL=='Heart Rate']
# Which time stamps have a corresponding heart rate measurement?
print(ce.index[ce.LABEL=='Heart Rate'])
# Set x equal to the times
x_hr = ce.HOURS[ce.LABEL=='Heart Rate']
# Set y equal to the heart rates
y_hr = ce.VALUENUM[ce.LABEL=='Heart Rate']
# Plot time against heart rate
plt.figure(figsize=(14, 6))
plt.plot(x_hr,y_hr)
plt.xlabel('Time',fontsize=16)
plt.ylabel('Heart rate',fontsize=16)
plt.title('Heart rate over time from admission to the intensive care unit')
# Exercise 1 here
plt.figure(figsize=(14, 6))
plt.plot(ce.HOURS[ce.LABEL=='Respiratory Rate'],
ce.VALUENUM[ce.LABEL=='Respiratory Rate'],
'k+', markersize=10, linewidth=4)
plt.plot(ce.HOURS[ce.LABEL=='Resp Alarm - High'],
ce.VALUENUM[ce.LABEL=='Resp Alarm - High'],
'm--')
plt.plot(ce.HOURS[ce.LABEL=='Resp Alarm - Low'],
ce.VALUENUM[ce.LABEL=='Resp Alarm - Low'],
'm--')
plt.xlabel('Time',fontsize=16)
plt.ylabel('Respiratory rate',fontsize=16)
plt.title('Respiratory rate over time from admission, with upper and lower alarm thresholds')
plt.ylim(0,55)
# Display the first few rows of the GCS eye response data
ce[ce.LABEL=='GCS - Eye Opening'].head()
# Prepare the size of the figure
plt.figure(figsize=(14, 10))
# Set x equal to the times
x_hr = ce.HOURS[ce.LABEL=='Heart Rate']
# Set y equal to the heart rates
y_hr = ce.VALUENUM[ce.LABEL=='Heart Rate']
plt.plot(x_hr,y_hr)
plt.plot(ce.HOURS[ce.LABEL=='Respiratory Rate'],
ce.VALUENUM[ce.LABEL=='Respiratory Rate'],
'k', markersize=6)
# Add a text label to the y-axis
plt.text(-4,155,'GCS - Eye Opening',fontsize=14)
plt.text(-4,150,'GCS - Motor Response',fontsize=14)
plt.text(-4,145,'GCS - Verbal Response',fontsize=14)
# Iterate over list of GCS labels, plotting around 1 in 10 to avoid overlap
for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Eye Opening'].values):
if np.mod(i,6)==0 and i < 65:
plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Eye Opening'].values[i],155),fontsize=14)
for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Motor Response'].values):
if np.mod(i,6)==0 and i < 65:
plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Motor Response'].values[i],150),fontsize=14)
for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Verbal Response'].values):
if np.mod(i,6)==0 and i < 65:
plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Verbal Response'].values[i],145),fontsize=14)
plt.title('Vital signs and Glasgow Coma Scale over time from admission',fontsize=16)
plt.xlabel('Time (hours)',fontsize=16)
plt.ylabel('Heart rate or GCS',fontsize=16)
plt.ylim(10,165)
# OPTION 1: load outputs from the patient
query = """
select de.icustay_id
, (strftime('%s',de.charttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS
, di.label
, de.value
, de.valueuom
from outputevents de
inner join icustays ie
on de.icustay_id = ie.icustay_id
inner join d_items di
on de.itemid = di.itemid
where de.subject_id = 40036
order by charttime;
"""
oe = pd.read_sql_query(query,conn)
oe.head()
# Prepare the size of the figure
plt.figure(figsize=(14, 10))
plt.title('Fluid output over time')
plt.plot(oe.HOURS,
oe.VALUE.cumsum()/1000,
'ro', markersize=8, label='Output volume, L')
plt.xlim(0,20)
plt.ylim(0,2)
plt.legend()
# Load inputs given to the patient (usually intravenously) using the database connection
query = """
select de.icustay_id
, (strftime('%s',de.starttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS_START
, (strftime('%s',de.endtime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS_END
, de.linkorderid
, di.label
, de.amount
, de.amountuom
, de.rate
, de.rateuom
from inputevents_mv de
inner join icustays ie
on de.icustay_id = ie.icustay_id
inner join d_items di
on de.itemid = di.itemid
where de.subject_id = 40036
order by endtime;
"""
ie = pd.read_sql_query(query,conn)
ie.head()
ie['LABEL'].unique()
# Prepare the size of the figure
plt.figure(figsize=(14, 10))
# Plot the cumulative input against the cumulative output
plt.plot(ie.HOURS_END[ie.AMOUNTUOM=='mL'],
ie.AMOUNT[ie.AMOUNTUOM=='mL'].cumsum()/1000,
'go', markersize=8, label='Intake volume, L')
plt.plot(oe.HOURS,
oe.VALUE.cumsum()/1000,
'ro', markersize=8, label='Output volume, L')
plt.title('Fluid balance over time',fontsize=16)
plt.xlabel('Hours',fontsize=16)
plt.ylabel('Volume (litres)',fontsize=16)
# plt.ylim(0,38)
plt.legend()
plt.figure(figsize=(14, 10))
# Plot the cumulative input against the cumulative output
plt.plot(ie.HOURS_END[ie.AMOUNTUOM=='mL'],
ie.AMOUNT[ie.AMOUNTUOM=='mL'].cumsum()/1000,
'go', markersize=8, label='Intake volume, L')
plt.plot(oe.HOURS,
oe.VALUE.cumsum()/1000,
'ro', markersize=8, label='Output volume, L')
# example on getting two columns from a dataframe: ie[['HOURS_START','HOURS_END']].head()
for i, idx in enumerate(ie.index[ie.LABEL=='Furosemide (Lasix)']):
plt.plot([ie.HOURS_START[ie.LABEL=='Furosemide (Lasix)'][idx],
ie.HOURS_END[ie.LABEL=='Furosemide (Lasix)'][idx]],
[ie.RATE[ie.LABEL=='Furosemide (Lasix)'][idx],
ie.RATE[ie.LABEL=='Furosemide (Lasix)'][idx]],
'b-',linewidth=4)
plt.title('Fluid balance over time',fontsize=16)
plt.xlabel('Hours',fontsize=16)
plt.ylabel('Volume (litres)',fontsize=16)
# plt.ylim(0,38)
plt.legend()
ie['LABEL'].unique()
# Exercise 2 here
plt.figure(figsize=(14, 10))
plt.plot(ce.index[ce.LABEL=='Heart Rate'],
ce.VALUENUM[ce.LABEL=='Heart Rate'],
'rx', markersize=8, label='HR')
plt.plot(ce.index[ce.LABEL=='O2 saturation pulseoxymetry'],
ce.VALUENUM[ce.LABEL=='O2 saturation pulseoxymetry'],
'g.', markersize=8, label='O2')
plt.plot(ce.index[ce.LABEL=='Arterial Blood Pressure mean'],
ce.VALUENUM[ce.LABEL=='Arterial Blood Pressure mean'],
'bv', markersize=8, label='MAP')
plt.plot(ce.index[ce.LABEL=='Respiratory Rate'],
ce.VALUENUM[ce.LABEL=='Respiratory Rate'],
'k+', markersize=8, label='RR')
plt.title('Vital signs over time from admission')
plt.ylim(0,130)
plt.legend()
# OPTION 1: load labevents data using the database connection
query = """
SELECT de.subject_id
, de.charttime
, di.label, de.value, de.valuenum
, de.uom
FROM labevents de
INNER JOIN d_labitems di
ON de.itemid = di.itemid
where de.subject_id = 40036
"""
le = pd.read_sql_query(query,conn)
# preview the labevents data
le.head()
# preview the hemoglobin lab measurements
le[le.LABEL=='HEMOGLOBIN']
plt.figure(figsize=(14, 10))
plt.plot(le.index[le.LABEL=='HEMATOCRIT'],
le.VALUENUM[le.LABEL=='HEMATOCRIT'],
'go', markersize=6, label='Haematocrit')
plt.plot(le.index[le.LABEL=='HEMOGLOBIN'],
le.VALUENUM[le.LABEL=='HEMOGLOBIN'],
'bv', markersize=8, label='Hemoglobin')
plt.title('Laboratory measurements over time from admission')
plt.ylim(0,38)
plt.legend()
# load ioevents
ioe = pd.read_csv('data/example_ioevents.csv',index_col='HOURSSINCEADMISSION_START')
ioe.head()
plt.figure(figsize=(14, 10))
plt.plot(ioe.index[ioe.LABEL=='Midazolam (Versed)'],
ioe.RATE[ioe.LABEL=='Midazolam (Versed)'],
'go', markersize=6, label='Midazolam (Versed)')
plt.plot(ioe.index[ioe.LABEL=='Propofol'],
ioe.RATE[ioe.LABEL=='Propofol'],
'bv', markersize=8, label='Propofol')
plt.plot(ioe.index[ioe.LABEL=='Fentanyl'],
ioe.RATE[ioe.LABEL=='Fentanyl'],
'k+', markersize=8, label='Fentanyl')
plt.title('IOevents over time from admission')
plt.ylim(0,380)
plt.legend()
plt.figure(figsize=(14, 10))
plt.plot(ioe.index[ioe.LABEL=='OR Cryoprecipitate Intake'],
ioe.VALUENUM[ioe.LABEL=='OR Cryoprecipitate Intake'],
'go', markersize=6, label='OR Cryoprecipitate Intake')
plt.plot(ioe.index[ioe.LABEL=='OR Crystalloid Intake'],
ioe.VALUENUM[ioe.LABEL=='OR Crystalloid Intake'],
'bv', markersize=8, label='OR Crystalloid Intake')
plt.plot(ioe.index[ioe.LABEL=='OR FFP Intake'],
ioe.VALUENUM[ioe.LABEL=='OR FFP Intake'],
'k+', markersize=8, label='OR FFP Intake')
plt.plot(ioe.index[ioe.LABEL=='OR Packed RBC Intake'],
ioe.VALUENUM[ioe.LABEL=='OR Packed RBC Intake'],
'k+', markersize=8, label='OR Packed RBC Intake')
plt.plot(ioe.index[ioe.LABEL=='OR Platelet Intake'],
ioe.VALUENUM[ioe.LABEL=='OR Platelet Intake'],
'k+', markersize=8, label='OR Platelet Intake')
plt.title('Blood products administered over time from admission')
plt.legend()
# insert discharge summary here...
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Connect to the database
Step4: Load the chartevents data
Step5: Review the patient's heart rate
Step6: In a similar way, we can select rows from data using indexes.
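For example, boolean masks can be combined with & (a sketch on the ce frame above):
hr_first_day = ce[(ce.LABEL == 'Heart Rate') & (ce.HOURS <= 24)]
hr_first_day.head()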
Step7: Plot 1
Step8: Task 1
Step9: Plot 2
Step10: Task 2
Step12: Task 3
Step14: To provide context for this plot, it would help to include patient input data. This helps to determine the patient's fluid balance, a key indicator in patient health.
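A sketch of the overall balance using the ie and oe frames loaded above (counting only the intake rows recorded in mL):
intake_l = ie.AMOUNT[ie.AMOUNTUOM == 'mL'].sum() / 1000.0
output_l = oe.VALUE.sum() / 1000.0
print('net fluid balance: %.2f L' % (intake_l - output_l))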
Step15: Note that the column headers are different
Step16: As the plot shows, the patient's intake tends to be above their output. There are however periods where input and output are almost one to one. One of the biggest challenges of working with ICU data is that context is everything, so let's look at a treatment (Furosemide/Lasix) which we know will affect this graph.
Step17: Exercise 2
Step18: Plot 3
Step20: Plot 4
Step21: Plot 5
Step22: Plot 6
Step23: Discharge summary
|
4,490
|
<ASSISTANT_TASK:>
Python Code:
import os
import tempfile
import tensorflow as tf
import tensorflow_data_validation as tfdv
import time
from apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions, StandardOptions, SetupOptions, DebugOptions, WorkerOptions
from google.protobuf import text_format
from tensorflow_metadata.proto.v0 import schema_pb2, statistics_pb2
print('TensorFlow version: {}'.format(tf.__version__))
print('TensorFlow Data Validation version: {}'.format(tfdv.__version__))
TRAINING_DATASET='gs://workshop-datasets/covertype/training/dataset.csv'
TRAINING_DATASET_WITH_MISSING_VALUES='gs://workshop-datasets/covertype/training_missing/dataset.csv'
EVALUATION_DATASET='gs://workshop-datasets/covertype/evaluation/dataset.csv'
EVALUATION_DATASET_WITH_ANOMALIES='gs://workshop-datasets/covertype/evaluation_anomalies/dataset.csv'
SERVING_DATASET='gs://workshop-datasets/covertype/serving/dataset.csv'
LAB_ROOT_FOLDER='/home/mlops-labs/lab-31-tfdv-structured-data'
PROJECT_ID = 'mlops-workshop'
REGION = 'us-central1'
STAGING_BUCKET = 'gs://{}-staging'.format(PROJECT_ID)
train_stats = tfdv.generate_statistics_from_csv(
data_location=TRAINING_DATASET_WITH_MISSING_VALUES
)
tfdv.visualize_statistics(train_stats)
schema = tfdv.infer_schema(train_stats)
tfdv.display_schema(schema=schema)
tfdv.get_feature(schema, 'Soil_Type').type = schema_pb2.FeatureType.BYTES
tfdv.set_domain(schema, 'Soil_Type', schema_pb2.StringDomain(name='Soil_Type', value=[]))
tfdv.set_domain(schema, 'Cover_Type', schema_pb2.IntDomain(name='Cover_Type', min=1, max=7, is_categorical=True))
tfdv.get_feature(schema, 'Slope').type = schema_pb2.FeatureType.FLOAT
tfdv.set_domain(schema, 'Slope', schema_pb2.FloatDomain(name='Slope', min=0, max=90))
tfdv.display_schema(schema=schema)
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
train_stats = tfdv.generate_statistics_from_csv(
data_location=TRAINING_DATASET_WITH_MISSING_VALUES,
stats_options=stats_options,
)
tfdv.visualize_statistics(train_stats)
soil_type_stats = [feature for feature in train_stats.datasets[0].features if feature.path.step[0]=='Soil_Type'][0].string_stats
soil_type_domain = [bucket.label for bucket in soil_type_stats.rank_histogram.buckets]
tfdv.set_domain(schema, 'Soil_Type', schema_pb2.StringDomain(name='Soil_Type', value=soil_type_domain))
tfdv.display_schema(schema=schema)
%%writefile setup.py
from setuptools import setup
setup(
name='tfdv',
description='TFDV Runtime.',
version='0.1',
install_requires=[
'tensorflow_data_validation==0.15.0'
]
)
options = PipelineOptions()
options.view_as(GoogleCloudOptions).project = PROJECT_ID
options.view_as(GoogleCloudOptions).region = REGION
options.view_as(GoogleCloudOptions).job_name = "tfdv-{}".format(time.strftime("%Y%m%d-%H%M%S"))
options.view_as(GoogleCloudOptions).staging_location = STAGING_BUCKET + '/staging/'
options.view_as(GoogleCloudOptions).temp_location = STAGING_BUCKET + '/tmp/'
options.view_as(StandardOptions).runner = 'DataflowRunner'
options.view_as(SetupOptions).setup_file = os.path.join(LAB_ROOT_FOLDER, 'setup.py')
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
train_stats = tfdv.generate_statistics_from_csv(
data_location=TRAINING_DATASET_WITH_MISSING_VALUES,
stats_options=stats_options,
pipeline_options=options,
output_path=STAGING_BUCKET + '/output/'
)
tfdv.visualize_statistics(train_stats)
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
eval_stats = tfdv.generate_statistics_from_csv(
data_location=EVALUATION_DATASET_WITH_ANOMALIES,
stats_options=stats_options
)
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL DATASET', rhs_name='TRAIN_DATASET')
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
tfdv.get_domain(schema, 'Soil_Type').value.append('5151')
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
eval_stats = tfdv.generate_statistics_from_csv(
data_location=EVALUATION_DATASET,
stats_options=stats_options
)
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
tfdv.display_schema(schema=schema)
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
eval_stats = tfdv.generate_statistics_from_csv(
data_location=SERVING_DATASET,
stats_options=stats_options
)
serving_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(serving_anomalies)
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
tfdv.get_feature(schema, 'Cover_Type').not_in_environment.append('SERVING')
serving_anomalies = tfdv.validate_statistics(eval_stats, schema, environment='SERVING')
tfdv.display_anomalies(serving_anomalies)
output_dir = os.path.join(tempfile.mkdtemp(),'covertype_schema')
tf.io.gfile.makedirs(output_dir)
schema_file = os.path.join(output_dir, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set the GCS locations of datasets used during the lab
Step2: Set the local path to the lab's folder.
Step3: Configure GCP project, region, and staging bucket
Step4: Computing and visualizing descriptive statistics
Step5: You can now use tfdv.visualize_statistics to create a visualization of your data. tfdv.visualize_statistics uses Facets, which provides succinct, interactive visualizations to aid in understanding and analyzing machine learning datasets.
Step6: The interactive widget you see is Facets Overview.
Step7: In general, TFDV uses conservative heuristics to infer stable data properties from the statistics in order to avoid overfitting the schema to the specific dataset. It is strongly advised to review the inferred schema and refine it as needed, to capture any domain knowledge about the data that TFDV's heuristics might have missed.
Step8: Generate new statistics using the updated schema.
Step9: Finalize the schema
Step10: Creating statistics using Cloud Dataflow
Step11: Regenerate statistics
Step12: Analyzing evaluation data
Step13: Checking for anomalies
Step14: Fixing evaluation anomalies in the schema
Step15: Re-validate with the updated schema
Step16: The unexpected string values error in Soil_Type is gone, but the out-of-range error in Slope is still there. Let's assume the source data has been fixed and re-evaluate the evaluation split without the corrupted Slope values.
Step17: Schema environments
Step18: Environments can be used to address such scenarios. In particular, specific features in the schema can be associated with specific environments.
Step19: If you validate the serving statistics against the serving environment in the schema, you will not get an anomaly
Step20: Freezing the schema
|
4,491
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
# Load data
iris = datasets.load_iris()
# Create feature matrix
X = iris.data
# Create target vector
y = iris.target
# Create list of target class names
class_names = iris.target_names
# Create training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# Create logistic regression
classifier = LogisticRegression()
# Train model and make predictions
y_hat = classifier.fit(X_train, y_train).predict(X_test)
# Create a classification report
print(classification_report(y_test, y_hat, target_names=class_names))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Data
Step2: Create Training And Test Sets
Step3: Train A Logistic Regression Model
Step4: Generate Report
|
4,492
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime, timedelta
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline
sns.set_context('notebook')
def fourier_basis(x, degree, half_period):
    """Returns a 2-d array of fourier basis."""
A = np.ones((x.size, 2 * degree + 1))
for d in range(1, degree + 1):
A[:, 2*d-1] = np.cos(d * np.pi * x / half_period)
A[:, 2*d] = np.sin(d * np.pi * x / half_period)
return A
def fit_driftfourier(x, data, weights, degree, half_period=0.5):
    """Fit y = f(x - x.min()) to data where f is given by
fourier series + drift.
Parameters
----------
x : 1-d array
x-coordinates
data : 1-d array
data values
weights : 1-d array
weights (>=0)
degree : int
degree of fourier series
half_period : float
half period
Returns
-------
intercept : float
intercept at x.min()
slope : float
slope (drift) for the normalized data
(x - x.min())
pfourier : 1-d array
Fourier series parameters for the
normalized data
f_drift : callable
Can be used to calculate the drift
given any (non-normalized) x
f_fourier : callable
Can be used to calculate fourier series
f_driftfourier : callable
Can be used to calculate drift + fourier
residual_std : float
estimated standard deviation of residuals
A : 2-d array
        matrix of "coefficients"
    """
xmin = x.min()
xnorm = x - xmin
# coefficient matrix
A = np.ones((x.size, 2 * degree + 2))
A[:, 1] = xnorm
A[:, 2:] = fourier_basis(xnorm, degree, half_period)[:, 1:]
# linear weighted least squares
results = np.linalg.lstsq(A * weights[:, np.newaxis],
data * weights)
params = results[0]
intercept = params[0]
slope = params[1]
pfourier = params[2:]
f_drift = lambda t: slope * (t - xmin) + intercept
f_fourier = lambda t: np.sum(fourier_basis(t - xmin, degree,
half_period)[:, 1:]
* pfourier[np.newaxis, :],
axis=1) + intercept
f_driftfourier = lambda t: f_drift(t) + f_fourier(t) - intercept
    residual_std = np.sqrt(results[1][0] / (x.size - (2 * degree + 2)))  # dof = n - number of fitted parameters
return (intercept, slope, pfourier,
f_drift, f_fourier, f_driftfourier,
residual_std, A)
def cf_driftfourier(x, data, weights, degree,
half_period=0.5, nboot=5000,
percentiles=(2.5, 50., 97.5)):
    """Calculate confidence intervals for the fitted
parameters from fourier series + drift modelling,
using bootstrap resampling.
Parameters
----------
nboot : int
number of bootstrap replicates
percentiles : sequence of floats
percentiles of parameter estimate
distributions to return
Returns
-------
perc : dict
percentiles for each parameter
distribution.
boot_intercept : 1-d array
intercept estimates from bootstrapped
datasets.
boot_slope : 1-d array
slope estimates.
boot_pfourier : 2-d array
fourier parameters estimates.
See Also
--------
    :func:`fit_driftfourier`
    """
# first fit without bootstrapping
results = fit_driftfourier(x, data, weights,
degree, half_period)
f_driftfourier = results[5]
A = results[7]
model = f_driftfourier(x)
residuals = data - model
# generate bootstrap resamples of residuals
# and new datasets from these resamples
resample_residuals = np.random.choice(
residuals, size=(x.size, nboot), replace=True
)
boot_dataset = model[:, np.newaxis] + resample_residuals
# fit all bootstrap datasets at once
results_boot = np.linalg.lstsq(A * weights[:, np.newaxis],
boot_dataset * weights[:, np.newaxis])
params_boot = results_boot[0]
# compute percentiles
perc_boot = np.column_stack(
np.percentile(params_boot, percentiles, axis=1)
)
perc = {'intercept' : perc_boot[0],
'slope' : perc_boot[1],
'pfourier' : perc_boot[2:]}
intercept = params_boot[0]
slope = params_boot[1]
pfourier = params_boot[2:]
return perc, intercept, slope, pfourier
def get_slope_percent(slope, intercept):
    """Express `slope` as percentage, taking
    the `intercept` value as reference (100%).
    """
return slope / intercept * 100.
ccl4_nya = pd.read_csv('data/CCL4_NYA.csv', header=None,
names=['time', 'col_density'])
def decimal_year_to_datetime(val):
    """Convert `val` given in decimal years into a
    `datetime.datetime` object, via the computation of the
    total duration of the year in seconds.
    """
year = int(val)
frac = val - year
year_dt = datetime(year, 1, 1)
year_delta = datetime(year + 1, 1, 1) - year_dt
year_delta_seconds = year_delta.total_seconds()
return year_dt + timedelta(seconds=year_delta_seconds * frac)
decimal_years_to_datetimes = np.vectorize(decimal_year_to_datetime)
# assign the datetimes as index of the dataset
ccl4_nya.index = pd.DatetimeIndex(decimal_years_to_datetimes(ccl4_nya['time'].values))
ccl4_nya.head()
by_day = [ccl4_nya.index.year, ccl4_nya.index.month, ccl4_nya.index.day]
ccl4_nya_daily_means = ccl4_nya.groupby(by_day).apply(np.mean)
ccl4_nya_daily_means.head()
x = ccl4_nya_daily_means['time'].values
y = ccl4_nya_daily_means['col_density'].values
w = np.ones_like(x)
fourier_degree = 0 # fit trend only
res = fit_driftfourier(x, y, w, fourier_degree)
intercept, slope, pfourier, residuals_std = res[0:3] + (res[6],)
f_drift, f_fourier, f_driftfourier = res[3:6]
slope_percent = get_slope_percent(slope, intercept)
print("fitted trend: {:.3E} molec./cm2/yr | {:.3f} %/yr".format(slope, slope_percent))
print("fitted intercept at xmin: {:.3E} molec./cm2".format(intercept))
print("fitted fourier parameters: {}".format(pfourier))
print("std of residuals: {:.3E} molec./cm2".format(residuals_std))
ccl4_nya_daily_means.plot(x='time', y='col_density', kind='scatter', label='data')
plt.plot(x, f_drift(x), label='fitted annual trend')
plt.plot(x, f_driftfourier(x),
label='fitted annual trend + intra-annual variability')
plt.legend()
ci95_percentiles = (2.5, 97.5)
cf, boot_intercept, boot_slope, boot_pfourier = cf_driftfourier(
x, y, w, fourier_degree, nboot=20000,
percentiles=ci95_percentiles
)
boot_slope_percent = get_slope_percent(boot_slope, boot_intercept)
cf['slope_percent'] = np.percentile(boot_slope_percent, ci95_percentiles)
print("95% CI for trend: ({:.3E}, {:.3E}) molec./cm2/yr".format(*cf['slope']))
print("95% CI for trend: ({:.3f}, {:.3f}) %/yr".format(*cf['slope_percent']))
print("95% CI for intercept: ({:.3E}, {:.3E}) molec./cm2".format(*cf['intercept']))
print("percentiles for fourier parameters: {}".format(cf['pfourier']))
fig, axes = plt.subplots(1, 3, figsize=(13, 3))
def plot_bootdist(v, ax, label):
sns.distributions.distplot(v, ax=ax)
plt.setp(ax, xlabel=label, yticks=[])
plot_bootdist(boot_slope, axes[0], 'trend (molec./cm2/yr)')
plot_bootdist(boot_slope_percent, axes[1], 'trend (%/yr)')
plot_bootdist(boot_intercept, axes[2], 'intercept (molec./cm2)')
ccl4_nya_daily_stds = ccl4_nya.groupby(by_day).apply(np.std)
ax = plt.subplot(111)
ccl4_nya.plot(x='time', y='col_density', kind='scatter', ax=ax,
label='raw measurements')
ccl4_nya_daily_means.plot(x='time', y='col_density',
kind='scatter', ax=ax, color='r',
label='daily means')
ccl4_nya_daily_stds = ccl4_nya.groupby(by_day).apply(np.std)
ccl4_nya_daily_means.plot(x='time', y='col_density',
kind='scatter', color='r',
yerr=ccl4_nya_daily_stds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Implementation
Step6: CCL4 NYA data
Step8: The values in the time axis are given in decimal years. We first need to express them as real date & time values.
Step9: The fit below is performed from the daily mean column densities
Step10: Fit trend only (no fourier parameters added to the model), without bootstrapping. Also express trend as percentage.
Step11: Print the results
Step12: Plot data + fitted line
Step13: Fit trend only, with bootstrap resampling. Compute 2.5% and 97.5% percentiles (i.e., the 95% interval) for each fitted parameter. Compute also the percentiles for trend in %.
Step14: Print the results
Step15: Histograms of the distributions of the annual trend and intercept estimates obtained from bootstrap resampling.
Step16: Plot of daily means vs. raw data
Step17: Plot of daily mean column densities with errorbars (stdev).
|
4,493
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from scipy.sparse.linalg import svds
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from jlab import load_test_data
X_train = pd.read_csv('MLchallenge2_training.csv')
X_test = load_test_data('test_in.csv')
X = (pd.concat([X_test, X_train], axis=0)
.reset_index(drop=True)
.fillna(0.0))
X_true = pd.read_csv('test_prediction.csv', names=['x', 'y', 'px', 'py', 'pz'],
header=None)
X.head()
X_true.head()
U, sigma, Vt = svds(X, k=30)
sigma = np.diag(sigma)
X_pred = pd.DataFrame(np.dot(np.dot(U, sigma), Vt), columns=X.columns, index=X.index)
X_pred.head()
def get_test_detector_plane(row):
# Find location of nans, get the first one
# Then divide by 6 (6 values per detector plane)
plane = np.where(np.isnan(row.values))[0][0]/6
return int(plane)
def get_vals_at_plane(row, plane):
cols = [i + str(int(plane)) for i in ['x','y','px','py','pz']]
return row[cols].values
def get_vals_at_eval_plane(X_test, X_pred):
X = X_pred.copy()
X['eval_plane'] = X_test.apply(get_test_detector_plane, axis=1)
retvals = X.loc[X_test.index.values].apply(lambda x: get_vals_at_plane(x, x['eval_plane']), axis=1)
return retvals
eval_planes = X_test.apply(get_test_detector_plane, axis=1)
get_vals_at_plane(X_test.loc[15], 7)
import logging
from jlab import COLS
from sklearn.preprocessing import StandardScaler
class DetectorRecommender(object):
def __init__(self, k=20):
self.logger = logging.getLogger(__name__)
self.k = k
self.planes = 27
self.kinematics = ["x", "y", "px", "py", "pz"]
self.cols = COLS
self.X_train = pd.DataFrame(columns=self.cols)
self.X_test = pd.DataFrame(columns=self.cols)
self.scaler = StandardScaler()
def fit(self, df):
"""SVD isn't really 'trained', but..."""
self.X_train = df.copy(deep=True)
def predict(self, df):
# Make a copy, index it from 0 to N
self.logger.debug("Making a copy")
self.X_test = df.copy(deep=True).reset_index(drop=True)
# For each track, figure out which detector plane we'll evaluate
self.logger.debug("Determining evaluation planes")
eval_planes = self.X_test.apply(self.get_eval_detector_plane, axis=1)
# Combine with the training set, shuffle it, and fill missing values
self.logger.debug("Combining train and test sets for SVD")
X = (pd.concat([self.X_test, self.X_train], axis=0)
.reset_index(drop=True)
.sample(replace=False, frac=1.0))
# Fill with the mean values of each column
self.logger.debug("Filling with mean values")
X = X.fillna(X.mean())
# Normalize the values
self.logger.debug("Applying standardscaler")
X_norm_values = self.scaler.fit_transform(X)
X_norm = pd.DataFrame(X_norm_values, columns=X.columns, index=X.index)
# Single-value Decomposition
self.logger.debug("Making predictions")
X_pred_norm = self.fit_predict_svds(X_norm)
# Extract our test tracks
X_pred_norm = X_pred_norm.loc[self.X_test.index, :].sort_index()
# Un-normalize them
X_pred_values = self.scaler.inverse_transform(X_pred_norm)
X_pred = pd.DataFrame(X_pred_values, columns=X_pred_norm.columns,
index=X_pred_norm.index)
self.logger.debug("De-normalized. Extracting pred values.")
# Extract just the non-z kinematic values for the eval planes
det_eval_values = self.extract_values_at_eval_planes(X_pred, eval_planes)
return det_eval_values
def fit_predict_svds(self, X):
U, sigma, Vt = svds(X, k=self.k)
sigma = np.diag(sigma)
X_pred = pd.DataFrame(np.dot(np.dot(U, sigma), Vt),
columns=X.columns, index=X.index)
return X_pred
def extract_values_at_eval_planes(self, pred, planes):
X = pred.copy(deep=True)
X['eval_plane'] = planes
retvals = X.apply(lambda x: self.get_vals_at_plane(x, x['eval_plane']), axis=1)
retvals_df = pd.DataFrame(retvals.values.tolist(), columns=self.kinematics)
return retvals_df
def get_vals_at_plane(self, row, plane):
cols = [i + str(int(plane)) for i in self.kinematics]
return row[cols].values
def get_eval_detector_plane(self, row):
# Find location of nans, get the first one
# Then divide by 6 (6 values per detector plane)
plane = np.where(np.isnan(row.values))[0][0]/6
return int(plane)
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s - %(name)-12s - %(levelname)-8s - %(message)s')
predictor = DetectorRecommender()
predictor.fit(X_train)
X_pred = predictor.predict(X_test)
X_pred.head()
mean_squared_error(X_true, X_pred)
for k in range(5,15):
predictor = DetectorRecommender(k=k)
predictor.fit(X_train)
X_pred = predictor.predict(X_test)
print(k, mean_squared_error(X_true, X_pred))
predictor = DetectorRecommender(k=7)
predictor.fit(X_train)
X_pred = predictor.predict(X_test)
print(mean_squared_error(X_true, X_pred))
!pip install scikit-surprise
import surprise
X.melt()
X.index.name = "track_id"
X.head().reset_index().melt(id_vars=['track_id'])
X.sample(replace=False, frac=1.0)
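# Speculative sketch of feeding the melted (track, detector-variable, value)
# table to scikit-surprise, which expects (user, item, rating) triples. The
# rating_scale below is only a placeholder spanning the observed value range.
from surprise import Dataset, Reader, SVD
long_df = X.reset_index().melt(id_vars=['track_id'])
reader = Reader(rating_scale=(long_df['value'].min(), long_df['value'].max()))
data = Dataset.load_from_df(long_df[['track_id', 'variable', 'value']], reader)
algo = SVD(n_factors=20)
algo.fit(data.build_full_trainset())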
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hooray, we did it
Step3: Make a recommender class, a la sklearn
Step4: Tune the one hyperparameter we have
Step5: Optimal performance at k=7
Step6: Surprise!
|
4,494
|
<ASSISTANT_TASK:>
Python Code:
import os
import os.path as op
from urllib.request import urlretrieve
from pathlib import Path
URL = "https://github.com/m2dsupsdlclass/lectures-labs/releases/download/totallylookslike/dataset_totally.zip"
FILENAME = "dataset_totally.zip"
if not op.exists(FILENAME):
print('Downloading %s to %s...' % (URL, FILENAME))
urlretrieve(URL, FILENAME)
import zipfile
if not op.exists("anchors"):
print('Extracting image files...')
with zipfile.ZipFile(FILENAME, 'r') as zip_ref:
zip_ref.extractall('.')
home_dir = Path(Path.home())
anchor_images_path = Path("./anchors")
positive_images_path = Path("./positives")
def open_image(filename, target_shape = (256, 256)):
"""Load the specified file as a JPEG image, preprocess it and
resize it to the target shape.
"""
image_string = tf.io.read_file(filename)
image = tf.image.decode_jpeg(image_string, channels=3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, target_shape)
return image
import tensorflow as tf
# Careful to sort images folders so that the anchor and positive images correspond.
anchor_images = sorted([str(anchor_images_path / f) for f in os.listdir(anchor_images_path)])
positive_images = sorted([str(positive_images_path / f) for f in os.listdir(positive_images_path)])
anchor_count = len(anchor_images)
positive_count = len(positive_images)
print(f"number of anchors: {anchor_count}, positive: {positive_count}")
anchor_dataset_files = tf.data.Dataset.from_tensor_slices(anchor_images)
anchor_dataset = anchor_dataset_files.map(open_image)
positive_dataset_files = tf.data.Dataset.from_tensor_slices(positive_images)
positive_dataset = positive_dataset_files.map(open_image)
import matplotlib.pyplot as plt
def visualize(img_list):
"""Visualize a list of images"""
def show(ax, image):
ax.imshow(image)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig = plt.figure(figsize=(6, 18))
num_imgs = len(img_list)
axs = fig.subplots(1, num_imgs)
for i in range(num_imgs):
show(axs[i], img_list[i])
# display the first element of our dataset
anc = next(iter(anchor_dataset))
pos = next(iter(positive_dataset))
visualize([anc, pos])
from tensorflow.keras import layers
# data augmentations
data_augmentation = tf.keras.Sequential([
layers.RandomFlip("horizontal"),
# layers.RandomRotation(0.15), # you may add random rotations
layers.RandomCrop(224, 224)
])
import numpy as np
rng = np.random.RandomState(seed=42)
rng.shuffle(anchor_images)
rng.shuffle(positive_images)
negative_images = anchor_images + positive_images
np.random.RandomState(seed=32).shuffle(negative_images)
negative_dataset_files = tf.data.Dataset.from_tensor_slices(negative_images)
negative_dataset_files = negative_dataset_files.shuffle(buffer_size=4096)
# Build final triplet dataset
dataset = tf.data.Dataset.zip((anchor_dataset_files, positive_dataset_files, negative_dataset_files))
dataset = dataset.shuffle(buffer_size=1024)
# preprocess function
def preprocess_triplets(anchor, positive, negative):
return (
data_augmentation(open_image(anchor)),
data_augmentation(open_image(positive)),
data_augmentation(open_image(negative)),
)
# The map function is lazy, it is not evaluated on the spot,
# but each time a batch is sampled.
dataset = dataset.map(preprocess_triplets)
# Let's now split our dataset in train and validation.
train_dataset = dataset.take(round(anchor_count * 0.8))
val_dataset = dataset.skip(round(anchor_count * 0.8))
# define the batch size
train_dataset = train_dataset.batch(32, drop_remainder=False)
train_dataset = train_dataset.prefetch(8)
val_dataset = val_dataset.batch(32, drop_remainder=False)
val_dataset = val_dataset.prefetch(8)
anc_batch, pos_batch, neg_batch = next(train_dataset.take(1).as_numpy_iterator())
print(anc_batch.shape, pos_batch.shape, neg_batch.shape)
idx = np.random.randint(0, 32)
visualize([anc_batch[idx], pos_batch[idx], neg_batch[idx]])
from tensorflow.keras import Model, layers
from tensorflow.keras import optimizers, losses, metrics, applications
from tensorflow.keras.applications import resnet
input_img = layers.Input((224,224,3))
output = input_img # change that line and edit this code!
embedding = Model(input_img, output, name="Embedding")
output = embedding(np.random.randn(1,224,224,3))
output.shape
from tensorflow.keras import Model, layers
from tensorflow.keras import optimizers, losses, metrics, applications
from tensorflow.keras.applications import resnet
input_img = layers.Input((224,224,3))
base_cnn = resnet.ResNet50(weights="imagenet", input_shape=(224,224,3), include_top=False)
resnet_output = base_cnn(input_img)
flatten = layers.Flatten()(resnet_output)
dense1 = layers.Dense(512, activation="relu")(flatten)
# The batch normalization layer enables to normalize the activations
# over the batch
dense1 = layers.BatchNormalization()(dense1)
dense2 = layers.Dense(256, activation="relu")(dense1)
dense2 = layers.BatchNormalization()(dense2)
output = layers.Dense(256)(dense2)
embedding = Model(input_img, output, name="Embedding")
trainable = False
for layer in base_cnn.layers:
if layer.name == "conv5_block1_out":
trainable = True
layer.trainable = trainable
def preprocess(x):
"""We'll need to preprocess the input before passing them
to the resnet for better results. This is the same preprocessing
that was used during the training of ResNet on ImageNet.
"""
return resnet.preprocess_input(x * 255.)
anchor_input = layers.Input(name="anchor", shape=(224, 224, 3))
positive_input = layers.Input(name="positive", shape=(224, 224, 3))
negative_input = layers.Input(name="negative", shape=(224, 224, 3))
distances = [anchor_input, positive_input] # TODO: Change this code to actually compute the distances
siamese_network = Model(
inputs=[anchor_input, positive_input, negative_input], outputs=distances
)
class DistanceLayer(layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def call(self, anchor, positive, negative):
ap_distance = tf.reduce_sum(tf.square(anchor - positive), -1)
an_distance = tf.reduce_sum(tf.square(anchor - negative), -1)
return (ap_distance, an_distance)
anchor_input = layers.Input(name="anchor", shape=(224, 224, 3))
positive_input = layers.Input(name="positive", shape=(224, 224, 3))
negative_input = layers.Input(name="negative", shape=(224, 224, 3))
distances = DistanceLayer()(
embedding(preprocess(anchor_input)),
embedding(preprocess(positive_input)),
embedding(preprocess(negative_input)),
)
siamese_network = Model(
inputs=[anchor_input, positive_input, negative_input], outputs=distances
)
class TripletModel(Model):
"""The final Keras model with custom training and testing loops.
Computes the triplet loss using the three embeddings produced by the
Siamese network.
The triplet loss is defined as:
L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0)
"""
def __init__(self, siamese_network, margin=0.5):
super(TripletModel, self).__init__()
self.siamese_network = siamese_network
self.margin = margin
self.loss_tracker = metrics.Mean(name="loss")
def call(self, inputs):
return self.siamese_network(inputs)
def train_step(self, data):
# GradientTape is a context manager that records every operation that
# you do inside. We are using it here to compute the loss so we can get
# the gradients and apply them using the optimizer specified in
# `compile()`.
with tf.GradientTape() as tape:
loss = self._compute_loss(data)
# Storing the gradients of the loss function with respect to the
# weights/parameters.
gradients = tape.gradient(loss, self.siamese_network.trainable_weights)
# Applying the gradients on the model using the specified optimizer
self.optimizer.apply_gradients(
zip(gradients, self.siamese_network.trainable_weights)
)
# Let's update and return the training loss metric.
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def test_step(self, data):
loss = self._compute_loss(data)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def _compute_loss(self, data):
# The output of the network is a tuple containing the distances
# between the anchor and the positive example, and the anchor and
# the negative example.
ap_distance, an_distance = self.siamese_network(data)
loss = ap_distance - an_distance
loss = tf.maximum(loss + self.margin, 0.0)
return loss
@property
def metrics(self):
# We need to list our metrics here so the `reset_states()` can be
# called automatically.
return [self.loss_tracker]
siamese_model = TripletModel(siamese_network)
siamese_model.compile(optimizer=optimizers.Adam(0.0001))
siamese_model.fit(train_dataset, epochs=10, validation_data=val_dataset)
embedding.save('best_model.h5')
# uncomment to get a pretrained model
url_pretrained = "https://github.com/m2dsupsdlclass/lectures-labs/releases/download/totallylookslike/best_model.h5"
urlretrieve(url_pretrained, "best_model.h5")
loaded_model = tf.keras.models.load_model('best_model.h5')
from functools import partial
open_img = partial(open_image, target_shape=(224,224))
all_img_files = tf.data.Dataset.from_tensor_slices(negative_images)
dataset = all_img_files.map(open_img).map(preprocess).take(1024).batch(32, drop_remainder=False).prefetch(8)
all_embeddings = loaded_model.predict(dataset)
all_embeddings.shape
random_img = np.random.choice(negative_images)
def most_similar(img, topn=5):
img_batch = tf.expand_dims(open_image(img, target_shape=(224, 224)), 0)
new_emb = loaded_model.predict(preprocess(img_batch))
dists = tf.sqrt(tf.reduce_sum((all_embeddings - new_emb)**2, -1)).numpy()
idxs = np.argsort(dists)[:topn]
return [(negative_images[idx], dists[idx]) for idx in idxs]
print(random_img)
most_similar(random_img)
random_img = np.random.choice(negative_images)
visualize([open_image(im) for im, _ in most_similar(random_img)])
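# Hypothetical variant of most_similar using cosine similarity instead of the
# Euclidean distance above (a sketch reusing loaded_model, all_embeddings and
# negative_images from the cells above).
def most_similar_cosine(img, topn=5):
    img_batch = tf.expand_dims(open_image(img, target_shape=(224, 224)), 0)
    new_emb = loaded_model.predict(preprocess(img_batch))
    # normalise embeddings so cosine similarity reduces to a dot product
    all_norm = all_embeddings / np.linalg.norm(all_embeddings, axis=-1, keepdims=True)
    new_norm = new_emb / np.linalg.norm(new_emb, axis=-1, keepdims=True)
    sims = (all_norm @ new_norm.T).squeeze()
    idxs = np.argsort(-sims)[:topn]
    return [(negative_images[idx], sims[idx]) for idx in idxs]
visualize([open_image(im) for im, _ in most_similar_cosine(random_img)])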
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: We will use mostly TensorFlow functions to open and process images
Step4: To generate the list of negative images, let's randomize the list of available images (anchors and positives) and concatenate them together.
Step5: We can visualize a triplet and display its shape
Step6: Exercise
Step8: The following can be run to get the same architecture as we have
Step9: Exercise
Step10: Solution
Step12: The final triplet model
Step13: Find most similar images in test dataset
Step14: We can build a most_similar function which takes an image path as input and returns the topn most similar images through the embedding representation. It would be possible to use another metric, such as cosine similarity, here.
|
4,495
|
<ASSISTANT_TASK:>
Python Code:
from spirack import SPI_rack, S5k_module, version
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
%matplotlib notebook
#assert version.__version__ >= '0.1.4', 'spirack version needs to be >= 0.1.4'
print("SPI-rack Code Version: " + version.__version__)
spi = SPI_rack("COM4", 1000000, 1)
spi.unlock()
spi.get_battery()
s5k = S5k_module(spi, 1)
s5k.set_clock_source('internal')
s5k.set_clock_division(1, 4)
s5k.run_module(False)
s5k.run_module(True)
s5k.sync_clock()
for DAC in range(1,9):
s5k.set_clock_division(DAC, 4)
for DAC in range(9, 17):
s5k.set_clock_division(DAC, 400)
for DAC in range(1, 9):
s5k.set_waveform_mode(DAC, 'AWG')
s5k.set_digital_gain(DAC, 0.45)
for DAC in range(1, 9):
s5k.set_digital_gain(DAC, 1)
for DAC in range(9, 17):
s5k.set_digital_gain(DAC, 0)
wv_len = 4000
max_val = 2047
width = 0.5
t = np.linspace(0, 1, 4000)
sawtooth = signal.square(2*np.pi*t, width) * max_val
sawtooth = sawtooth.astype(int)
plt.figure()
plt.plot(sawtooth)
plt.title('Sawtooth RAM data')
plt.xlabel('Samples')
plt.ylabel('RAM values')
plt.show()
s5k.upload_waveform(1, sawtooth, 0, set_pattern_length = True)
s5k.upload_waveform(5, sawtooth, 0, set_pattern_length = True)
s5k.upload_waveform(9, sawtooth, 0, set_pattern_length = True)
s5k.upload_waveform(13, sawtooth, 0, set_pattern_length = True)
for DAC in range(1,17):
s5k.set_RAM_address(DAC, 0, len(sawtooth))
s5k.set_pattern_length_trigger(len(sawtooth)-1)
s5k.run_module(False)
fast_period = 1/50e6
slow_period = 1/500e3
delay_necessary = 15*slow_period
delay_cycles = round(delay_necessary/fast_period)
delay_cycles = int(delay_cycles)
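# Illustrative check of the arithmetic above: 15 slow-clock cycles at 500 kHz
# last 30 us, i.e. 15 * (50 MHz / 500 kHz) = 1500 cycles of the 50 MHz fast
# clock, so delay_cycles should evaluate to 1500 here.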
s5k.write_AD9106(s5k.DAreg.PATTERN_DLY, delay_cycles-1, 3)
s5k.write_AD9106(s5k.DAreg.PATTERN_DLY, delay_cycles-1, 1)
s5k.run_module(True)
s5k.run_module(True)
for DAC in range(1, 9):
s5k.set_digital_gain(DAC, 1)
for DAC in range(9, 17):
s5k.set_digital_gain(DAC, 0.0)
s5k.set_digital_gain(4, -0.1)
#s5k.set_digital_gain(12, -0.7)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open SPI rack connection and unlock (necessary after bootup of the controller module).
Step2: Create new S5k module object at correct address and set clock source to internal clock. The clock can be divided by all even numbers between 2-510.
Step3: Set all the DACs to AWG mode. This allows us to write to the internal 4096k samples RAM.
Step4: The ramp in both the slow and fast DAC's will be the same
Step5: We now have to upload the waveform to all DACs. It only needs to be uploaded once to each DAC chip (each chip contains four DACs with shared memory). We will then simply point all the DACs in the chip to use the same block of RAM.
Step6: We also have to set the length of the trigger period. It runs on the slowest clock used in the system, in this case at 500kHz. The period length is equal to the slow sawtooth
Step7: One issue we now run into is the trigger delay. Each chip has a delay of 15 clock cycles from trigger in, to start outputting. This is especially noticable in this case where half is running at 500 kHz and the other half at 50 MHz. To compensate for this (to get them to start at the same time), we delay the start of the fast running DACs.
Step8: Now we can start the module, either by running from software or giving a gate on the front of the module.
Step9: Set the gain of the slow ramp to 0.5x, and of the fast ramp to 0.1x. Gain can go to 2x, but both channels can max out the swing of the output at a gain of 1x.
|
4,496
|
<ASSISTANT_TASK:>
Python Code:
## Constants used by this program
CONSONANTS = "bcdfghjklmnpqrstvwyz"
VOWELS = "aeiou"
def convert_pin(pin):
##FIXME: Replace the rest with your code
pass
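# A hypothetical sketch (not necessarily the intended solution): one mapping
# consistent with the expected outputs 'lohi' and 'dizo' in the tests below is
# to split the 4-digit PIN into pairs of digits and map each pair p to
# CONSONANTS[p // 5] + VOWELS[p % 5], returning ValueError for invalid input.
def convert_pin_sketch(pin):
    if not isinstance(pin, int) or not (1000 <= pin <= 9999):
        return ValueError
    pin_str = str(pin)
    pairs = [int(pin_str[i:i+2]) for i in (0, 2)]
    return ''.join(CONSONANTS[p // 5] + VOWELS[p % 5] for p in pairs)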
# %load test_foo.py
from nose.tools import assert_equal
class Testconvert_pin(object):
def test_convert_pin(self):
assert_equal(convert_pin(None), ValueError)
assert_equal(convert_pin('absd'), ValueError)
assert_equal(convert_pin(0), ValueError)
assert_equal(convert_pin(4327), 'lohi')
assert_equal(convert_pin(1298), 'dizo')
print('Success: test_convert_pin')
def main():
test = Testconvert_pin()
test.test_convert_pin()
if __name__ == '__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unit Test
|
4,497
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
my_matrix = np.array([[1,3,5],[2,5,1],[2,3,8]])
print(my_matrix)
my_matrix.mean() # mean of the whole matrix
my_matrix.mean(axis=0) # mean of the columns
my_matrix.mean(axis=0)[0] # mean of the 0th column
np.mean(my_matrix, axis=0) # alternative
my_matrix.mean(axis=1) # mean of the rows
my_matrix.flatten() # convert to 1D (useful for some plotting)
plt.style.use('ggplot')
plt.rc('axes', grid=False) # turn off the background grid for images
plt.imshow(my_matrix, interpolation='nearest', cmap=plt.cm.Blues);
test_image = np.load("./MyData/test_data.npy") # load in a saved numpy array
test_image.ndim, test_image.shape, test_image.dtype
plt.imshow(test_image, cmap=plt.cm.gray);
print("The minimum value of the image is {0:.2f}".format(test_image.min()))
print("The maximum value of the image is {0:.2f}".format(test_image.max()))
print("The mean value of the image is {0:.2f}".format(test_image.mean()))
print("The standard deviation of the image is {0:.2f}".format(test_image.std()))
plt.hist(test_image.flatten(),bins=30); #flatten array to get histogram of whole image
another_test_image = test_image + 8
print("The minimum value of the other image is {0:.2f}".format(another_test_image.min()))
print("The maximum value of the other image is {0:.2f}".format(another_test_image.max()))
print("The mean value of the other image is {0:.2f}".format(another_test_image.mean()))
print("The standard deviation of the other image is {0:.2f}".format(another_test_image.std()))
plt.imshow(test_image, cmap=plt.cm.gray)
plt.colorbar();
fig, ax = plt.subplots(1,5,sharey=True)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(test_image, cmap=plt.cm.viridis)
ax[0].set_xlabel('viridis')
ax[1].imshow(test_image, cmap=plt.cm.hot)
ax[1].set_xlabel('hot')
ax[2].imshow(test_image, cmap=plt.cm.magma)
ax[2].set_xlabel('magma')
ax[3].imshow(test_image, cmap=plt.cm.spectral)
ax[3].set_xlabel('spectral')
ax[4].imshow(test_image, cmap=plt.cm.gray)
ax[4].set_xlabel('gray');
plt.imsave('Splash.png', test_image, cmap=plt.cm.gray) # Write the array I to a PNG file
my_png = plt.imread('Splash.png') # Read in the PNG file
print("The original data has a min = {0:.2f} and a max = {1:.2f}".format(test_image.min(), test_image.max()))
print("The PNG file has a min = {0:.2f} and a max = {1:.2f}".format(my_png.min(), my_png.max()))
X = np.linspace(-5, 5, 500)
Y = np.linspace(-5, 5, 500)
X, Y = np.meshgrid(X, Y) # turns two 1-d arrays (X, Y) into one 2-d grid
Z = np.sqrt(X**2+Y**2)+np.sin(X**2+Y**2)
Z.min(), Z.max(), Z.mean()
from matplotlib.colors import LightSource
ls = LightSource(azdeg=0,altdeg=40)
shadedfig = ls.shade(Z,plt.cm.copper)
fig, ax = plt.subplots(1,2)
fig.set_size_inches(8,4)
fig.tight_layout()
ax[0].imshow(shadedfig)
contlevels = [1,2,Z.mean()]
ax[1].imshow(shadedfig)
ax[1].contour(Z,contlevels);
my_doctor = plt.imread('./MyData/doctor5.png')
print("The image my_doctor has a shape [height,width] of {0}".format(my_doctor.shape))
print("The image my_doctor is made up of data of type {0}".format(my_doctor.dtype))
print("The image my_doctor has a maximum value of {0}".format(my_doctor.max()))
print("The image my_doctor has a minimum value of {0}".format(my_doctor.min()))
plt.imshow(my_doctor,cmap=plt.cm.gray);
fig, ax = plt.subplots(1,4)
fig.set_size_inches(12,6)
fig.tight_layout()
# You can show just slices of the image - Rememeber: The origin is the upper left corner
ax[0].imshow(my_doctor, cmap=plt.cm.gray)
ax[0].set_xlabel('Original')
ax[1].imshow(my_doctor[0:300,0:100], cmap=plt.cm.gray)
ax[1].set_xlabel('[0:300,0:100]') # 300 rows, 100 columns
ax[2].imshow(my_doctor[:,0:100], cmap=plt.cm.gray) # ":" = whole range
ax[2].set_xlabel('[:,0:100]') # all rows, 100 columns
ax[3].imshow(my_doctor[:,::-1], cmap=plt.cm.gray);
ax[3].set_xlabel('[:,::-1]') ; # reverse the columns
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
CutLine = 300
ax[0].imshow(my_doctor, cmap=plt.cm.gray)
ax[0].hlines(CutLine, 0, 194, color='b', linewidth=3)
ax[1].plot(my_doctor[CutLine,:], color='b', linewidth=3)
ax[1].set_xlabel("X Value")
ax[1].set_ylabel("Pixel Value")
from scipy import ndimage
fig, ax = plt.subplots(1,5)
fig.set_size_inches(14,6)
fig.tight_layout()
ax[0].imshow(my_doctor, cmap=plt.cm.gray)
my_doctor_2 = ndimage.rotate(my_doctor,45,cval=0.75) # cval is the value to set pixels outside of image
ax[1].imshow(my_doctor_2, cmap=plt.cm.gray) # Rotate and reshape
my_doctor_3 = ndimage.rotate(my_doctor,45,reshape=False,cval=0.75) # Rotate and do not reshape
ax[2].imshow(my_doctor_3, cmap=plt.cm.gray)
my_doctor_4 = ndimage.shift(my_doctor,(10,30),cval=0.75) # Shift image
ax[3].imshow(my_doctor_4, cmap=plt.cm.gray)
my_doctor_5 = ndimage.gaussian_filter(my_doctor,5) # Blur image
ax[4].imshow(my_doctor_5, cmap=plt.cm.gray);
redfilter = plt.imread("./MyData/sphereR.jpg")
redfilter.shape,redfilter.dtype
redfilter = plt.imread("./MyData/sphereR.jpg")[:,:,0]
redfilter.shape,redfilter.dtype
plt.imshow(redfilter,cmap=plt.cm.gray);
greenfilter = plt.imread("./MyData/sphereG.jpg")[:,:,0]
bluefilter = plt.imread("./MyData/sphereB.jpg")[:,:,0]
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,3)
fig.tight_layout()
ax[0].set_title("Red Filter")
ax[1].set_title("Green Filter")
ax[2].set_title("Blue Filter")
ax[0].imshow(redfilter,cmap=plt.cm.gray)
ax[1].imshow(greenfilter,cmap=plt.cm.gray)
ax[2].imshow(bluefilter,cmap=plt.cm.gray);
rgb = np.zeros((480,640,3),dtype='uint8')
print(rgb.shape, rgb.dtype)
plt.imshow(rgb,cmap=plt.cm.gray);
rgb[:,:,0] = redfilter
rgb[:,:,1] = greenfilter
rgb[:,:,2] = bluefilter
fig, ax = plt.subplots(1,4)
fig.set_size_inches(14,3)
fig.tight_layout()
ax[0].set_title("Red Filter")
ax[1].set_title("Green Filter")
ax[2].set_title("Blue Filter")
ax[3].set_title("All Filters Stacked")
ax[0].imshow(redfilter,cmap=plt.cm.gray)
ax[1].imshow(greenfilter,cmap=plt.cm.gray)
ax[2].imshow(bluefilter,cmap=plt.cm.gray)
ax[3].imshow(rgb,cmap=plt.cm.gray);
print("The image rgb has a shape [height,width] of {0}".format(rgb.shape))
print("The image rgb is made up of data of type {0}".format(rgb.dtype))
print("The image rgb has a maximum value of {0}".format(rgb.max()))
print("The image rgb has a minimum value of {0}".format(rgb.min()))
rgb[:,:,0] = redfilter * 1.5
plt.imshow(rgb)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Working with images in python is essentially a visual way of working with 2-d arrays (matrices)
Step2: All of the normal numpy commands work with matrices (of any dimension)
Step3: You can work over just the rows or columns of the matrix
Step4: imshow will display 2-d arrays as images
Step5: Read in some data
Step6: Math on images applies to every value (pixel)
Step7: Show the image represenation of test_image with a colorbar
Step8: Colormap reference
Step9: WARNING! Common image formats DO NOT preserve dynamic range of original data!!
Step10: Creating images from math
Step11: Fancy Image Display
Step12: Reading in images (imread) - Common Formats
Step13: Images are just arrays that can be sliced.
Step14: Simple image manipulation
Step15: ndimage can do much more
Step16: We just want to read in one of the three channels
Step17: Need to create a blank 3-d array to hold all of the images
Step18: Fill the array with the filtered images
|
4,498
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import nltk
from nltk import corpus
# nltk.download()
# print(dir(corpus))
# corp = corpus.gutenberg
files = corpus.gutenberg.fileids()
print(files)
# NOTE: This is only needed to open NLTK's downloads manager!
# nltk.download()
# Get our source corpora from gutenberg in nltk.
emma_sents = corpus.gutenberg.sents('austen-emma.txt')
# Assign all of our samples that we'll be using.
corpora = emma_sents[:20]
# Iterate across the sentences.
alpha_sentences = pd.DataFrame()
for sentence in corpora:
# print(sent)
sent = list(filter(lambda x: str.isalpha(x), sentence))
sent = ' '.join(sent)
sent = pd.Series(sent)
alpha_sentences = alpha_sentences.append(sent, ignore_index=True)
print(alpha_sentences.head(10))
print(emma_sents[3],alpha_sentences.iloc[3][0])
# We'll need to vectorize the words, put that into a dataframe,
# and then generate another dataframe that's a vector of letter values.
from sklearn.decomposition import LatentDirichletAllocation as LDA
lda = LDA()
lda.fit(emma_sents)
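# Note: fitting LDA directly on raw token lists would typically fail; a minimal
# sketch of the vectorisation step hinted at above, assuming alpha_sentences[0]
# holds the cleaned sentence strings built earlier.
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(stop_words='english')
doc_term = vectorizer.fit_transform(alpha_sentences[0])
lda_topics = LDA(n_components=5, random_state=0)
lda_topics.fit(doc_term)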
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Label Samples
|
4,499
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
digits = load_digits()
n = 19
print("Each digit is represented by a matrix of shape ", digits.data[n, :].shape)
digit = 255 - digits.data[n, :].reshape(8, 8)
plt.imshow(digit, cmap='gray', interpolation='none')
plt.title("This is " + str(digits.target[n]))
plt.show()
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=0)
clf.fit(digits.data[:-10], digits.target[:-10])
errors = 0
for i in range(1, 11):
k = clf.predict(digits.data[-i].reshape(1, -1))
print("Классификатор предсказал число {}, на самом деле это {}. Числа {}совпали."
.format(k[0], digits.target[-i],
"" if k[0] == digits.target[-i] else "не "))
if k[0] != digits.target[-i]:
errors += 1
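# An equivalent accuracy check using the classifier's built-in score method
# (standard scikit-learn API), shown for reference:
accuracy = clf.score(digits.data[-10:], digits.target[-10:])
print("Accuracy on the 10 held-out digits: {:.0%}".format(accuracy))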
fig = plt.figure(figsize=(12, 4))
frame = 1
for i in range(1, 11):
k = clf.predict(digits.data[-i].reshape(1, -1))
if k[0] != digits.target[-i]:
digit = 255 - digits.data[-i, :].reshape(8, 8)
ax = fig.add_subplot(1, errors, frame)
ax.imshow(digit, cmap='gray', interpolation='none')
ax.set_title("This is {}, recognized as {}".format(digits.target[-i], k[0]))
frame += 1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To display it on the screen, we need to apply the reshape method. The target shape is $8 \times 8$.
Step2: Let's take one of the methods from the previous lecture, for example the tree-based classification method (CART).
Step3: Almost all classes implementing classification methods in scikit-learn have the following methods
Step4: Let's train the classifier on all digits except the last 10.
Step5: Now let's try to classify the remaining 10 images.
Step6: Let's take a look at the "problem" digits
|