| text_prompt (stringlengths 168 to 30.3k) | code_prompt (stringlengths 67 to 124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the MNIST data
Step2: Load the training data train_set and the test data test_set
Step3: Inspect the MNIST data; use .shape to check the shape of each np.array
Step4: The first part of the data: each record is a 28x28 image (28*28=784)
Step5: Write a function that makes it more convenient to view the images.
Step6: train_set is used to train our model
Step7: The complete model is as follows
Step8: First compute exp(Wx+b)
Step9: Then normalize so that the sum is 1 (consistent with a probability interpretation)
Step10: Since W and b were set randomly, the probabilities computed above are also random.
Step11: The current loss of 1.4215 is not too bad; after all, we were lucky that a random W and b give the correct answer a probability of 24%.
Step12: The partial derivative with respect to W is not hard either
Step13: Once the gradient is computed, move W and b a small step in the direction opposite to the gradient to obtain the new W and b
Step14: Compute Pr and the loss again
Step15: This time the loss drops to around 0.0005, a considerable improvement
Step16: The resulting accuracy is about 92.42%, but this is on the training data, not the test data
Step17: Below is the updated training loop; when i%100000==0, we also compute the test accuracy and valid accuracy.
Step18: The final accuracy is 92%-93%
Step19: Numbers alone do not give much intuition, so let's look at how the first ten test samples turn out
Step20: Look at which of the first one hundred samples are misclassified
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import os
import urllib
dataset = 'mnist.pkl.gz'
def reporthook(a,b,c):
print "\rdownloading: %5.1f%%"%(a*b*100.0/c),
if not os.path.isfile(dataset):
origin = "https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz"
print('Downloading data from %s' % origin)
urllib.urlretrieve(origin, dataset, reporthook=reporthook)
import gzip
import pickle
with gzip.open(dataset, 'rb') as f:
train_set, valid_set, test_set = pickle.load(f)
print "train_set", train_set[0].shape, train_set[1].shape
print "valid_set", valid_set[0].shape, valid_set[1].shape
print "test_set", test_set[0].shape, test_set[1].shape
imshow(train_set[0][0].reshape((28, 28)), cmap="gray")
def show(x, i=[0]):
plt.figure(i[0])
imshow(x.reshape((28,28)), cmap="gray")
i[0]+=1
for i in range(5):
print train_set[1][i]
show(train_set[0][i])
W = np.random.uniform(low=-1, high=1, size=(28*28,10))
b = np.random.uniform(low=-1, high=1, size=10)
x = train_set[0][0]
y = train_set[1][0]
Pr = exp(dot(x, W)+b)
Pr.shape
Pr = Pr/Pr.sum()
print Pr
loss = -log(Pr[y])
loss
gradb = Pr.copy()
gradb[y] -= 1
print gradb
print Pr.shape, x.shape, W.shape
gradW = dot(x.reshape(784,1), Pr.reshape(1,10), )
gradW[:, y] -= x
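# Hedged sanity check (added, not part of the original notebook): compare the analytic
# gradb above with a central finite-difference estimate of d(loss)/d(b[k]). This assumes
# W, b, x, y and gradb from the cells above are still in scope.
def loss_at(b_probe):
    Pr_probe = exp(dot(x, W) + b_probe)
    Pr_probe = Pr_probe / Pr_probe.sum()
    return -log(Pr_probe[y])
eps = 1e-6
num_gradb = np.array([(loss_at(b + eps*np.eye(10)[k]) - loss_at(b - eps*np.eye(10)[k])) / (2*eps)
                      for k in range(10)])
print np.abs(num_gradb - gradb).max()  # should be very close to 0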
W -= 0.1 * gradW
b -= 0.1 * gradb
Pr = exp(dot(x, W)+b)
Pr = Pr/Pr.sum()
loss = -log(Pr[y])
loss
W = np.random.uniform(low=-1, high=1, size=(28*28,10))
b = np.random.uniform(low=-1, high=1, size=10)
score = 0
N=50000*20
d = 0.001
learning_rate = 1e-2
for i in xrange(N):
if i%50000==0:
print i, "%5.3f%%"%(score*100)
x = train_set[0][i%50000]
y = train_set[1][i%50000]
Pr = exp(dot(x, W)+b)
Pr = Pr/Pr.sum()
loss = -log(Pr[y])
score *=(1-d)
if Pr.argmax() == y:
score += d
gradb = Pr.copy()
gradb[y] -= 1
gradW = dot(x.reshape(784,1), Pr.reshape(1,10), )
gradW[:, y] -= x
W -= learning_rate * gradW
b -= learning_rate * gradb
def compute_Pr(x):
Pr = exp(dot(x, W)+b)
return Pr/Pr.sum(axis=1, keepdims=True)
def compute_accuracy(Pr, y):
return mean(Pr.argmax(axis=1)==y)
W = np.random.uniform(low=-1, high=1, size=(28*28,10))
b = np.random.uniform(low=-1, high=1, size=10)
score = 0
N=50000*100
batch_size = 500
learning_rate = .7
for i in xrange(0, N, batch_size):
if i%100000==0:
x, y = test_set[0], test_set[1]
test_score = compute_accuracy(compute_Pr(x), y)*100
x, y = valid_set[0], valid_set[1]
valid_score = compute_accuracy(compute_Pr(x), y)*100
print i, "%5.2f%%"%test_score, "%5.2f%%"%valid_score
# Randomly select some training data
rndidx = np.random.choice(train_set[0].shape[0], batch_size, replace=False)
x, y = train_set[0][rndidx], train_set[1][rndidx]
# Compute all the Pr at once
Pr = compute_Pr(x)
# Compute the average gradient
gradb = Pr.mean(axis=0)-[(y==i).mean() for i in range(10)]
gradW = dot(x.T, Pr)
for i in range(batch_size):
gradW[:, y[i]]-=x[i]
gradW /= batch_size
# Update W and b
W -= learning_rate * gradW
b -= learning_rate * gradb
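# Hedged note (added, not part of the original notebook): the per-sample loop above that
# subtracts x[i] from column y[i] of gradW can be replaced by a fully vectorized form,
# assuming y holds integer class labels 0..9:
#     onehot = np.eye(10)[y]                       # shape (batch_size, 10)
#     gradW = dot(x.T, Pr - onehot) / batch_size   # same result as the loop plus division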
x, y = test_set[0], test_set[1]
Pr = compute_Pr(x)
test_score = compute_accuracy(Pr, y)*100
x, y = valid_set[0], valid_set[1]
Pr = compute_Pr(x)
valid_score = compute_accuracy(Pr, y)*100
print "test accuracy %5.2f%%"%test_score, "valid accuracy %5.2f%%"%valid_score
x, y = train_set[0], train_set[1]
Pr = compute_Pr(x)
train_score = compute_accuracy(Pr, y)*100
print "train accuracy %5.2f%%"%train_score
x = test_set[0][:10]
y = test_set[1][:10]
Pr = compute_Pr(x)
print Pr.argmax(axis=1)
print y
for i in range(10):
show(x[i])
x = test_set[0][:100]
y = test_set[1][:100]
Pr = compute_Pr(x)
y2 = Pr.argmax(axis=1)
for i in range(100):
if y2[i] != y[i]:
print y2[i], y[i]
show(x[i])
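# Hedged follow-up sketch (added, not part of the original notebook): a 10x10 confusion
# matrix over the whole test set, using compute_Pr defined above; rows are true digits,
# columns are predicted digits.
Pr_all = compute_Pr(test_set[0])
pred_all = Pr_all.argmax(axis=1)
confusion = np.zeros((10, 10), dtype=int)
for t, p in zip(test_set[1], pred_all):
    confusion[t, p] += 1
print confusion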
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we need to take care of the missing data in the Age variable. We need to approximate it; one way is to take the mean age for all the missing values.
Step2: If a passenger belongs to Pclass 3, the assigned age is 24; if Pclass 2, it is 29; if Pclass 1, it is 37.
Step3: There are two null values in Embarked, so we can just drop them.
Step4: Getting dummy variables from categorical ones.
Step5: The train test split is done for parameter tuning.
Step6: Now taking in Competition Data.
Step7: There are 86 null values in Age, so we approximate them like we did earlier.
Step8: Cleaning up the test data
Step9: Prediction for Competition Data
|
<ASSISTANT_TASK:>
Python Code:
titanic_data = titanic.drop(['PassengerId','Name','Ticket'],1)
titanic_data.head()
sb.boxplot(x='Pclass',y='Age',data=titanic_data)
def age_approx(cols):
age = cols[0]
pclass = cols[1]
if pd.isnull(age):
if pclass == 1:
return 37
elif pclass == 2:
return 29
else:
return 24
else:
return age
titanic_data['Age'] = titanic_data[['Age', 'Pclass']].apply(age_approx, axis=1)
titanic_data.isnull().sum()
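# Hedged alternative (added, not part of the original notebook): the same per-class age
# imputation can be done without hard-coded values by filling each Pclass group with its
# own median age, e.g.:
#     titanic_data['Age'] = titanic_data.groupby('Pclass')['Age'] \
#         .transform(lambda s: s.fillna(s.median()))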
def cabin_approx(cols):
cabin = cols[0]
pclass = cols[1]
if pd.isnull(cabin):
return 0
elif cabin[0] in ('C', 'B'):
return 3
elif cabin[0] in ('A', 'D', 'E', 'T'):
return 2
elif cabin[0] in ('F', 'G'):
return 1
else:
return 0
titanic_data['Cabin'] = titanic_data[['Cabin', 'Pclass']].apply(cabin_approx, axis=1)
#titanic_data.isnull().sum()
sb.boxplot(x='Cabin',y='Fare',data=titanic_data)
titanic_data.dropna(inplace=True)
titanic_data.isnull().sum()
gender = pd.get_dummies(titanic_data['Sex'],drop_first=True)
gender.head()
embark_location = pd.get_dummies(titanic_data['Embarked'],drop_first=True)
embark_location.head()
titanic_data.drop(['Sex','Embarked'],axis=1,inplace=True)
titanic_data.head()
titanic_dmy = pd.concat([titanic_data, gender, embark_location],axis=1)
titanic_dmy.tail()
#Checking for correlation between variables.
sb.heatmap(titanic_dmy.corr(),square=True)
#print(titanic_dmy.corr())
X = titanic_dmy.ix[:,(1,2,3,4,5,6,7,8,9)].values
y = titanic_dmy.ix[:,0].values
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1, random_state=2)
!pip install xgboost
from sklearn.ensemble import RandomForestClassifier
#from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import VotingClassifier
clf1 = SVC(kernel='linear',C=1.0,random_state=3)
clf2 = XGBClassifier(random_state=3)
clf3 = RandomForestClassifier(n_estimators=30, max_depth=10, random_state=300)
eclf = VotingClassifier(estimators=[('clf1', clf1), ('clf2', clf2),('clf3',clf3)], voting='hard')
eclf.fit(X_train, y_train)
y_pred = eclf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(eclf.score(X_test, y_test))
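# Hedged extension (added, not part of the original notebook): a k-fold cross-validation
# score gives a more stable estimate than the single 90/10 split above. Assumes X, y and
# eclf from the cells above are in scope.
from sklearn.cross_validation import cross_val_score
cv_scores = cross_val_score(eclf, X, y, cv=5)
print('5-fold CV accuracy: %.3f +/- %.3f' % (cv_scores.mean(), cv_scores.std()))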
url = 'C:\\Users\\ajaohri\\Desktop\\all/test.csv'
test = pd.read_csv(url)
test.head()
test.isnull().sum()
test.describe()
sb.set(rc={'figure.figsize':(11.7,8.27)})
ax = sb.boxplot(x='Pclass',y='Fare',data=test,width=0.9)
def fare_approx(cols):
fare = cols[0]
pclass = cols[1]
if pd.isnull(fare):
if pclass == 1:
return 55
elif pclass == 2:
return 20
else:
return 10
else:
return fare
test_data = test.drop(['Name','Ticket'],1)
test_data['Age'] = test_data[['Age', 'Pclass']].apply(age_approx, axis=1)
test_data['Fare'] = test_data[['Fare','Pclass']].apply(fare_approx, axis=1)
test_data['Cabin'] = test_data[['Cabin','Pclass']].apply(cabin_approx, axis=1)
#
gender_test = pd.get_dummies(test_data['Sex'],drop_first=True)
embark_location_test = pd.get_dummies(test_data['Embarked'],drop_first=True)
test_data.drop(['Sex','Embarked'],axis=1,inplace=True)
test_dmy = pd.concat([test_data, gender_test, embark_location_test],axis=1)
#test_dmy.describe()
test_data.dropna(inplace=True)
test_dmy.isnull().sum()
test_dmy.head()
X_competition = test_dmy.ix[:,(1,2,3,4,5,6,7,8,9)].values
y_comp = eclf.predict(X_competition)
submission = pd.DataFrame({'PassengerId':test_data['PassengerId'],'Survived':y_comp})
submission.head()
filename = 'Titanic Predictions 1.csv'
submission.to_csv(filename,index=False)
print('Saved file: ' + filename)
os.getcwd()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use the bash =)
Step2: So parsing does not work, do it manually
Step3: Some statistics about the payment.
Step4: So that's the statistic about payments. Remember, there are no tips recorded for cash payments
Step5: So 95% of the rides do not involve tolls. We will drop the column then.
Step6: First, we want to generate the trip_time because this is our target.
Step7: Check for missing and false data
Step8: So there is not that much data missing. That's quite surprising, maybe it's wrong.
Step9: So we have many zeros in the data. What percentage?
Step10: <font color = 'blue'> Most of the zeros are missing data. So flag them as NaN (which also means NA) to be consistent! </font>
Step11: Quick preview of the trip_times
Step12: This is how many unique values we have in trip_time.
Step13: So what percentage of the data is left to be processed?
Step14: <font color = 'black'> So we only dropped 2% of the data because of missing geo tags. Someone could search the 'anomaly' data for patterns, e.g. for fraud detection. We are also going to drop all the unrecognized trip_distances because we cannot (exactly) generate them (an approximation would be possible). </font>
Step15: Drop all the columns with trip_time.isnull()
Step16: This is quite unreasonable. We have dropoff_datetime = pickup_datetime and the geo-coords of pickup and dropoff do not match! trip_time equals NaT here.
Step17: After filtering regarding the trip_time
Step18: We sometimes have some unreasonably small trip_times.
Step19: <font color = 'blue'> So all in all, we dropped less than 3% of the data. </font>
Step20: We can deal with that. External investigation of the anomaly is recommended.
Step21: Distribution of the avg_amount_per_minute
Step22: Compare to http
Step23: So we dropped around 6% of the data.
Step24: Only look at trips in a given bounding box
Step25: So we've omitted about 2% of the data because the trips do not start and end in the box
Step26: Again, let's take a look at the distribution of the target variable we want to estimate
Step27: Make a new dataframe with features and targets to train the model
Step28: Use minutes for prediction instead of seconds (ceil the time). Definitely more robust than seconds!
Step29: So we have 148 different times to predict.
Step30: So 90% of the trip_times are between 3 and 30 minutes.
Step31: Split the data into a training dataset and a test dataset. Evaluate the performance of the decision tree on the test data
Step32: Start model building
Step33: Train and compare a few decision trees with different parameters
Step34: Some more results
Step35: A tree with this depth is too big to dump. Graphviz works fine until around depth 12.
Step36: A few stats about the trained tree
Step37: Finding the leaves / predicted times
Step38: So we have 67260 leaves.
Step39: So 50% of the nodes are leaves. A little bit of cross-checking
Step40: To get a feeling for the generalization of the tree
Step41: The above plot looks promising, but is not very useful. Nonetheless, you can represent this in a Lorenz curve.
Step42: About 5% of the leaves represent about 40% of the samples
Step43: We found out that all samples have been considered.
Step44: Retrieve the decision path that leads to the leaf
Step45: Be aware, read this branch bottom up!
Step46: Via grouping, we can extract the relevant splits, which are always the ones towards the end of the branch. Earlier splits become obsolete if the feature is split in the same manner again further down the tree.
Step47: Groupby is very helpful here. Always choose the split with the first index. "min()" is used here for demonstration purposes only.
Step48: One might use a custom get_group method. This will throw fewer exceptions if the key is not valid (e.g. there is no lower range on day_of_week). This can especially happen in trees with low depth.
Step49: Extract the pickup- and dropoff-area.
Step50: In order to draw the rectangle, we need the side lengths of the areas.
Step51: White is pickup, red is dropoff. But are these really the correct areas? Let's check via filtering in the training set. Do we get the same number of trips as the number of samples in the leaf? (Time splits are hard-coded in this case)
Step52: So we've found the trips that belong to this branch! The discrepancy, which might be caused by numeric instability when comparing the geo coordinates, is not that big.
Step53: We can see that the intersection of the pickup and dropoff areas is quite big. This corresponds to the predicted trip_time of about 6.5 minutes; in such a short period of time, a car cannot drive very far. It also looks quite similar to the picture above, which is based on a 2D histogram of the pickups. We can qualitatively see that the dropoff area is smaller than the pickup area.
|
<ASSISTANT_TASK:>
Python Code:
import os as os
import pandas as pd
import numpy as np
from scipy import stats, integrate
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
import datetime as dt
plt.style.use('seaborn-whitegrid')
plt.rcParams['image.cmap'] = 'Blues'
#sns.set_context('notebook',font_scale=2)
sns.set_style("whitegrid")
% matplotlib inline
labelsize = 22
mpl.rcParams.update({'font.size': labelsize})
mpl.rcParams.update({'figure.figsize': (20,10)})
mpl.rcParams.update({'axes.titlesize': 'large'})
mpl.rcParams.update({'axes.labelsize': 'large'})
mpl.rcParams.update({'xtick.labelsize': labelsize})
mpl.rcParams.update({'ytick.labelsize': labelsize})
# mpl.rcParams.keys()
!cd data && ls
data = pd.read_csv('data/Taxi_from_2013-05-06_to_2013-05-13.csv', index_col=0, parse_dates=True)
data.info()
data['pickup_datetime'] =pd.to_datetime(data['pickup_datetime'], format = '%Y-%m-%d %H:%M:%S')
data['dropoff_datetime'] =pd.to_datetime(data['dropoff_datetime'], format = '%Y-%m-%d %H:%M:%S')
data.describe().transpose()
data.head()
payments = data.payment_type.value_counts()
payments/len(data)
data.tolls_amount.value_counts()/len(data)
data = data.drop(['vendor_id', 'rate_code', 'store_and_fwd_flag','payment_type','mta_tax', 'tolls_amount',
'surcharge'], axis=1)
data.describe().transpose()
data['trip_time']=data.dropoff_datetime-data.pickup_datetime
data.head()
data.info()
data.isnull().sum()
(data==0).sum()
(data==0).sum()/len(data)
data = data.replace(np.float64(0), np.nan);
data.isnull().sum()
trip_times_in_minutes = data['trip_time'] / np.timedelta64(1, 'm')
plt.hist(trip_times_in_minutes , bins=30, range=[0, 60],
weights=np.zeros_like(trip_times_in_minutes) + 1. / trip_times_in_minutes.size)
#plt.yscale('log')
print(trip_times_in_minutes.quantile(q=[0.025, 0.5, 0.75, 0.95, 0.975, 0.99]))
plt.xlabel('Trip Time in Minutes')
plt.ylabel('Relative Frequency')
plt.title('Distribution of Trip Time')
plt.savefig('figures/trip_time_distribution.eps', format='eps', dpi=1000)
len(data.trip_time.value_counts().values)
anomaly = data.loc[(data['dropoff_longitude'].isnull()) | (data['dropoff_latitude'].isnull()) |
(data['pickup_longitude'].isnull()) | (data['pickup_latitude'].isnull())]
data = data.drop(anomaly.index)
anomaly['flag'] = 'geo_NA'
data.isnull().sum()
len(data)/(len(data)+len(anomaly))
anomaly.tail()
anomaly = anomaly.append(data.loc[(data['trip_distance'].isnull())])
anomaly.loc[data.loc[(data['trip_distance'].isnull())].index,'flag'] = 'trip_dist_NA'
anomaly.tail()
data = data.drop(anomaly.index, errors='ignore') # ignore uncontained labels
data.isnull().sum()
1-len(data)/(len(data)+len(anomaly))
anomaly = anomaly.append(data.loc[(data['trip_time'].isnull())])
anomaly.loc[data.loc[(data['trip_time'].isnull())].index,'flag'] = 'trip_time_NA'
anomaly.tail()
data = data.drop(anomaly.index, errors='ignore') # ignore uncontained labels
data.describe().transpose()
plt.hist(data.trip_time.values / np.timedelta64(1, 'm'), bins=50, range=[0,100])
print(data.trip_time.describe())
np.percentile(data.trip_time, [1,5,10,15,25,50,75,85,95,99]) / np.timedelta64(1,'m')
anomaly.tail()
1-len(data)/(len(data)+len(anomaly))
data.isnull().sum()
data['avg_amount_per_minute'] = (data.fare_amount-2.5) / (data.trip_time / np.timedelta64(1,'m'))
data.avg_amount_per_minute.describe()
h = data.avg_amount_per_minute
plt.figure(figsize=(20,10))
plt.hist(h, normed=False, stacked=True, bins=40, range=[0 , 100], )
#, histtype='stepfilled')
plt.yscale('log')
plt.ylabel('log(freq x)', fontsize=40)
plt.xlabel('x = avg_amount_per_minute', fontsize=40)
print('Min:' + str(min(h)) + '\nMax:' + str(max(h)))
plt.yticks(fontsize=40)
plt.xticks(fontsize=40)
plt.locator_params(axis = 'x', nbins = 20)
plt.show()
data.head()
data.avg_amount_per_minute.quantile([.0001,.01, .5, .75, .95, .975, .99, .995])
lb = 0.5
ub = 2.5
anomaly = anomaly.append(data.loc[(data['avg_amount_per_minute'] > ub) |
(data['avg_amount_per_minute'] < lb)])
anomaly.loc[data.loc[(data['avg_amount_per_minute'] > ub)].index,'flag'] = 'too fast'
anomaly.loc[data.loc[(data['avg_amount_per_minute'] < lb)].index,'flag'] = 'too slow'
data = data.drop(anomaly.index, errors='ignore') # ignore uncontained labels / indices
print(1-len(data)/(len(data)+len(anomaly)))
data.avg_amount_per_minute.describe()
anomaly.tail()
jfk_geodata = (40.641547, -73.778118)
ridgefield_geodata = (40.856406, -74.020642)
data_in_box = data.loc[(data['dropoff_latitude'] > jfk_geodata[0]) &
(data['dropoff_longitude'] < jfk_geodata[1]) &
(data['dropoff_latitude'] < ridgefield_geodata[0]) &
(data['dropoff_longitude'] > ridgefield_geodata[1]) &
(data['pickup_latitude'] > jfk_geodata[0]) &
(data['pickup_longitude'] < jfk_geodata[1]) &
(data['pickup_latitude'] < ridgefield_geodata[0]) &
(data['pickup_longitude'] > ridgefield_geodata[1])
]
# taxidata = taxidata.drop(anomaly.index)
data_in_box.head()
print(jfk_geodata < ridgefield_geodata,
len(data_in_box)/len(data))
x = data_in_box.pickup_longitude
y = data_in_box.pickup_latitude
plt.jet()
H, xedges, yedges = np.histogram2d(x, y, bins=300)#, normed=False, weights=None)
fig = plt.figure(figsize=(20, 10))
plt.hist2d(x, y, bins=300, range=[[min(x.values),-73.95],[40.675,40.8]])
plt.colorbar()
plt.title('Pickup density (first full week in May 2013)')
plt.ylabel('Latitude')
plt.xlabel('Longitude')
ax = fig.gca()
ax.grid(False)
# plt.savefig('figures/pickup_density_manhattan_13.png', format='png', dpi=150)
h = data_in_box.trip_time.values / np.timedelta64(1, 'm')
plt.hist(h, normed=False, bins=150)
plt.yticks(fontsize=40)
plt.xticks(fontsize=40)
plt.show()
data_in_box.head()
time_regression_df = pd.DataFrame([#data_in_box['pickup_datetime'].dt.day,
data_in_box['pickup_datetime'].dt.dayofweek,
data_in_box['pickup_datetime'].dt.hour,
data_in_box['pickup_latitude'],
data_in_box['pickup_longitude'],
data_in_box['dropoff_latitude'],
data_in_box['dropoff_longitude'],
np.ceil(data_in_box['trip_time']/np.timedelta64(1, 'm')),
]).T
time_regression_df.columns = ['pickup_datetime_dayofweek', 'pickup_datetime_hour',
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude',
'trip_time']
time_regression_df.tail()
time_regression_df.head()
time_regression_df.ix[:,0:6].describe()
print(time_regression_df.trip_time.value_counts())
print(len(time_regression_df.trip_time.value_counts()))
time_regression_df.trip_time.quantile([0.05, 0.95])
hour_stats = time_regression_df.groupby(time_regression_df.pickup_datetime_hour)
plt.bar(left = hour_stats.pickup_datetime_hour.count().keys(), height=hour_stats.pickup_datetime_hour.count().values/7,
tick_label=hour_stats.pickup_datetime_hour.count().keys(), align='center')
plt.title('Avg. pickups per hour')
plt.xlabel('datetime_hour')
plt.ylabel('frequency')
plt.savefig('avg_pickups_per_hour.png')
print('Avg. pickups per half-hour (summarized over 1 week)')
hour_stats.pickup_datetime_hour.count()/14
(hour_stats.count()/14).quantile([.5])
time_regression_df.columns
from sklearn import cross_validation as cv
time_regression_df_train, time_regression_df_test = cv.train_test_split(time_regression_df, test_size=0.1, random_state=99)
y_train = time_regression_df_train['trip_time']
x_train = time_regression_df_train.ix[:, 0:6]
y_test = time_regression_df_test['trip_time']
x_test = time_regression_df_test.ix[:, 0:6]
time_regression_df_train.tail()
len(x_train)
xy_test = pd.concat([x_test, y_test], axis=1)
xy_test.head()
# xy_test.to_csv('taxi_tree_test_Xy_20130506-12.csv')
# x_test.to_csv('taxi_tree_test_X_20130506-12.csv')
# y_test.to_csv('taxi_tree_test_y_20130506-12.csv')
# xy_test_sample = Xy_test.sample(10000, random_state=99)
# xy_test_sample.to_csv('taxi_tree_test_Xy_sample.csv')
# xy_test_sample.head()
print(x_train.shape)
print(x_train.size)
print(x_test.shape)
print(time_regression_df.shape)
print(x_train.shape[0]+x_test.shape[0])
import time
# Import the necessary modules and libraries
from sklearn.tree import DecisionTreeRegressor
import numpy as np
import matplotlib.pyplot as plt
#features = ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude','pickup_datetime']
#print("* features:", features, sep="\n")
max_depth_list = (10,15,20,25,30)
scores = [-1, -1, -1, -1, -1]
sum_abs_devs = [-1, -1, -1, -1, -1]
times = [-1, -1, -1, -1, -1]
for i in range(0,len(max_depth_list)):
start = time.time()
regtree = DecisionTreeRegressor(min_samples_split=1000, random_state=10, max_depth=max_depth_list[i])# formerly 15. 15 is reasonable,
# 30 brings best results # random states: 99
regtree.fit(x_train, y_train)
scores[i]= regtree.score(x_test, y_test)
y_pred = regtree.predict(x_test)
sum_abs_devs[i] = sum(abs(y_pred-y_test))
times[i] = time.time() - start
print(max_depth_list)
print(scores)
print(sum_abs_devs)
print(times)
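# Hedged visualization (added, not part of the original notebook): plot the R^2 scores
# from the depth sweep above against max_depth to see where returns start to diminish.
plt.plot(max_depth_list, scores, marker='o')
plt.xlabel('max_depth')
plt.ylabel('R^2 on test set')
plt.title('Decision tree depth sweep')
plt.show()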
start = time.time()
regtree = DecisionTreeRegressor(min_samples_split=50, random_state=10, max_depth=25, splitter='best' )
regtree.fit(x_train, y_train)
regtree.score(x_test, y_test)
y_pred = regtree.predict(x_test)
sum_abs_devs = sum(abs(y_pred-y_test))
elapsed = time.time() - start
print(elapsed)
# from sklearn import tree
# tree.export_graphviz(regtree, out_file='figures/tree_d10.dot', feature_names=time_regression_df.ix[:,0:6].columns, class_names=time_regression_df.columns[6])
regtree.tree_.impurity
y_train.describe()
print('R²: ', regtree.score(x_test, y_test))
from sklearn.externals import joblib
joblib.dump(regtree, 'treelib/regtree_depth_25_mss_50_rs_10.pkl', protocol=2)
print(regtree.feature_importances_ ,'\n',
regtree.class_weight,'\n',
regtree.min_samples_leaf,'\n',
regtree.tree_.n_node_samples,'\n'
)
y_pred = regtree.predict(x_test)
np.linalg.norm(np.ceil(y_pred)-y_test)
diff = (y_pred-y_test)
# plt.figure(figsize=(12,10)) # not needed. set values globally
plt.hist(diff.values, bins=100, range=[-50, 50])
print('Perzentile(%): ', [1,5,10,15,25,50,75,90,95,99], '\n', np.percentile(diff.values, [1,5,10,15,25,50,75,90,95,99]))
print('Absolute time deviation (in 1k): ', sum(abs(diff))/1000)
plt.title('Error Distribution on the 2013 Test Set')
plt.xlabel('Error in Minutes')
plt.ylabel('Frequency')
plt.savefig('figures/simple_tree_error_d25_msp_50.eps', format='eps', dpi=1000)
diff.describe()
leaves = regtree.tree_.children_left*regtree.tree_.children_right
for idx, a in enumerate(leaves):
if a==1:
pass # leaf node (product is 1), keep it
else:
leaves[idx] = 0
print(leaves)
print(leaves[leaves==1].sum())
len(leaves[leaves==1])
len(leaves[leaves==1])/regtree.tree_.node_count
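# Hedged note (added, not part of the original notebook): in scikit-learn, a leaf has
# children_left == children_right == -1, so the loop above can be replaced by a one-liner
# that builds the same 0/1 leaf indicator:
#     leaves = (regtree.tree_.children_left == -1).astype(int)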
print((leaves==1).sum()+(leaves==0).sum())
print(len(leaves))
node_samples = regtree.tree_.n_node_samples
node_samples
leaf_samples = np.multiply(leaves, node_samples)
stats = np.unique(leaf_samples, return_counts=True)
stats
plt.scatter(stats[0][1:], stats[1][1:])
plt.yscale('log')
plt.xscale('log')
node_perc = np.cumsum(stats[1][1:]) # Cumulative sum of nodes
samples_perc = np.cumsum(np.multiply(stats[0][1:],stats[1][1:]))
node_perc = node_perc / node_perc[-1]
samples_perc = samples_perc / samples_perc[-1]
plt.plot(node_perc, samples_perc)
plt.plot((np.array(range(0,100,1))/100), (np.array(range(0,100,1))/100), color='black')
plt.ylim(0,1)
plt.xlim(0,1)
plt.title('Lorenz Curve Between Nodes And Samples')
plt.xlabel('Leaves %')
plt.ylabel('Samples %')
plt.fill_between(node_perc, samples_perc, color='blue', alpha=1)
plt.savefig('figures/lorenzcurve_d25_msp_50.eps', format='eps', dpi=1000)
plt.savefig('figures/lorenzcurve_d25_msp_50.png', format='png', dpi=300)
len(leaf_samples)==regtree.tree_.node_count
max_leaf = [np.argmax(leaf_samples), max(leaf_samples)]
print('So node no.', max_leaf[0] ,'is a leaf and has', max_leaf[1] ,'samples in it.')
print(max_leaf)
# Inspired by: http://stackoverflow.com/questions/20224526/
# how-to-extract-the-decision-rules-from-scikit-learn-decision-tree
def get_rule(tree, feature_names, leaf):
left = tree.tree_.children_left
right = tree.tree_.children_right
threshold = tree.tree_.threshold
features = [feature_names[i] for i in tree.tree_.feature]
value = tree.tree_.value
samples = tree.tree_.n_node_samples
global count
count = 0;
global result
result = {};
def recurse_up(left, right, threshold, features, node):
global count
global result
count = count+1;
#print(count)
if node != 0:
for i, j in enumerate(right):
if j == node:
print( 'Node:', node, 'is right of:',i, ' with ', features[i], '>', threshold[i])
result[count] = [features[i], False, threshold[i]]
return(recurse_up(left, right, threshold, features, i))
for i, j in enumerate(left):
if j == node:
print('Node:', node, 'is left of',i,' with ', features[i], '<= ', threshold[i])
result[count] = [features[i], True, threshold[i]]
return(recurse_up(left, right, threshold, features, i))
else :
return(result)
print('Leaf:',leaf, ', value: ', value[leaf][0][0], ', samples: ', samples[leaf])
recurse_up(left, right, threshold, features, leaf)
return(result)
branch_to_leaf=get_rule(regtree, time_regression_df.ix[:,0:6].columns,max_leaf[0])
branch_to_leaf
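# Hedged cross-check (added, not part of the original notebook): newer scikit-learn
# versions (0.18+) expose decision_path() and apply(), which recover the visited nodes
# for a concrete sample without walking the tree by hand. Sketch, assuming regtree and
# x_test are in scope:
#     sample = x_test.iloc[[0]]
#     node_indicator = regtree.decision_path(sample)   # sparse matrix, one 1 per visited node
#     leaf_id = regtree.apply(sample)[0]               # id of the leaf the sample ends in
#     visited = node_indicator.indices[node_indicator.indptr[0]:node_indicator.indptr[1]]
#     print(visited, leaf_id)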
splitsdf = pd.DataFrame(branch_to_leaf).transpose()
splitsdf.columns = ['features', 'leq', 'value']
splitsdf
splitstats = splitsdf.groupby(['features','leq'])
splitstats.groups
splitstats.min()
def get_group(g, key):
if key in g.groups: return g.get_group(key)
return pd.DataFrame(list(key) + [np.nan])
area_coords = dict()
area_coords['dropoff_upper_left'] = [splitstats.get_group(('dropoff_latitude', True)).iloc[0].value,
splitstats.get_group(('dropoff_longitude', False)).iloc[0].value]
area_coords['dropoff_lower_right'] = [splitstats.get_group(('dropoff_latitude',False)).iloc[0].value,
splitstats.get_group(('dropoff_longitude',True)).iloc[0].value]
area_coords['pickup_upper_left'] = [splitstats.get_group(('pickup_latitude',True)).iloc[0].value,
splitstats.get_group(('pickup_longitude',False)).iloc[0].value]
area_coords['pickup_lower_right'] = [splitstats.get_group(('pickup_latitude',False)).iloc[0].value,
splitstats.get_group(('pickup_longitude',True)).iloc[0].value]
area_coords
import operator
dropoff_rect_len = list(map(operator.sub,area_coords['dropoff_upper_left'],
area_coords['dropoff_lower_right']))
pickup_rect_len = list(map(operator.sub,area_coords['pickup_upper_left'],
area_coords['pickup_lower_right']))
dropoff_rect_len, pickup_rect_len
import matplotlib.patches as patches
x = data_in_box.pickup_longitude
y = data_in_box.pickup_latitude
fig = plt.figure(figsize=(20, 10))
# Reduce the plot to Manhattan
plt.hist2d(x, y, bins=300, range=[[min(x.values),-73.95],[40.675,40.8]])
plt.colorbar()
plt.title('Pickup density (first full week in May 2013)')
plt.ylabel('Latitude')
plt.xlabel('Longitude')
plt.hold(True)
ax = fig.gca()
ax.add_patch(patches.Rectangle((area_coords['dropoff_upper_left'][1], area_coords['dropoff_lower_right'][0]),
abs(dropoff_rect_len[1]), dropoff_rect_len[0], fill=False, edgecolor='red', linewidth=5))
ax.add_patch(patches.Rectangle((area_coords['pickup_upper_left'][1], area_coords['pickup_lower_right'][0]),
abs(pickup_rect_len[1]), pickup_rect_len[0], fill=False, edgecolor='white', linewidth=5))
ax.grid(False)
plt.hold(False)
trips_of_leaf = x_train.loc[(x_train['dropoff_latitude'] > area_coords['dropoff_lower_right'][0]) &
(x_train['dropoff_longitude'] < area_coords['dropoff_lower_right'][1]) &
(x_train['dropoff_latitude'] < area_coords['dropoff_upper_left'][0]) &
(x_train['dropoff_longitude'] > area_coords['dropoff_upper_left'][1]) &
(x_train['pickup_latitude'] > area_coords['pickup_lower_right'][0]) &
(x_train['pickup_longitude'] < area_coords['pickup_lower_right'][1]) &
(x_train['pickup_latitude'] < area_coords['pickup_upper_left'][0]) &
(x_train['pickup_longitude'] > area_coords['pickup_upper_left'][1]) &
(x_train['pickup_datetime_dayofweek'] < 4.5) &
(x_train['pickup_datetime_hour'] < 18.5) &
(x_train['pickup_datetime_hour'] > 7.5)
]
trips_of_leaf.head()
print('Filtered trips: ', len(trips_of_leaf))
print('Trips in leaf: ', max_leaf[1])
len(trips_of_leaf) == max_leaf[1]
import gmaps
import gmaps.datasets
gmaps.configure(api_key='AI****') # Fill in your API-Code here
trips_of_leaf_pickup_list = trips_of_leaf.iloc[:,[2,3]].as_matrix().tolist()
trips_of_leaf_dropoff_list = trips_of_leaf.iloc[:,[4,5]].as_matrix().tolist()
data = gmaps.datasets.load_dataset('taxi_rides')
pickups_gmap = gmaps.Map()
dropoffs_gmap = gmaps.Map()
pickups_gmap.add_layer(gmaps.Heatmap(data=trips_of_leaf_pickup_list[0:1000]))
dropoffs_gmap.add_layer(gmaps.Heatmap(data=trips_of_leaf_dropoff_list[0:1000]))
pickups_gmap
dropoffs_gmap
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scene 2
Step2: Chris Hemsworth
Step3: Our data looks good! The axes are a little strange, but we just want to make sure we have data we can work with!
Step4: Now that we've got that sorted out, let's take a look at all three Chrises together. How do their box office titles stack up with one another over time?
Step5: In the graph above, we color-coded our Chris contingent as follows
Step6: In the above, we take a look at the box office grosses for the top 10 films for each Chris. Here, we start to wonder if maybe Evans has a more consistent box office performance. Of his top 10 films, 9 are in the $200 million range, a stat unmatched by our other two gentlemen.
Step7: Buoyed by franchise films in the last five years, Chris Evans has been a steady player, but hasn't excelled outside the Marvel universe franchises. All his biggest hits are as a member of a franchise / ensemble. Evans's Marvel hits since 2011 have performed well, though non-Marvel titles have largely been blips on the radar.
Step8: Hemsworth had a very rough 2015. He featured prominently in 4 films, only one of which was a box office success (another Marvel Avengers installment). After a breakout 2012, are the tides turning after major flops like In the Heart of the Sea?
Step9: Pratt may have been a slower starter than our other leading gentlemen, but his 2014 breakout Guardians of the Galaxy cemented his status as leading man potential, and 2015's Jurassic World broke tons of box office records. As a non-Marvel film (though a franchise reboot), Jurassic World is unique in that it may be a standalone hit for Pratt, and everyone will be closely watching his box office performance in whatever leading man project he chooses next.
Step10: We love this data cut. Here, we take a comparative look of our Chrises over time. Keeping our colors consistent, Evans is purple, Hemsworth is red, Pratt is orange.
Step11: Whoa! Where did Hemsworth go?
Step12: Arnold Schwarzenegger
Step13: Tom Cruise
Step14: Scene 4
Step15: LEGEND
|
<ASSISTANT_TASK:>
Python Code:
#This guided coding excercise requires associated .csv files: CE1.csv, CH1.csv, CP1.csv, Arnold1.csv, Bruce1.csv, and Tom1.csv
#make sure you have these supplemental materials ready to go in your active directory before proceeding
#Let's start coding! We first need to make sure our preliminary packages are in order. We imported the following...
#some may have ended up superfluous, but we figured it was better to cover our bases!
import pandas as pd
import sys
import matplotlib as mpl
import matplotlib.pyplot as plt
import sys
import os
import datetime as dt
import csv
import requests, io
from bs4 import BeautifulSoup
%matplotlib inline
print('\nPython version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Requests version: ', requests.__version__)
print("Today's date:", dt.date.today())
# data scraped from Box Office Mojo, the authoritative source for Hollywood Box Office Data
# chris evans
url = 'http://www.boxofficemojo.com/people/chart/?view=Actor&id=chrisevans.htm'
evans = pd.read_html(url)
print('Ouput has type', type(evans), 'and length', len(evans))
print('First element has type', type(evans[0]))
#we have a list of dataframes, and the cut of data we want is represented by the below
evans[2]
ce=evans[2]
print("type=", type(ce)," ", "length=", len(ce), "shape=", ce.shape)
print(ce)
ce.to_csv("ce.csv")
#since scraped dataset is small, and had a tricky double index, we decided to export to csv and do a quick cleanup there
#removed indices; cleaned titles; cleaned date
#Clean File saved as CE1.csv
#this is the path for my machine; you'll have to link to the CE1.csv file that you've saved on your machine
path='C:\\Users\\Nick\\Desktop\\Data_Bootcamp\\Final Project\\CE1.csv'
CE = pd.read_csv(path)
print(type(CE), "shape is", CE.shape, "types:", CE.dtypes)
print(CE) #this is going to be much better for us to work with
#this looks good! let's test and make sure the data makes sense with a simple plot:
CE.plot.scatter('Release Year', 'Adjusted Gross')
#we love what we see, let's repeat it for our other leading gentlemen
# same process for our second leading Chris
# chris hemsworth
url = 'http://www.boxofficemojo.com/people/chart/?view=Actor&id=chrishemsworth.htm'
hemsworth = pd.read_html(url)
print('Ouput has type', type(hemsworth), 'and length', len(hemsworth))
print('First element has type', type(hemsworth[0]))
hemsworth[3]
ch=hemsworth[3]
print("type=", type(ch)," ", "length=", len(ch), "shape=", ch.shape)
print(ch)
ch.to_csv("ch.csv")
#since scraped dataset is small, and had a tricky double index, we decided to export to csv and do a quick cleanup there
#Cleaned File saved as CH1.csv
path='C:\\Users\\Nick\\Desktop\\Data_Bootcamp\\Final Project\\CH1.csv'
#again, this is the path on my machine, you'll want to make sure you adjust to wherever you saved down CH1
CH = pd.read_csv(path)
print(type(CH), "shape is", CH.shape, "types:", CH.dtypes)
CH.plot.scatter('Release Year', 'Adjusted Gross')
# Chris number three, coming through!
# chris pratt
url = 'http://www.boxofficemojo.com/people/chart/?view=Actor&id=chrispratt.htm'
pratt = pd.read_html(url)
print('Ouput has type', type(pratt), 'and length', len(pratt))
print('First element has type', type(pratt[0]))
pratt[3]
cp=pratt[3]
print("type=", type(cp)," ", "length=", len(cp), "shape=", cp.shape)
print(cp)
cp.to_csv("cp.csv")
#since scraped dataset is small, and had a tricky double index, we decided to export to csv and do a quick cleanup there
#Cleaned File saved as CP1.csv
path='C:\\Users\\Nick\\Desktop\\Data_Bootcamp\\Final Project\\CP1.csv'
#remember to adjust path to where you've saved the .csv down
CP = pd.read_csv(path)
print(type(CP), "shape is", CP.shape, "types:", CP.dtypes)
CP.plot.scatter('Release Year', 'Adjusted Gross')
plt.scatter(CE['Release Year'], CE['Adjusted Gross'],
color="purple")
plt.scatter(CH['Release Year'], CH['Adjusted Gross'],
color="red")
plt.scatter(CP['Release Year'], CP['Adjusted Gross'],
color="orange")
plt.title('Chris Film Box Office Share Over Time')
fig, ax = plt.subplots(nrows=3, ncols=1, sharex=True, sharey=True)
CE['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[0], color='purple', title="Evans")
CH['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[1], color='red', title="Hemsworth")
CP['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[2], color='orange', title="Pratt")
plt.bar(CE['Release Year'], CE['Adjusted Gross'],
align='center',
color='pink')
plt.title('Chris Evans')
plt.bar(CH['Release Year'], CH['Adjusted Gross'],
align='center',
color='red')
plt.title("Chris Hemsworth")
plt.bar(CP['Release Year'], CP['Adjusted Gross'],
align='center',
color='orange')
plt.title("Chris Pratt")
plt.bar(CE['Release Year'], CE['Adjusted Gross'],
align='center',
color='purple')
plt.bar(CH['Release Year'], CH['Adjusted Gross'],
align='center',
color='red')
plt.bar(CP['Release Year'], CP['Adjusted Gross'],
align='center',
color='orange')
plt.title('Chris Film Box Office Share Over Time')
plt.bar(CH['Release Year'], CH['Adjusted Gross'],
align='center',
color='red')
plt.bar(CE['Release Year'], CE['Adjusted Gross'],
align='center',
color='purple')
plt.bar(CP['Release Year'], CP['Adjusted Gross'],
align='center',
color='orange')
plt.title('Chris Film Box Office Share Over Time')
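# Hedged alternative (added, not part of the original notebook): overlaid bars hide
# whichever series was drawn first, so a version stacked by year may read better. The
# merge below is an illustrative sketch and assumes the CE, CH and CP dataframes above.
yearly = (CE.groupby('Release Year')['Adjusted Gross'].sum().rename('Evans').to_frame()
            .join(CH.groupby('Release Year')['Adjusted Gross'].sum().rename('Hemsworth'), how='outer')
            .join(CP.groupby('Release Year')['Adjusted Gross'].sum().rename('Pratt'), how='outer')
            .fillna(0).sort_index())
yearly.plot(kind='bar', stacked=True, color=['purple', 'red', 'orange'])
plt.title('Chris Film Box Office Share Over Time (stacked)')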
#Movie scraping and data arranging like we did before
#Bruce Willis
url = 'http://www.boxofficemojo.com/people/chart/?id=brucewillis.htm'
willis = pd.read_html(url)
print('Ouput has type', type(willis), 'and length', len(willis))
print('First element has type', type(willis[0]))
willis[2]
bruce=willis[2]
bruce.to_csv("Bruce.csv") #Converting dataframe into a csv file
#editing and cleaning as needed, resaved as Bruce1.csv
path='/Users/Nick/Desktop/data_bootcamp/Final Project/Bruce1.csv'
BWillis = pd.read_csv(path)
print(type(BWillis), BWillis.shape, BWillis.dtypes)
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
BWillis.plot.scatter('Release Year', 'Adjusted Gross')
#That's a lot of films! Let's narrow:
BW=BWillis.head(11)
print(BW)
#we'll come back to this later, but let's get our other leading men in the frame!
#here we go again!
#Arnold Schwarzenegger
url = 'http://www.boxofficemojo.com/people/chart/?id=arnoldschwarzenegger.htm'
schwarz = pd.read_html(url)
print('Ouput has type', type(schwarz), 'and length', len(schwarz))
print('First element has type', type(schwarz[0]))
schwarz[2]
arnold=schwarz[2]
print("type=", type(arnold)," ", "length=", len(arnold))
arnold.shape
print(arnold)
arnold.to_csv("Arnold.csv")
path='/Users/Nick/Desktop/data_bootcamp/Final Project/Arnold1.csv'
ASchwarz = pd.read_csv(path)
print(type(ASchwarz), ASchwarz.shape, ASchwarz.dtypes)
print(ASchwarz)
ASchwarz.plot.scatter('Release Year', 'Adjusted Gross')
#let's scale back sample size again
AS=ASchwarz.head(11)
#we'll use this soon
#last but not least, our data for Tom Cruise
url = 'http://www.boxofficemojo.com/people/chart/?id=tomcruise.htm'
cruise = pd.read_html(url)
print('Ouput has type', type(cruise), 'and length', len(cruise))
print('First element has type', type(cruise[0]))
cruise[3]
Tom=cruise[3]
Tom.to_csv("Tom.csv")
path='/Users/Nick/Desktop/data_bootcamp/Final Project/Tom1.csv'
TCruise = pd.read_csv(path)
print(type(TCruise), TCruise.shape, TCruise.dtypes)
print(TCruise)
TCruise.plot.scatter('Release Year', 'Adjusted Gross')
#cutting down to the top 10
TC=TCruise.head(11)
#All of the old school action stars in one histogram. Representing share of box office cumulatively over time.
plt.bar(TC['Release Year'],
TC['Adjusted Gross'],
align='center',
color='Blue')
plt.bar(BW['Release Year'],
BW['Adjusted Gross'],
align='center',
color='Green')
plt.bar(AS['Release Year'],
AS['Adjusted Gross'],
align='center',
color='Yellow')
plt.title('"OG" Leading Box Office over Time')
#As a reminder, here's what we are comparing against:
fig, ax = plt.subplots(nrows=3, ncols=1, sharex=True, sharey=True)
CE['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[0], color='purple', title="Evans")
CH['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[1], color='red', title="Hemsworth")
CP['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[2], color='orange', title="Pratt")
plt.bar(CE['Release Year'], CE['Adjusted Gross'],
align='center',
color='purple')
plt.bar(CH['Release Year'], CH['Adjusted Gross'],
align='center',
color='red')
plt.bar(CP['Release Year'], CP['Adjusted Gross'],
align='center',
color='orange')
plt.title('Chris Film Box Office Share Over Time')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Responsive in notebook
|
<ASSISTANT_TASK:>
Python Code:
N = 10000
x = np.random.normal(0, np.pi, N)
y = np.sin(x) + np.random.normal(0, 0.2, N)
p = figure(webgl=True)
p.scatter(x, y, alpha=0.1)
show(p)
!conda list | egrep "jupyter|notebook"
p = figure(plot_height=200, sizing_mode='scale_width')
p.scatter(x, y, alpha=0.1)
show(p)
N = 4000
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
colors = [
"#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
]
TOOLS="hover,crosshair,pan,wheel_zoom,box_zoom,undo,redo,reset,tap,save,box_select,poly_select,lasso_select,"
p = figure(tools=TOOLS, sizing_mode='scale_width')
p.scatter(x, y, radius=radii,
fill_color=colors, fill_alpha=0.6,
line_color=None)
show(p)
save(p, 'color_scatter.html')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data set
Step2: Year collected by continent
Step4: There are a lot of things that are not continents there! While iDigBio cleans stuff up a bit before indexing it, not everything is obvious to do. For the purposes of our graph, we will make an "other" continent and group things that are not one of the 7 continents (plus "oceania" which GBIF uses and we'll look at GBIF data lower down) into that category.
Step5: That output looks right. So now let's make a user defined function for Spark from our Python function. Notice that we pass the Python function as an argument to udf() and udf() returns to us a new function. In Python, functions are what's called "first class objects" and we can pass them around and manipulate them just like a variable.
Step6: Now re-run our grouping. This time use the fix_continent_udf() function to calculate a new column and add it to the dataframe with the withColumn() method. Then we can refer to that new column in the group by method. (Yes, we could have put the fix_continent_udf() call into the group by just like we put the year() call but this is an opportunity to introduce withColumn().)
Step7: That looks much better. Notice now that we can actually see all of the yearcollected=None values. Before there were many continents so they went well past the top 20 results we were looking at. Now we can see all 8 counts, one for each continent.
Step9: That looks pretty intuitive and we made the whole thing with one Spark statement. You can tell it's a Pandas dataframe now because the display is nice and tabular.
Step10: and then we'll call our function to make the graph and add some axes.
Step11: There it is, the number of specimen records in iDigBio by collection date and continent. We chose to add some vertical lines to the graph at particularly interesting dates. You can read the code comments to see what those are.
Step13: We use a different process for formating GBIF data and as part of GUODA's process, the columns are named with a syntax that can be difficult to work with in Python. First let's use the SQL "as" syntax to rename the columns.
Step14: We will again run the crosstab all at once on this data frame with renamed columns. This is exactly the same summarization we ran on the iDigBio data frame.
Step15: And we'll use the exact same code for graphing as well. We just need to remove the "australia" and it's color from our continent list because GBIF doesn't use it.
|
<ASSISTANT_TASK:>
Python Code:
# col() selects columns from a data frame, year() works on dates, and udf() creates user
# defined functions
from pyspark.sql.functions import col, year, udf
# Plotting library and configuration to show graphs in the notebook
import matplotlib.pyplot as plt
%matplotlib inline
df = sqlContext.read.load("/guoda/data/idigbio-20190612T171757.parquet")
df.count()
yc_sum = (df
.groupBy(year("datecollected").cast("integer").alias("yearcollected"),
"continent")
.count()
.orderBy("yearcollected")
)
yc_sum.head(20)
# allowed continents
continents = set(["africa", "australia", "asia", "oceania", "europe", "north america", "south america"])
def fix_continent(c):
If the given string contains one of the continents, return which one. If not
then return "other". Lower case the string and also replace "_" with " " to
be able to clean up GBIF's representation of continents that contain spaces.
for continent in continents:
if (c) and (continent in c.lower().replace("_", " ")):
return continent
return "other"
# Some example calls to this function
print(fix_continent(None))
print(fix_continent("europe"))
print(fix_continent("oceania"))
print(fix_continent("NORTH_AMERICA"))
print(fix_continent("North America, Canada, Manitoba, Churchill"))
print(fix_continent("East Indies, Indonesia: Pulo Pandjang, off Sumatra"))
print(fix_continent("Asia; Thailand; Pathum Thani"))
fix_continent_udf = udf(fix_continent)
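# Hedged note (added, not part of the original notebook): udf() defaults to a StringType
# return type; being explicit avoids surprises if the function is later changed, e.g.:
#     from pyspark.sql.types import StringType
#     fix_continent_udf = udf(fix_continent, StringType())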
yc_sum = (df
.withColumn("fixed", fix_continent_udf(df.continent))
.groupBy(year("datecollected").cast("integer").alias("yearcollected"),
"fixed")
.count()
.orderBy("yearcollected")
)
yc_sum.head(20)
pandas_yc_cross = (df
.select(year("datecollected").cast("integer").alias("yearcollected"),
df.continent)
.filter(col("yearcollected") >= 1817)
.filter(col("yearcollected") <= 2017)
.withColumn("fixed", fix_continent_udf(df.continent))
.crosstab("yearcollected", "fixed")
.orderBy(col("yearcollected_fixed"))
.toPandas()
)
pandas_yc_cross.head(5)
import pandas as pd
import numpy as np
def stack_bar_plot(pdf, columns, colors):
Matplotlib needs to have the bars stacked on top of each other by hand.
First, we plot one column of data, then we add its values to the running
bottoms so that the next column is drawn on top of the previous ones.
plots = []
bottoms = pd.DataFrame(np.zeros((len(pdf['yearcollected_fixed']), 1)))
for c in columns:
plots.append(
plt.bar(pdf['yearcollected_fixed'], pdf[c],
color=colors[len(plots)], edgecolor='none',
width=1.0, bottom=bottoms[0])
)
bottoms[0] += pdf[c]
return plots
# set up ordering of continents and their colors
continents_list = sorted(continents)
continents_list.insert(0, "other")
colors = ["#888888", "#a75902", "#00e5c8", "#1bcda7", "#37b686", "#539f65", "#51827c", "#bf6603"]
plots = stack_bar_plot(pandas_yc_cross, continents_list, colors)
# Start of WWI
plt.axvline(x=1914)
# Start of WWII
plt.axvline(x=1939)
# "1988 - October 31: President Reagan signs the NSF Authorization Act of 1988,
# thereby authorizing the doubling of the NSF budget over the next five years."
plt.axvline(x=1988)
plt.legend(plots, continents_list, loc=2)
plt.title("Specimens in iDigBio by Collection Year and Continent")
plt.ylabel("Number of Specimen Records")
plt.xlabel("Year")
axes = plt.gca()
axes.set_xlim([1815, 2020])
axes.set_ylim([0, 1200000])
fig = plt.gcf()
fig.set_size_inches(12, 4)
gbif_df = sqlContext.read.parquet("/guoda/data/gbif-idigbio.parquet/source=gbif/date=20160825")
gbif_df.createOrReplaceTempView("gbif_df")
renamed_df = sqlContext.sql(
SELECT `http://rs.tdwg.org/dwc/terms/eventDate` as eventDate,
`http://rs.tdwg.org/dwc/terms/continent` as continent
FROM gbif_df
WHERE `http://rs.tdwg.org/dwc/terms/basisOfRecord` LIKE "%SPECIMEN%"
).persist()
renamed_df.count()
pandas_yc_cross_gbif = (renamed_df
.select(year("eventDate").cast("integer").alias("yearcollected"),
renamed_df.continent)
.filter(col("yearcollected") >= 1817)
.filter(col("yearcollected") <= 2017)
.withColumn("fixed", fix_continent_udf(renamed_df.continent))
.crosstab("yearcollected", "fixed")
.orderBy(col("yearcollected_fixed"))
.toPandas()
)
pandas_yc_cross_gbif.head(5)
# GBIF doesn't have "australia"
try:
continents_list.remove("australia")
colors.remove("#1bcda7")
except:
pass
plots = stack_bar_plot(pandas_yc_cross_gbif, continents_list, colors)
# Start of WWI
plt.axvline(x=1914)
# Start of WWII
plt.axvline(x=1939)
# "1988 - October 31: President Reagan signs the NSF Authorization Act of 1988, thereby authorizing the doubling of the NSF budget over the next five years."
plt.axvline(x=1988)
plt.legend(plots, continents_list, loc=2)
plt.title("Specimens in GBIF by Collection Year and Continent")
plt.ylabel("Number of Specimen Records")
plt.xlabel("Year")
axes = plt.gca()
axes.set_xlim([1815, 2020])
axes.set_ylim([0, 1200000])
fig = plt.gcf()
fig.set_size_inches(12, 4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data ready for work
Step2: Testing histplot
|
<ASSISTANT_TASK:>
Python Code:
#general imports
import matplotlib.pyplot as plt
import pygslib
import numpy as np
#make the plots inline
%matplotlib inline
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code
# so, we are adding constant elevation = 0 and a dummy BHID = 1
mydata['Zlocation']=0
mydata['bhid']=1
# printing to verify results
print ' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5)
#view data in a 2D projection
plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])
plt.colorbar()
plt.grid(True)
plt.show()
print pygslib.gslib.__plot.histplt.__doc__
mydata['Declustering Weight'].sum()
parameters_histplot = {
'hmin' : 0.06, #in/output rank-0 array(float,'d')
'hmax' : 20.06, #in/output rank-0 array(float,'d')
'ncl' : 40, #int, number of bins
'iwt' : 0, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 0, #int, 1 use cumulative
'va' : mydata['Primary'], # array('d') with bounds (nd)
'wt' : mydata['Declustering Weight']} # array('d') with bounds (nd), wight variable (obtained with declust?)
parameters_histplotd = {
'hmin' : 0.06, #in/output rank-0 array(float,'d')
'hmax' : 20.06, #in/output rank-0 array(float,'d')
'ncl' : 40, #int, number of bins
'iwt' : 1, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 0, #int, 1 use cumulative
'va' : mydata['Primary'], # array('d') with bounds (nd)
'wt' : mydata['Declustering Weight']} # array('d') with bounds (nd), wight variable (obtained with declust?)
binval,nincls,cl, clwidth,xpt025,xlqt,xmed,xuqt,xpt975, \
xmin,xmax,xcvr,xmen,xvar,xfrmx,dcl,error = pygslib.gslib.__plot.histplt(**parameters_histplot)
binvald,ninclsd,cld, clwidthd, xpt025d,xlqtd,xmedd,xuqtd, \
xpt975d,xmind,xmaxd,xcvrd,xmend,xvard,xfrmxd,dcld,errord = pygslib.gslib.__plot.histplt(**parameters_histplotd)
print dcl
print cl.round(1)
print nincls
print binval.round(2)
print clwidth
mydata.Primary[mydata.Primary>20.1]
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_title('Bin probability')
plt.bar (cl, binval, width=-clwidth, label = 'Non-declustered')
plt.bar (cld, binvald, width=-clwidth, alpha=0.5, color='r', label = 'Declustered')
if parameters_histplot['ilog']>0:
ax.set_xscale('log')
plt.grid(True)
plt.legend(loc=2)
fig.show
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_title('Bin count (Warning: this will not show the effect of weight)')
plt.bar (cl, nincls, width=-clwidth,label = 'Non-Declustered')
plt.bar (cld, ninclsd, width=-clwidth, alpha=0.5, color='r',label = 'Declustered')
if parameters_histplot['ilog']>0:
ax.set_xscale('log')
plt.grid(True)
plt.legend(loc=2)
fig.show
parameters_histplot = {
'hmin' : 0.06, #in/output rank-0 array(float,'d')
'hmax' : 20.06, #in/output rank-0 array(float,'d')
'ncl' : 40, #int, number of bins
'iwt' : 0, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 1, #int, 1 use cumulative
'va' : mydata['Primary'], # array('d') with bounds (nd)
'wt' : mydata['Declustering Weight']} # array('d') with bounds (nd), wight variable (obtained with declust?)
parameters_histplotd = {
'hmin' : 0.06, #in/output rank-0 array(float,'d')
'hmax' : 20.06, #in/output rank-0 array(float,'d')
'ncl' : 40, #int, number of bins
'iwt' : 1, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 1, #int, 1 use cumulative
'va' : mydata['Primary'], # array('d') with bounds (nd)
'wt' : mydata['Declustering Weight']} # array('d') with bounds (nd), wight variable (obtained with declust?)
binval,nincls,cl, clwidth,xpt025,xlqt,xmed,xuqt,xpt975,xmin, \
xmax,xcvr,xmen,xvar,xfrmx,dcl,error = pygslib.gslib.__plot.histplt(**parameters_histplot)
binvald,ninclsd,cld, clwidthd,xpt025d,xlqtd,xmedd,xuqtd,xpt975d, \
xmind,xmaxd,xcvrd,xmend,xvard,xfrmxd,dcld,errord = pygslib.gslib.__plot.histplt(**parameters_histplotd)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_title('Bin probability, bin style')
plt.bar (cld, binvald, width=-clwidth, color='r', label = 'Declustered')
plt.bar (cl, binval, width=-clwidth, label = 'Non-declustered')
if parameters_histplot['ilog']>0:
ax.set_xscale('log')
plt.grid(True)
plt.legend(loc=2)
fig.show
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_title('Bin probability, step style')
plt.step (cld, binvald, where='post', color='r', label = 'Declustered')
plt.step (cl, binval, where='post', label = 'Non-declustered')
if parameters_histplot['ilog']>0:
ax.set_xscale('log')
plt.grid(True)
plt.legend(loc=2)
fig.show
print 'data min, max: ', xmin, xmax
print 'data quantile 2.5%, 25%, 50%, 75%, 97.75%: ' , xpt025,xlqt,xmed,xuqt,xpt975
print 'data cv, mean, variance : ', xcvr,xmen,xvar
print 'histogram max frequency (useful to rescale some plots)' , xfrmx
print 'error == 0? Then all ok?' , error==0
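# Hedged cross-check (added, not part of the original notebook): the declustered mean can
# also be computed directly with numpy's weighted average and compared with the mean
# reported above (small differences may come from the trimming limits hmin/hmax).
print 'numpy weighted mean: ', np.average(mydata['Primary'], weights=mydata['Declustering Weight'])
print 'numpy naive mean:    ', mydata['Primary'].mean()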
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Checking out correlations
Step2: The percentage of students enrolled in free/reduced-price lunch programs is often used as a proxy for poverty.
Step3: Conversely, the education level of a student's parents is often a good predictor of how well a student will do in school.
Step4: Running the regression
Step5: In our naive universe where we're only paying attention to two variables -- academic performance and free/reduced lunch -- we can clearly see that some percentage of schools is outperforming what the regression would predict for them given their level of poverty.
Step6: Let's look specifically at Solano Avenue Elementary, which has an API of 922 and 80 percent of students in the free/reduced lunch program. If you were to use the above regression to predict how well Solano would do, it would look like this
|
<ASSISTANT_TASK:>
Python Code:
df = pd.read_csv('data/apib12tx.csv')
df.describe()
df.corr()
df.plot(kind="scatter", x="MEALS", y="API12B")
df.plot(kind="scatter", x="AVG_ED", y="API12B")
data = np.asarray(df[['API12B','MEALS']])
x, y = data[:, 1:], data[:, 0]
lr = LinearRegression()
lr.fit(x, y)
# plot the linear regression line on the scatter plot
lr.coef_
lr.score(x, y)
plt.scatter(x, y, color='blue')
plt.plot(x, lr.predict(x), color='red', linewidth=1)
df[(df['MEALS'] >= 80) & (df['API12B'] >= 900)]
lr.predict(80)
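# Hedged follow-up sketch (added, not part of the original notebook): the gap between the
# actual and predicted API is a rough "overperformance" measure; positive residuals flag
# schools beating the poverty-based expectation. Assumes df, x and lr from above.
df['predicted_api'] = lr.predict(x)
df['residual'] = df['API12B'] - df['predicted_api']
df.sort_values('residual', ascending=False).head(10)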
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting started
Step2: Authenticate your GCP account
Step3: ML Workflow using a BigQuery model
Step4: Define Constants
Step6: Unused Features and Features that are Biased
Step7: Data preparation
Step8: Create BigQuery dataset
Step10: Extract Training and Evaluation dataframes
Step12: Training dataset
Step14: Evaluation dataset
Step16: Testing dataset
Step17: DataFrame Length
Step18: Visualize data
Step19: When we look at the numerical features they do not have a strong correlation.
Step20: In this plot you can see the different samples and their values, this is important when you are doing feature engineering.
Step25: There is an imbalance in the proportion of labels, with 24720 values for <=50K, and 7841 for >50K.
Step28: After running the SQL queries above we have found
Step30: Let's take a look at the following numerical attributes
Step33: education and education_num encode the same attribute and are fully correlated through a direct mapping, so it makes sense to remove one of them during feature selection.
Step35: As relationship and marital_status are features that describe similar information, we keep only marital_status, grouped into two broader categories (Single and Married).
Step37: Model information
Step39: Model evaluation
Step40: Model results
Step42: ROC
Step43: Visualize the model's ROC curve
Step45: Model predictions
Step46: We will see a few predictions and the expected result
Step47: In this case the first prediction probability is ~ 0.53, which corresponds to >50K.
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECT_ID = "[your-gcp-project]" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
import sys
# If you are running this notebook in Colaboratory, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Google Cloud Storage bucket and lets us submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
% env GOOGLE_APPLICATION_CREDENTIALS '/path/to/your/service-account-key.json'
# Data processing
import pandas as pd
# Visualizations
import matplotlib.pyplot as plt
import seaborn as sns
# BigQuery API
from google.cloud import bigquery
# Show software versions
print(__import__('sys').version)
print(pd.__version__)
# These are the features in the dataset.
# Dataset information: https://archive.ics.uci.edu/ml/datasets/census+income
# which exists now in: `bigquery-public-data.ml_datasets.census_adult_income`
_COLUMNS = [
'age', 'workclass', 'functional_weight', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'sex',
'capital_gain', 'capital_loss', 'hours_per_week', 'native_country',
'income_bracket'
]
# This is the label (target) we want to predict.
_LABEL_COLUMN = 'income_bracket'
These are columns we will not use as features for training. There are many
reasons not to use certain attributes of data for training. Perhaps their
values are noisy or inconsistent, or perhaps they encode bias that we do not
want our model to learn. For a deep dive into the features of this Census
dataset and the challenges they pose, see the Introduction to ML Fairness
notebook:
("https://colab.research.google.com/github/google/eng-edu/blob/master/ml/cc/"
"exercises/intro_to_fairness.ipynb")
UNUSED_COLUMNS = ['functional_weight', 'education', 'sex']
client = bigquery.Client(location='US', project=PROJECT_ID)
# Dataset and table information
dataset_name = 'census_tutorial'
# Create BigQuery dataset
dataset = client.create_dataset(dataset_name)
# This query will process 4.8 MB when run:
query =
SELECT
age,
workclass,
functional_weight,
education,
education_num,
marital_status,
occupation,
relationship,
race,
sex,
capital_gain,
capital_loss,
hours_per_week,
native_country,
income_bracket
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
dataset = client.query(query).to_dataframe()
query =
SELECT
age,
workclass,
functional_weight,
education,
education_num,
marital_status,
occupation,
relationship,
race,
sex,
capital_gain,
capital_loss,
hours_per_week,
native_country,
income_bracket
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 10) < 8
train_dataset = client.query(query).to_dataframe()
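# Illustrative only (not used by the BigQuery ML queries below, which do their own
# feature selection in SQL): this is how the UNUSED_COLUMNS constant defined above
# could be applied to the extracted dataframe.
train_features = train_dataset.drop(UNUSED_COLUMNS, axis=1)
train_features.head()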
query =
SELECT
age,
workclass,
functional_weight,
education,
education_num,
marital_status,
occupation,
relationship,
race,
sex,
capital_gain,
capital_loss,
hours_per_week,
native_country,
income_bracket
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 10) = 8
eval_dataset = client.query(query).to_dataframe()
query =
SELECT
age,
workclass,
functional_weight,
education,
education_num,
marital_status,
occupation,
relationship,
race,
sex,
capital_gain,
capital_loss,
hours_per_week,
native_country,
income_bracket
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 10) = 9
test_dataset = client.query(query).to_dataframe()
len(dataset), len(train_dataset), len(eval_dataset), len(test_dataset)
hmap = dataset.corr(method='pearson')
plt.subplots(figsize=(12, 9))
sns.heatmap(hmap, vmax=0.8 ,annot=True, cmap="BrBG", square=True)
# Visualize interactions
sns.pairplot(dataset, hue='income_bracket')
# Count of >50K & <=50K
sns.countplot(dataset['income_bracket'])
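# Exact counts behind the plot above (illustrative check; the text quotes
# 24720 rows for '<=50K' and 7841 for '>50K').
print(dataset['income_bracket'].value_counts())
print(dataset['income_bracket'].value_counts(normalize=True))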
# Checking Empty records (NULL) OR (?) and their percentage overall
query =
SELECT
COUNTIF(workclass IS NULL
OR LTRIM(workclass) LIKE '?') AS workclass,
ROUND(COUNTIF(workclass IS NULL
OR LTRIM(workclass) LIKE '?') / COUNT(workclass) * 100)
AS workclass_percentage,
COUNTIF(occupation IS NULL
OR LTRIM(occupation) LIKE '?') AS occupation,
ROUND(COUNTIF(occupation IS NULL
OR LTRIM(occupation) LIKE '?') / COUNT(occupation) * 100)
AS occupation_percentage,
COUNTIF(native_country IS NULL
OR LTRIM(native_country) LIKE '?') AS native_country,
ROUND(COUNTIF(native_country IS NULL
OR LTRIM(native_country) LIKE '?') / COUNT(native_country) * 100)
AS native_country_percentage
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
client.query(query).to_dataframe()
# Checking for workclass values.
query =
SELECT
workclass,
COUNT(workclass) AS total_workclass
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
GROUP BY workclass
ORDER BY total_workclass DESC
client.query(query).to_dataframe()
# Checking for occupation values.
query =
SELECT
occupation,
COUNT(occupation) AS total_occupation
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
GROUP BY occupation
ORDER BY total_occupation DESC
client.query(query).to_dataframe()
# Checking for native_country values.
query =
SELECT
native_country,
COUNT(native_country) AS total_native_country
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
GROUP BY native_country
ORDER BY total_native_country DESC
client.query(query).to_dataframe()
# Checking for capital_gain
# Total records: 32561
query =
SELECT
capital_gain,
income_bracket,
COUNT(capital_gain) AS capital_gain_records
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
GROUP BY capital_gain, income_bracket
ORDER BY capital_gain_records DESC
client.query(query).to_dataframe()
# Checking for capital_loss
# Total records: 32561
query =
SELECT
capital_loss,
income_bracket,
COUNT(capital_loss) AS capital_loss_records
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
GROUP BY capital_loss, income_bracket
ORDER BY capital_loss_records DESC
client.query(query).to_dataframe()
# Checking for education
query =
SELECT
education,
education_num
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
GROUP BY education, education_num
ORDER BY education_num
client.query(query).to_dataframe()
# Explore Education Num vs Income
g = sns.catplot(x="education_num", y="income_bracket", data=dataset,kind="bar",
height = 6,palette = "muted")
g.despine(left=True)
g = g.set_ylabels(">50K probability")
# Checking for marital_status
query =
SELECT
marital_status
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
GROUP BY marital_status
client.query(query).to_dataframe()
# Checking for relationship
query =
SELECT
relationship
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
GROUP BY relationship
client.query(query).to_dataframe()
# Train a BQML model
train_query =
CREATE OR REPLACE MODEL `census_tutorial.census_model`
OPTIONS (
model_type='logistic_reg',
auto_class_weights=true,
data_split_method='no_split',
input_label_cols=['income_bracket'],
max_iterations=15) AS
SELECT
age,
CASE
WHEN workclass IS NULL THEN 'Private'
WHEN LTRIM(workclass) LIKE '?' THEN 'Private'
ELSE workclass
END AS workclass,
CASE
WHEN native_country IS NULL THEN 'United States'
WHEN LTRIM(native_country) LIKE '?' THEN 'United States'
ELSE native_country
END AS native_country,
CASE
WHEN LTRIM(marital_status) IN
(
'Never-married',
'Divorced',
'Separated',
'Widowed'
) THEN 'Single'
WHEN LTRIM(marital_status) IN
(
'Married-civ-spouse',
'Married-spouse-absent',
'Married-AF-spouse'
) THEN 'Married'
ELSE NULL
END AS marital_status,
education_num,
occupation,
race,
hours_per_week,
income_bracket
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 10) < 8
AND (occupation IS NOT NULL OR LTRIM(occupation) NOT LIKE '?%')
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9
client.query(train_query)
training_info =
SELECT
training_run,
iteration,
loss,
eval_loss,
duration_ms,
learning_rate
FROM
ML.TRAINING_INFO(MODEL `census_tutorial.census_model`)
ORDER BY iteration ASC
client.query(training_info).to_dataframe()
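# Small sketch (assumption: plotting the loss columns returned above with matplotlib
# is acceptable here; with data_split_method='no_split' the eval_loss values may be empty).
training_df = client.query(training_info).to_dataframe()
plt.plot(training_df['iteration'], training_df['loss'], label='training loss')
plt.plot(training_df['iteration'], training_df['eval_loss'], label='eval loss')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.legend()
plt.show()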
# Perform model evaluation
query_evaluate =
SELECT
precision,
recall,
accuracy,
f1_score,
log_loss,
roc_auc
FROM ML.EVALUATE (MODEL `census_tutorial.census_model`,
(
SELECT
age,
CASE
WHEN workclass IS NULL THEN 'Private'
WHEN LTRIM(workclass) LIKE '?' THEN 'Private'
ELSE workclass
END AS workclass,
CASE
WHEN native_country IS NULL THEN 'United States'
WHEN LTRIM(native_country) LIKE '?' THEN 'United States'
ELSE native_country
END AS native_country,
CASE
WHEN LTRIM(marital_status) IN
(
'Never-married',
'Divorced',
'Separated',
'Widowed'
) THEN 'Single'
WHEN LTRIM(marital_status) IN
(
'Married-civ-spouse',
'Married-spouse-absent',
'Married-AF-spouse'
) THEN 'Married'
ELSE NULL
END AS marital_status,
education_num,
occupation,
race,
hours_per_week,
income_bracket
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 10) = 8
AND (occupation IS NOT NULL OR LTRIM(occupation) NOT LIKE '?%')
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9
))
evaluation_job = client.query(query_evaluate).to_dataframe()
evaluation_job
# Perform model evaluation
query_roc_curve =
SELECT
threshold,
recall,
false_positive_rate,
true_positives,
false_positives,
true_negatives,
false_negatives
FROM
ML.ROC_CURVE(MODEL `census_tutorial.census_model`,
(
SELECT
age,
CASE
WHEN workclass IS NULL THEN 'Private'
WHEN LTRIM(workclass) LIKE '?' THEN 'Private'
ELSE workclass
END AS workclass,
CASE
WHEN native_country IS NULL THEN 'United States'
WHEN LTRIM(native_country) LIKE '?' THEN 'United States'
ELSE native_country
END AS native_country,
CASE
WHEN LTRIM(marital_status) IN
(
'Never-married',
'Divorced',
'Separated',
'Widowed'
) THEN 'Single'
WHEN LTRIM(marital_status) IN
(
'Married-civ-spouse',
'Married-spouse-absent',
'Married-AF-spouse'
) THEN 'Married'
ELSE NULL
END AS marital_status,
education_num,
occupation,
race,
hours_per_week,
income_bracket
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 10) = 8
AND (occupation IS NOT NULL OR LTRIM(occupation) NOT LIKE '?%')
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9))
roc_curve_job = client.query(query_roc_curve).to_dataframe()
roc_curve_job
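# Cross-check (illustrative): numerically integrate the ROC points above and
# compare with the roc_auc value reported by ML.EVALUATE.
import numpy as np
fpr_pts = roc_curve_job['false_positive_rate']
tpr_pts = roc_curve_job['recall']
print('Numerical AUC estimate: {:.4f}'.format(abs(np.trapz(tpr_pts, fpr_pts))))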
# Visualize the ROC curve for the model.
plt.title('BigQuery ML Model ROC curve')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.02])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
fpr = roc_curve_job['false_positive_rate']
tpr = roc_curve_job['recall']
plt.plot(fpr, tpr, color='darkorange')
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.show()
query_prediction =
SELECT
income_bracket,
predicted_income_bracket,
predicted_income_bracket_probs
FROM
ML.PREDICT(MODEL `census_tutorial.census_model`,
(
SELECT
age,
CASE
WHEN workclass IS NULL THEN 'Private'
WHEN LTRIM(workclass) LIKE '?' THEN 'Private'
ELSE workclass
END AS workclass,
CASE
WHEN native_country IS NULL THEN 'United States'
WHEN LTRIM(native_country) LIKE '?' THEN 'United States'
ELSE native_country
END AS native_country,
CASE
WHEN LTRIM(marital_status) IN
(
'Never-married',
'Divorced',
'Separated',
'Widowed'
) THEN 'Single'
WHEN LTRIM(marital_status) IN
(
'Married-civ-spouse',
'Married-spouse-absent',
'Married-AF-spouse'
) THEN 'Married'
ELSE NULL
END AS marital_status,
education_num,
occupation,
race,
hours_per_week,
income_bracket
FROM
`bigquery-public-data.ml_datasets.census_adult_income`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 10) = 9
AND occupation IS NOT NULL AND LTRIM(occupation) NOT LIKE '?%'
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9
))
predictions = client.query(query_prediction).to_dataframe()
predictions[['income_bracket', 'predicted_income_bracket']].head()
predictions['predicted_income_bracket_probs'].head()
_count = predictions['income_bracket'].str.strip().str.lower() == \
predictions['predicted_income_bracket'].str.strip().str.lower()
# Group predictions:
_count.value_counts(normalize=True)
# Note: Set normalize=False to see grouped results.
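# Illustrative confusion matrix for the held-out slice, built with pandas only.
pd.crosstab(predictions['income_bracket'].str.strip(),
            predictions['predicted_income_bracket'].str.strip(),
            rownames=['actual'], colnames=['predicted'])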
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'key1': ['a', 'a', 'b', 'b', 'a', 'c'],
'key2': ['one', 'two', 'one', 'two', 'one', 'two']})
def g(df):
return df.groupby('key1')['key2'].apply(lambda x: (x=='one').sum()).reset_index(name='count')
result = g(df.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Data
Step2: Column Meanings
Step3: We firstly category Sun_hours into three levels
Step4: Preprocessing (Handling Missing Values)
Step5: output the number of missing values with in each column
Step6: According to the Bureau of Meteorology (http
Step7: Select Feature(s)
Step8: If you have domain knowledge, you will know that the difference between max and min temperature might be highly correlated to sun hours. Let us add such column.
Step9: Let's try the features with higher correlations to Sun_level
Step10: Build Decision Tree
Step11: Now we generate features
Step12: And we use Sun_level as labels
Step13: With features and labels, we can then train our decision tree
Step14: Evaluations and Investigations of the Trained Decision Tree
Step15: Let's see the accuracy on the testing dataset
Step16: We can also get the importance of each feature
Step17: Apparently the above tree is overfitting. One way to deal with it is to change the maximum height of the decision tree.
Step18: Predict the missing Sun_level value
Step19: Although we cannot remember what happened on these 7 days (can you?), it seems the predictions are probably correct. E.g.,
Step20: You can generate a pdf file if you want
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from IPython.display import Image
from sklearn.externals.six import StringIO
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
RANDOM_SEED = 9
data = pd.read_csv('./asset/Daily_Weather_Observations.csv', sep=',')
print(data.shape)
data.head()
data_missing_sun_hours = data[pd.isnull(data['Sun_hours'])]
data_missing_sun_hours
data = data[pd.notnull(data['Sun_hours'])]
print(data.shape)
labels = ['Low','Med','High']
data['Sun_level'] = pd.cut(data.Sun_hours, [-1,5,10,25], labels=labels)
data[['Sun_hours','Sun_level']].head()
# output all rows with missing values
data[data.isnull().any(axis=1)]
data = data.dropna(subset = ['CLD_at_9am', 'Max_wind_dir', 'Max_wind_spd', 'Max_wind_dir'])
print(data.shape)
bitmap1 = data.Evap.notnull()
bitmap2 = bitmap1.shift(1)
bitmap2[0] = True
data = data[bitmap1 & bitmap2]
print(data.shape)
def corr(data):
c = data.corr()
df = c.Sun_level_num.to_frame()
df['abs'] = abs(df['Sun_level_num'])
df.sort_values(by = 'abs', ascending=False)
print(df.sort_values(by = 'abs', ascending=False))
# we need to convert labels (string) into numeric to get the correlation
labels = [0,1,2]
data['Sun_level_num'] = pd.cut(data.Sun_hours, [-1,5,10,25], labels=labels)
data[['Sun_level_num']] = data[['Sun_level_num']].apply(pd.to_numeric)
corr(data)
# c = data.corr()
# c.sort_values(by='Sun_level_num', ascending=True)['Sun_level_num']
data['Temps_diff'] = data['Temps_max'] - data['Temps_min']
corr(data)
feature_list = ['CLD_at_9am', 'CLD_at_3pm', 'RH_at_3pm', 'RH_at_9am', 'Temps_min', 'Temps_diff', 'Month']
train, test = train_test_split(data, test_size = 0.2)
X_train = train[feature_list]
X_test = test[feature_list]
X_train.head()
y_train = train.Sun_level
y_test = test.Sun_level
clf = DecisionTreeClassifier(criterion = "gini")
dtree = clf.fit(X_train, y_train)
dtree.score(X_train,y_train) # output the accuracy of the trained decision tree
dtree.score(X_test,y_test)
feature = pd.DataFrame({'name': pd.Series(feature_list),
'importance': pd.Series(dtree.feature_importances_)})
feature.sort_values(by = 'importance', ascending = False)
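# Quick visual of the importances computed above (illustrative addition).
feature.sort_values(by='importance').plot(kind='barh', x='name', y='importance', legend=False)
plt.xlabel('importance')
plt.show()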
def experiment(train, test, features, depth=5):
X_train = train[features]
y_train = train.Sun_level
clf = DecisionTreeClassifier(criterion = "gini", max_depth = depth, random_state = RANDOM_SEED)
dtree = clf.fit(X_train, y_train)
err_training = dtree.score(X_train,y_train)
X_test = test[features]
y_test = test.Sun_level
err_testing = dtree.score(X_test,y_test)
err_diff = err_training - err_testing
print('{}, {}, {}'.format(err_training, err_testing, err_diff))
return err_training, err_testing
def evaluate(features, repeat_times = 10, depth = 5):
print('features: {}'.format(features))
print('max_depth: {}\n'.format(depth))
total_err_training = 0
total_err_testing = 0
for i in range(repeat_times):
train, test = train_test_split(data, test_size = 0.2, random_state = RANDOM_SEED + i)
err_training, err_testing = experiment(train, test, features, depth)
total_err_training += err_training
total_err_testing += err_testing
print('==============')
    print('avg. training accuracy:\t{}'.format(total_err_training/repeat_times))
    print('avg. testing accuracy:\t{}'.format(total_err_testing/repeat_times))
    print('avg. difference:\t{}'.format((total_err_training - total_err_testing)/repeat_times))
print('============================')
feature_list = ['CLD_at_9am', 'CLD_at_3pm', 'RH_at_3pm', 'RH_at_9am', 'Temps_min', 'Temps_diff', 'Month']
evaluate(feature_list, 10, 6)
evaluate(feature_list, 10, 5)
evaluate(feature_list, 10, 4)
evaluate(feature_list, 10, 3)
X = data[feature_list]
y = data.Sun_level
clf = DecisionTreeClassifier(criterion = "gini", max_depth = 4, random_state = RANDOM_SEED)
dtree = clf.fit(X, y)
data_missing_sun_hours['Temps_diff'] = data_missing_sun_hours['Temps_max'] - data_missing_sun_hours['Temps_min']
X_pred = data_missing_sun_hours[feature_list]
data_missing_sun_hours['Sun_level_pred'] = dtree.predict(X_pred)
data_missing_sun_hours
import pydotplus
import sys
# sys.path.append("C:\\Program Files (x86)\\Graphviz2.38\bin")
dotfile = StringIO()
export_graphviz(dtree, out_file = dotfile, feature_names = X_train.columns)
graph = pydotplus.graph_from_dot_data(dotfile.getvalue())
Image(graph.create_png())
graph.write_pdf("./asset/dtree.pdf")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For further instructions on how to install Modin with conda or for specific platforms or engines, see our detailed installation guide.
Step2: Dataset
Step3: Faster Data Loading with Modin's read_csv
Step4: You can quickly check that the result from pandas and Modin is exactly the same.
Step5: Faster Append with Modin's concat
Step6: The result dataset is around 19GB in size.
Step7: Faster apply over a single column
|
<ASSISTANT_TASK:>
Python Code:
!pip install modin[all]
import modin.pandas as pd
import pandas
#############################################
### For the purpose of timing comparisons ###
#############################################
import time
import ray
ray.init()
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string))
# This may take a few minutes to download
import urllib.request
s3_path = "https://modin-test.s3.us-west-1.amazonaws.com/yellow_tripdata_2015-01.csv"
urllib.request.urlretrieve(s3_path, "taxi.csv")
start = time.time()
pandas_df = pandas.read_csv("taxi.csv", parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3)
end = time.time()
pandas_duration = end - start
print("Time to read with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_df = pd.read_csv("taxi.csv", parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3)
end = time.time()
modin_duration = end - start
print("Time to read with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("## Modin is {}x faster than pandas at `read_csv`!".format(round(pandas_duration / modin_duration, 2)))
pandas_df
modin_df
N_copies= 100
start = time.time()
big_pandas_df = pandas.concat([pandas_df for _ in range(N_copies)])
end = time.time()
pandas_duration = end - start
print("Time to concat with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
big_modin_df = pd.concat([modin_df for _ in range(N_copies)])
end = time.time()
modin_duration = end - start
print("Time to concat with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `concat`!".format(round(pandas_duration / modin_duration, 2)))
big_modin_df.info()
start = time.time()
rounded_trip_distance_pandas = big_pandas_df["trip_distance"].apply(round)
end = time.time()
pandas_duration = end - start
print("Time to apply with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
rounded_trip_distance_modin = big_modin_df["trip_distance"].apply(round)
end = time.time()
modin_duration = end - start
print("Time to apply with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `apply` on one column!".format(round(pandas_duration / modin_duration, 2)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Load image
Step11: We need to find the chessboard squares within the image (assuming images will vary, boards will vary in color, etc. between different posts in reddit). An assumption we can make that simplifies things greatly is to assume the chessboards will be aligned with the image (orthorectified), so we only need to search for horizontal and vertical lines.
Step12: Now that we've got our kernels ready for convolution, let's create our tf variables.
Step13: Let's look at the gradients, we apply opening to them also to clean up noise
Step14: Looks pretty good, now how to find lines? Well with a Hough transform we resample into a parameter space of lines based on two variables $r$ and $\theta$ for example. In our case we already know we're doing vertical and horizontal lines so instead of a 2D space we just need two 1D spaces. In fact, we can simply do this by summing along the axes for each gradient.
Step15: Let's plot the responses of the summed gradients
Step20: Awesome, they show up clear as day. Since we've normalized the hough gradients, let's threshold at a fixed fraction of their maximum response.
Step22: Cool, we've got a set of lines now. We need to figure out which lines are associated with the chessboard, then split up the image into individual squares for feeding into the tensorflow CNN.
Step23: Awesome! We have squares, let's save them as 32x32 grayscale images in a subfolder with the same name as the image
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
np.set_printoptions(suppress=True)
sess = tf.InteractiveSession()
# Imports for visualization
import PIL.Image
from cStringIO import StringIO
from IPython.display import clear_output, Image, display
import scipy.ndimage as nd
import scipy.signal
def display_array(a, fmt='jpeg', rng=[0,1]):
Display an array as a picture.
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
# File
# img_file = 'img1.png'
# img_file = 'img2.png'
# img_file = 'img3.gif'
# img_file = 'img4.jpg'
# img_file = 'img7.png'
# img_file = 'img9.png' # Doesn't work anymore due to non-alternating checkerboard lines
# Bad fit example
# img_file = 't1.png'
img_file = 'bkn5nn4.png'
# img_file = 'lichess_5.png'
# folder = "chessboards/input_chessboards"
# folder = "chessboards/test_chessboards"
folder = "."
img = PIL.Image.open("%s/%s" % (folder,img_file))
print "Loaded %s (%dpx x %dpx)" % \
(img_file, img.size[0], img.size[1])
# Resize if image larger than 2k pixels on a side
if img.size[0] > 2000 or img.size[1] > 2000:
print "Image too big (%d x %d)" % (img.size[0], img.size[1])
new_size = 500.0 # px
if img.size[0] > img.size[1]:
# resize by width to new limit
ratio = new_size / img.size[0]
else:
# resize by height
ratio = new_size / img.size[1]
print "Reducing by factor of %.2g" % (1./ratio)
img = img.resize(img.size * ratio, PIL.Image.ADAPTIVE)
print "New size: (%d x %d)" % (img.size[0], img.size[1])
# See original image
display_array(np.asarray(img), rng=[0,255])
# Convert to grayscale and array
a = np.asarray(img.convert("L"), dtype=np.float32)
# Display array
display_array(a, rng=[0,255])
def make_kernel(a):
Transform a 2D array into a convolution kernel
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
A simplified 2D convolution operation
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def gradientx(x):
Compute the x gradient of an array
gradient_x = make_kernel([[-1.,0., 1.],
[-1.,0., 1.],
[-1.,0., 1.]])
return simple_conv(x, gradient_x)
def gradienty(x):
    Compute the y gradient of an array
gradient_y = make_kernel([[-1., -1, -1],[0.,0,0], [1., 1, 1]])
return simple_conv(x, gradient_y)
def corners(x):
Find chess square corners in an array
chess_corner = make_kernel([[-1., 0, 1],[0., 0., 0.], [1.,0, -1]])
return simple_conv(x, chess_corner)
# Following are meant for binary images
def dilate(x, size=3):
Dilate
kernel = make_kernel(np.ones([size,size], dtype=np.float32))
return tf.clip_by_value(simple_conv(x, kernel),
np.float32(1),
np.float32(2))-np.float32(1)
def erode(x, size=3):
Erode
kernel = make_kernel(np.ones([size,size]))
return tf.clip_by_value(simple_conv(x, kernel),
np.float32(size*size-1),
np.float32(size*size))-np.float32(size*size-1)
def opening(x, size=3):
return dilate(erode(x,size),size)
def closing(x, size=3):
return erode(dilate(x,size),size)
def skeleton(x, size=3):
Skeletonize
return tf.clip_by_value(erode(x) - opening(erode(x)),
0.,
1.)
# Get our grayscale image matrix
A = tf.Variable(a)
# Get X & Y gradients and subtract opposite gradient
# Strongest response where gradient is unidirectional
# clamp into range 0-1
# Dx = tf.clip_by_value(np.abs(gradientx(A)) - np.abs(gradienty(A)),
# 0., 1.)
# Dy = tf.clip_by_value(np.abs(gradienty(A)) - np.abs(gradientx(A)),
# 0., 1.)
Dx = gradientx(A)
Dy = gradienty(A)
# Dxy = np.abs(gradientx(A) * gradienty(A))
# Dc = np.abs(corners(A))
# Initialize state to initial conditions
tf.initialize_all_variables().run()
display_array(Dx.eval(), rng=[-255,255])
display_array(Dy.eval(), rng=[-255,255])
Dx_pos = tf.clip_by_value(Dx, 0., 255., name="dx_positive")
Dx_neg = tf.clip_by_value(Dx, -255., 0., name='dx_negative')
Dy_pos = tf.clip_by_value(Dy, 0., 255., name="dy_positive")
Dy_neg = tf.clip_by_value(Dy, -255., 0., name='dy_negative')
hough_Dx = tf.reduce_sum(Dx_pos, 0) * tf.reduce_sum(-Dx_neg, 0) / (a.shape[0]*a.shape[0])
hough_Dy = tf.reduce_sum(Dy_pos, 1) * tf.reduce_sum(-Dy_neg, 1) / (a.shape[1]*a.shape[1])
# Normalized to 0-255*255=65025 range
%matplotlib inline
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1,2,sharey=True, figsize=(15,5))
# Arbitrarily choose three-fifths of the max value as threshold, since they're such strong responses
hough_Dx_thresh = tf.reduce_max(hough_Dx) * 3 / 5
hough_Dy_thresh = tf.reduce_max(hough_Dy) * 3 / 5
ax1.plot(hough_Dx.eval());
ax1.axhline(hough_Dx_thresh.eval(), lw=2,linestyle=':',color='r')
ax1.set_title('Hough Gradient X')
ax1.set_xlabel('Pixel')
ax1.set_xlim(0,a.shape[1])
ax2.plot(hough_Dy.eval())
ax2.axhline(hough_Dy_thresh.eval(), lw=2,linestyle=':',color='r')
ax2.set_title('Hough Gradient Y')
ax2.set_xlim(0,a.shape[0])
ax2.set_xlabel('Pixel');
def checkMatch(lineset):
    Checks whether there exist 7 consistently spaced lines in the given set of lines
linediff = np.diff(lineset)
x = 0
cnt = 0
for line in linediff:
# Within 5 px of the other (allowing for minor image errors)
if np.abs(line - x) < 5:
cnt += 1
else:
cnt = 0
x = line
return cnt == 5
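# Tiny sanity check (not part of the original pipeline): 7 evenly spaced lines
# should match, an irregular set should not.
print checkMatch(np.arange(0, 7*30, 30))                    # expected: True
print checkMatch(np.array([0, 30, 70, 90, 140, 150, 200]))  # expected: False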
def pruneLines(lineset):
Prunes a set of lines to 7 in consistent increasing order (chessboard)
linediff = np.diff(lineset)
x = 0
cnt = 0
start_pos = 0
for i, line in enumerate(linediff):
# Within 5 px of the other (allowing for minor image errors)
if np.abs(line - x) < 5:
cnt += 1
if cnt == 5:
end_pos = i+2
return lineset[start_pos:end_pos]
else:
cnt = 0
x = line
print i, x
start_pos = i
return lineset
def skeletonize_1d(arr):
return skeletonized 1d array (thin to single value, favor to the right)
_arr = arr.copy() # create a copy of array to modify without destroying original
# Go forwards
for i in range(_arr.size-1):
# Will right-shift if they are the same
if arr[i] <= _arr[i+1]:
_arr[i] = 0
# Go reverse
for i in np.arange(_arr.size-1, 0,-1):
if _arr[i-1] > _arr[i]:
_arr[i] = 0
return _arr
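# Tiny example (not part of the original pipeline): skeletonize_1d thins a run of
# equal values down to a single nonzero sample at its right edge.
demo = np.array([0., 1., 3., 3., 3., 1., 0.])
print skeletonize_1d(demo)    # expected: only index 4 stays nonzero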
def getChessLines(hdx, hdy, hdx_thresh, hdy_thresh):
Returns pixel indices for the 7 internal chess lines in x and y axes
# Blur
gausswin = scipy.signal.gaussian(21,4)
gausswin /= np.sum(gausswin)
# Blur where there is a strong horizontal or vertical line (binarize)
blur_x = np.convolve(hdx > hdx_thresh, gausswin, mode='same')
blur_y = np.convolve(hdy > hdy_thresh, gausswin, mode='same')
skel_x = skeletonize_1d(blur_x)
skel_y = skeletonize_1d(blur_y)
# Find points on skeletonized arrays (where returns 1-length tuple)
lines_x = np.where(skel_x)[0] # vertical lines
lines_y = np.where(skel_y)[0] # horizontal lines
# Prune inconsistent lines
lines_x = pruneLines(lines_x)
lines_y = pruneLines(lines_y)
is_match = len(lines_x) == 7 and len(lines_y) == 7 and checkMatch(lines_x) and checkMatch(lines_y)
return lines_x, lines_y, is_match
# Get chess lines
lines_x, lines_y, is_match = getChessLines(hough_Dx.eval().flatten(), \
hough_Dy.eval().flatten(), \
hough_Dx_thresh.eval(), \
hough_Dy_thresh.eval())
lines_x, lines_y, is_match = getChessLines(hough_Dx.eval().flatten(), \
hough_Dy.eval().flatten(), \
hough_Dx_thresh.eval()*.9, \
hough_Dy_thresh.eval()*.9)
print "X",lines_x, np.diff(lines_x)
print "Y",lines_y, np.diff(lines_y)
if is_match:
print "Chessboard found"
else:
print "Couldn't find Chessboard"
# Plot blurred 1d hough arrays and skeletonized versions
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(20,5))
ax1.plot(hough_Dx.eval());
ax1.axhline(hough_Dx_thresh.eval(), lw=2,linestyle=':',color='r')
ax1.set_title('Hough Gradient X')
ax1.set_xlabel('Pixel')
ax1.set_xlim(0,a.shape[1])
ax2.plot(hough_Dy.eval())
ax2.axhline(hough_Dy_thresh.eval(), lw=2,linestyle=':',color='r')
ax2.set_title('Hough Gradient Y')
ax2.set_xlim(0,a.shape[0])
ax2.set_xlabel('Pixel');
# Plot lines for where peaks where found
if len(lines_x < 20):
for hx in lines_x:
ax1.axvline(hx,color='r')
if len(lines_y < 20):
for hy in lines_y:
ax2.axvline(hy,color='r')
plt.imshow(img)
for hx in lines_x:
plt.axvline(hx, color='b', lw=2)
for hy in lines_y:
plt.axhline(hy, color='r', lw=2)
print "X (vertical)",lines_x, np.diff(lines_x)
print "Y (horizontal)",lines_y, np.diff(lines_y)
def getChessTiles(a, lines_x, lines_y):
Split up input grayscale array into 64 tiles stacked in a 3D matrix using the chess linesets
# Find average square size, round to a whole pixel for determining edge pieces sizes
stepx = np.int32(np.round(np.mean(np.diff(lines_x))))
stepy = np.int32(np.round(np.mean(np.diff(lines_y))))
# Pad edges as needed to fill out chessboard (for images that are partially over-cropped)
# print stepx, stepy
# print "x",lines_x[0] - stepx, "->", lines_x[-1] + stepx, a.shape[1]
# print "y", lines_y[0] - stepy, "->", lines_y[-1] + stepy, a.shape[0]
padr_x = 0
padl_x = 0
padr_y = 0
padl_y = 0
if lines_x[0] - stepx < 0:
padl_x = np.abs(lines_x[0] - stepx)
if lines_x[-1] + stepx > a.shape[1]-1:
padr_x = np.abs(lines_x[-1] + stepx - a.shape[1])
if lines_y[0] - stepy < 0:
padl_y = np.abs(lines_y[0] - stepy)
if lines_y[-1] + stepx > a.shape[0]-1:
padr_y = np.abs(lines_y[-1] + stepy - a.shape[0])
# New padded array
# print "Padded image to", ((padl_y,padr_y),(padl_x,padr_x))
a2 = np.pad(a, ((padl_y,padr_y),(padl_x,padr_x)), mode='edge')
setsx = np.hstack([lines_x[0]-stepx, lines_x, lines_x[-1]+stepx]) + padl_x
setsy = np.hstack([lines_y[0]-stepy, lines_y, lines_y[-1]+stepy]) + padl_y
a2 = a2[setsy[0]:setsy[-1], setsx[0]:setsx[-1]]
setsx -= setsx[0]
setsy -= setsy[0]
# display_array(a2, rng=[0,255])
# print "X:",setsx
# print "Y:",setsy
# Matrix to hold images of individual squares (in grayscale)
# print "Square size: [%g, %g]" % (stepy, stepx)
squares = np.zeros([np.round(stepy), np.round(stepx), 64],dtype=np.uint8)
# For each row
for i in range(0,8):
# For each column
for j in range(0,8):
# Vertical lines
x1 = setsx[i]
x2 = setsx[i+1]
padr_x = 0
padl_x = 0
padr_y = 0
padl_y = 0
if (x2-x1) > stepx:
if i == 7:
x1 = x2 - stepx
else:
x2 = x1 + stepx
elif (x2-x1) < stepx:
if i == 7:
# right side, pad right
padr_x = stepx-(x2-x1)
else:
# left side, pad left
padl_x = stepx-(x2-x1)
# Horizontal lines
y1 = setsy[j]
y2 = setsy[j+1]
if (y2-y1) > stepy:
if j == 7:
y1 = y2 - stepy
else:
y2 = y1 + stepy
elif (y2-y1) < stepy:
if j == 7:
# right side, pad right
padr_y = stepy-(y2-y1)
else:
# left side, pad left
padl_y = stepy-(y2-y1)
# slicing a, rows sliced with horizontal lines, cols by vertical lines so reversed
# Also, change order so its A1,B1...H8 for a white-aligned board
# Apply padding as defined previously to fit minor pixel offsets
squares[:,:,(7-j)*8+i] = np.pad(a2[y1:y2, x1:x2],((padl_y,padr_y),(padl_x,padr_x)), mode='edge')
return squares
if is_match:
# Possibly check np.std(np.diff(lines_x)) for variance etc. as well/instead
print "7 horizontal and vertical lines found, slicing up squares"
squares = getChessTiles(a, lines_x, lines_y)
print "Tiles generated: (%dx%d)*%d" % (squares.shape[0], squares.shape[1], squares.shape[2])
else:
print "Number of lines not equal to 7"
letters = 'ABCDEFGH'
if is_match:
print "Order is row-wise from top left of image going right and down, so a8,b8....a7,b7,c7...h1"
print "Showing 5 random squares..."
for i in np.random.choice(np.arange(64),5,replace=False):
print "#%d: %s%d" % (i, letters[i%8], i/8+1)
display_array(squares[:,:,i],rng=[0,255])
else:
print "Didn't have lines to slice image up."
import os
img_save_dir = "chessboards/output_tiles/squares_%s" % img_file[:-4]
if not is_match:
print "No squares to save"
else:
if not os.path.exists(img_save_dir):
os.makedirs(img_save_dir)
print "Created dir %s" % img_save_dir
for i in range(64):
sqr_filename = "%s/%s_%s%d.png" % (img_save_dir, img_file[:-4], letters[i%8], i/8+1)
if i % 8 == 0:
print "#%d: saving %s..." % (i, sqr_filename)
# Make resized 32x32 image from matrix and save
PIL.Image.fromarray(squares[:,:,i]) \
.resize([32,32], PIL.Image.ADAPTIVE) \
.save(sqr_filename)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting credentials for LogIn
Step2: Creating ConfigClient for server connection configuration
Step3: LogIn with user credentials
|
<ASSISTANT_TASK:>
Python Code:
# Initialize PYTHONPATH for pyopencga
import sys
import os
from pprint import pprint
cwd = os.getcwd()
print("current_dir: ...."+cwd[-10:])
base_modules_dir = os.path.dirname(cwd)
print("base_modules_dir: ...."+base_modules_dir[-10:])
sys.path.append(base_modules_dir)
from pyopencga.opencga_config import ConfigClient
from pyopencga.opencga_client import OpenCGAClient
import json
## Reading user config/credentials to connect to server
user_config_json = "./__user_config.json"
with open(user_config_json,"r") as f:
user_credentials = json.loads(f.read())
print('User: {}***'.format(user_credentials["user"][:3]))
user = user_credentials["user"]
passwd = user_credentials["pwd"]
## Creating ConfigClient
host = 'http://bioinfodev.hpc.cam.ac.uk/opencga-test'
cc = ConfigClient()
config_dict = cc.get_basic_config_dict(host)
print("Config information:\n",config_dict)
oc = OpenCGAClient(configuration=config_dict,
user=user,
pwd=passwd)
## Getting the session id / token
token = oc.session_id
print("Session token:\n{}...".format(token[:10]))
oc = OpenCGAClient(configuration=config_dict,
session_id=token)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Exercise 1
Step3: b. Standard Deviation
Step4: c. Standard Error
Step5: d. Confidence Intervals
Step6: Exercise 2
Step7: Exercise 3
Step8: b. Plotting Sample Means - Exponential
Step9: c.i Plotting Sample Means - Autocorrelated
Step10: c.ii Plotting Sample Standard Deviations - Autocorrelated
Step11: Exercise 4
Step12: b. T-distribution Correction
Step13: Exercise 5
Step14: b. T-distribution Correction
Step15: c. Newey-West Matrix
Step16: d. Newey-West Correction
|
<ASSISTANT_TASK:>
Python Code:
def generate_autocorrelated_data(theta, mu, sigma, N):
X = np.zeros((N, 1))
for t in range(1, N):
X[t] = theta * X[t-1] + np.random.normal(mu, sigma)
return X
def newey_west_SE(data):
ind = range(0, len(data))
ind = sm.add_constant(ind)
model = regression.linear_model.OLS(data, ind).fit(cov_type='HAC',cov_kwds={'maxlags':1})
return model.bse[0]
def newey_west_matrix(data):
ind = range(0, len(data))
ind = sm.add_constant(ind)
model = regression.linear_model.OLS(data, ind).fit()
return sw.cov_hac(model)
# Useful Libraries
import numpy as np
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
from statsmodels.stats.stattools import jarque_bera
import statsmodels.stats.sandwich_covariance as sw
from statsmodels import regression
import statsmodels.api as sm
np.random.seed(11)
POPULATION_MU = 105
POPULATION_SIGMA = 20
sample_size = 50
sample = np.random.normal(POPULATION_MU, POPULATION_SIGMA, sample_size)
#Your code goes here
Mean = np.mean(sample)
print "Mean:", Mean
#Your code goes here
SD = np.std(sample)
print "Standard Deviation:", SD
#Your code goes here
SE = SD / np.sqrt(sample_size)
print "Standard Error:", SE
#Your code goes here
print "95% Confidence Interval:", (-1.96 * SE + Mean, 1.96 * SE + Mean)
print "90% Confidence Interval:", (-1.64 * SE + Mean, 1.64 * SE + Mean)
print "80% Confidence Interval:", (-1.28 * SE + Mean, 1.28 * SE + Mean)
n = 1000
correct = 0
samples = [np.random.normal(loc=POPULATION_MU, scale=POPULATION_SIGMA, size=sample_size) for i in range(n)]
#Your code goes here
for i in range(n):
sample_mean = np.mean(samples[i])
sample_SE = stats.sem(samples[i])
if ((POPULATION_MU >= -1.96 * sample_SE + sample_mean) and (POPULATION_MU <= 1.96 * sample_SE + sample_mean)):
correct += 1;
print "Expected Result:", .95 * 1000
print "Actual Result", correct
n = 500
normal_samples = [np.mean(np.random.normal(loc=POPULATION_MU, scale=POPULATION_SIGMA, size=sample_size)) for i in range(n)]
#Your code goes here
plt.hist(normal_samples, 10)
_, pvalue, _, _ = jarque_bera(normal_samples)
print pvalue
if pvalue > 0.05:
print 'The distribution of sample means is likely normal.'
else:
print 'The distribution of sample means is likely not normal.'
n = 500
expo_samples = [np.mean(np.random.exponential(POPULATION_MU, sample_size)) for i in range(n)]
#Your code goes here
plt.hist(expo_samples, 10)
_, pvalue, _, _ = jarque_bera(expo_samples)
print pvalue
if pvalue > 0.05:
print 'The distribution of sample means is likely normal, despite the underlying distribution being non-normal (exponential).'
else:
print 'The distribution of sample means is likely not normal.'
n = 500
autocorrelated_samples = [(generate_autocorrelated_data(0.5, 0, 1, sample_size) + POPULATION_MU) for i in range(n)]
autocorrelated_means = [np.mean(autocorrelated_samples[i]) for i in range(n)]
#Your code goes here
plt.hist(autocorrelated_means, 10)
_, pvalue, _, _ = jarque_bera(autocorrelated_means)
print pvalue
if pvalue > 0.05:
print 'The distribution of sample means is likely normal, despite an autocorrelated underlying distribution.'
else:
print 'The distribution of sample means is likely not normal.'
n = 500
autocorrelated_samples = [(generate_autocorrelated_data(0.5, 0, 1, sample_size) + POPULATION_MU) for i in range(n)]
autocorrelated_stds = [np.std(autocorrelated_samples[i]) for i in range(n)]
#Your code goes here
plt.hist(autocorrelated_stds, 10)
_, pvalue, _, _ = jarque_bera(autocorrelated_stds)
print pvalue
if pvalue > 0.05:
print 'The distribution of sample standard deviations is likely normal.'
else:
print 'The distribution of sample standard deviations is likely not normal, due to the autocorrelated underlying distribution and the different assumptions for the CLT for means and for standard deviations.'
n = 100
small_size = 3
correct = 0
samples = [np.random.normal(loc=POPULATION_MU, scale=POPULATION_SIGMA, size=small_size) for i in range(n)]
#Your code goes here
for i in range(n):
sample_mean = np.mean(samples[i])
sample_SE = stats.sem(samples[i])
if ((POPULATION_MU >= -1.96 * sample_SE + sample_mean) and (POPULATION_MU <= 1.96 * sample_SE + sample_mean)):
correct += 1
print "Expected Result:", .95 * n
print "Actual Result:", correct
print "Due to the small sample size, the actual number of confidence intervals containing the population mean is much lower than what we would expect given a correctly calibrated interval."
n = 100
small_size = 5
correct = 0
samples = [np.random.normal(loc=POPULATION_MU, scale=POPULATION_SIGMA, size=small_size) for i in range(n)]
#Your code goes here
for i in range(n):
sample_mean = np.mean(samples[i])
sample_SE = stats.sem(samples[i])
h = sample_SE * stats.t.ppf((1+0.95) / 2, len(samples[i])-1)
if ((POPULATION_MU >= sample_mean - h) and (POPULATION_MU <= sample_mean + h)):
correct += 1
print "Expected Result:", .95 * n
print "Actual Result:", correct
print "After using the t-distribution to correct for the smaller sample size, the actual number of confidence intervals containing the population mean is about what we expected."
n = 100
correct = 0
theta = 0.5
noise_mu = 0
noise_sigma = 1
#Your code goes here
for i in range(n):
X = generate_autocorrelated_data(theta, noise_mu, noise_sigma, sample_size) + POPULATION_MU
sample_mean = np.mean(X)
sample_SE = np.std(X) / np.sqrt(sample_size)
if ((POPULATION_MU >= -1.96 * sample_SE + sample_mean) and (POPULATION_MU <= 1.96 * sample_SE + sample_mean)):
correct += 1
print "Expected Result:", .95 * n
print "Actual Result:", correct
print "Because the underlying data was autocorrelated, the actual number of confidence intervals containing the population mean is much lower than what we expected."
n = 100
correct = 0
#Your code goes here
for i in range(n):
X = generate_autocorrelated_data(theta, noise_mu, noise_sigma, sample_size) + POPULATION_MU
sample_mean = np.mean(X)
sample_SE = np.std(X) / np.sqrt(sample_size)
h = sample_SE * stats.t.ppf((1+0.95) / 2, len(X)-1)
if ((POPULATION_MU >= sample_mean - h) and (POPULATION_MU <= sample_mean + h)):
correct += 1
print "Expected Result:", .95 * n
print "Actual Result:", correct
print "We did not see a significant improvement in the actual number of confidence intervals containing the population mean. This is because a t-distribution only corrects for small sample sizes, not autocorrelation."
X = generate_autocorrelated_data(theta, noise_mu, noise_sigma, sample_size) + POPULATION_MU
#Your code goes here
print newey_west_matrix(X)
n = 100
correct = 0
#Your code goes here
for i in range(n):
X = generate_autocorrelated_data(theta, noise_mu, noise_sigma, sample_size) + POPULATION_MU
sample_mean = np.mean(X)
sample_SE = newey_west_SE(X)
if ((POPULATION_MU >= -1.96 * sample_SE + sample_mean) and (POPULATION_MU <= 1.96 * sample_SE + sample_mean)):
correct += 1
print "New Standard Error:", sample_SE
print "Expected Result:", .95 * n
print "Actual Result:", correct
print "After accounting for autocorrelation by finding a Newey-West standard error, the actual number of confidence intervals containing the population mean is about what we expected."
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, we're going to simulate picking numbers out of a hat, doing it 'runs' times. The bucket_size is the number of numbers in the hat.
Step2: Now, we'll pick a hundred times for all hat sizes from 1 to 9999
Step3: Now, we'll import the plotting stuff - I don't know why this warning appears for me.
Step4: Now, we'll grab and plot tuples of (number of items in hats, chance of pulling a number starting with 1) for all hat sizes
Step5: And we'll do the same for numbers starting with 2
Step6: And we'll plot them on top of each other. You'll see the stochastic approach shows expected results according to
|
<ASSISTANT_TASK:>
Python Code:
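# Note: first_digit() is not defined in this excerpt; a minimal implementation
# consistent with how it is used below (leading digit of a positive integer) is:
def first_digit(n):
    return int(str(n)[0])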
first_digit(100)
first_digit(399)
import random
def do_drawing(bucket_size, runs):
digits = [first_digit(random.randint(1,bucket_size)) for x in range(runs)]
return digits
import collections
counters=[(top_end, collections.Counter(do_drawing(top_end, 100))) for top_end in range(1,10000)]
import matplotlib.pyplot as plt
%matplotlib inline
one_odds = [counter[1][1] for counter in counters]
plt.semilogx(range(len(counters)), one_odds)
two_odds = [counter[1][2] for counter in counters]
plt.semilogx(range(len(counters)), two_odds)
plt.semilogx(range(len(counters)), one_odds, 'b', range(len(counters)), two_odds, 'r')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we need to generate some data.
Step2: Now, we want to fit a model to the behavior above. It is challenging to estimate both the learning rate and temperature parameters, so we will just fix temperature and estimate learning rate. We first need to implement the likelihood function for the Q-learning model.
Step3: Fit the likelihood function to the data by maximum likelihood for a range of parameter values, and plot the likelihood surface.
Step4: Let's generate a bunch of datasets and look at how well the model fits. For each dataset, compute estimates using multiple starting points (randomly chosen within the allowable range of parameter values).
|
<ASSISTANT_TASK:>
Python Code:
# set up imports
import numpy
import statsmodels.nonparametric.smoothers_lowess
import matplotlib.pyplot as plt
from scipy.optimize import minimize
%matplotlib inline
# softmax response function
def softmax(q,temp):
p=numpy.exp(q[0]/temp)/(numpy.exp(q[0]/temp)+numpy.exp(q[1]/temp))
if p>numpy.random.rand():
return 0
else:
return 1
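# Quick illustration (not part of the original analysis): with a large Q-value
# difference and a low temperature, the better option is chosen almost always.
demo_choices = [softmax(numpy.array([1.0, 0.0]), 0.1) for _ in range(1000)]
print('proportion of response 0 for Q=[1,0], temp=0.1: %0.2f'
      % numpy.mean(numpy.array(demo_choices) == 0))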
# function to generate probabilistic outcomes
# prob is the probability that the reward is delivered for the correct option
def outcome(resp,correct,prob=0.75):
if numpy.random.rand()<prob:
rewarded_outcome=correct
else:
rewarded_outcome=numpy.abs(1-correct)
if resp==rewarded_outcome:
return 1
else:
return 0
# function to generate correct responses
def get_correct_resp(ntrials=1000,nswitch=50):
correct_resp=numpy.zeros(ntrials)
# switch every 50 trials
for i in range(0,ntrials,nswitch*2):
correct_resp[i:(i+nswitch)]=1
return correct_resp
# forward model to generate data using q-learning
def generate_data(learning_rate=0.1,temp=1,prob=0.8):
correct_resp=get_correct_resp()
ntrials=len(correct_resp)
q=numpy.zeros((ntrials,2))
resp=numpy.zeros(ntrials)
reward=numpy.zeros(ntrials)
for i in range(ntrials):
if i==0:
q[i,:]=numpy.array([0,0])
else:
q[i,:]=q[i-1,:]
resp[i]=softmax(q[i,:],temp)
reward[i]=outcome(resp[i],correct_resp[i],prob)
q[i,int(resp[i])]=q[i,int(resp[i])] + learning_rate*(reward[i]-q[i,int(resp[i])])
return resp,reward,q,learning_rate,temp
# generate some data and plot it
resp,reward,q,learning_rate,temp=generate_data(prob=0.9)
correct= resp==get_correct_resp()
plt.subplot(2,1,1)
smooth=statsmodels.nonparametric.smoothers_lowess.lowess(correct,numpy.arange(resp.shape[0]),frac=0.02)
plt.plot(smooth[:,1])
plt.axis([0,1000,0,1])
plt.xlabel('trials')
plt.ylabel('Proportion optimal responses')
plt.subplot(2,1,2)
plt.plot(q[:,0],'blue')
plt.plot(q[:,1],'red')
plt.ylabel('Q values')
# we will be using a minimization routine, so we return the negative log likelihood
def q_negloglike(params):
lr=params[0]
if len(params)>1:
temp_t=params[1]
else:
temp_t=1
assert resp.shape[0]==reward.shape[0]
    q=[0.0, 0.0]  # Q-values for the two response options
ll_sum=0
for i in range(resp.shape[0]):
# log of choice probability (from Daw): β · Qt(ct) − log(exp(β · Qt(L)) + exp(β · Qt(R)))
ll_sum+=temp_t*q[int(resp[i])] - numpy.log(numpy.exp(q[0]*temp_t)+numpy.exp(q[1]*temp_t))
q[int(resp[i])]=q[int(resp[i])] + lr*(reward[i]-q[int(resp[i])])
return -1*ll_sum
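# Sanity check (illustrative): the negative log-likelihood of the generated data
# should be lower near the true learning rate than at a clearly wrong value.
print('negloglike at true lr (%0.2f): %0.1f' % (learning_rate, q_negloglike([learning_rate, temp])))
print('negloglike at lr=0.95: %0.1f' % q_negloglike([0.95, temp]))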
lrvals=numpy.arange(0,1,0.005)
tempvals=numpy.arange(0,2,0.01)
ll=numpy.zeros((len(lrvals),len(tempvals)))
for lr in range(len(lrvals)):
for t in range(len(tempvals)):
ll[lr,t]=q_negloglike([lrvals[lr],tempvals[t]])
# Transpose ll so learning rate runs along the x-axis and temperature along the
# y-axis, matching the extent and axis labels; origin='lower' keeps temperature increasing upward.
plt.imshow(-1*ll.T, aspect='auto', cmap='gray', origin='lower',
           extent=[numpy.min(lrvals),numpy.max(lrvals),numpy.min(tempvals),numpy.max(tempvals)])
plt.xlabel('learning rate')
plt.ylabel('softmax temperature')
plt.plot(learning_rate,temp,marker='o',markersize=18,color='yellow')
maxloc=numpy.where(ll==numpy.min(ll))
plt.plot(lrvals[maxloc[0][0]],tempvals[maxloc[1][0]],marker='+',markersize=18,color='red')
#plt.plot(loglike[80,:10])
print('parameter values (estimated/true):')
print('learning rate: %0.3f/%0.3f'%(lrvals[maxloc[0][0]],learning_rate))
print('temperature: %0.3f/%0.3f'%(tempvals[maxloc[1][0]],temp))
# function to perform maximum likelihood estimation
def ml_est(start=None):
if not start:
start=[1e-3]
result=minimize(q_negloglike,start,bounds=[[0,1]])
if result['success']:
return result['x']
else:
return [numpy.nan]
nruns=100
est=numpy.zeros(nruns)
lrate=numpy.zeros(nruns)
for r in range(nruns):
lrate[r]=numpy.random.rand()*0.2 + 0.05
resp,reward,_,_,_=generate_data(learning_rate=lrate[r],temp=1.0)
mle=ml_est()
if numpy.isnan(mle[0]):
nanctr+=1
    est[r]=mle[0]
plt.scatter(lrate,est)
plt.xlabel('true learning rate')
plt.ylabel('estimated learning rate')
plt.axis([0,0.4,0,0.4])
plt.plot([0,0.4],[0,0.4])
numpy.corrcoef(lrate,est)[0,1]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Count words in Hamlet
Step2: Use collections to do the same thing
|
<ASSISTANT_TASK:>
Python Code:
# Create a string
my_str = 'This is my string'
# Split it...
my_str_split = my_str.split()
print(my_str_split)
# ... and restore (join) it again
my_str_joined = ' '.join(my_str_split) # ' ' means join with space
print(my_str_joined)
# Find first occurence of word containing 'str'
print(my_str.find('str'))
# Replace all occurences of 'str' with 'th'
print(my_str.replace('str','th'))
# Print the length of the string. %.2f means that the values should have two decimals (a float).
print(my_str + ' contains %.2f characters' % len(my_str))
filename = 'img/hamlet.txt'
# Open file for reading
f = open(filename,'r')
# Create an empty dictionary
worddict = {}
# Loop over each line in file
for line in f:
# Loop over each word in line
words = line.split()
for i, word in enumerate(words):
# Make all words lowercase (so that The and the are counted as the same, for instance)
w = word.lower()
if w in worddict:
worddict[w] +=1
else:
worddict[w] = 1
# Close the file
f.close()
# Create a list from the dictionary
wordlist = []
for key, value in worddict.iteritems():
wordlist.append([value, key])
# Sort worddict such that the most common words are on the top
wordlist.sort(reverse=True)
# Print the 10 most common words
print(wordlist[:10])
# Write output to file
filename_out = 'my_hamlet_frequencies.txt'
f_out = open(filename_out,'w')
for w in wordlist:
f_out.write('\t'.join((w[1],str(w[0]),'\n')))
f_out.close()
import re
from collections import Counter
# Read all the words into a list
words = re.findall(r'\w+', open('img/hamlet.txt').read().lower())
# Present the 10 most common words
Counter(words).most_common(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initializing a ZPIC simulation requires setting the simulation box and timestep
Step2: Next we need to describe the particle species in the simulation. In this example (demonstration of the two stream instability) we are using 2 species
Step3: Writing diagnostic output to disk
Step4: We can now initialize the simulation, passing in the function we just created
Step5: To run the simulation use the run method, giving the final time as the sole parameter
Step6: Accessing simulation results
|
<ASSISTANT_TASK:>
Python Code:
import em1d
import numpy as np
nx = 120
box = 4 * np.pi
dt = 0.1
tmax = 50.0
ndump = 10
ppc = 500
ufl = [0.4, 0.0, 0.0]
uth = [0.001,0.001,0.001]
right = em1d.Species( "right", -1.0, ppc, ufl = ufl, uth = uth )
ufl[0] = -ufl[0]
left = em1d.Species( "left", -1.0, ppc, ufl = ufl, uth = uth )
def rep( sim ):
# sim.n has the current simulation iteration
if (sim.n % ndump == 0):
right.report("particles")
left.report("particles")
sim.emf.report("E",0)
sim = em1d.Simulation( nx, box, dt, species = [right,left], report = rep )
sim.run(tmax)
import zdf
import matplotlib.pyplot as plt
(particles,info) = zdf.read("PARTICLES/particles-right-000500.zdf")
x = particles['x1']
y = particles['u1']
plt.plot(x, y, '.', ms=1,alpha=0.1)
title = "u_1-x_1\,phasespace"
timeLabel = "t = {:g}\,[{:s}]".format(info.iteration.t, info.iteration.tunits)
plt.title(r'$\sf{' + title + r'}$' + '\n' + r'$\sf{' + timeLabel + r'}$')
xlabel = "x_1\,[{:s}]".format( info.particles.units['x1'] )
ylabel = "u_1\,[{:s}]".format( info.particles.units['u1'] )
plt.xlabel(r'$\sf{' + xlabel + r'}$')
plt.ylabel(r'$\sf{' + ylabel + r'}$')
(particles,info) = zdf.read("PARTICLES/particles-left-000500.zdf")
x = particles['x1']
y = particles['u1']
plt.plot(x, y, '.', ms=1,alpha=0.1)
plt.grid(True)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Correlation matrix
Step3: Correlation
Step4: Positive correlation
Step5: Positive correlation
Step6: Positive correlation
Step7: Correlation
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style("white")
import util
df = util.load_burritos()
N = df.shape[0]
m_corr = ['Google','Yelp','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(m_corr)
dfcorr = df[m_corr].corr()
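# Quick look (illustrative) at which metrics correlate most strongly with the
# overall rating before plotting the full matrix below.
print(dfcorr['overall'].sort_values(ascending=False))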
from matplotlib import cm
clim1 = (-1,1)
plt.figure(figsize=(12,10))
cax = plt.pcolor(range(M+1), range(M+1), dfcorr, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(m_corr,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(m_corr,size=25)
plt.xticks(rotation='vertical')
plt.tight_layout()
plt.xlim((0,M))
plt.ylim((0,M))
figname = 'metriccorrmat'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Cost',y='Volume',ax=ax,**{'s':40,'color':'k'})
plt.xlabel('Cost ($)',size=20)
plt.ylabel('Volume (L)',size=20)
plt.xticks(np.arange(3,11),size=15)
plt.yticks(np.arange(.4,1.4,.1),size=15)
plt.tight_layout()
print(df.corr()['Cost']['Volume'])
from tools.misc import pearsonp
print(pearsonp(df.corr()['Cost']['Volume'],len(df[['Cost','Volume']].dropna())))
figname = 'corr-volume-cost'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
plt.figure(figsize=(12,8))
ax = plt.gca()
df.plot(kind='scatter',x='Cost',y='overall',ax=ax,**{'s':40,'color':'k'})
plt.xlabel('Cost ($)',size=30)
plt.ylabel('overall rating',size=30)
plt.xticks(np.arange(3,11),size=20)
plt.yticks(np.arange(1,5.5,.5),size=20)
plt.ylim((.9,5.1))
plt.tight_layout()
print(df.corr()['Cost']['overall'])
print(pearsonp(df.corr()['Cost']['overall'],len(df[['Cost','overall']].dropna())))
figname = 'corr-overall-cost'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
# Visualize some correlations
from tools.plt import scatt_corr
scatt_corr(df['overall'].values,df['Meat'].values,
xlabel = 'overall rating', ylabel='meat rating', xlim = (-.5,5.5),ylim = (-.5,5.5),xticks=range(6),yticks=range(6))
#showline = True)
figname = 'corr-meat-overall'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Meat',y='Fillings',ax=ax,**{'s':40,'color':'k','alpha':.1})
plt.xlabel('Meat rating',size=20)
plt.ylabel('Non-meat rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
dfMF = df[['Meat','Fillings']].dropna()
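# Spearman's rank correlation is used here since the 0-5 ratings are ordinal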
print sp.stats.spearmanr(dfMF.Meat,dfMF.Fillings)
figname = 'corr-meat-filling'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
# Restrict analysis to burritos at the taco stand
restrictCali = False
import re
reTS = re.compile('.*taco stand.*', re.IGNORECASE)
reCali = re.compile('.*cali.*', re.IGNORECASE)
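# Flag reviews whose location matches "taco stand"; if restrictCali is set, also require a "cali" burrito name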
locTS = np.ones(len(df))
for i in range(len(df)):
mat = reTS.match(df['Location'][i])
if mat is None:
locTS[i] = 0
else:
if restrictCali:
mat = reCali.match(df['Burrito'][i])
if mat is None:
locTS[i] = 0
temp = np.arange(len(df))
dfTS = df.loc[temp[locTS==1]]
plt.figure(figsize=(4,4))
ax = plt.gca()
dfTS.plot(kind='scatter',x='Meat',y='Fillings',ax=ax,**{'s':40,'color':'k','alpha':.1})
plt.xlabel('Meat rating',size=20)
plt.ylabel('Non-meat rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
dfTSMF = dfTS[['Meat','Fillings']].dropna()
print sp.stats.spearmanr(dfTSMF.Meat,dfTSMF.Fillings)
figname = 'corr-meat-filling-TS'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Hunger',y='overall',ax=ax,**{'s':40,'color':'k'})
plt.xlabel('Hunger',size=20)
plt.ylabel('Overall rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
print df.corr()['Hunger']['overall']
from tools.misc import pearsonp
print pearsonp(df.corr()['Hunger']['overall'],len(df[['Hunger','overall']].dropna()))
figname = 'corr-hunger-overall'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute covariance using automated regularization
Step3: Show whitening
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, n_jobs=1, fir_design='firwin')
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, exclude='bads')
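# Peak-to-peak rejection thresholds used to drop noisy epochs: magnetometers (T), gradiometers (T/m), EEG (V)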
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
evoked = epochs.average()
evoked.plot() # plot evoked response
# plot the whitened evoked data for to see if baseline signals match the
# assumption of Gaussian white noise from which we expect values around
# 0 with less than 2 standard deviations. For the Global field power we expect
# a value of 1.
evoked.plot_white(noise_covs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
Step7: Alignment Analysis
|
<ASSISTANT_TASK:>
Python Code:
report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb.json"]
log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb_logs.json"]
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
print '\n', report_name, '\n'
print 'Encoder: \n', report['architecture']['encoder']
print 'Decoder: \n', report['architecture']['decoder']
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width, [d[0] for d in data])  # center the labels under the grouped bars
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])
def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Authenticate your GCP account
Step2: Create a BigQuery dataset
Step3: Validate that your dataset created successfully (this will throw an error if there is no dataset)
Step4: Import libraries and define parameters
Step7: Define target audience and filters
Step10: Query custom dimensions to isolate fields with fewer unique values, which will be visualized
Step13: Repeat for hit level custom dimensions
Step18: Programmatically write a query that pulls distinct users, by class, for features and every custom dimension (session & hit level).
Step19: Save results to BQ. As-is, only writes if there is no table that already exists.
Step20: Visualize results with a pre-built Data Studio dashboard
|
<ASSISTANT_TASK:>
Python Code:
PROJECT_ID_BILLING = "" # Set the project ID
! gcloud config set project $PROJECT_ID_BILLING
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
DATASET_NAME = "" # Name the dataset you'd like to save the output to
LOCATION = "US"
! bq mk --location=$LOCATION --dataset $PROJECT_ID_BILLING:$DATASET_NAME
! bq show --format=prettyjson $PROJECT_ID_BILLING:$DATASET_NAME
# Import libraries
import numpy as np
import pandas as pd
# Colab tools & bigquery library
from google.cloud import bigquery
bigquery.USE_LEGACY_SQL = False
pd.options.display.float_format = '{:.5f}'.format
GA_PROJECT_ID = "bigquery-public-data"
GA_DATASET_ID = "google_analytics_sample"
GA_TABLE_ID = "ga_sessions_*"
START_DATE = "20170501" # Format is YYYYMMDD, for GA360 BQ Export
END_DATE = "20170801"
UTC_ADJUSTMENT = -5
client = bigquery.Client(project=PROJECT_ID_BILLING)
# Define the query to identify your target audience with label
# (1 for target, 0 for general population)
user_label_query = f"""
SELECT
fullvisitorId,
max(case when totals.transactions = 1 then 1 else 0 end) as label,
min(case when totals.transactions = 1 then visitStartTime end) as event_session
FROM
`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`
WHERE
_TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'
AND geoNetwork.Country="United States"
GROUP BY
fullvisitorId
"""
# query_filter -- Change this if you want to adjust WHERE clause in
# the query. This will be inserted after all clauses selecting from
# the GA360 BQ Export.
query_filter = f"""
WHERE (
_TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'
AND geoNetwork.Country="United States"
AND (a.visitStartTime < IFNULL(event_session, 0)
or event_session is null) )
"""
# Set cut off for session-level custom dimensions,
# then query BQ Export to pull relevant indices
sessions_cut_off = 20 # Max number of distinct values in custom dimensions
# By default, assume there will be custom dimensions at the session and hit level.
# Further down, set these to False if no appropriate CDs are found.
query_session_cd = True
# Unnest session-level custom dimensions a count values for each index
sessions_cd = f"""
SELECT index, count(distinct value) as dist_values
FROM (SELECT cd.index, cd.value, count(*) as sessions
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`,
UNNEST(customDimensions) as cd
WHERE _TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'
GROUP BY 1, 2
ORDER BY 1, 2)
GROUP BY index
"""
try:
# Run a Standard SQL query with the project set explicitly
sessions_custom_dimensions = client.query(sessions_cd,
project=PROJECT_ID_BILLING).to_dataframe()
# Create list of session-level CDs to visualize
session_index_list = sessions_custom_dimensions.loc[
sessions_custom_dimensions.dist_values <= sessions_cut_off, 'index'].values
session_index_exclude = sessions_custom_dimensions.loc[
sessions_custom_dimensions.dist_values > sessions_cut_off, 'index'].values
if len(session_index_list) == 0:
query_session_cd = False
print("No session-level indices found.")
else:
print(f"""Printing visualizations for the following session-level indices: \
{session_index_list};\n
Excluded the following custom dimension indices because they had more than \
{sessions_cut_off} possible values: {session_index_exclude}\n \n""")
except:
query_session_cd = False
# Set cut off for hit-level custom dimensions,
# then query BQ Export to pull relevant indices
hit_cut_off = 20
# By default, assume there will be custom dimensions at the session and hit level.
# Further down, set these to False if no appropriate CDs are found.
query_hit_cd = True
hits_cd = f"""
SELECT index, count(distinct value) as dist_values
FROM (
SELECT cd.index, cd.value, count(*) as hits
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`,
UNNEST(hits) as ht,
UNNEST(ht.customDimensions) as cd
WHERE _TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'
GROUP BY 1, 2
ORDER BY 1, 2 )
GROUP BY index
"""
try:
hits_custom_dimensions = client.query(hits_cd, project=PROJECT_ID_BILLING).to_dataframe()
# Create list of hit-level CDs to visualize
hit_index_list = hits_custom_dimensions.loc[hits_custom_dimensions.dist_values <= hit_cut_off, 'index'].values
hit_index_exclude = hits_custom_dimensions.loc[hits_custom_dimensions.dist_values > hit_cut_off, 'index'].values
if len(hit_index_list) == 0:
query_hit_cd = False
print("No hit-level indices found.")
else:
print(f"""Printing visualizations for the following hit-level cds: \
{hit_index_list};\n
Excluded the following custom dimension indices because they had more than \
{hit_cut_off} possible values: {hit_index_exclude}\n \n""")
except:
print("No hit-level custom dimensions found!")
query_hit_cd = False
# Write a big query that aggregates data to be used as dashboard input
# Set to True if you want to print the final query after it's generated
View_Query = False
final_query = f"""
WITH users_labeled as (
{user_label_query}
),
trafficSource_medium AS (
SELECT count(distinct CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
count(distinct CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
trafficSource_medium AS trafficSource_medium,
'trafficSource_medium' AS type
FROM (
SELECT a.fullvisitorId,
trafficSource.medium AS trafficSource_medium,
label
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
unnest (hits) as hits
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3)
GROUP BY trafficSource_medium),
dma_staging AS (
SELECT a.fullvisitorId,
geoNetwork.metro AS metro,
label,
COUNT(*) AS visits
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3),
--- Finds the dma with the most visits for each user. If it's a tie, arbitrarily picks one.
visitor_dma AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
metro AS dma,
'dma' AS type
FROM (
SELECT fullvisitorId,
metro,
label,
ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY visits DESC) AS row_num
FROM dma_staging)
WHERE row_num = 1
GROUP BY metro, type),
distinct_dma AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
distinct_dma AS distinct_dma,
'distinct_dma' AS type
FROM (
SELECT COUNT(DISTINCT metro) as distinct_dma,
fullvisitorId,
label
FROM dma_staging
GROUP BY fullvisitorId, label)
GROUP BY distinct_dma),
-- Finds the daypart with the most pageviews for each user; adjusts for timezones and daylight savings time, loosely
visitor_common_daypart AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
'day_part' AS type,
daypart
FROM (
SELECT fullvisitorId, daypart, label, ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY pageviews DESC) AS row_num
FROM (
SELECT
fullvisitorId,
label,
CASE WHEN hour_of_day >= 1 AND hour_of_day < 6 THEN '1_night_1_6'
WHEN hour_of_day >= 6 AND hour_of_day < 11 THEN '2_morning_6_11'
WHEN hour_of_day >= 11 AND hour_of_day < 14 THEN '3_lunch_11_14'
WHEN hour_of_day >= 14 AND hour_of_day < 17 THEN '4_afternoon_14_17'
WHEN hour_of_day >= 17 AND hour_of_day < 19 THEN '5_dinner_17_19'
WHEN hour_of_day >= 19 AND hour_of_day < 22 THEN '6_evening_19_23'
WHEN hour_of_day >= 22 OR hour_of_day = 0 THEN '7_latenight_23_1'
END AS daypart, SUM(pageviews) AS pageviews
FROM (
SELECT a.fullvisitorId, b.label, EXTRACT(HOUR
FROM TIMESTAMP_ADD(TIMESTAMP_SECONDS(visitStartTime), INTERVAL {UTC_ADJUSTMENT} HOUR)) AS hour_of_day,
totals.pageviews AS pageviews
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
)
GROUP BY 1,2,3) )
WHERE row_num = 1
GROUP BY type, daypart),
-- Finds the most common day based on pageviews
visitor_common_day AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
'DoW' AS type,
case when day = 1 then "1_Sunday"
when day = 2 then "2_Monday"
when day = 3 then "3_Tuesday"
when day = 4 then "4_Wednesday"
when day = 5 then "5_Thursday"
when day = 6 then "6_Friday"
when day = 7 then "7_Saturday" end as day
FROM (
SELECT fullvisitorId, day, label, ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY pages_viewed DESC) AS row_num
FROM (
SELECT a.fullvisitorId,
EXTRACT(DAYOFWEEK FROM PARSE_DATE('%Y%m%d',date)) AS day,
SUM(totals.pageviews) AS pages_viewed,
b.label
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,4 ) )
WHERE row_num = 1
GROUP BY type, day),
technology AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
deviceCategory AS deviceCategory,
browser AS browser,
'technology' AS type
FROM (
SELECT fullvisitorId,
deviceCategory,
browser,
label,
ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY visits DESC) AS row_num
FROM (
SELECT a.fullvisitorId,
device.deviceCategory AS deviceCategory,
CASE WHEN device.browser LIKE 'Chrome%' THEN device.browser WHEN device.browser LIKE 'Safari%' THEN device.browser ELSE 'Other browser' END AS browser,
b.label,
COUNT(*) AS visits
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3,4))
WHERE row_num = 1
GROUP BY deviceCategory,browser,type),
PPL1 AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
PPL1 AS PPL1,
'PPL1' AS type
FROM (
SELECT a.fullvisitorId,
hits.page.pagePathLevel1 AS PPL1,
b.label
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
unnest (hits) as hits
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3)
GROUP BY PPL1),
ecomm_action AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
CASE WHEN ecomm_action = '1' THEN '1_Click product list'
WHEN ecomm_action = '2' THEN '2_Product detail view'
WHEN ecomm_action = '3' THEN '3_Add to cart'
WHEN ecomm_action = '4' THEN '4_Remove from cart'
WHEN ecomm_action = '5' THEN '5_Start checkout'
WHEN ecomm_action = '6' THEN '6_Checkout complete'
WHEN ecomm_action = '7' THEN '7_Refund'
WHEN ecomm_action = '8' THEN '8_Checkout options'
ELSE '9_No_ecomm_action'
END AS ecomm_action,
'ecomm_action' AS type
FROM (
SELECT a.fullvisitorId,
hits.eCommerceAction.action_type AS ecomm_action,
b.label
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
unnest (hits) as hits
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3)
GROUP BY ecomm_action),
prod_cat AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
prod_cat AS prod_cat,
'prod_cat' AS type
FROM (
SELECT a.fullvisitorId,
prod.v2ProductCategory AS prod_cat,
b.label
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
unnest (hits) as hits,
UNNEST (hits.product) AS prod
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3)
GROUP BY prod_cat),
agg_metrics AS (
SELECT fullvisitorId,
CASE WHEN label IS NULL then 0 else label end as label,
count(distinct visitId) as total_sessions,
sum(totals.pageviews) as pageviews,
count(totals.bounces)/count(distinct VisitID) as bounce_rate,
sum(totals.timeonSite)/sum(totals.pageviews) as time_per_page,
sum(totals.pageviews) / count(distinct VisitID) as avg_session_depth
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b
USING (fullvisitorId)
{query_filter}
GROUP BY 1,2
),
Agg_sessions AS (
SELECT fullvisitorId, label, total_sessions
FROM agg_metrics),
Agg_pageviews AS (
SELECT fullvisitorId, label, pageviews
FROM agg_metrics),
Agg_time_per_page AS (
SELECT fullvisitorId, label, time_per_page
FROM agg_metrics),
Agg_avg_session_depth AS (
SELECT fullvisitorId, label, avg_session_depth
FROM agg_metrics),
hist_sessions AS (
SELECT
ROUND(min+max/2) as avg_sessions,
COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,
COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,
'stats_sessions' as type
FROM Agg_sessions
JOIN (SELECT min+step*i min, min+step*(i+1)max
FROM (
SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i
FROM (
SELECT MIN(total_sessions) min, MAX(total_sessions) max
FROM Agg_sessions
JOIN (select APPROX_QUANTILES(total_sessions, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_sessions) b
ON agg_sessions.total_sessions <= b.trimmer
)
), UNNEST(i) i) stats_sessions
ON Agg_sessions.total_sessions >= stats_sessions.min
AND Agg_sessions.total_sessions < stats_sessions.max
GROUP BY min, max
ORDER BY min),
hist_pageviews AS (
SELECT
ROUND(min+max/2) as avg_pageviews,
COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,
COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,
'stats_pageviews' as type
FROM Agg_pageviews
JOIN (SELECT min+step*i min, min+step*(i+1)max
FROM (
SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i
FROM (
SELECT MIN(pageviews) min, MAX(pageviews) max
FROM Agg_pageviews
JOIN (select APPROX_QUANTILES(pageviews, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_pageviews) b
ON agg_pageviews.pageviews <= b.trimmer
)
), UNNEST(i) i) stats_pageviews
ON Agg_pageviews.pageviews >= stats_pageviews.min
AND Agg_pageviews.pageviews < stats_pageviews.max
GROUP BY min, max
ORDER BY min),
hist_time_per_page AS (
SELECT
ROUND(min+max/2) as avg_time_per_page,
COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,
COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,
'stats_time_per_page' as type
FROM Agg_time_per_page
JOIN (SELECT min+step*i min, min+step*(i+1)max
FROM (
SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i
FROM (
SELECT MIN(time_per_page) min, MAX(time_per_page) max
FROM Agg_time_per_page
JOIN (select APPROX_QUANTILES(time_per_page, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_time_per_page) b
ON agg_time_per_page.time_per_page <= b.trimmer
)
), UNNEST(i) i) stats_time_per_page
ON Agg_time_per_page.time_per_page >= stats_time_per_page.min
AND Agg_time_per_page.time_per_page < stats_time_per_page.max
GROUP BY min, max
ORDER BY min),
hist_avg_session_depth AS (
SELECT
ROUND(min+max/2) as avg_avg_session_depth,
COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,
COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,
'stats_avg_session_depth' as type
FROM Agg_avg_session_depth
JOIN (SELECT min+step*i min, min+step*(i+1)max
FROM (
SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i
FROM (
SELECT MIN(avg_session_depth) min, MAX(avg_session_depth) max
FROM Agg_avg_session_depth
JOIN (select APPROX_QUANTILES(avg_session_depth, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_avg_session_depth) b
ON agg_avg_session_depth.avg_session_depth <= b.trimmer
)
), UNNEST(i) i) stats_avg_session_depth
ON Agg_avg_session_depth.avg_session_depth >= stats_avg_session_depth.min
AND Agg_avg_session_depth.avg_session_depth < stats_avg_session_depth.max
GROUP BY min, max
ORDER BY min)
"""
if query_session_cd:
session_cd_query = ",\nsession_cds AS (SELECT * FROM ("
counter = len(session_index_list)
start = 1
for ind in session_index_list:
ind_num = ind
session_custom_dimension_query_base = f"""SELECT
"session_dim_{ind_num}" as type,
count(distinct case when label = 1 then a.fullvisitorId end) as count_1_users,
count(distinct case when label = 0 then a.fullvisitorId end) as count_0_users,
cd.value as session_dim_{ind_num}_value
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
UNNEST(customDimensions) as cd
LEFT JOIN users_labeled b
ON a.fullvisitorId = b.fullvisitorId
{query_filter}
AND cd.index = {ind_num}
GROUP BY type, cd.value)"""
query_add = session_custom_dimension_query_base
session_cd_query += query_add
if start > 1:
session_cd_query += "USING (type, count_1_users, count_0_users)"
if start < counter:
session_cd_query += "\nFULL OUTER JOIN\n("
start+=1
session_cd_query+=")\n"
final_query += session_cd_query
# Query hits
if query_hit_cd:
hit_cd_query = ",\nhits_cds AS (SELECT * FROM ("
counter = len(hit_index_list)
start = 1
for ind in hit_index_list:
ind_num = ind
hit_cust_d_query_base = f"""SELECT
"hit_dim_{ind_num}" as type,
count(distinct case when label = 1 then a.fullvisitorId end) as count_1_users,
count(distinct case when label = 0 then a.fullvisitorId end) as count_0_users,
cd.value as hit_dim_{ind_num}_value
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
UNNEST(hits) as ht,
UNNEST(ht.customDimensions) as cd
LEFT JOIN users_labeled b
ON a.fullvisitorId = b.fullvisitorId
{query_filter}
AND cd.index = {ind_num}
GROUP BY type, cd.value)"""
query_add = hit_cust_d_query_base
hit_cd_query += query_add
if start > 1:
hit_cd_query += "USING (type, count_1_users, count_0_users)"
if start < counter:
hit_cd_query += "\nFULL OUTER JOIN\n("
start+=1
hit_cd_query+=")\n"
final_query += hit_cd_query
final_query += """SELECT *, count_1_users/(count_1_users+count_0_users) as conv_rate FROM trafficSource_medium
FULL OUTER JOIN visitor_dma USING (type,count_1_users,count_0_users)
FULL OUTER JOIN distinct_dma USING (type,count_1_users,count_0_users)
FULL OUTER JOIN visitor_common_daypart USING (type,count_1_users,count_0_users)
FULL OUTER JOIN visitor_common_day USING (type,count_1_users,count_0_users)
FULL OUTER JOIN technology USING (type,count_1_users,count_0_users)
FULL OUTER JOIN PPL1 USING (type,count_1_users,count_0_users)
FULL OUTER JOIN ecomm_action USING (type,count_1_users,count_0_users)
FULL OUTER JOIN prod_cat USING (type,count_1_users,count_0_users)
FULL OUTER JOIN hist_sessions USING (type, count_1_users, count_0_users)
FULL OUTER JOIN hist_pageviews USING (type, count_1_users, count_0_users)
FULL OUTER JOIN hist_time_per_page USING (type, count_1_users, count_0_users)
FULL OUTER JOIN hist_avg_session_depth USING (type, count_1_users, count_0_users)
"""
if query_hit_cd:
final_query+="FULL OUTER JOIN hits_cds USING (type,count_1_users,count_0_users)"
if query_session_cd:
final_query+="FULL OUTER JOIN session_cds USING (type,count_1_users,count_0_users)"
if (View_Query):
print(final_query)
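# Optional sanity check (not part of the original flow): dry-run the assembled query to
# validate the SQL and estimate how many bytes it would scan, before writing any table.
dry_run_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
dry_run_job = client.query(final_query, job_config=dry_run_config)
print("Dry run: query would process {} bytes".format(dry_run_job.total_bytes_processed))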
# Set the destination for your query results.
# This will be your data source for the Data Studio dashboard.
DESTINATION = f"{PROJECT_ID_BILLING}.{DATASET_NAME}.ga360_gazer_output"
job_config = bigquery.QueryJobConfig(destination=DESTINATION,
writeDisposition="WRITE_EMPTY")
# Start the query, passing in the extra configuration.
query_job = client.query(final_query, job_config=job_config)
query_job.result()
print("Query results loaded to the table {}".format(DESTINATION))
# Delete the dataset and all contents within
! bq rm -r $PROJECT_ID_BILLING:$DATASET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Remember to import the pandas library to get access to Dataframes. Dataframes are two-dimensional arrays (matrices) where each column can be of a different datatype.
Step2: Look at the First Row of Data
Step3: Find the Average of Each Statistic
Step4: Make Pairwise Scatterplots
Step5: In Python, matplotlib is the primary plotting package, and seaborn is a widely used layer over matplotlib. You could have also used the pandas scatter_matrix for a similar result.
Step6: We can use the main Python machine learning package, scikit-learn, to fit a k-means clustering model and get our cluster labels. In order to cluster properly, we remove any non-numeric columns, or columns with missing values (NA, Nan, etc) with the get_numeric_data and dropna methods.
Step7: With Python, we used the PCA class in the scikit-learn library. We used matplotlib to create the plot.
Step8: In Python, the recent version of pandas comes with a sample method that returns a given proportion of rows randomly sampled from a source dataframe – this makes the code much more concise. We could also use scikit-learn's KFold and train_test_split for different kinds of shuffles and splits of the data set. In both cases, we set a random seed to make the results reproducible.
Step9: Scikit-learn has a linear regression model that we can fit and generate predictions from. Note also the use of Lasso and Ridge regressions, though this doesn't apply in the univariate case.
Step10: If we want to get summary statistics about the fit, like the r-squared value, we can use the score method of the scikit-learn model. However, if we want more advanced regression statistics we'll need to do a bit more. The statsmodels package enables many statistical methods to be used in Python and is a good tool to know.
Step11: Calculate Error
Step12: The scikit-learn library has a variety of error metrics that we can use
Step13: Extract Player Box Scores
|
<ASSISTANT_TASK:>
Python Code:
# Minimal sketch of the steps described above. The CSV file name is an assumed placeholder,
# and the column names follow the task description; adjust both to match your player-stats file.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
import requests
from bs4 import BeautifulSoup
# Load CSV into a dataframe named nba
nba = pd.read_csv("nba_2013.csv")  # assumed file name
# Print the number of rows and columns in the dataframe
print(nba.shape)
# Print the first row of data
print(nba.iloc[0])
# Print the mean of each column
print(nba.mean(numeric_only=True))
%matplotlib inline
# Use seaborn or pandas to plot the scatter matrix
sns.pairplot(nba[["AST", "TO", "MPG"]].dropna())
# Use a clustering model like K-means to cluster the players
numeric = nba._get_numeric_data().dropna(axis=0)
labels = KMeans(n_clusters=5, random_state=1).fit_predict(numeric)
# Use PCA to plot the clusters in 2 dimensions
coords = PCA(n_components=2).fit_transform(numeric)
plt.scatter(coords[:, 0], coords[:, 1], c=labels)
# Create train (80%) and test (20%) splits
model_df = nba.dropna(subset=["AST", "TO", "AGE", "MPG", "HT", "WT", "REBR"])
train = model_df.sample(frac=0.8, random_state=1)
test = model_df.loc[~model_df.index.isin(train.index)]
# Compute the univariate regression of TO to AST
lr = LinearRegression().fit(train[["TO"]], train["AST"])
# Compute the regression results
print(lr.score(test[["TO"]], test["AST"]))
# Compute random forest from the predictors "AGE", "MPG", "TO", "HT", "WT", "REBR" to the target, "AST"
predictors = ["AGE", "MPG", "TO", "HT", "WT", "REBR"]
rf = RandomForestRegressor(n_estimators=100, random_state=1).fit(train[predictors], train["AST"])
# Compute the MSE of the classifier
print(mean_squared_error(test["AST"], rf.predict(test[predictors])))
# Download "http://www.basketball-reference.com/boxscores/201506140GSW.html"
resp = requests.get("http://www.basketball-reference.com/boxscores/201506140GSW.html")
# Use BeautifulSoup to parse the table from the web page
soup = BeautifulSoup(resp.content, "html.parser")
tables = soup.find_all("table")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
# DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
# DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
# DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, training=True):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=None)
layer = tf.layers.batch_normalization(layer, training=training)
return tf.nn.relu(layer)
def conv_layer(prev_layer, layer_depth, training=True):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=training)
return tf.nn.relu(conv_layer)
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
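# Batch normalization keeps moving averages that are updated through ops in the UPDATE_OPS
# collection; making the optimizer depend on them ensures they run on every training step.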
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Direct photo-detection
Step2: Highly efficient detection
Step3: Highly inefficient photon detection
Step4: Efficient homodyne detection
Step5: Form 1
Step6: Form 2
Step7: $\displaystyle D_{2}[A]\rho(t) = \sqrt{\eta} \mathcal{H}[\sqrt{\kappa} a]\rho(t) = \sqrt{\eta} \mathcal{H}[A]\rho(t)$
Step8: Versions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
from qutip.expect import expect_rho_vec
from matplotlib import rcParams
rcParams['font.family'] = 'STIXGeneral'
rcParams['mathtext.fontset'] = 'stix'
rcParams['font.size'] = '14'
N = 15
w0 = 0.5 * 2 * np.pi
times = np.linspace(0, 15, 150)
dt = times[1] - times[0]
gamma = 0.1
a = destroy(N)
H = w0 * a.dag() * a
rho0 = fock(N, 5)
e_ops = [a.dag() * a, a + a.dag()]
eta = 0.7
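# Split the total decay rate gamma between an unmonitored channel (weight 1-eta) and the photon-counted channel (weight eta)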
c_ops = [np.sqrt(1-eta) * np.sqrt(gamma) * a] # collapse operator B
sc_ops = [np.sqrt(eta) * np.sqrt(gamma) * a] # stochastic collapse operator A
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result1 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=1, nsubsteps=100, store_measurement=True)
result2 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=10, nsubsteps=100, store_measurement=True)
fig, axes = plt.subplots(2,2, figsize=(12,8), sharex=True)
axes[0,0].plot(times, result1.expect[0], label=r'Stochastic ME (ntraj = 1)', lw=2)
axes[0,0].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,0].set_title("Cavity photon number (ntraj = 1)")
axes[0,0].legend()
axes[0,1].plot(times, result2.expect[0], label=r'Stochastic ME (ntraj = 10)', lw=2)
axes[0,1].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,1].set_title("Cavity photon number (ntraj = 10)")
axes[0,1].legend()
axes[1,0].step(times, dt * np.cumsum(result1.measurement[0].real), lw=2)
axes[1,0].set_title("Cummulative photon detections (ntraj = 1)")
axes[1,1].step(times, dt * np.cumsum(np.array(result2.measurement).sum(axis=0).real) / 10, lw=2)
axes[1,1].set_title("Cummulative avg. photon detections (ntraj = 10)")
fig.tight_layout()
eta = 0.1
c_ops = [np.sqrt(1-eta) * np.sqrt(gamma) * a] # collapse operator B
sc_ops = [np.sqrt(eta) * np.sqrt(gamma) * a] # stochastic collapse operator A
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result1 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=1, nsubsteps=100, store_measurement=True)
result2 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=10, nsubsteps=100, store_measurement=True)
fig, axes = plt.subplots(2,2, figsize=(12,8), sharex=True)
axes[0,0].plot(times, result1.expect[0], label=r'Stochastic ME (ntraj = 1)', lw=2)
axes[0,0].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,0].set_title("Cavity photon number (ntraj = 1)")
axes[0,0].legend()
axes[0,1].plot(times, result2.expect[0], label=r'Stochastic ME (ntraj = 10)', lw=2)
axes[0,1].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,1].set_title("Cavity photon number (ntraj = 10)")
axes[0,1].legend()
axes[1,0].step(times, dt * np.cumsum(result1.measurement[0].real), lw=2)
axes[1,0].set_title("Cummulative photon detections (ntraj = 1)")
axes[1,1].step(times, dt * np.cumsum(np.array(result2.measurement).sum(axis=0).real) / 10, lw=2)
axes[1,1].set_title("Cummulative avg. photon detections (ntraj = 10)")
fig.tight_layout()
rho0 = coherent(N, np.sqrt(5))
eta = 0.95
c_ops = [np.sqrt(1-eta) * np.sqrt(gamma) * a] # collapse operator B
sc_ops = [np.sqrt(eta) * np.sqrt(gamma) * a] # stochastic collapse operator A
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result = smesolve(H, rho0, times, c_ops, sc_ops, e_ops, ntraj=75, nsubsteps=100, solver="platen",
method='homodyne', store_measurement=True, map_func=parallel_map, noise=111)
plot_expectation_values([result, result_ref]);
fig, ax = plt.subplots(figsize=(8,4))
M = np.sqrt(eta * gamma)
for m in result.measurement:
ax.plot(times, m[:, 0].real / M, 'b', alpha=0.025)
ax.plot(times, result_ref.expect[1], 'k', lw=2);
ax.set_ylim(-25, 25)
ax.set_xlim(0, times.max())
ax.set_xlabel('time', fontsize=12)
ax.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real / M, 'b', lw=2);
L = liouvillian(H, np.sqrt(gamma) * a)
def d1_rho_func(t, rho_vec):
return L * rho_vec
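# Superoperator implementing A*rho + rho*A.dag() for the homodyne measurement operator A = sqrt(gamma)*a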
n_sum = spre(np.sqrt(gamma) * a) + spost(np.sqrt(gamma) * a.dag())
def d2_rho_func(t, rho_vec):
e1 = expect_rho_vec(n_sum.data, rho_vec, False)
return np.vstack([np.sqrt(eta) * (n_sum * rho_vec - e1 * rho_vec)])
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result = general_stochastic(ket2dm(rho0), times, e_ops=[spre(op) for op in e_ops],
ntraj=75, nsubsteps=100, solver="platen",
d1=d1_rho_func, d2=d2_rho_func, len_d2=1,
m_ops=[spre(a + a.dag())], dW_factors=[1/np.sqrt(gamma * eta)],
store_measurement=True, map_func=parallel_map, noise=111)
plot_expectation_values([result, result_ref])
fig, ax = plt.subplots(figsize=(8,4))
for m in result.measurement:
ax.plot(times, m[:, 0].real, 'b', alpha=0.025)
ax.plot(times, result_ref.expect[1], 'k', lw=2);
ax.set_ylim(-25, 25)
ax.set_xlim(0, times.max())
ax.set_xlabel('time', fontsize=12)
ax.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'b', lw=2);
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, define a function that returns the index numbers of the neighbors of a vertex i, when the graph is stored in adjacency-matrix format
Step2: Define a function that enumerates the neighbors of a vertex i, when the graph is stored in adjacency-list format
Step3: Define a function that enumerates the neighbors of a vertex i, when the graph is stored in edge-list format
Step4: This next function is the simulation function. "n" is the number of vertices.
Step5: A simulation with 1000 vertices clearly shows that adjacency list is fastest
Step6: We see the expected behavior, with the running time for the adjacency-matrix and edge-list formats going up when we increase "n", but there is hardly any change in the running time for the graph stored in adjacency list format
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import igraph
import timeit
import itertools
def enumerate_matrix(gmat, i):
return np.nonzero(gmat[i,:])[1].tolist()
def enumerate_adj_list(adj_list, i):
return adj_list[i]
def enumerate_edge_list(edge_list, i):
inds1 = np.where(edge_list[:,0] == i)[0]
elems1 = edge_list[inds1, 1].tolist()
inds2 = np.where(edge_list[:,1] == i)[0]
elems2 = edge_list[inds2, 0].tolist()
return np.unique(elems1 + elems2).tolist()
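# Quick illustrative check (added, not part of the original notebook): the three
# enumeration functions should agree on a small test graph such as a 5-vertex ring.
g_check = igraph.Graph.Ring(5)
gmat_check = np.matrix(g_check.get_adjacency().data)
adj_check = g_check.get_adjlist()
edge_check = np.array(g_check.get_edgelist())
for v in range(5):
    assert sorted(enumerate_matrix(gmat_check, v)) == sorted(enumerate_adj_list(adj_check, v)) == enumerate_edge_list(edge_check, v)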
def do_sim(n):
retlist = []
nrep = 10
nsubrep = 10
# this is (sort of) a Python way of doing the R function "replicate":
for _ in itertools.repeat(None, nrep):
# make a random undirected graph with fixed (average) vertex degree = 5
g = igraph.Graph.Barabasi(n, 5)
# get the graph in three different representations
g_matrix = np.matrix(g.get_adjacency().data)
g_adj_list = g.get_adjlist()
g_edge_list = np.array(g.get_edgelist())
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_matrix(g_matrix, i)
matrix_elapsed = timeit.default_timer() - start_time
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_adj_list(g_adj_list, i)
adjlist_elapsed = timeit.default_timer() - start_time
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_edge_list(g_edge_list, i)
edgelist_elapsed = timeit.default_timer() - start_time
retlist.append([matrix_elapsed, adjlist_elapsed, edgelist_elapsed])
# average over replicates and then
# divide by n so that the running time results are on a per-vertex basis
return np.mean(np.array(retlist), axis=0)/n
do_sim(1000)*1000
do_sim(2000)*1000
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Yup, it looks like it is not using the GPU
Step2: I do, and the solution says it has something to do with my path not being set properly. Let's see what my path is for this notebook instance.
Step3: It doesn't include any of the CUDA or VS stuff, so I'm betting this is the problem. I'll add the path items related to CUDA or VS I have in my path to this path, and try again.
Step4: Still not working!
Step5: Doesn't work. Let's see if I can find this nvcc compiler on the system. It is on the system, and also on the system path, because this command worked fine on the command line.
Step6: But I threw in a call to show the system path as well, to see what it says. I note that the system path is not updated with the CUDA variables, but it does have everything else I have on my path normally, unlike the python system path above. I think I understand now.
|
<ASSISTANT_TASK:>
Python Code:
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
print('Used the cpu')
else:
print('Used the gpu')
import theano.sandbox.cuda
theano.sandbox.cuda.use("gpu0")
import sys
sys.path
cuda_paths = ['C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v8.0\\bin',
'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v8.0\\libnvvp',
'C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common',
'C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin',
]
for path in cuda_paths:
sys.path.append(path)
import theano.sandbox.cuda
theano.sandbox.cuda.use("gpu0")
sys.path
%%cmd
nvcc -V
path
%%cmd
nvcc -V
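# Added sketch (hedged): sys.path above only affects Python imports; nvcc and the
# VS compiler are found through the process PATH environment variable instead.
# One possible in-notebook fix, reusing the cuda_paths list defined earlier:
import os
os.environ['PATH'] = ';'.join(cuda_paths) + ';' + os.environ['PATH']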
import theano.sandbox.cuda
theano.sandbox.cuda.use("gpu0")
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
print('Used the cpu')
else:
print('Used the gpu')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: V-1 - used numpy to sum; soon realized numpy does not work on Codility. I have usually required more time working on solutions when solving something, especially with a new technique.
Step2: Solution 2
Step3: Solution 3
Step4: solution 4
Step5: Solution 5
Step6: Solution 6 - reduced asymptotic time complexity by increasing space complexity. O(n)
Step7: Version 7 - Improving Time Complexity - O(n)
Step8: [Result
|
<ASSISTANT_TASK:>
Python Code:
Ax = [-1, 3, -4, 5, 1, -6, 2, 1]
def solution_1(A):
addition_list = list()
list_index = 1
addition_list.append(A[0])
try:
if len(A) >= 0 and len(A) <= 100000:
for i, int_in_arr in enumerate(A):
# print i, " ", int_in_arr
if int_in_arr >= -2 and int_in_arr <= 647 and type(int_in_arr) is int:
if i == 0: continue
addition_list.append(int_in_arr + addition_list[list_index - 1])
print "i: ", i, "\n"
#print A[0:i], "\n"
# print numpy.sum(A[0:i])
#print A[i + 1:], "\n"
# print addition_list
list_index += 1
else:
raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
raise ValueError("array indices out of range ", len(A))
print A
print "\n", addition_list
last_list_index = i
list_index = 1
while list_index != last_list_index:
print addition_list[last_list_index]
if A[list_index-1] == (addition_list[last_list_index] - addition_list[list_index]):
return list_index
except (ValueError, RuntimeError) as err:
print err.args
test = solution_1(Ax)
def solution_2(A):
addition_list = list()
list_index = 1
addition_list.append(A[0])
try:
if len(A) >= 0 and len(A) <= 100000:
for i, int_in_arr in enumerate(A):
# print i, " ", int_in_arr
if int_in_arr >= -2 and int_in_arr <= 647 and type(int_in_arr) is int:
if i == 0: continue
addition_list.append(int_in_arr + addition_list[list_index - 1])
#print "i: ", i, "\n"
#print A[0:i], "\n"
# print math.sum(A[0:i])
#print A[i + 1:], "\n"
# print addition_list
list_index += 1
else:
raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
raise ValueError("array indices out of range ", len(A))
#print A
#print "\n", addition_list
last_list_index = i
list_index = 1
while list_index != last_list_index:
print addition_list[last_list_index]
if A[list_index-1] == (addition_list[last_list_index] - addition_list[list_index]):
return list_index
except (ValueError, RuntimeError) as err:
return err
print solution_2(Ax)
def solution_3(A):
addition_list = list()
list_index = 0
addition_list.append(A[0])
try:
if len(A) >= 0 and len(A) <= 100000:
for i, int_in_arr in enumerate(A):
if type(int_in_arr) is int:
if i == 0: continue
if sum(A[:i]) == (sum(A[:]) - sum(A[:i+1])):
return i
else:
raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
raise ValueError("array indices out of range ", len(A))
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_3(Ax)
def solution_4(A):
addition_list = list()
list_index = 0
addition_list.append(A[0])
try:
if 0 == sum(A[1:]):
return 0
elif len(A) == abs(sum(A[:])):
if len(A) % 2:
return len(A)/2
else:
return -1
elif len(A) >= 0 and len(A) <= 100000:
for i in xrange(len(A)):
#if type(int_in_arr) is int:
if i == 0: continue
if sum(A[:i]) == (sum(A[:]) - sum(A[:i+1])):
return i
else:
raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
raise ValueError("array indices out of range ", len(A))
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_4(Ax)
def solution_5(A):
try:
if 0 == sum(A[1:]):
return 0
if len(A) == abs(sum(A[:])):
if len(A) % 2:
return len(A)/2
else:
return -1
elif len(A) >= 0 and len(A) <= 100000:
left_sum = A[0]
right_sum = sum(A[1:])
#print "left_sum: " , left_sum
#print "right sum: ", right_sum, "\n"
for i,val in enumerate(A):
#if type(int_in_arr) is int:
if i == 0: continue
#print A
#print "i: ", i
#print "val: ", val
#print "left sum: ", left_sum
#print "right sum: ", right_sum
#print "\n\n"
right_sum -= val
if left_sum == right_sum:
#print "found match"
return i
left_sum += val
#else:
#raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_5(Ax)
def solution_6(A):
left_sum = 0
right_sum = sum(A[1:])
len_arr = len(A)
try:
if len_arr <= 1:
if len_arr == 1:
return 0
else:
return -1
if left_sum == right_sum:
return 0
#if sum(A[:-1]) == 0:
#return len_arr-1
if len(A) == abs(sum(A[:])):
if len(A) % 2:
return len(A)/2
else:
return -1
if len(A) >= 0 and len(A) <= 100000:
for i,val in enumerate(A):
if i == 0: continue
right_sum -= val
left_sum += A[i-1]
if left_sum == right_sum:
return i
if i >= len_arr:
return -1
else:
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_6(Ax)
def solution_7(A):
left_sum = 0
right_sum = sum(A[1:])
len_arr = len(A)
try:
if len_arr <= 1:
if len_arr == 1:
return 0
else:
return -1
if left_sum == right_sum:
return 0
if sum(A[:-1]) == 0:
return len_arr-1
if len(A) == abs(sum(A[:])):
if len(A) % 2:
return len(A)/2
else:
return -1
if len(A) >= 0 and len(A) <= 100000:
left_sum = A[0]
for i,val in enumerate(A[1:]):
right_sum -= val
#left_sum += val
if left_sum == right_sum:
return i+1
left_sum +=val
if i >= len_arr:
return -1
else:
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_7(Ax)
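# Added for illustration (not one of the original iterations): a compact O(n)
# reference that tracks only the running left sum. Note this particular input has
# more than one valid equilibrium index, so correct solutions may return different indices.
def equilibrium_index(A):
    total, left = sum(A), 0
    for i, val in enumerate(A):
        if left == total - left - val:
            return i
        left += val
    return -1
print equilibrium_index(Ax)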
print "The end"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and process review dataset
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: Now, let us take a look at what the dataset looks like (Note
Step4: Train-Validation split
Step5: Convert SFrame to NumPy array
Step6: We convert both the training and validation sets into NumPy arrays.
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
Step8: Adding L2 penalty
Step9: Quiz question
Step10: Quiz question
Step11: Explore effects of L2 regularization
Step12: Compare coefficients
Step13: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
Step14: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Step15: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
Step16: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
Step17: Quiz Question
Step18: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import graphlab
products = graphlab.SFrame('amazon_baby_subset.gl/')
products.head()
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
import math
sigmoid = lambda x: 1 / (1 + math.exp(-x))
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
dot_product = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = []
for dpi in dot_product:
predictions.append(sigmoid(dpi))
# return predictions
return predictions
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
derivative += -2 * l2_penalty * coefficient
return derivative
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
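# Optional sanity check (added for illustration, not part of the assignment): compare
# feature_derivative_with_L2 against a finite-difference estimate of the penalized
# log likelihood on a tiny made-up example.
def finite_difference_check(l2_penalty=3.0, j=1, eps=1e-6):
    X_tiny = np.array([[1., 2., -1.], [1., -0.5, 0.3], [1., 1.5, 0.8]])
    y_tiny = np.array([1, -1, 1])
    w = np.array([0.1, -0.2, 0.3])
    indicator = (y_tiny == +1).astype(float)
    predictions = 1. / (1. + np.exp(-np.dot(X_tiny, w)))
    errors = indicator - predictions
    analytic = feature_derivative_with_L2(errors, X_tiny[:, j], w[j], l2_penalty, False)
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[j] += eps
    w_minus[j] -= eps
    numeric = (compute_log_likelihood_with_L2(X_tiny, y_tiny, w_plus, l2_penalty) -
               compute_log_likelihood_with_L2(X_tiny, y_tiny, w_minus, l2_penalty)) / (2 * eps)
    print 'analytic derivative: %.6f, finite-difference estimate: %.6f' % (analytic, numeric)
finite_difference_check()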
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
derivative = feature_derivative_with_L2(
errors,
feature_matrix[:,j],
coefficients[j],
l2_penalty,
j == 0
)
# add the step size times the derivative to the current coefficient
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
coefficients_l2_0_no_intercept = list(coefficients_0_penalty[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients_l2_0_no_intercept)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
positive_words = []
for t in word_coefficient_tuples[:5]:
positive_words.append(t[0])
positive_words
negative_words = []
for t in word_coefficient_tuples[-5:]:
negative_words.append(t[0])
negative_words
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem Statement
Step3: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
Step4: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
Step5: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
Step7: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
Step9: Expected Output
Step10: Expected Output
Step11: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
Step13: Observations
Step15: Expected Output
Step16: Expected Output
Step17: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
|
<ASSISTANT_TASK:>
Python Code:
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
train_X, train_Y, test_X, test_Y = load_2D_dataset()
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (1 / m) * (lambd / 2) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd / m) * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd / m) * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd / m) * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
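# Optional (added sketch, not part of the original assignment): sweep a few
# illustrative lambda values to see how the L2 penalty trades off train vs. test
# accuracy; num_iterations is lowered here only to keep the sweep reasonably fast.
for lambd_value in [0.01, 0.1, 0.7, 3.0]:
    params_l2 = model(train_X, train_Y, num_iterations=10000, print_cost=False, lambd=lambd_value)
    print("lambda = " + str(lambd_value) + ", on the train / test sets:")
    predictions_train = predict(train_X, train_Y, params_l2)
    predictions_test = predict(test_X, test_Y, params_l2)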
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1 < keep_prob) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1 * D1 # Step 3: shut down some neurons of A1
A1 = A1 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2 < keep_prob) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2 * D2 # Step 3: shut down some neurons of A2
A2 = A2 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
from numpy import array
from pickle import load
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Embedding
from keras.layers import Dropout
from keras.layers.merge import add
from keras.callbacks import ModelCheckpoint
# load doc into memory
def load_doc(filename):
# open the file as read only
file = open(filename, 'r')
# read all text
text = file.read()
# close the file
file.close()
return text
# load a pre-defined list of photo identifiers
def load_set(filename):
doc = load_doc(filename)
dataset = list()
# process line by line
for line in doc.split('\n'):
# skip empty lines
if len(line) < 1:
continue
# get the image identifier
identifier = line.split('.')[0]
dataset.append(identifier)
return set(dataset)
# load clean descriptions into memory
def load_clean_descriptions(filename, dataset):
# load document
doc = load_doc(filename)
descriptions = dict()
for line in doc.split('\n'):
# split line by white space
tokens = line.split()
# split id from description
image_id, image_desc = tokens[0], tokens[1:]
# skip images not in the set
if image_id in dataset:
# create list
if image_id not in descriptions:
descriptions[image_id] = list()
# wrap description in tokens
desc = 'startseq ' + ' '.join(image_desc) + ' endseq'
# store
descriptions[image_id].append(desc)
return descriptions
# load photo features
def load_photo_features(filename, dataset):
# load all features
all_features = load(open(filename, 'rb'))
# filter features
features = {k: all_features[k] for k in dataset}
return features
# covert a dictionary of clean descriptions to a list of descriptions
def to_lines(descriptions):
all_desc = list()
for key in descriptions.keys():
[all_desc.append(d) for d in descriptions[key]]
return all_desc
# fit a tokenizer given caption descriptions
def create_tokenizer(descriptions):
lines = to_lines(descriptions)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
return tokenizer
# calculate the length of the description with the most words
def max_length(descriptions):
lines = to_lines(descriptions)
return max(len(d.split()) for d in lines)
# create sequences of images, input sequences and output words for an image
def create_sequences(tokenizer, max_length, desc_list, photo, vocab_size):
X1, X2, y = list(), list(), list()
# walk through each description for the image
for desc in desc_list:
# encode the sequence
seq = tokenizer.texts_to_sequences([desc])[0]
# split one sequence into multiple X,y pairs
for i in range(1, len(seq)):
# split into input and output pair
in_seq, out_seq = seq[:i], seq[i]
# pad input sequence
in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
# encode output sequence
out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
# store
X1.append(photo)
X2.append(in_seq)
y.append(out_seq)
return array(X1), array(X2), array(y)
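# Illustrative only (added, not part of the original pipeline): what create_sequences
# produces for a single toy caption and a dummy photo feature vector.
toy_tokenizer = Tokenizer()
toy_tokenizer.fit_on_texts(['startseq little dog runs endseq'])
toy_vocab = len(toy_tokenizer.word_index) + 1
toy_photo = array([0.0] * 4096)
X1_toy, X2_toy, y_toy = create_sequences(toy_tokenizer, 5, ['startseq little dog runs endseq'], toy_photo, toy_vocab)
print(X1_toy.shape, X2_toy.shape, y_toy.shape)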
# define the captioning model
def define_model(vocab_size, max_length):
# feature extractor model
inputs1 = Input(shape=(4096,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
# sequence model
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
# decoder model
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
# tie it together [image, seq] [word]
model = Model(inputs=[inputs1, inputs2], outputs=outputs)
# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam')
# summarize model
model.summary()
plot_model(model, to_file='model.png', show_shapes=True)
return model
# data generator, intended to be used in a call to model.fit_generator()
def data_generator(descriptions, photos, tokenizer, max_length, vocab_size):
# loop for ever over images
while 1:
for key, desc_list in descriptions.items():
# retrieve the photo feature
photo = photos[key][0]
in_img, in_seq, out_word = create_sequences(tokenizer, max_length, desc_list, photo, vocab_size)
yield [in_img, in_seq], out_word
# load training dataset (6K)
filename = 'Flickr8k_text/Flickr_8k.trainImages.txt'
train = load_set(filename)
print('Dataset: %d' % len(train))
# descriptions
train_descriptions = load_clean_descriptions('descriptions.txt', train)
print('Descriptions: train=%d' % len(train_descriptions))
# photo features
train_features = load_photo_features('features.pkl', train)
print('Photos: train=%d' % len(train_features))
# prepare tokenizer
tokenizer = create_tokenizer(train_descriptions)
vocab_size = len(tokenizer.word_index) + 1
print('Vocabulary Size: %d' % vocab_size)
# determine the maximum sequence length
max_length = max_length(train_descriptions)
print('Description Length: %d' % max_length)
# define the model
model = define_model(vocab_size, max_length)
# train the model, run epochs manually and save after each epoch
epochs = 20
steps = len(train_descriptions)
for i in range(epochs):
# create the data generator
generator = data_generator(train_descriptions, train_features, tokenizer, max_length, vocab_size)
# fit for one epoch
model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1)
# save model
model.save('model_' + str(i) + '.h5')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, the user needs to provide only the file name, as the file will now be saved inside the "shard" directory.
Step2: Now, for portability, the user needs to use the "shard" folder
|
<ASSISTANT_TASK:>
Python Code:
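# Note (added): these cells assume an existing gensim Similarity index and LSI query
# vector built in earlier cells that are not shown here, roughly along these lines
# (corpus/dictionary names and paths below are illustrative, not from the original):
# from gensim import corpora, models, similarities
# index = similarities.Similarity('/home/prakhar/Documents/test/shard/', corpus_lsi, num_features=num_topics)
# vec_lsi = lsi[dictionary.doc2bow(query_document.lower().split())]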
index.output_prefix
index.save('prakhar')
index[vec_lsi]
index.output_prefix
index2 = similarities.Similarity.load('/home/prakhar/Documents/test/shard/prakhar')
index2.output_prefix
index2[vec_lsi]
#index2.output_prefix
#index2.output_prefix = '/home/prakhar/Documents/gentestOLD/prakhar.x'
#index2.check_moved()
#index[vec_lsi]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What the previous code does is bind all of numpy's tools to the name np. Now we can call numpy functions as np.<numpy_fun>. The name np is optional; you can change it, but you will need that name to access numpy functions as <new_name>.<numpy_fun>. Another option is to just use import numpy, in which case the functions are called as numpy.<numpy_fun>. To learn more about the module system you can check the link https
Step2: We are going to build an Array class that covers both matrices and vectors. From a computational point of view, a vector is a one-column matrix. In class we saw that it is convenient to think of matrices as transformations of vectors; however, from a computational point of view, since the addition and multiplication rules are similar, it is convenient to think of both as arrays, which is the traditional name in programming
Step3: It is possible to add matrices and multiply them by scalars
Step4: numpy matrices can be multiplied with the matmul function inside numpy
Step5: numpy arrays can be accessed with indices and slices
Step6: An entire row
Step7: An entire column
Step8: A sub-block (note that a slice n
Step9: In numpy we can get the dimensions of an array with numpy's shape field
Step10: Numpy is smart about handling simple lists as vectors
Step11: Starting from scratch...
Step12: The data field of an Array stores the array's list of lists. We need to implement a few methods for it to work as a linear algebra class.
Step13: Why these differences? Python secretly looks for a method called __repr__ when an object is evaluated without explicit printing, and __str__ when it is explicitly printed with print. For example
Step14: <span style = "color
Step15: In the validator I use a list comprehension to check that all rows have the same size. I also raise an error with the raise statement when this does not hold. This is an advanced usage, so if you do not have much Python experience do not worry too much about the details.
Step16: 3. Indexing and Item assignment
Step17: To access an element by index we need a __getitem__ method.
Step18: <span style = "color
Step19: <span style = "color
Step20: <span style = "color
Step21: HONESTLY, IT COULD NOT BE COOLER!
Step22: <span style = "color
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
x = [1,2,3]
y = [4,5,6]
x + y
B = np.array([[1,2,3], [4,5,6]]) # habiendo corrido import numpy as np
B + 2*B # Python sabe sumar y multiplicar arrays como algebra lineal
np.matmul(B.transpose(), B) # B^t*B
B[1,1]
B[1,:]
B[:,2]
B[0:2,0:2]
B.shape
vec = np.array([1,2,3])
print(vec)
class Array:
"Una clase minima para algebra lineal"
def __init__(self, list_of_rows):
"Constructor"
self.data = list_of_rows
self.shape = (len(list_of_rows), len(list_of_rows[0]))
A = Array([[1,2,3], [4,5,6]])
A.__dict__ # el campo escondido __dict__ permite acceder a las propiedades de clase de un objeto
A.data
A.shape
Array([[1,2,3], [4,5,6]])
print(Array([[1,2,3], [4,5,6]]))
np.array([[1,2,3], [4,5,6]])
print(np.array([[1,2,3], [4,5,6]]))
class TestClass:
def __init__(self):
pass # this means do nothing in Python
def say_hi(self):
print("Hey, I am just a normal method saying hi!")
def __repr__(self):
return "I am the special class method REPRESENTING a TestClass without printing"
def __str__(self):
return "I am the special class method for explicitly PRINTING a TestClass object"
x = TestClass()
x.say_hi()
x
print(x)
some_list = [1,2,3, 4, 5, 6]
[i**2 for i in some_list] # Elevar al cuadrado con listas de comprehension
class Array:
"Una clase minima para algebra lineal"
def __init__(self, list_of_rows):
"Constructor y validador"
# obtener dimensiones
self.data = list_of_rows
nrow = len(list_of_rows)
# ___caso vector: redimensionar correctamente
if not isinstance(list_of_rows[0], list):
nrow = 1
self.data = [[x] for x in list_of_rows]
# ahora las columnas deben estar bien aunque sea un vector
ncol = len(self.data[0])
self.shape = (nrow, ncol)
# validar tamano correcto de filas
if any([len(r) != ncol for r in self.data]):
raise Exception("Las filas deben ser del mismo tamano")
def __repr__(self):
"Ejercicio"
pass
def __str__(self):
"Ejercicio"
pass
Array([[1,2,3], [4,5]])
vec = Array([1,2,3])
vec.data
A = Array([[1,2], [3,4]])
A[0,0]
A[0,0] = 8
class Array:
"Una clase minima para algebra lineal"
def __init__(self, list_of_rows):
"Constructor y validador"
# obtener dimensiones
self.data = list_of_rows
nrow = len(list_of_rows)
# ___caso vector: redimensionar correctamente
if not isinstance(list_of_rows[0], list):
nrow = 1
self.data = [[x] for x in list_of_rows]
# ahora las columnas deben estar bien aunque sea un vector
ncol = len(self.data[0])
self.shape = (nrow, ncol)
# validar tamano correcto de filas
if any([len(r) != ncol for r in self.data]):
raise Exception("Las filas deben ser del mismo tamano")
def __getitem__(self, idx):
return self.data[idx[0]][idx[1]]
A = Array([[1,2],[3,4]])
A[0,1]
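# One possible way (added sketch) to also support item assignment, A[i, j] = value,
# which failed earlier because Array had no __setitem__; this is illustrative, not the
# notebook's official solution.
def _array_setitem(self, idx, value):
    self.data[idx[0]][idx[1]] = value
Array.__setitem__ = _array_setitem
A[0, 0] = 8
A.data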
np.zeros((3,6))
np.array([[1,2], [3,4]]).transpose()
"hola " + "tu"
[1,2,3] + [2,3,4]
np.array([1,2,3]) + np.array([2,3,4])
np.array([1,2,3]) + 10 # Broadcasted sum, es muy util
class Array:
"Una clase minima para algebra lineal"
def __init__(self, list_of_rows):
"Constructor y validador"
# obtener dimensiones
self.data = list_of_rows
nrow = len(list_of_rows)
# ___caso vector: redimensionar correctamente
if not isinstance(list_of_rows[0], list):
nrow = 1
self.data = [[x] for x in list_of_rows]
# ahora las columnas deben estar bien aunque sea un vector
ncol = len(self.data[0])
self.shape = (nrow, ncol)
# validar tamano correcto de filas
if any([len(r) != ncol for r in self.data]):
raise Exception("Las filas deben ser del mismo tamano")
def __add__(self, other):
"Hora de sumar"
if isinstance(other, Array):
if self.shape != other.shape:
raise Exception("Las dimensiones son distintas!")
rows, cols = self.shape
newArray = Array([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
newArray.data[r][c] = self.data[r][c] + other.data[r][c]
return newArray
elif isinstance(2, (int, float, complex)): # en caso de que el lado derecho sea solo un numero
rows, cols = self.shape
newArray = Array([[0. for c in range(cols)] for r in range(rows)])
for r in range(rows):
for c in range(cols):
newArray.data[r][c] = self.data[r][c] + other
return newArray
else:
return NotImplemented # es un tipo de error particular usado en estos metodos
A = Array([[1,2], [3,4]])
B = Array([[5,6], [7,8]])
C = A + B
C.data
D = A + 10
D.data
Array([[1,2], [3,4]]) + Array([[5,6, 5], [7,8,3]])
class Vector(Array): # declara que Vector es un tipo de Array
def __init__(self, list_of_numbers):
self.vdata = list_of_numbers
list_of_rows = [[x] for x in list_of_numbers]
return Array.__init__(self, list_of_rows)
def __repr__(self):
return "Vector(" + str(self.vdata) + ")"
def __str__(self):
return str(self.vdata)
def __add__(self, other):
new_arr = Array.__add__(self, other)
return Vector([x[0] for x in new_arr.data])
Vector([1,2,3]).__dict__
Vector([1,2,3])
Vector([1,2,3]) + Vector([5,-2,0])
Vector([1,2,3]) + 10
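# A possible extension (added sketch, not part of the original notebook): scalar
# multiplication for Vector, so that 3 * Vector([1, 2, 3]) also works.
def _vector_scalar_mul(self, scalar):
    return Vector([scalar * x for x in self.vdata])
Vector.__mul__ = _vector_scalar_mul
Vector.__rmul__ = _vector_scalar_mul
3 * Vector([1, 2, 3])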
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preparation (keras.dataset)
Step2: Training
Step3: Plotting Network Performance Trend
|
<ASSISTANT_TASK:>
Python Code:
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
#os.environ['THEANO_FLAGS'] = "device=gpu2"
from keras.models import load_model
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
from keras.optimizers import SGD
nb_classes = 10
# FC@512+relu -> DropOut(0.2) -> FC@512+relu -> DropOut(0.2) -> FC@nb_classes+softmax
# ... your Code Here
# %load solutions/sol_221_1.py
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
from keras.optimizers import SGD
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=SGD(),
metrics=['accuracy'])
from keras.datasets import mnist
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
X_train /= 255
X_test /= 255
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
# You can train the network yourself or simply load a saved model :P, for now!!
#network_history = model.fit(X_train, Y_train, batch_size=1000,
# nb_epoch=100, verbose=1, validation_data=(X_test, Y_test))
#model.save('example_MNIST_FC.h5')
model=load_model('example_MNIST_FC.h5')
model.summary()
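# Note: the plotting cells below use `network_history`, which only exists if the
# commented-out model.fit call above was actually run; when just loading the saved
# model, skip ahead to the prediction cell.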
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(network_history.history['loss'])
plt.plot(network_history.history['val_loss'])
plt.legend(['Training', 'Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(network_history.history['acc'])
plt.plot(network_history.history['val_acc'])
plt.legend(['Training', 'Validation'], loc='lower right')
import numpy as np
print(np.argmax(model.predict(X_test[5:10]),1))
print(y_test[5:10])
# Can you write a snippet that finds a misclassified sample in X_train and
# displays the image, correct classification and your prediction
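# One possible answer sketch (added), assuming the model misclassifies at least one
# training sample: find the first such sample and show it with the true label and
# the model's prediction.
train_preds = np.argmax(model.predict(X_train), axis=1)
train_true = np.argmax(Y_train, axis=1)
wrong_idx = np.where(train_preds != train_true)[0][0]
plt.imshow(X_train[wrong_idx].reshape(28, 28), cmap='gray')
plt.title("true: %d, predicted: %d" % (train_true[wrong_idx], train_preds[wrong_idx]))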
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hyperparameter setup
Step2: Dataset
Step3: Image augmentations
Step4: Encoder architecture
Step5: Supervised baseline model
Step6: Self-supervised model for contrastive pretraining
Step7: Supervised finetuning of the pretrained encoder
Step8: Comparison against the baseline
|
<ASSISTANT_TASK:>
Python Code:
import math
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
# Dataset hyperparameters
unlabeled_dataset_size = 100000
labeled_dataset_size = 5000
image_size = 96
image_channels = 3
# Algorithm hyperparameters
num_epochs = 20
batch_size = 525 # Corresponds to 200 steps per epoch
width = 128
temperature = 0.1
# Stronger augmentations for contrastive, weaker ones for supervised training
contrastive_augmentation = {"min_area": 0.25, "brightness": 0.6, "jitter": 0.2}
classification_augmentation = {"min_area": 0.75, "brightness": 0.3, "jitter": 0.1}
def prepare_dataset():
# Labeled and unlabeled samples are loaded synchronously
# with batch sizes selected accordingly
steps_per_epoch = (unlabeled_dataset_size + labeled_dataset_size) // batch_size
unlabeled_batch_size = unlabeled_dataset_size // steps_per_epoch
labeled_batch_size = labeled_dataset_size // steps_per_epoch
print(
f"batch size is {unlabeled_batch_size} (unlabeled) + {labeled_batch_size} (labeled)"
)
unlabeled_train_dataset = (
tfds.load("stl10", split="unlabelled", as_supervised=True, shuffle_files=True)
.shuffle(buffer_size=10 * unlabeled_batch_size)
.batch(unlabeled_batch_size)
)
labeled_train_dataset = (
tfds.load("stl10", split="train", as_supervised=True, shuffle_files=True)
.shuffle(buffer_size=10 * labeled_batch_size)
.batch(labeled_batch_size)
)
test_dataset = (
tfds.load("stl10", split="test", as_supervised=True)
.batch(batch_size)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
# Labeled and unlabeled datasets are zipped together
train_dataset = tf.data.Dataset.zip(
(unlabeled_train_dataset, labeled_train_dataset)
).prefetch(buffer_size=tf.data.AUTOTUNE)
return train_dataset, labeled_train_dataset, test_dataset
# Load STL10 dataset
train_dataset, labeled_train_dataset, test_dataset = prepare_dataset()
# Distorts the color distributions of images
class RandomColorAffine(layers.Layer):
def __init__(self, brightness=0, jitter=0, **kwargs):
super().__init__(**kwargs)
self.brightness = brightness
self.jitter = jitter
def get_config(self):
config = super().get_config()
config.update({"brightness": self.brightness, "jitter": self.jitter})
return config
def call(self, images, training=True):
if training:
batch_size = tf.shape(images)[0]
# Same for all colors
brightness_scales = 1 + tf.random.uniform(
(batch_size, 1, 1, 1), minval=-self.brightness, maxval=self.brightness
)
# Different for all colors
jitter_matrices = tf.random.uniform(
(batch_size, 1, 3, 3), minval=-self.jitter, maxval=self.jitter
)
color_transforms = (
tf.eye(3, batch_shape=[batch_size, 1]) * brightness_scales
+ jitter_matrices
)
images = tf.clip_by_value(tf.matmul(images, color_transforms), 0, 1)
return images
# Image augmentation module
def get_augmenter(min_area, brightness, jitter):
zoom_factor = 1.0 - math.sqrt(min_area)
return keras.Sequential(
[
keras.Input(shape=(image_size, image_size, image_channels)),
layers.Rescaling(1 / 255),
layers.RandomFlip("horizontal"),
layers.RandomTranslation(zoom_factor / 2, zoom_factor / 2),
layers.RandomZoom((-zoom_factor, 0.0), (-zoom_factor, 0.0)),
RandomColorAffine(brightness, jitter),
]
)
def visualize_augmentations(num_images):
# Sample a batch from a dataset
images = next(iter(train_dataset))[0][0][:num_images]
# Apply augmentations
augmented_images = zip(
images,
get_augmenter(**classification_augmentation)(images),
get_augmenter(**contrastive_augmentation)(images),
get_augmenter(**contrastive_augmentation)(images),
)
row_titles = [
"Original:",
"Weakly augmented:",
"Strongly augmented:",
"Strongly augmented:",
]
plt.figure(figsize=(num_images * 2.2, 4 * 2.2), dpi=100)
for column, image_row in enumerate(augmented_images):
for row, image in enumerate(image_row):
plt.subplot(4, num_images, row * num_images + column + 1)
plt.imshow(image)
if column == 0:
plt.title(row_titles[row], loc="left")
plt.axis("off")
plt.tight_layout()
visualize_augmentations(num_images=8)
# Define the encoder architecture
def get_encoder():
return keras.Sequential(
[
keras.Input(shape=(image_size, image_size, image_channels)),
layers.Conv2D(width, kernel_size=3, strides=2, activation="relu"),
layers.Conv2D(width, kernel_size=3, strides=2, activation="relu"),
layers.Conv2D(width, kernel_size=3, strides=2, activation="relu"),
layers.Conv2D(width, kernel_size=3, strides=2, activation="relu"),
layers.Flatten(),
layers.Dense(width, activation="relu"),
],
name="encoder",
)
# Baseline supervised training with random initialization
baseline_model = keras.Sequential(
[
keras.Input(shape=(image_size, image_size, image_channels)),
get_augmenter(**classification_augmentation),
get_encoder(),
layers.Dense(10),
],
name="baseline_model",
)
baseline_model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
)
baseline_history = baseline_model.fit(
labeled_train_dataset, epochs=num_epochs, validation_data=test_dataset
)
print(
"Maximal validation accuracy: {:.2f}%".format(
max(baseline_history.history["val_acc"]) * 100
)
)
# Define the contrastive model with model-subclassing
class ContrastiveModel(keras.Model):
def __init__(self):
super().__init__()
self.temperature = temperature
self.contrastive_augmenter = get_augmenter(**contrastive_augmentation)
self.classification_augmenter = get_augmenter(**classification_augmentation)
self.encoder = get_encoder()
# Non-linear MLP as projection head
self.projection_head = keras.Sequential(
[
keras.Input(shape=(width,)),
layers.Dense(width, activation="relu"),
layers.Dense(width),
],
name="projection_head",
)
# Single dense layer for linear probing
self.linear_probe = keras.Sequential(
[layers.Input(shape=(width,)), layers.Dense(10)], name="linear_probe"
)
self.encoder.summary()
self.projection_head.summary()
self.linear_probe.summary()
def compile(self, contrastive_optimizer, probe_optimizer, **kwargs):
super().compile(**kwargs)
self.contrastive_optimizer = contrastive_optimizer
self.probe_optimizer = probe_optimizer
# self.contrastive_loss will be defined as a method
self.probe_loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
self.contrastive_loss_tracker = keras.metrics.Mean(name="c_loss")
self.contrastive_accuracy = keras.metrics.SparseCategoricalAccuracy(
name="c_acc"
)
self.probe_loss_tracker = keras.metrics.Mean(name="p_loss")
self.probe_accuracy = keras.metrics.SparseCategoricalAccuracy(name="p_acc")
@property
def metrics(self):
return [
self.contrastive_loss_tracker,
self.contrastive_accuracy,
self.probe_loss_tracker,
self.probe_accuracy,
]
def contrastive_loss(self, projections_1, projections_2):
# InfoNCE loss (information noise-contrastive estimation)
# NT-Xent loss (normalized temperature-scaled cross entropy)
# Cosine similarity: the dot product of the l2-normalized feature vectors
projections_1 = tf.math.l2_normalize(projections_1, axis=1)
projections_2 = tf.math.l2_normalize(projections_2, axis=1)
similarities = (
tf.matmul(projections_1, projections_2, transpose_b=True) / self.temperature
)
# The similarity between the representations of two augmented views of the
# same image should be higher than their similarity with other views
batch_size = tf.shape(projections_1)[0]
contrastive_labels = tf.range(batch_size)
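        # For row i the positive pair sits at column i of the similarity matrix, so the target class index is simply i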
self.contrastive_accuracy.update_state(contrastive_labels, similarities)
self.contrastive_accuracy.update_state(
contrastive_labels, tf.transpose(similarities)
)
# The temperature-scaled similarities are used as logits for cross-entropy
# a symmetrized version of the loss is used here
loss_1_2 = keras.losses.sparse_categorical_crossentropy(
contrastive_labels, similarities, from_logits=True
)
loss_2_1 = keras.losses.sparse_categorical_crossentropy(
contrastive_labels, tf.transpose(similarities), from_logits=True
)
return (loss_1_2 + loss_2_1) / 2
def train_step(self, data):
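        # Two updates per step: a contrastive update of the encoder + projection head, then a linear-probe update on frozen encoder features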
(unlabeled_images, _), (labeled_images, labels) = data
# Both labeled and unlabeled images are used, without labels
images = tf.concat((unlabeled_images, labeled_images), axis=0)
# Each image is augmented twice, differently
augmented_images_1 = self.contrastive_augmenter(images, training=True)
augmented_images_2 = self.contrastive_augmenter(images, training=True)
with tf.GradientTape() as tape:
features_1 = self.encoder(augmented_images_1, training=True)
features_2 = self.encoder(augmented_images_2, training=True)
# The representations are passed through a projection mlp
projections_1 = self.projection_head(features_1, training=True)
projections_2 = self.projection_head(features_2, training=True)
contrastive_loss = self.contrastive_loss(projections_1, projections_2)
gradients = tape.gradient(
contrastive_loss,
self.encoder.trainable_weights + self.projection_head.trainable_weights,
)
self.contrastive_optimizer.apply_gradients(
zip(
gradients,
self.encoder.trainable_weights + self.projection_head.trainable_weights,
)
)
self.contrastive_loss_tracker.update_state(contrastive_loss)
        # Labels are only used in evaluation for an on-the-fly logistic regression
preprocessed_images = self.classification_augmenter(
labeled_images, training=True
)
with tf.GradientTape() as tape:
# the encoder is used in inference mode here to avoid regularization
            # and updating the batch normalization parameters if they are used
features = self.encoder(preprocessed_images, training=False)
class_logits = self.linear_probe(features, training=True)
probe_loss = self.probe_loss(labels, class_logits)
gradients = tape.gradient(probe_loss, self.linear_probe.trainable_weights)
self.probe_optimizer.apply_gradients(
zip(gradients, self.linear_probe.trainable_weights)
)
self.probe_loss_tracker.update_state(probe_loss)
self.probe_accuracy.update_state(labels, class_logits)
return {m.name: m.result() for m in self.metrics}
def test_step(self, data):
labeled_images, labels = data
# For testing the components are used with a training=False flag
preprocessed_images = self.classification_augmenter(
labeled_images, training=False
)
features = self.encoder(preprocessed_images, training=False)
class_logits = self.linear_probe(features, training=False)
probe_loss = self.probe_loss(labels, class_logits)
self.probe_loss_tracker.update_state(probe_loss)
self.probe_accuracy.update_state(labels, class_logits)
# Only the probe metrics are logged at test time
return {m.name: m.result() for m in self.metrics[2:]}
# Contrastive pretraining
pretraining_model = ContrastiveModel()
pretraining_model.compile(
contrastive_optimizer=keras.optimizers.Adam(),
probe_optimizer=keras.optimizers.Adam(),
)
pretraining_history = pretraining_model.fit(
train_dataset, epochs=num_epochs, validation_data=test_dataset
)
print(
"Maximal validation accuracy: {:.2f}%".format(
max(pretraining_history.history["val_p_acc"]) * 100
)
)
# Supervised finetuning of the pretrained encoder
finetuning_model = keras.Sequential(
[
layers.Input(shape=(image_size, image_size, image_channels)),
get_augmenter(**classification_augmentation),
pretraining_model.encoder,
layers.Dense(10),
],
name="finetuning_model",
)
finetuning_model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
)
finetuning_history = finetuning_model.fit(
labeled_train_dataset, epochs=num_epochs, validation_data=test_dataset
)
print(
"Maximal validation accuracy: {:.2f}%".format(
max(finetuning_history.history["val_acc"]) * 100
)
)
# The classification accuracies of the baseline and the pretraining + finetuning process:
def plot_training_curves(pretraining_history, finetuning_history, baseline_history):
for metric_key, metric_name in zip(["acc", "loss"], ["accuracy", "loss"]):
plt.figure(figsize=(8, 5), dpi=100)
plt.plot(
baseline_history.history[f"val_{metric_key}"], label="supervised baseline"
)
plt.plot(
pretraining_history.history[f"val_p_{metric_key}"],
label="self-supervised pretraining",
)
plt.plot(
finetuning_history.history[f"val_{metric_key}"],
label="supervised finetuning",
)
plt.legend()
plt.title(f"Classification {metric_name} during training")
plt.xlabel("epochs")
plt.ylabel(f"validation {metric_name}")
plot_training_curves(pretraining_history, finetuning_history, baseline_history)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'toplevel')
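# Each cell below selects one documentation property via DOC.set_id and is meant to record its value via DOC.set_value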
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SunPy version (stable) 0.5
Step2: We can find out when this event occurred
Step3: and where it occurred
Step4: Lightcurves!
Step5: The data is stored in a standard place
Step6: This is a pandas dataframe! Provides lots of additional functionality. For example
Step7: Compares well to the official max from the HEK
Step8: Meta data is also stored in a standard place
Step9: This is a dictionary like the hek results so...
Step10: Solar Images in SunPy
Step11: Then do a search.
Step12: It works, now lets see how many results we have!
Step13: That's way too many!
Step14: Let's also grab another wavelength for later.
Step15: Let's download this data!
Step16: For SunPy the top level name-space is kept clean. Importing SunPy does not give you access to much. You need to import specific names. SciPy is the same.
Step17: Maps contain both the image data and the metadata associated with the image, this metadata currently does not deviate much from the standard FITS WCS keywords, but presented in a instrument-independent manner.
Step18: SunPy Maps!
Step19: The data (stored in a numpy array)
Step20: Because it is just a numpy array you have access to all of those function
Step21: The original metadata (stored in a dictionary)
Step22: We also provide quick access to some key metadata values as object variables (these are shortcuts)
Step23: Maps also provide some nice map specific functions such as submaps. Let's zoom in on the flare location which was given to us by the HEK.
Step24: The default image scale is definitely not right. Let's fix that so we can see the flare region better.
Step25: Composite Maps
Step26: Some other topics...
Step27: Not all constants have a short-cut assigned to them (as above). The rest of the constants are stored in a dictionary. The following code grabs the dictionary and gets all of the keys.
Step28: These are astropy constants which are a subclass of Quantities (numbers with units) which are a great idea.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.rc('savefig', dpi=120)
import warnings
warnings.simplefilter("ignore", Warning)
from matplotlib import dates
import sunpy
sunpy.system_info()
from sunpy.net import hek
client = hek.HEKClient()
tstart, tend = '2014/01/01 00:00:00', '2014/01/02 00:00:00'
result = client.query(hek.attrs.Time(tstart, tend),
hek.attrs.EventType('FL'),
hek.attrs.FRM.Name=='SSW Latest Events')
len(result)
result[0]
for res in result:
print(res.get('fl_goescls'))
result = client.query(hek.attrs.Time(tstart, tend),
hek.attrs.EventType('FL'),
hek.attrs.FRM.Name=='SSW Latest Events',
hek.attrs.FL.GOESCls>'M')
len(result)
result
result[0].get('event_peaktime')
result[0].get('hpc_coord')
from sunpy.time import TimeRange, parse_time
from datetime import timedelta
tmax = parse_time(result[0].get('event_peaktime'))
tmax
tr = TimeRange(tmax - timedelta(minutes=30), tmax + timedelta(minutes=30))
tr
from sunpy.lightcurve import GOESLightCurve
goes = GOESLightCurve.create(tr)
goes.peek()
goes.data
print('The max flux is {flux:2.5f} at {time}'.format(flux=goes.data['xrsb'].max(), time=goes.data['xrsb'].idxmax()))
str(tmax)
goes.peek()
plt.axhline(goes.data['xrsb'].max())
plt.axvline(goes.data['xrsb'].idxmax())
goes.meta
goes.meta.get('COMMENT')
goes.data.resample('10s', how='mean')
from sunpy.net import vso
client=vso.VSOClient()
recs = client.query(vso.attrs.Time(tr), vso.attrs.Instrument('AIA'))
recs.num_records()
recs = client.query(vso.attrs.Time('2014/01/01 18:52:08', '2014/01/01 18:52:15'),
vso.attrs.Instrument('AIA'),
vso.attrs.Wave(171,171))
recs.num_records()
recs.show()
recs = client.query(vso.attrs.Time('2014/01/01 18:52:08', '2014/01/01 18:52:15'),
vso.attrs.Instrument('AIA'),
vso.attrs.Wave(94,171))
recs.num_records()
f = client.get(recs, methods = ('URL-FILE_Rice')).wait()
f
from sunpy.map import Map
aia = Map(f[1])
aia
aia.peek()
aia.data
type(aia.data)
aia.mean(),aia.max(),aia.min()
aia.data.std()
aia.data.shape
aia.meta
aia.meta.keys()
aia.meta.get('rsun_obs')
print(aia.date, aia.coordinate_system, aia.detector, aia.dsun)
result[0].get('hpc_coord')
point = [665.04, -233.4096]
dx = 50
dy = 50
xrange = [point[0] - dx, point[0] + dx]
yrange = [point[1] - dy, point[1] + dy]
aia.submap(xrange,yrange).peek()
plt.plot(point[0], point[1], '+')
plt.xlim(xrange)
plt.ylim(yrange)
smap = aia.submap(xrange,yrange)
import matplotlib.colors as colors
norm = colors.Normalize(0, 3000)
smap.plot(norm=norm)
plt.plot(point[0], point[1], '+')
smap.draw_grid(grid_spacing=1)
plt.colorbar()
aia131 = Map(f[0])
aia131
smap131 = aia131.submap(xrange, yrange)
smap131.peek()
norm = colors.Normalize(0, 4000)
smap131.plot(norm=norm)
plt.colorbar()
smap171 = smap
compmap = Map(smap171, smap131, composite=True)
levels = np.arange(0,100,5)
print(levels)
compmap.set_levels(1, levels, percent=True)
compmap.set_mpl_color_normalizer(0, norm)
compmap.set_colors(1, plt.cm.Reds)
compmap.plot(norm=norm)
plt.show()
from sunpy.sun import constants as solar_constants
solar_constants.mass
print(solar_constants.mass)
(solar_constants.mass/solar_constants.volume).cgs
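# Note: the next line adds quantities with incompatible units (a volume plus a density), so it is expected to raise an astropy units error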
solar_constants.volume + solar_constants.density
solar_constants.physical_constants.keys()
type(solar_constants.mass)
from astropy import units as u
u.keV
u.keV.decompose()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Full Rank Markov
Step2: Full rank Markov transition probability matrix
Step3: Full rank first mean passage times
Step4: Geographic Rank Markov
Step5: Visualizing first mean passage time from/to California/Mississippi
Step6: Visualizing sojourn time for each US state
|
<ASSISTANT_TASK:>
Python Code:
import libpysal as ps
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import pandas as pd
import geopandas as gpd
from giddy.markov import FullRank_Markov
income_table = pd.read_csv(ps.examples.get_path("usjoin.csv"))
income_table.head()
pci = income_table[list(map(str,range(1929,2010)))].values
pci
m = FullRank_Markov(pci)
m.ranks
m.transitions
m.p
m.fmpt
m.sojourn_time
df_fullrank = pd.DataFrame(np.c_[m.p.diagonal(),m.sojourn_time], columns=["Staying Probability","Sojourn Time"], index = np.arange(m.p.shape[0])+1)
df_fullrank.head()
df_fullrank.plot(subplots=True, layout=(1,2), figsize=(15,5))
sns.distplot(m.fmpt.flatten(),kde=False)
from giddy.markov import GeoRank_Markov, Markov, sojourn_time
gm = GeoRank_Markov(pci)
gm.transitions
gm.p
gm.sojourn_time[:10]
gm.sojourn_time
gm.fmpt
income_table["geo_sojourn_time"] = gm.sojourn_time
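# fmpt[i, j] holds the first mean passage time from state i to state j, so column i gives times *to* state i and row i gives times *from* state i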
i = 0
for state in income_table["Name"]:
income_table["geo_fmpt_to_" + state] = gm.fmpt[:,i]
income_table["geo_fmpt_from_" + state] = gm.fmpt[i,:]
i = i + 1
income_table.head()
geo_table = gpd.read_file(ps.examples.get_path('us48.shp'))
# income_table = pd.read_csv(libpysal.examples.get_path("usjoin.csv"))
complete_table = geo_table.merge(income_table,left_on='STATE_NAME',right_on='Name')
complete_table.head()
complete_table.columns
fig, axes = plt.subplots(nrows=2, ncols=2,figsize = (15,7))
target_states = ["California","Mississippi"]
directions = ["from","to"]
for i, direction in enumerate(directions):
for j, target in enumerate(target_states):
ax = axes[i,j]
col = direction+"_"+target
complete_table.plot(ax=ax,column = "geo_fmpt_"+ col,cmap='OrRd',
scheme='quantiles', legend=True)
ax.set_title("First Mean Passage Time "+direction+" "+target)
ax.axis('off')
leg = ax.get_legend()
leg.set_bbox_to_anchor((0.8, 0.15, 0.16, 0.2))
plt.tight_layout()
fig, axes = plt.subplots(nrows=1, ncols=2,figsize = (15,7))
schemes = ["Quantiles","Equal_Interval"]
for i, scheme in enumerate(schemes):
ax = axes[i]
complete_table.plot(ax=ax,column = "geo_sojourn_time",cmap='OrRd',
scheme=scheme, legend=True)
ax.set_title("Rank Sojourn Time ("+scheme+")")
ax.axis('off')
leg = ax.get_legend()
leg.set_bbox_to_anchor((0.8, 0.15, 0.16, 0.2))
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: In order to complete the project, the following steps shown in the diagram below must be followed
Step3: 1. Audit data
Step4: 2. Data Overview with MongoDB
Step5: There is code for running mongoimport directly from Python (see https
Step6: 3. Additional data exploration using MongoDB queries
Step7: 4. Ideas for additional improvements
Step8: This can be done either by having the information first-hand, by entering the data one by one, or by using other map data such as https
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image("Malaga_map.jpg")
Image("workflow.jpg")
#Import all necesary modules as follows:
#import flexible container object, designed to store hierarchical data structures in memory
import xml.etree.cElementTree as ET
#import function to supply missing values
from collections import defaultdict
#import Regular expression operations
import re
#import “pretty-print” arbitrary Python data structures in a form which can be used as input to the interpreter
import pprint
#import Codec registry and base classes
import codecs
#import JSON encoder and decoder
import json
import pymongo
import os
#Create a sample file in order to perform the tests and be able to visualize easily
OSM_FILE = "Malaga.osm" # OSM file for Malaga
SAMPLE_FILE = "sample.osm"
k = 10 # Parameter: take every k-th top level element
def get_element(osm_file, tags=('node', 'way', 'relation')):
    """Yield element if it is the right type of tag.

    Reference:
    http://stackoverflow.com/questions/3095434/inserting-newlines-in-xml-file-generated-via-xml-etree-elementtree-in-python
    """
context = iter(ET.iterparse(osm_file, events=('start', 'end')))
_, root = next(context)
for event, elem in context:
if event == 'end' and elem.tag in tags:
yield elem
root.clear()
with open(SAMPLE_FILE, 'wb') as output:
output.write('<?xml version="1.0" encoding="UTF-8"?>\n')
output.write('<osm>\n ')
# Write every kth top level element
for i, element in enumerate(get_element(OSM_FILE)):
if i % k == 0:
output.write(ET.tostring(element, encoding='utf-8'))
output.write('</osm>')
#Parse file and count number of unique element types
def count_tags(filename):
tags = {}
for event, elem in ET.iterparse(filename):
if elem.tag in tags:
tags[elem.tag] += 1
else:
tags[elem.tag] = 1
return tags
view_tags = count_tags(OSM_FILE)
# pprint.pprint(view_tags)
from IPython.display import display
from audit_atributes import audit, update_names, audit_pc, update_postal_codes
OSM_FILE ='./Malaga.osm'
CREATED = [ "version", "changeset", "timestamp", "user", "uid"]
street_types = audit(OSM_FILE)
# display(street_types)
#Update street type
updates = update_names(OSM_FILE)
# display(updates)
updates_pc = update_postal_codes(OSM_FILE)
# display(updates_pc)
import os
print "The downloaded OSM file is {} MB".format(os.path.getsize('Malaga.osm')/1.0e6) # convert from bytes to megabytes
from pymongo import MongoClient
db_name = 'openstreetmap1'
# Connect to Mongo DB
client = MongoClient('localhost:27017')
# Database 'openstreetmap' will be created
db = client[db_name]
collection = db['Malaga']
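# The cleaned JSON export of the OSM data is assumed to have been loaded into this collection beforehand (e.g. with mongoimport)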
# Number of documents
documents = collection.find().count()
display(documents)
#Number of unique users
users=len(collection.distinct('created.user'))
display(users)
#number of nodes & ways
nodes=collection.find({'type':'node'}).count()
ways=collection.find({'type':'way'}).count()
display(nodes)
display(ways)
#display types and number of nodes & ways
node_way = collection.aggregate([
{"$group" : {"_id" : "$type", "count" : {"$sum" : 1}}}])
pprint.pprint(list(node_way))
#top3 contributors to the map
top3 = collection.aggregate([{ '$group' : {'_id' : '$created.user',
'count' : { '$sum' : 1}}},
{ '$sort' : {'count' : -1}},
{ '$limit' : 3 }])
display(list(top3))
#number of documents with street addresses
addresses=collection.find({'address.street': {'$exists': 1}}).count()
print(addresses)
#top3 postal codes
top10_pc = collection.aggregate([{ '$group' : {'_id' : '$address.postcode',
'count' : { '$sum' : 1}}},
{ '$sort' : {'count' : -1}},
{ '$limit' : 3 }])
display(list(top10_pc))
#list all postal codes sorted by frequency
postal_codes = collection.aggregate([
{"$match" : {"address.postcode" : {"$exists" : 1}}}, \
{"$group" : {"_id" : "$address.postcode", "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}])
# pprint.pprint(list(postal_codes))
#list top 10 street names in the database
streets = collection.aggregate([
{"$match" : {"address.street" : {"$exists" : 1}}}, \
{"$group" : {"_id" : "$address.street", "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}},
{"$limit":10}])
pprint.pprint(list(streets))
#Top 10 amenities
top10_amenities = collection.aggregate([{"$match":{"amenity":{"$exists":1}}},
{"$group":{"_id":"$amenity","count":{"$sum":1}}},
{"$sort":{"count":-1}},
{"$limit":10}])
display(list(top10_amenities))
#Top 5 building types
type_buildings = collection.aggregate([
{'$match': {'building': {'$exists': 1}}},
{'$group': { '_id': '$building','count': {'$sum': 1}}},
{'$sort': {'count': -1}}, {'$limit': 5}
])
pprint.pprint(list(type_buildings))
#Top 3 cult buildings
religion_buildings = collection.aggregate([
{"$match" : {"amenity" : "place_of_worship"}}, \
{"$group" : {"_id" : {"religion" : "$religion", "denomination" : "$denomination"}, "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}, {'$limit': 3}])
pprint.pprint(list(religion_buildings))
#Top 10 leisures
leisures = collection.aggregate([{"$match" : {"leisure" : {"$exists" : 1}}}, \
{"$group" : {"_id" : "$leisure", "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}, \
{"$limit" : 10}])
pprint.pprint(list(leisures))
#Top 10 universities
universities = collection.aggregate([
{"$match" : {"amenity" : "university"}}, \
{"$group" : {"_id" : {"name" : "$name"}, "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}},
{"$limit":10}
])
# pprint.pprint(list(universities))
#Top 10 cuisines
restaurant = collection.aggregate([
{"$match":{"cuisine":{"$exists":1},"amenity":"restaurant"}},
{"$group":{"_id":"$cuisine","count":{"$sum":1}}},
{"$sort":{"count":-1}}, {"$limit":10} ])
# pprint.pprint(list(restaurant))
noaddresses=collection.find({'address.street': {'$exists': 0}}).count()
display(noaddresses)
Image("improvement_openstreetmap.jpg")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction to Variables
Step2: Create a variable
Step3: Variables are defined and operate much like tensors; in fact, they are a data structure backed by tf.Tensor. Like tensors, variables have a dtype and a shape, and can be exported to NumPy.
Step4: Most tensor operations work on variables as expected, although variables cannot be reshaped.
Step5: As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.
Step6: If you use a variable like a tensor in operations, you will usually operate on the backing tensor.
Step7: Lifecycles, naming, and watching
Step8: Variable names are preserved when saving and loading models. By default, variables in models acquire unique names automatically, so you don't need to assign them yourself unless you want to.
Step9: Placing variables and tensors
Step10: You can set the location of a variable or tensor on one device and perform the computation on another device. This introduces delay, because the data needs to be copied between the two devices.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
# Uncomment to see where your variables get placed (see below)
# tf.debugging.set_log_device_placement(True)
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)
# Variables can be all kinds of types, just like tensors
bool_variable = tf.Variable([False, False, False, True])
complex_variable = tf.Variable([5 + 4j, 6 + 1j])
print("Shape: ",my_variable.shape)
print("DType: ",my_variable.dtype)
print("As NumPy: ", my_variable.numpy())
print("A variable:",my_variable)
print("\nViewed as a tensor:", tf.convert_to_tensor(my_variable))
print("\nIndex of highest value:", tf.argmax(my_variable))
# This creates a new tensor; it does not reshape the variable.
print("\nCopying and reshaping: ", tf.reshape(my_variable, ([1,4])))
a = tf.Variable([2.0, 3.0])
# This will keep the same dtype, float32
a.assign([1, 2])
# Not allowed as it resizes the variable:
try:
a.assign([1.0, 2.0, 3.0])
except Exception as e:
print(f"{type(e).__name__}: {e}")
a = tf.Variable([2.0, 3.0])
# Create b based on the value of a
b = tf.Variable(a)
a.assign([5, 6])
# a and b are different
print(a.numpy())
print(b.numpy())
# There are other versions of assign
print(a.assign_add([2,3]).numpy()) # [7. 9.]
print(a.assign_sub([7,9]).numpy()) # [0. 0.]
# Create a and b; they have the same value but are backed by different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but different value
# Note that the scalar add is broadcast
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having the same name
print(a == b)
step_counter = tf.Variable(1, trainable=False)
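# trainable=False excludes the variable from trainable_variables, so optimizers will not update it (useful for counters)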
with tf.device('CPU:0'):
# Create some tensors
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
with tf.device('CPU:0'):
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.Variable([[1.0, 2.0, 3.0]])
with tf.device('GPU:0'):
# Element-wise multiply
k = a * b
print(k)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By doing this we get a few variables initialized. First, a symmetric transition count matrix, $\mathbf{N}$, where we see that the most frequent transitions are those within metastable states (corresponding to the terms in the diagonal $N_{ii}$). Non-diagonal transitions are much less frequent (i.e. $N_{ij}\ll N_{ii}$ for all $i\neq j$).
Step2: Committors and fluxes
Step3: We also obtain the flux matrix, $\mathbf{J}$, containing local fluxes ($J_{ji}=J_{i\rightarrow j}$) for the different edges in the network. The signs represent the direction of the transition
Step4: Paths through the network
Step5: We can exhaustively enumerate the paths and check whether the fluxes add up to the total flux.
Step6: So indeed the cumulative flux is equal to the total flux we estimated before.
Step7: Highest flux paths
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import fourstate
import itertools
import networkx as nx
import numpy as np
import operator
bhs = fourstate.FourState()
fig, ax = plt.subplots()
ax.bar([0.5,1.5,2.5], -1./bhs.evals[1:], width=1)
ax.set_xlabel(r'Eigenvalue', fontsize=16)
ax.set_ylabel(r'$\tau_i$', fontsize=18)
ax.set_xlim([0,4])
plt.show()
bhs.run_commit()
print " j J_j(<-) J_j(->)"
print " - -------- --------"
for i in [1,2]:
print "%2i %10.4e %10.4e"%(i, np.sum([bhs.J[i,x] for x in range(4) if bhs.pfold[x] < bhs.pfold[i]]),\
np.sum([bhs.J[x,i] for x in range(4) if bhs.pfold[x] > bhs.pfold[i]]))
import tpt_functions
Jnode, Jpath = tpt_functions.gen_path_lengths(range(4), bhs.J, bhs.pfold, \
bhs.sum_flux, [3], [0])
JpathG = nx.DiGraph(Jpath.transpose())
print Jnode
print Jpath
tot_flux = 0
paths = {}
k = 0
for path in nx.all_simple_paths(JpathG, 0, 3):
paths[k] ={}
paths[k]['path'] = path
f = bhs.J[path[1],path[0]]
print "%2i -> %2i: %10.4e "%(path[0], path[1], \
bhs.J[path[1],path[0]])
for i in range(2, len(path)):
print "%2i -> %2i: %10.4e %10.4e"%(path[i-1], path[i], \
bhs.J[path[i],path[i-1]], Jnode[path[i-1]])
f *= bhs.J[path[i],path[i-1]]/Jnode[path[i-1]]
tot_flux += f
paths[k]['flux'] = f
print " J(path) = %10.4e"%f
print
k+=1
print " Cumulative flux: %10.4e"%tot_flux
sorted_paths = sorted(paths.items(), key=operator.itemgetter(1))
sorted_paths.reverse()
k = 1
for path in sorted_paths:
print k, ':', path[1]['path'], ':', 'flux = %g'%path[1]['flux']
k +=1
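# Flux decomposition: repeatedly extract the current highest-flux path, subtract its flux from the network, and stop when source and sink are disconnected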
while True:
Jnode, Jpath = tpt_functions.gen_path_lengths(range(4), bhs.J, bhs.pfold, \
bhs.sum_flux, [3], [0])
# generate nx graph from matrix
JpathG = nx.DiGraph(Jpath.transpose())
# find shortest path
try:
path = nx.dijkstra_path(JpathG, 0, 3)
pathlength = nx.dijkstra_path_length(JpathG, 0, 3)
print " shortest path:", path, pathlength
except nx.NetworkXNoPath:
print " No path for %g -> %g\n Stopping here"%(0, 3)
break
# calculate contribution to flux
f = bhs.J[path[1],path[0]]
print "%2i -> %2i: %10.4e "%(path[0], path[1], bhs.J[path[1],path[0]])
path_fluxes = [f]
for j in range(2, len(path)):
i = j - 1
print "%2i -> %2i: %10.4e %10.4e"%(path[i], path[j], \
bhs.J[path[j],path[i]], \
bhs.J[path[j],path[i]]/Jnode[path[i]])
f *= bhs.J[path[j],path[i]]/Jnode[path[i]]
path_fluxes.append(bhs.J[path[j],path[i]])
# find bottleneck
ib = np.argmin(path_fluxes)
print "bottleneck: %2i -> %2i"%(path[ib],path[ib+1])
# remove flux from edges
for j in range(1,len(path)):
i = j - 1
bhs.J[path[j],path[i]] -= f
# numerically there may be some leftover flux in bottleneck
bhs.J[path[ib+1],path[ib]] = 0.
bhs.sum_flux -= f
print ' flux from path ', path, ': %10.4e'%f
print ' fluxes', path_fluxes
print ' leftover flux: %10.4e\n'%bhs.sum_flux
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: On the one extreme, a linear relationship is underfit; on the other, we see that including a very large number of polynomial terms is clearly overfitting the data.
Step2: We can select an appropriate polynomial order for the model using cross-validation, in which we hold out a testing subset from our dataset, fit the model to the remaining data, and evaluate its performance on the held-out subset.
Step3: A natural criterion to evaluate model performance is root mean square error.
Step4: We can now evaluate the model at varying polynomial degrees, and compare their fit.
Step5: In the cross-validation above, notice that the error is high for both very low and very high polynomial values, while training error declines monotonically with degree. The cross-validation error is composed of two components
Step6: We can now perform k-fold cross-validation, and report the average error over all of the folds.
Step7: Bootstrapping
Step8: Sampling is even easier in pandas; DataFrame and Series objects have sample methods that allow for sampling without the need for outside functions.
Step9: We can use either method to generate a sample with replacement, which we can use when bootstrapping.
Step10: We regard S as an "estimate" of population P
Step11: Bootstrap Estimates
Step12: Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of T
Step13: Bootstrap error
Step14: Exercise
Step15: Missing Data
Step16: For three variables in the dataset, there are incomplete records.
Step17: Strategies for dealing with missing data
Step18: This approach may be reasonable under the MCAR assumption, but may induce bias under a MAR scenario, whereby missing values may differ systematically relative to non-missing values, making the particular summary statistic used for imputation biased as a mean/median/modal value for the missing values.
Step19: We will try to impute missing values for the mother's high school education indicator variable, which takes values of 0 for no high school diploma, or 1 for high school diploma or greater. The appropriate model to predict binary variables is a logistic regression. We will use the scikit-learn implementation, LogisticRegression.
Step20: To keep things simple, we will only use variables that are themselves complete to build the predictive model, hence our subset of predictors will exclude family involvement score (family_inv) and previous disability (prev_disab).
Step21: Next, we scale the predictor variables to range from 0 to 1, to improve the performance of the regression model.
Step22: The training and test sets in this case will be the non-missing and missing values, respectively, since we want to use supervised learning to build our predictive model.
Step23: Next, we create a LogisticRegression model, and fit it using the non-missing observations.
Step24: Conveniently, scikit-learn models have predict methods for generating predictions from the model, using new data. Here, we will pass the predictor values for the subset with mother_hs missing.
Step25: These values can then be inserted in place of the missing values, and an analysis can be performed on the entire dataset.
Step26: Surprisingly few imputations are required to achieve reasonable estimates, with 3-10 usually sufficient. We will use 3.
Step27: Now we can perform 3 separate analyses, using the method of our choice, each based upon a different set of imputed values.
Step28: Inference for each coefficient is then based on the average of these three estimates.
Step29: We can see that we obtain somewhat different estimates when we use multiple imputation versus complete case analysis
Step30: Estimates from mean imputation are more similar.
Step31: Exercise
|
<ASSISTANT_TASK:>
Python Code:
salmon = pd.read_table("../data/salmon.dat", delim_whitespace=True, index_col=0)
plt.scatter(x=salmon.spawners, y=salmon.recruits)
fig, axes = plt.subplots(1, 2, figsize=(14,6))
xvals = np.arange(salmon.spawners.min(), salmon.spawners.max())
fit1 = np.polyfit(salmon.spawners, salmon.recruits, 1)
p1 = np.poly1d(fit1)
axes[0].plot(xvals, p1(xvals))
axes[0].scatter(x=salmon.spawners, y=salmon.recruits)
fit15 = np.polyfit(salmon.spawners, salmon.recruits, 14)
p15 = np.poly1d(fit15)
axes[1].plot(xvals, p15(xvals))
axes[1].scatter(x=salmon.spawners, y=salmon.recruits)
test_set = salmon.sample(frac=0.3)
train_set = salmon.drop(test_set.index)
def rmse(x, y, coefs):
yfit = np.polyval(coefs, x)
return np.sqrt(np.mean((y - yfit) ** 2))
# suppress warnings from Polyfit
import warnings
warnings.filterwarnings('ignore', message='Polyfit*')
degrees = np.arange(13)
train_err = np.zeros(len(degrees))
validation_err = np.zeros(len(degrees))
for i, d in enumerate(degrees):
p = np.polyfit(train_set.spawners, train_set.recruits, d)
train_err[i] = rmse(train_set.spawners, train_set.recruits, p)
validation_err[i] = rmse(test_set.spawners, test_set.recruits, p)
fig, ax = plt.subplots()
ax.plot(degrees, validation_err, lw=2, label = 'cross-validation error')
ax.plot(degrees, train_err, lw=2, label = 'training error')
ax.legend(loc=0)
ax.set_xlabel('degree of fit')
ax.set_ylabel('rms error')
# Write your answer here
from sklearn.cross_validation import cross_val_score, KFold
import warnings
warnings.filterwarnings('ignore', message='Polyfit*')
k = 5
degrees = np.arange(8)
k_fold_err = np.empty(len(degrees))
for i, d in enumerate(degrees):
error = np.empty(k)
#for j, fold in enumerate(gen_k_folds(salmon, k)):
for j, fold in enumerate(KFold(len(salmon), n_folds=k)):
training, validation = fold
y_train, x_train = salmon.values[training].T
y_test, x_test = salmon.values[validation].T
p = np.polyfit(x_train, y_train, d)
error[j] = rmse(x_test, y_test, p)
k_fold_err[i] = error.mean()
fig, ax = plt.subplots()
ax.plot(degrees, k_fold_err, lw=2)
ax.set_xlabel('degree of fit')
ax.set_ylabel('average rms error')
titanic = pd.read_excel("../data/titanic.xls", "titanic")
np.random.permutation(titanic.name)[:5]
titanic.name.sample(n=5)
titanic.name.sample(n=5, replace=True)
# Sample size
n = 10
# Bootstrap replicates
R = 1000
# Original sample (n=10)
data = np.random.normal(size=n)
# 1000 bootstrap samples of size 10
s = [data[np.random.randint(0,n,n)].mean() for i in range(R)]
_ = plt.hist(s, bins=30)
boot_mean = np.sum(s)/R
boot_mean
boot_mean_var = ((np.array(s) - boot_mean) ** 2).sum() / (R-1)
boot_mean_var
boot_mean - np.mean(data)
s_sorted = np.sort(s)
s_sorted[:10]
s_sorted[-10:]
alpha = 0.05
s_sorted[[int((R+1)*alpha/2), int((R+1)*(1-alpha/2))]]  # indices must be integers
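# A hedged alternative (not in the original notebook): the same percentile
# interval can be read off directly with np.percentile, avoiding the manual
# sort-and-index bookkeeping above.
ci_lower, ci_upper = np.percentile(s, [100*alpha/2, 100*(1 - alpha/2)])
print(ci_lower, ci_upper)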
# Write your answer here
test_scores = pd.read_csv('../data/test_scores.csv', index_col=0)
test_scores.head()
test_scores.isnull().sum(0)
test_scores.siblings.mean()
siblings_imputed = test_scores.siblings.fillna(test_scores.siblings.mean())
test_scores.dropna().corr(method='spearman')
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
impute_subset = test_scores.drop(labels=['family_inv','prev_disab','score'], axis=1)
y = impute_subset.pop('mother_hs').values
X = StandardScaler().fit_transform(impute_subset.astype(float))
missing = np.isnan(y)
mod = LogisticRegression()
mod.fit(X[~missing], y[~missing])
mother_hs_pred = mod.predict(X[missing])
mother_hs_pred
mod2 = LogisticRegression(C=1, penalty='l1')
mod2.fit(X[~missing], y[~missing])
mod2.predict(X[missing])
mod2 = LogisticRegression(C=0.4, penalty='l1')
mod2.fit(X[~missing], y[~missing])
mod2.predict(X[missing])
mother_hs_imp = []
for C in 0.1, 0.4, 2:
mod = LogisticRegression(C=C, penalty='l1')
mod.fit(X[~missing], y[~missing])
imputed = mod.predict(X[missing])
mother_hs_imp.append(imputed)
mother_hs_imp
from sklearn import linear_model
coefficients = []
for imputes in mother_hs_imp:
regr = linear_model.LinearRegression()
X = test_scores.drop(labels=['family_inv','prev_disab'], axis=1)
X.loc[missing, 'mother_hs'] = imputes
y = X.pop('score')
regr.fit(X, y)
coefficients.append(regr.coef_)
coeff_labels = ['male',
'siblings',
'non_english',
'age_test',
'non_severe_hl',
'mother_hs',
'early_ident',
'non_white']
coef_df = pd.DataFrame(coefficients, columns=coeff_labels)
coef_df
coef_df.mean()
regr_complete = linear_model.LinearRegression()
X_complete = test_scores.drop(labels=['family_inv','prev_disab'], axis=1).dropna()
y_complete = X_complete.pop('score')
regr_complete.fit(X_complete, y_complete)
pd.Series(regr_complete.coef_, index=coeff_labels)
regr_mean = linear_model.LinearRegression()
X_mean = test_scores.drop(labels=['family_inv','prev_disab'], axis=1)
X_mean = X_mean.fillna(X_mean.mean())
y_mean = X_mean.pop('score')
regr_mean.fit(X_mean, y_mean)
pd.Series(regr_mean.coef_, index=coeff_labels)
titanic = pd.read_excel("../data/titanic.xls", "titanic")
titanic.isnull().sum()
# Write your answer here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This SVM is 67% accurate.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn.model_selection import cross_val_score
df = pd.read_csv('../scikit/tweets.csv')
target = df['is_there_an_emotion_directed_at_a_brand_or_product']
text = df['tweet_text']
# We need to remove the empty rows from the text before we pass into CountVectorizer
fixed_text = text[pd.notnull(text)]
fixed_target = target[pd.notnull(text)]
# Do the feature extraction
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer() # initialize the count vectorizer
count_vect.fit(fixed_text) # set up the columns for the feature matrix
counts = count_vect.transform(fixed_text) # counts is the feature matrix
from sklearn.svm import LinearSVC
# Build a classifier using the LinearSVC algorithm
clf = LinearSVC() # initialize our classifier
clf.fit(counts, fixed_target) # fit our classifier to the training data
scores = cross_val_score(clf, counts, fixed_target, cv=10)
print(scores)
print(scores.mean())
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=50) # n_estimators is the number of trees
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, counts, fixed_target, cv=10)
print(scores)
print(scores.mean())
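# A hedged follow-up (not part of the original lesson): fitting the forest on the
# full count matrix lets us peek at which vocabulary terms the trees rely on most.
# Note: the accessor is get_feature_names_out() on recent scikit-learn releases
# (get_feature_names() on older ones).
clf.fit(counts, fixed_target)
feature_names = np.asarray(count_vect.get_feature_names_out())
top = np.argsort(clf.feature_importances_)[-10:]
print(feature_names[top])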
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='loading'></a>
Step6: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
Step7: <a id='naive'></a>
Step8: <a id="multiscale"></a>
Step9: <a id="laplacian"></a>
Step10: <a id="playing"></a>
Step11: Lower layers produce features of lower complexity.
Step12: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
Step13: <a id="deepdream"></a>
Step14: Let's load some image and populate it with DogSlugs (in case you've missed them).
Step15: Note that results can differ from the Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
|
<ASSISTANT_TASK:>
Python Code:
# boilerplate code
import os
from cStringIO import StringIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
graph_def = tf.GraphDef.FromString(open(model_fn).read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print 'Number of layers', len(layers)
print 'Total number of feature channels:', sum(feature_nums)
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
Strip large constant values from graph_def.
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
Visualize TensorFlow graph.
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code =
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe =
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
.format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in xrange(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print score,
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = map(tf.placeholder, argtypes)
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in xrange(0, max(h-sz//2, sz),sz):
for x in xrange(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in xrange(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in xrange(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print '.',
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in xrange(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = map(normalize_std, tlevels)
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in xrange(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in xrange(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print '.',
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
render_lapnorm(T(layer)[:,:,:,65])
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in xrange(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in xrange(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in xrange(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print '.',
clear_output()
showarray(img/255.0)
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
render_deepdream(T(layer)[:,:,:,139], img0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configuration and metadata (layer size) gathering
Step2: Load a model
Step3: Upgrade the model
Step4: Save a description of the model and its new weights to cnn_model.{json,h5} in the cache directory.
Step5: (Optional) Re-read the model to verify that it wrote out correctly.
Step6: (Optional) visualise the models
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from keras.models import model_from_json
from keras.optimizers import SGD
from os import path
from train import infer_sizes
import models
cache_dir = '../cache/mpii-cooking/' # Change me!
orig_path = path.join(cache_dir, 'keras-checkpoints/checkpoints/model-iter-16640-r349513.h5') # Change me!
train_h5_path = path.join(cache_dir, 'train-patches/samples-000001.h5')
ds_shape = infer_sizes(train_h5_path)
sgd2 = SGD(lr=0.0001, nesterov=True, momentum=0.9)
poselet_model = models.vggnet16_poselet_class_flow(ds_shape, sgd2, 'glorot_normal')
poselet_model.load_weights(orig_path)
upgraded_poselet_model = models.upgrade_multipath_poselet_vggnet(poselet_model)
assert poselet_model.loss, "Model needs a non-empty loss"
upgraded_poselet_model.compile(sgd2, poselet_model.loss)
mod_json = upgraded_poselet_model.to_json()
mod_json_path = path.join(cache_dir, 'cnn_model.json')
mod_weight_path = path.join(cache_dir, 'cnn_model.h5')
with open(mod_json_path, 'w') as fp:
fp.write(mod_json)
upgraded_poselet_model.save_weights(mod_weight_path)
def ask(question):
res = raw_input(question + ' (y[es] for affirmative, any other response for negative) ')
return res.lower() in ['y', 'yes']
if ask('Want to re-read the model?'):
with open(mod_json_path) as fp:
json_data = fp.read()
m2 = model_from_json(json_data)
m2.load_weights(mod_weight_path)
if ask('Want to visualise the model?'):
SVG(to_graph(poselet_model, show_shape=True).create(prog='dot', format='svg'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following example takes the JSON text from the previous example and builds the object back from it.
Step2: The JSON format allows arbitrary spaces and newlines, which means JSON can be formatted to be easier to read. The example below reuses the previous data.
Step3: Pickle
Step4: The following example shows the stored data being deserialized back into a Python object.
|
<ASSISTANT_TASK:>
Python Code:
import json
# sample data
message = [
{"time": 123, "value": 5},
{"time": 124, "value": 6},
{"status": "ok", "finish": [True, False, False]},
]
# pack the message
js_message = json.dumps(message)
# show result
print(type(js_message))
print(js_message)
# unpack message
message = json.loads(js_message)
# show result
print(type(message))
print(message)
# pack the message
js_message = json.dumps(message, indent=4)
# show result
print(type(js_message))
print(js_message)
import pickle
# sample data
message = [
{"time": 123, "value": 5},
{"time": 124, "value": 6},
{"status": "ok", "finish": [True, False, False]},
]
# serialize with pickle
pickle.dump(message, open( "save.p", "wb" ))
# load the saved file with pickle
loaded_message = pickle.load(open( "save.p", "rb" ))
# print the data
print(type(loaded_message))
print(loaded_message)
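# A minimal sketch (not part of the original example): json can also write
# directly to a file with json.dump and read it back with json.load, mirroring
# the pickle round trip above but in a human-readable format.
with open("save.json", "w") as f:
    json.dump(message, f, indent=4)
with open("save.json") as f:
    loaded_json = json.load(f)
print(loaded_json == message)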
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now compute Granger causality across all pairs of timeseries
|
<ASSISTANT_TASK:>
Python Code:
import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
import statsmodels.tsa.stattools
from dcm_sim import sim_dcm_dataset
sys.path.insert(0,'../')
from utils.graph_utils import show_graph_from_adjmtx,show_graph_from_pattern
# first we simulate some data using our DCM model, with the same HRF across all regions
_,data_conv,params=sim_dcm_dataset(verbose=True)
A=params['A']
B=params['B']
C=params['C']
data=data_conv[range(0,data_conv.shape[0],int(1./params['stepsize']))]
gc=numpy.zeros(A.shape)
for i in range(A.shape[0]):
for j in range(A.shape[0]):
if i==j: # don't compute self-connectivity
continue
result=statsmodels.tsa.stattools.grangercausalitytests(data[:,[i,j]],1)
if result[1][0]['params_ftest'][1]<0.05:
gc[i,j]=1
show_graph_from_adjmtx(gc,numpy.zeros(B.shape),numpy.zeros(C.shape),title='Granger')
show_graph_from_adjmtx(A,B,C,title='True model')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IRIS
Step2: MNIST
Step3: Exercise 5
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from line_profiler import LineProfiler
from sklearn.metrics.pairwise import pairwise_distances
import seaborn as sns
from sklearn import datasets
from sklearn.base import ClassifierMixin
from sklearn.datasets import fetch_mldata
from sklearn.neighbors.base import NeighborsBase, KNeighborsMixin, SupervisedIntegerMixin
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
%load_ext pycodestyle_magic
def profile_print(func_to_call, *args):
profiler = LineProfiler()
profiler.add_function(func_to_call)
profiler.runcall(func_to_call, *args)
profiler.print_stats()
%%pycodestyle
class MyKNeighborsClassifier(NeighborsBase, KNeighborsMixin,
SupervisedIntegerMixin, ClassifierMixin):
def __init__(self, n_neighbors=3):
self.n_neighbors = n_neighbors
def fit(self, X, y):
self.X = np.float64(X)
self.classes, self.y = np.unique(y, return_inverse=True)
def euclidean_metric(self, v):
return np.sqrt(((self.X - v) ** 2).sum(axis=1))
'''
def cnt(self, v):
z = np.zeros(self.classes.size)
for i in v:
z[i] += 1
return z
def predict_proba(self, X): # more understandable
X = np.float64(X)
# euclidean by default, can use multithreading
dist = pairwise_distances(X, self.X)
ind = np.argsort(dist, axis=1)[:, :self.n_neighbors]
return np.apply_along_axis(self.cnt, 1, self.y[ind]) / self.n_neighbors
'''
# '''
def predict_proba(self, X): # more quickly
X = np.float64(X)
# euclidean by default, can use multithreading
dist = pairwise_distances(X, self.X)
ind = np.argsort(dist, axis=1)[:, :self.n_neighbors]
classes = self.y[ind]
crange = np.arange(self.classes.shape[0])
clss = classes.reshape((classes.shape[0], 1, classes.shape[1]))
crng = crange.reshape((1, crange.shape[0], 1))
counts = np.sum(clss == crng, axis=2)
return counts / self.n_neighbors
# '''
def predict(self, X):
proba = self.predict_proba(X)
return self.classes[np.argsort(proba, axis=1)[:, -1]]
def score(self, X, y):
pred = self.predict(X)
return 1 - np.count_nonzero(y - pred) / y.shape[0]
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.1, stratify=iris.target)
clf = KNeighborsClassifier(n_neighbors=17)
my_clf = MyKNeighborsClassifier(n_neighbors=17)
%time clf.fit(X_train, y_train)
%time my_clf.fit(X_train, y_train)
%time clf.predict(X_test)
%time my_clf.predict(X_test)
#profile_print(my_clf.predict, X_test)
%time clf.predict_proba(X_test)
#%time my_clf.predict_proba(X_test)
profile_print(my_clf.predict_proba, X_test)
clf.score(X_test, y_test)
my_clf.score(X_test, y_test)
# Exercise 3
# 16 - 17
num_n = 30
num_av = 2000
scm = np.zeros(num_n)
sc = np.zeros(num_av)
for n in range(1, num_n + 1):
print (n)
for i in range(num_av):
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.1, stratify=iris.target)
my_clf = MyKNeighborsClassifier(n_neighbors=n)
my_clf.fit(X_train, y_train)
sc[i] = my_clf.score(X_test, y_test)
scm[n - 1] = sc.mean()
plt.plot(range(1, num_n + 1), scm, 'ro-')
plt.show()
mnist = fetch_mldata('MNIST original')
X_train, X_test, y_train, y_test = train_test_split(mnist.data, mnist.target, test_size=0.01, stratify=mnist.target)
y_train.shape
clf = KNeighborsClassifier(n_neighbors=5)
my_clf = MyKNeighborsClassifier(n_neighbors=5)
%time clf.fit(X_train, y_train)
%time my_clf.fit(X_train, y_train)
%time clf.predict(X_test)
%time my_clf.predict(X_test)
%time clf.predict_proba(X_test)
#%time my_clf.predict_proba(X_test)
%time profile_print(my_clf.predict_proba, X_test)
clf.score(X_test, y_test)
my_clf.score(X_test, y_test)
# n_neighbors = 5
num_n = 30
num_av = 20
scm = np.zeros(num_n)
sc = np.zeros(num_av)
for n in range(1, num_n + 1):
print (n)
for i in range(num_av):
print (n, ' ', i)
X_train, X_test, y_train, y_test = train_test_split(mnist.data, mnist.target, test_size=0.001, stratify=mnist.target)
my_clf = MyKNeighborsClassifier(n_neighbors=n)
my_clf.fit(X_train, y_train)
sc[i] = my_clf.score(X_test, y_test)
scm[n - 1] = sc.mean()
plt.plot(range(1, num_n + 1), scm, 'ro-')
plt.show()
print (1)
# seaborn - attractive, easy-to-write plots and visualizations
# matplotlib - more verbose to write but more flexible than seaborn
# train_test_split - splits the data into a training part and a test part
# Pipeline (from sklearn.pipeline import Pipeline) - pipeline classifier
# StandardScaler (from sklearn.preprocessing import StandardScaler) - feature standardization
# ClassifierMixin - common mixin for classifiers; score is implemented here
# NeighborsBase - base class for kNN estimators
# KNeighborsMixin - mixin containing the nearest-neighbour search method
# SupervisedIntegerMixin - mixin with a fit function for establishing the mapping
# between the data and the target variables
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For this notebook we will create a toy "Boron nitride" tight binding
Step2: Note that we could have obtained this hamiltonian from any other source. Then we generate a path for the band structure
Step3: And finally we just ask for the fatbands plot
Step4: We only see the bands here, but this is a fatbands plot, and it is ready to accept your requests on what to draw!
Step5: This setting works exactly like the requests setting in PdosPlot, which is documented here. Therefore we won't give an extended description of it, but just quickly show that you can autogenerate the groups
Step6: Or write them yourself if you want the maximum flexibility
Step7: Scaling fatbands
Step8: You can also use the scale_fatbands method, which additionally lets you choose if you want to rescale from the current size or just set the value of scale
Step9: Use BandsPlot settings
|
<ASSISTANT_TASK:>
Python Code:
import sisl
import sisl.viz
# First, we create the geometry
BN = sisl.geom.graphene(atoms=["B", "N"])
# Create a hamiltonian with different on-site terms
H = sisl.Hamiltonian(BN)
H[0, 0] = 2
H[1, 1] = -2
H[0, 1] = -2.7
H[1, 0] = -2.7
H[0, 1, (-1, 0)] = -2.7
H[0, 1, (0, -1)] = -2.7
H[1, 0, (1, 0)] = -2.7
H[1, 0, (0, 1)] = -2.7
band = sisl.BandStructure(H, [[0., 0.], [2./3, 1./3],
[1./2, 1./2], [1., 1.]], 301,
[r'$\Gamma$', 'K', 'M', r'$\Gamma$'])
fatbands = band.plot.fatbands()
fatbands
print(fatbands.get_param("groups").help)
fatbands.split_groups(on="species")
fatbands.update_settings(groups=[
{"species": "N", "color": "blue", "name": "Nitrogen"},
{"species": "B", "color": "red", "name": "Boron"}
])
fatbands.update_settings(scale=2)
fatbands.scale_fatbands(0.5, from_current=True)
thumbnail_plot = fatbands
if thumbnail_plot:
thumbnail_plot.show("png")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NumPy, at its core, provides a powerful array object. Let's start by exploring how the NumPy array differs from a Python list.
Step2: Element indexing
Step3: Differences between arrays and lists
Step4: But the same can not be done with an array, as we get an error message
Step5: Caveat, it can be done, but really don't do it; lists are generally better at non-homogeneous collections.
Step6: Arrays also have many useful statistical/mathematical methods
Step7: Data types
Step8: Once an array has been created, its dtype is fixed (in this case to an 8 byte/64 bit signed integer) and it can only store elements of the same type.
Step9: NumPy comes with most of the common data types (and some uncommon ones too).
Step10: Floating point precision is covered in detail at http
Step11: Creating Arrays
Step12: and similarly for ones
Step13: If we want an array initialized with an arbitrary value, we can create an empty array and then use the fill method to put the value we want into the array
Step14: Alternatives, such as
Step15: The linspace and logspace functions to create linearly and logarithmically-spaced grids respectively, with a fixed number of points that include both ends of the specified interval
Step16: Finally, it is often useful to create arrays with random numbers that follow a specific distribution.
Step17: To produce an array of 5 random samples taken from a standard normal distribution (0 mean and variance 1)
Step18: For an array of 5 samples from the normal distribution with a mean of 10 and a variance of 3
Step19: Indexing with other arrays
Step20: Now that we have this mask, we can use it to either read those values or to reset them to 0
Step21: Whilst beyond the scope of this course, it is also worth knowing that a specific masked array object exists in NumPy.
Step22: With two-dimensional arrays we start seeing the power of NumPy
Step23: Question
Step24: Multidimensional array creation
Step25: In fact, the shape of an array can be changed at any time, as long as the total number of elements is unchanged.
Step26: Slices
Step27: If you only provide one index to slice a multidimensional array, then the slice will be expanded to ":" for each of the remaining dimensions.
Step28: This is also known as "ellipsis".
Step29: Operating with arrays
Step30: Importantly, even the multiplication operator is by default applied element-wise. However it is not the matrix multiplication from linear algebra
Step31: We may also multiply an array by a scalar
Step32: This is an example of broadcasting.
Step33: We can also broadcast a 1D array to a 2D array, in this case adding a vector to all rows of a matrix
Step34: We can also broadcast in two directions at a time
Step35: Pictorially
Step36: Exercise 1
Step37: As you can see in this example, the value of the axis parameter is the dimension that will be consumed once the operation has been carried out. This is why to sum along the columns we use axis=0.
Step38: Another widely used property of arrays is the .T attribute, which allows you to access the transpose of the array
Step39: Generating 2D coordinate arrays
Step40: Reshape and newaxis
Step41: Another way to increase the dimensionality of an array is to use the newaxis keyword
Step42: Views, not Copies
Step43: What this means is that if one array is modified, the other will also be updated
Step44: This lack of copying allows for very efficient vectorized operations, but this power should be used carefully - if used badly it can lead to some bugs that are hard to track down.
Step45: Or to sample the exponential function between $-5$ and $5$ at intervals of $0.5$
Step46: Linear algebra in NumPy
Step47: For matrix-matrix multiplication, the regular $matrix \times matrix$ rules must be satisfied. For example $A \times A^T$
Step48: results in a (2, 2) array, yet $A^T \times A$ results in a (3, 3).
Step49: NumPy makes no distinction between row and column vectors and simply verifies that the dimensions match the required rules of matrix multiplication.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
lst = [10, 20, 30, 40]
arr = np.array([10, 20, 30, 40])
print(lst)
print(arr)
print(lst[0], arr[0])
print(lst[-1], arr[-1])
print(lst[2:], arr[2:])
lst[-1] = 'a string inside a list'
lst
arr[-1] = 'a string inside an array'
print('Data type :', arr.dtype)
print('Total number of elements :', arr.size)
print('Number of dimensions :', arr.ndim)
print('Shape (dimensionality) :', arr.shape)
print('Memory used (in bytes) :', arr.nbytes)
print('Minimum and maximum :', arr.min(), arr.max())
print('Sum and product of all elements :', arr.sum(), arr.prod())
print('Mean and standard deviation :', arr.mean(), arr.std())
arr.dtype
arr[-1] = 1.234
arr
np.array(256, dtype=np.uint8)
float_info = ('{finfo.dtype}: max={finfo.max:<18}, '
'approx decimal precision={finfo.precision};')
print(float_info.format(finfo=np.finfo(np.float32)))
print(float_info.format(finfo=np.finfo(np.float64)))
np.array(1, dtype=np.uint8).astype(np.float32)
np.zeros(5, dtype=np.float)
np.zeros(3, dtype=np.int)
print('5 ones:', np.ones(5, dtype=np.int))
a = np.empty(4, dtype=np.float)
a.fill(5.5)
a
np.arange(10, dtype=np.float64)
np.arange(5, 7, 0.1)
print("A linear grid between 0 and 1:")
print(np.linspace(0, 1, 5))
print("A logarithmic grid between 10**2 and 10**4:")
print(np.logspace(2, 4, 3))
import numpy as np
import numpy.random
print(np.random.randn(5))
norm10 = np.random.normal(10, 3, 5)
print(norm10)
mask = norm10 > 9
mask
print(('Values above 9:', norm10[mask]))
print('Resetting all values above 9 to 0...')
norm10[mask] = 0
print(norm10)
lst2 = [[1, 2, 3], [4, 5, 6]]
arr2 = np.array([[1, 2, 3], [4, 5, 6]])
print(arr2)
print(arr2.shape)
print(lst2[0][1])
print(arr2[0, 1])
print(lst2[0:2][1])
print(arr2[0:2, 1])
np.zeros((2, 3))
np.random.normal(10, 3, size=(2, 4))
arr = np.arange(8).reshape(2, 4)
print(arr)
arr = np.arange(2, 18, 2).reshape(2, 4)
print(arr)
print('Second element from dimension 0, last 2 elements from dimension one:')
print(arr[1, 2:])
print('First row: ', arr[0], 'is equivalent to', arr[0, :])
print('Second row: ', arr[1], 'is equivalent to', arr[1, :])
arr1 = np.empty((4, 6, 3))
print('Orig shape: ', arr1.shape)
print(arr1[...].shape)
print(arr1[..., 0:2].shape)
print(arr1[2:4, ..., ::2].shape)
print(arr1[2:4, :, ..., ::-1].shape)
arr1 = np.arange(4)
arr2 = np.arange(10, 14)
print(arr1, '+', arr2, '=', arr1 + arr2)
print(arr1, '*', arr2, '=', arr1 * arr2)
1.5 * arr1
print(np.arange(3))
print(np.arange(3) + 5)
np.ones((3, 3)) + np.arange(3)
a = np.arange(3).reshape((3, 1))
b = np.arange(3)
print(a, '+', b, '=\n', a + b)
arr1 = np.ones((2, 3))
arr2 = np.ones((2, 1))
# arr1 + arr2
arr1 = np.ones((2, 3))
arr2 = np.ones(3)
# arr1 + arr2
arr1 = np.ones((1, 3))
arr2 = np.ones((2, 1))
# arr1 + arr2
arr1 = np.ones((1, 3))
arr2 = np.ones((1, 2))
# arr1 + arr2
arr1 = np.ones((1, 3))
arr3 = arr2[:, :, np.newaxis]
# arr1 + arr3
print('For the following array:\n', arr)
print('The sum of elements along the rows is :', arr.sum(axis=1))
print('The sum of elements along the columns is :', arr.sum(axis=0))
np.zeros((3, 4, 5, 6)).sum(axis=2).shape
print('Array:\n', arr)
print('Transpose:\n', arr.T)
x = np.linspace(0, 9, 3)
y = np.linspace(-8, 4, 3)
x2d, y2d = np.meshgrid(x, y)
print(x2d)
print(y2d)
np.arange(6).reshape((1, -1))
np.arange(6).reshape((2, -1))
arr = np.arange(6)
print(arr[np.newaxis, :, np.newaxis].shape)
arr = np.arange(8)
arr_view = arr.reshape(2, 4)
# Print the "view" array from reshape.
print('Before\n', arr_view)
# Update the first element of the original array.
arr[0] = 1000
# Print the "view" array from reshape again,
# noticing the first value has changed.
print('After\n', arr_view)
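# A small sketch under the same setup (not in the original notebook): when an
# independent array is needed, .copy() breaks the view relationship, so later
# writes to `arr` no longer show up in the copied array.
arr_copy = arr.reshape(2, 4).copy()
arr[0] = -1
print('View reflects the change:\n', arr_view)
print('Copy does not:\n', arr_copy)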
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)
x = np.arange(-5, 5.5, 0.5)
y = np.exp(x)
v1 = np.array([2, 3, 4])
v2 = np.array([1, 0, 1])
print(v1, '.', v2, '=', np.dot(v1, v2))
A = np.arange(6).reshape(2, 3)
print(A, '\n')
print(np.dot(A, A.T))
print(np.dot(A.T, A))
print(A, 'x', v1, '=', np.dot(A, v1))
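# A hedged aside: on Python 3.5+ with NumPy 1.10+, the @ operator performs the
# same matrix multiplication as np.dot for these 2-D/1-D cases and often reads
# more naturally.
print(A @ v1)
print(A @ A.T)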
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
from collections import Counter
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
# Create your dictionary that maps vocab words to integers here
unique_words = list(set(words))
vocab_to_int = {k:v for k, v in zip(unique_words, range(1, len(unique_words) + 1))}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = [[vocab_to_int[word] for word in review.split()] for review in reviews]
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = [1 if label == 'positive' else 0 for label in labels.split()]
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length
reviews_ints = [review for review in reviews_ints if len(review) > 0 ]
def pad_take(review, n = 200):
if len(review) >= n:
return np.array(review[:n])
else:
return np.append(np.zeros(n - len(review)), review)
seq_len = 200
features = np.array([pad_take(review, seq_len) for review in reviews_ints])
features[:10,:100]
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int) + 1  # +1 to account for the 0 used as padding
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    # One straightforward completion of the placeholders (batches of word ids and labels)
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
    # One possible completion: a trainable embedding matrix and its lookup
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
# Your basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
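# A quick sanity check (not in the original notebook) that the generator yields
# batches of the expected shape before training starts.
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, len(y_batch))  # expect (500, 200) and 500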
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cells can also be used to create textual materials using the markup language Markdown.
Step2: (Python logo image: images/python-logo.png)
Step3: Floating point numbers
Step4: Variables
Step5: Booleans
Step6: Variables
Step7: Variables can be cast from one type to another
Step8: Containers
Step9: Indexing
Step10: Slicing
Step11: Lists can store different type of variable in each element
Step12: Containers
Step13: Containers
Step14: Flow control
Step15: Flow control
Step16: Functions
Step17: Classes
Step18: The Scientific Python ecosystem
Step19: Arrays can be multi-dimensional
Step20: The Scientific Python ecosystem
Step21: The Scientific Python ecosystem
|
<ASSISTANT_TASK:>
Python Code:
# This is a Python comment
# the next line is a line of Python code
print("Hello World!")
# These two lines turn on inline plotting
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([1,2,3])
a = 1
a + 1
b = 2.1
b + 1
a + b
type(a + b)
c = 1.5 + 0.5j # complex numbers
print(c.real)
print(c.imag)
d = 3 > 4
print(d)
type(d)
s = "Hello everyone"
type(s)
a = "Hello "
b = "World"
print(a + b)
a = 1
print(a)
print(type(a))
b = float(a)
print(b)
print(type(b))
s = "1.23"
print(s)
print(type(s))
f = float(s)
print(f)
print(type(f))
l = ['red', 'blue', 'green', 'black', 'white']
len(l)
l
print(l[0])
print(l[1])
print(l[2])
print(l[-1]) # last element
print(l[-2])
l[0] = 'orange'
print(l)
print(l[2:5])
print(l[2:-1])
print(l[1:6:2])
l[::-1]
ll = [5, 22.9, 14.8+1j, 'hello', [1,2,3]]
ll
print(ll[0])
print(ll[1])
print(ll[2])
print(ll[3])
print(ll[4])
d = {'name': 'Bobby', 'id': 223984, 'location': 'USA'}
d.keys()
d.values()
d['name']
d['id']
d['id'] = 1234
d['id']
t = ('red', 'blue', 'green')
t[0]
t[1:3]
a = 4
if a > 10:
print("a is larger than 10")
elif a < 10:
print("a is less than 10")
else:
print("a is equal to 10")
for i in range(10):
print(i)
def func():
print("Hello world")
func()
class Car(object):
engine = 'V4' # class attribute
def start(self): # class method
print("Starting the car with a", self.engine, "engine")
mycar = Car()
type(mycar)
mycar.engine
mycar.start()
mycar.engine = 'V6'
mycar.engine
mycar.start()
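# A small extension (not in the original notebook): subclasses inherit attributes
# and methods from the parent class and can override them.
class ElectricCar(Car):
    engine = 'electric motor'

tesla = ElectricCar()
tesla.start()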
import numpy as np
a = np.array([0, 1, 2, 3, 4, 5, 6, 7])
a
a.shape
a.ndim
a.dtype
a[0::2]
a[a>3]
a * 2 + 100
a.mean()
b = np.arange(12).reshape(3,4)
b.shape
b
b[1,2]
b[0:2, ::-1]
import scipy
print(scipy.__doc__)
%pylab inline
plot([1,2,3])
a = np.random.rand(30, 30)
imshow(a)
colorbar()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: FITS files consist of at least two parts - Header and Data
Step2: FITS format preserves the full dynamic range of data
Step3: You can use masks on images
Step4: You can use specific bins for histograms
Step5: You can add and subtract images
Step6: The two images above may look the same but they are not!
Step7: FITS Tables - An astronomical example
Step8: Stellar spectral classes
Step9: FITS Images - An astronomical example
Step10: Notice the origin is in the upper left corner (the image is upside down)
Step11: Better, the origin is in the lower left corner
Step12: RGB - Pseudocolor Images
Step13: Sigma Clipping
Step14: Rejected Data
Step15: Accepted Data
Step16: Sigma clipping an image
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import astropy.io.fits as fits
plt.style.use('ggplot')
plt.rc('axes', grid=False) # turn off the background grid for images
data_file = "./MyData/bsg01.fits"
my_fits_file = fits.open(data_file)
my_fits_file.info()
image_data = my_fits_file[0].data
image_header = my_fits_file[0].header
image_header
print("The image has a shape [height,width] of {0}".format(image_data.shape))
print("The image is made up of data of type {0}".format(image_data.dtype))
print("The image has a maximum value of {0}".format(image_data.max()))
print("The image has a minimum value of {0}".format(image_data.min()))
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(image_data,cmap=plt.cm.gray)
ax[1].hist(image_data.flatten(),bins=20);
copy_data = np.copy(image_data) # make a copy of the data to work with
cut_off = 40
mask = np.where(copy_data > cut_off)
copy_data[mask] = 60 # You can not just throw data away, you have to set it to something.
print("The cropped image has a maximum value of {0}".format(copy_data.max()))
print("The cropped image has a minimum value of {0}".format(copy_data.min()))
my_bins = np.arange(0,110,5)
my_bins
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(copy_data,cmap=plt.cm.gray)
ax[1].hist(image_data.flatten(),bins=my_bins)
ax[1].hist(copy_data.flatten(),bins=my_bins, alpha = 0.40);
another_image_file = "./MyData/bsg02.fits"
another_image_data = fits.getdata(another_image_file) # a quick way to just get the data
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].set_title("Original Image")
ax[1].set_title("Another Image")
ax[0].imshow(image_data, cmap=plt.cm.gray)
ax[1].imshow(another_image_data, cmap=plt.cm.gray);
real_image = image_data - another_image_data # Subtract the images pixel by pixel
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].set_title("Original Image")
ax[1].set_title("Another Image")
ax[2].set_title("Real Image!")
ax[0].imshow(image_data, cmap=plt.cm.gray)
ax[1].imshow(another_image_data, cmap=plt.cm.gray);
ax[2].imshow(real_image, cmap=plt.cm.gray);
print("The real image has a maximum value of {0}".format(real_image.max()))
print("The real image has a minimum value of {0}".format(real_image.min()))
my_spectra_file = './MyData/Star_G5.fits'
my_spectra_fits = fits.open(my_spectra_file)
my_spectra_fits.info()
spectra_data = my_spectra_fits[0].data
spectra_header = my_spectra_fits[0].header
spectra_header[0:15]
# The FITS header has the information to make an array of wavelengths
start_wavelength = spectra_header['CRVAL1']
num_of_points = spectra_header['NAXIS1']
width_of_points = spectra_header['CDELT1']
end_wavelength = start_wavelength + (num_of_points * width_of_points)
wavelength_range = np.arange(start_wavelength,end_wavelength,width_of_points)
fig, ax = plt.subplots(2,1)
fig.set_size_inches(11,8.5)
fig.tight_layout()
# Full spectra
ax[0].plot(wavelength_range, spectra_data, color='b')
ax[0].set_ylabel("Flux")
ax[0].set_xlabel("Wavelength [angstroms]")
# Just the visible range with the hydrogen Balmer lines
ax[1].set_xlim(4000,7000)
ax[1].set_ylim(0.6,1.2)
ax[1].plot(wavelength_range, spectra_data, color='b')
ax[1].set_ylabel("Flux")
ax[1].set_xlabel("Wavelength [angstroms]")
H_Balmer = [6563,4861,4341,4102,3970,3889,3835,3646]
ax[1].vlines(H_Balmer,0,2, color='r', linewidth=3, alpha = 0.25);
import glob
star_list = glob.glob('./MyData/Star_*5.fits')
star_list
fig, ax = plt.subplots(1,1)
fig.set_size_inches(9,5)
fig.tight_layout()
# Full spectra
ax.set_ylabel("Flux")
ax.set_xlabel("Wavelength [angstroms]")
ax.set_ylim(0.0, 1.05)
for file in star_list:
spectra = fits.getdata(file)
spectra_normalized = spectra / spectra.max()
ax.plot(wavelength_range, spectra_normalized, label=file)
ax.legend(loc=0,shadow=True);
star_file = "./MyData/star_field.fits"
star_fits_file = fits.open(star_file)
star_fits_file.info()
star_data = star_fits_file[0].data
star_header = star_fits_file[0].header
plt.imshow(star_data, cmap=plt.cm.gray)
plt.imshow(star_data, origin='lower', cmap=plt.cm.gray)
from astropy.wcs import WCS
my_wcs = WCS(star_header)
my_wcs
fig = plt.figure()
ax = fig.add_subplot(111, projection=my_wcs)
fig.set_size_inches(6,6)
fig.tight_layout()
ax.grid(color='y', ls='-')
ax.set_xlabel('Right Ascension (J2000)')
ax.set_ylabel('Declination (J2000)')
ax.imshow(star_data, origin='lower', cmap=plt.cm.gray);
from astropy.visualization import make_lupton_rgb
red_img = fits.getdata("./MyData/m51_IR.fits").astype(float)
green_img = fits.getdata("./MyData/m51_red.fits").astype(float)
blue_img= fits.getdata("./MyData/m51_blue.fits").astype(float)
clean_r = red_img - np.median(red_img)
clean_g = green_img - np.median(green_img)
clean_b = blue_img - np.median(blue_img)
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].set_title("R")
ax[1].set_title("G")
ax[2].set_title("B")
ax[0].imshow(clean_r, origin='lower', cmap=plt.cm.gray)
ax[1].imshow(clean_g, origin='lower', cmap=plt.cm.gray);
ax[2].imshow(clean_b, origin='lower', cmap=plt.cm.gray);
image = make_lupton_rgb(clean_r, clean_g, clean_b, stretch = 5000)
fig, ax = plt.subplots(1,1)
fig.set_size_inches(6,6)
fig.tight_layout()
ax.imshow(image, origin='lower', cmap=plt.cm.gray);
from astropy import stats
my_data = np.array([1, 5, 6, 8, 100, 5, 3, 2, 4, 5])
my_data.mean(), my_data.std()
filtered_data = stats.sigma_clip(my_data, sigma=2, iters=5)
filtered_data
my_data[filtered_data.mask]
my_data[~filtered_data.mask]
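# A small aside (not in the original notebook): the masked array returned by
# sigma_clip also exposes .compressed(), which returns only the surviving
# values as a plain ndarray.
print(filtered_data.compressed())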
star_data.mean(), star_data.std(), star_data.max(), star_data.min()
clip_star = stats.sigma_clip(star_data, sigma=8, iters=5)
copy_data = np.copy(star_data) # make a copy of the data to work with
copy_data[~clip_star.mask] = star_data.min()
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(star_data, origin='lower', cmap=plt.cm.gray)
ax[1].imshow(copy_data, origin='lower', cmap=plt.cm.gray);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def equvInverse(arr, N, P):
    cntElem = 0
    for i in range(0, N):
        if (arr[i] * arr[i]) % P == 1:
            cntElem = cntElem + 1
    return cntElem

if __name__ == "__main__":
    arr = [1, 6, 4, 5]
    N = len(arr)
    P = 7
    print(equvInverse(arr, N, P))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
class newNode:
    def __init__(self, data):
        self.data = data
        self.left = self.right = None

def deleteLeaves(root, x):
    if root is None:
        return None
    root.left = deleteLeaves(root.left, x)
    root.right = deleteLeaves(root.right, x)
    if root.data == x and root.left is None and root.right is None:
        return None
    return root

def inorder(root):
    if root is None:
        return
    inorder(root.left)
    print(root.data, end=" ")
    inorder(root.right)

if __name__ == '__main__':
    root = newNode(10)
    root.left = newNode(3)
    root.right = newNode(10)
    root.left.left = newNode(3)
    root.left.right = newNode(1)
    root.right.right = newNode(3)
    root.right.right.left = newNode(3)
    root.right.right.right = newNode(3)
    deleteLeaves(root, 3)
    print("Inorder traversal after deletion :")
    inorder(root)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example code block
Step2: Note
Step3: Debugging with %pdb
|
<ASSISTANT_TASK:>
Python Code:
print("Hello man")
def fibo(n):
if n == 0:
return 0
elif n == 1:
return 1
return fibo(n-1) + fibo(n-2)
%timeit fibo(20)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 1, 300)
for w in range(2, 6, 2):
plt.plot(x, np.sin(np.pi*x)*np.sin(2*w*np.pi*x))
%pdb
numbers = "hello"
sum(numbers)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Solution
Step3: Part 2
Step4: Test
Step5: Solution
|
<ASSISTANT_TASK:>
Python Code:
import csv
def parse_registers(input_path):
registers = []
with open(input_path, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
registers.append((line[0], tuple(line[1:])))
return registers
from collections import defaultdict
import re
def init(input_path):
globals()['prog'] = defaultdict(int)
globals()['mul_count'] = 0
globals()['registers'] = parse_registers(input_path)
def retrieve(string, prog):
if re.search('\d+', string):
return int(string)
else:
return prog[string]
def r_set(x, y):
y = retrieve(y, prog)
prog[x] = y
def r_mul(x, y):
y = retrieve(y, prog)
prog[x] *= y
globals()['mul_count'] += 1
def r_sub(x, y):
y = retrieve(y, prog)
prog[x] -= y
def exec_register(reg):
comm = reg[0]
argx = reg[1][0]
argy = reg[1][1]
globals()['r_' + comm](argx, argy)
def run_update(registers, initial_head):
head = initial_head
while head < len(registers):
reg = registers[head]
if reg[0] != 'jnz':
exec_register(reg)
head += 1
else:
req = retrieve(reg[1][0], prog)
offset = retrieve(reg[1][1], prog)
if req != 0:
head += offset
else:
head += 1
init('input.txt')
run_update(registers, 0)
mul_count
def run_coprocessor_rough(alpha):
loop0, loop1, loop2 = False, False, False
a = alpha
b = 79
c = b
d = 0
e = 0
f = 0
g = 0
h = 0
if a != 0:
b *= 100
b += 100000
c = b
c += 17000
while (g != 0) or not loop0:
loop0 = True
f = 1
d = 2
while (g != 0) or not loop1:
loop1 = True
e = 2
while (g != 0) or not loop2:
loop2 = True
g = d
g *= e
g = g - b
if g == 0:
f = 0
e += 1
g = e
g = g - b
d += 1
g = d
g = g - b
if f == 0:
h += 1
g = b
g = g - c
if g == 0:
return a, b, c, d, e, f, g, h
else:
b += 17
run_coprocessor_rough(0)
def is_prime(x):
if x >= 2:
for y in range(2,x):
if not ( x % y ):
return False
else:
return False
return True
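# A quick sanity check (not part of the original solution) of the primality helper.
print([n for n in range(20) if is_prime(n)])  # expect [2, 3, 5, 7, 11, 13, 17, 19]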
def run_coprocessor(alpha):
loop = False
a, b = alpha, 79
c = b
d, e, f, g, h = 0, 0, 0, 0, 0
if a != 0:
b *= 100
b += 100000
c = b
c += 17000
while (g != 0) or not loop:
loop = True
f = 1
d = 2
e = 2
if not is_prime(b):
f = 0
e = b
d = b
if f == 0:
h += 1
g = b
g = g - c
if g == 0:
return a, b, c, d, e, f, g, h
else:
b += 17
run_coprocessor(1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2016-11-04
Step2: 1. Decision trees
Step3: Question Compute the mean and standard deviation of the area under the ROC curve of these 5 trees. Plot the ROC curves of these 5 trees.
Step4: Question What parameters of DecisionTreeClassifier can you play with to define trees differently than with the default parameters? Cross-validate these using a grid search, and plot the optimal decision tree on the previous plot. Did you manage to improve performance?
Step5: Question How does the performance of decision trees compare to the performance of classifiers we have used previously on this data? Does this match your expectations?
Step7: Question Use cross_validate_optimize (as defined in the previous lab) to optimize the number of decision trees to use in the bagging method. How many trees did you find to be an optimal choice?
Step8: Question Plot the ROC curve of the optimized cross-validated bagging tree classifier obtained with cross_validate_optimize, and compare it to the previous ROC curves (non-optimized bagging tree, decision trees).
Step9: Question Use cross_validate_optimize (as defined in the previous lab) to optimize the number of decision trees to use in the random forest. How many trees do you find to be an optimal choice? How does the optimal random forest compare to the optimal bagging trees? How do the training times of the random forest and the bagging trees compare?
Step10: Question How do your tree-based classifiers compare to the linear regression (regularized or not)? Plot ROC curves.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%pylab inline
# Load the data
# TODO
# Normalize the data
from sklearn import preprocessing
X = preprocessing.normalize(X)
# Set up a stratified 10-fold cross-validation
from sklearn import cross_validation
folds = cross_validation.StratifiedKFold(y, 10, shuffle=True)
def cross_validate(design_matrix, labels, classifier, cv_folds):
Perform a cross-validation and returns the predictions.
Parameters:
-----------
design_matrix: (n_samples, n_features) np.array
Design matrix for the experiment.
labels: (n_samples, ) np.array
Vector of labels.
classifier: sklearn classifier object
Classifier instance; must have the following methods:
- fit(X, y) to train the classifier on the data X, y
- predict_proba(X) to apply the trained classifier to the data X and return probability estimates
cv_folds: sklearn cross-validation object
Cross-validation iterator.
Return:
-------
pred: (n_samples, ) np.array
Vectors of predictions (same order as labels).
pred = np.zeros(labels.shape)
for tr, te in cv_folds:
# Restrict data to train/test folds
Xtr = design_matrix[tr, :]
ytr = labels[tr]
Xte = design_matrix[te, :]
#print Xtr.shape, ytr.shape, Xte.shape
# Fit classifier
classifier.fit(Xtr, ytr)
# Predict probabilities (of belonging to +1 class) on test data
yte_pred = classifier.predict_proba(Xte)
index_of_class_1 = 1 - ytr[0] # 0 if the first sample is positive, 1 otherwise
pred[te] = yte_pred[:, index_of_class_1]
return pred
from sklearn import tree
from sklearn import metrics
# Use: clf = tree.DecisionTreeClassifier()
ypred_dt = [] # will hold the 5 arrays of predictions (1 per tree)
for tree_index in range(5):
# TODO
fpr_dt = [] # will hold the 5 arrays of false positive rates (1 per tree)
tpr_dt = [] # will hold the 5 arrays of true positive rates (1 per tree)
auc_dt = [] # will hold the 5 areas under the ROC curve (1 per tree)
for tree_index in range(5):
# TODO
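# One possible way to fill in the two TODO loops above (a sketch, not the only valid answer):
# for tree_index in range(5):
#     ypred_dt.append(cross_validate(X, y, tree.DecisionTreeClassifier(), folds))
# for tree_index in range(5):
#     fpr, tpr, thresholds = metrics.roc_curve(y, ypred_dt[tree_index], pos_label=1)
#     fpr_dt.append(fpr)
#     tpr_dt.append(tpr)
#     auc_dt.append(metrics.auc(fpr, tpr))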
for tree_index in range(4):
plt.plot(fpr_dt[tree_index], tpr_dt[tree_index], '-', color='orange')
plt.plot(fpr_dt[-1], tpr_dt[-1], '-', color='orange',
label='DT (AUC = %0.2f (+/- %0.2f))' % (np.mean(auc_dt), np.std(auc_dt)))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curves', fontsize=16)
plt.legend(loc="lower right")
from sklearn import grid_search
param_grid = # TODO
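# One possible grid to search over (illustrative); DecisionTreeClassifier also exposes
# criterion, max_features, min_samples_split, etc.
# param_grid = {'max_depth': [3, 5, 10, None], 'min_samples_leaf': [1, 5, 10]}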
clf = grid_search.GridSearchCV(tree.DecisionTreeClassifier(), param_grid,
scoring='roc_auc')
ypred_dt_opt = cross_validate(X, y, clf, folds)
fpr_dt_opt, tpr_dt_opt, thresholds = metrics.roc_curve(y, ypred_dt_opt, pos_label=1)
auc_dt_opt = metrics.auc(fpr_dt_opt, tpr_dt_opt)
# Plot the 5 decision trees from earlier
for tree_index in range(4):
plt.plot(fpr_dt[tree_index], tpr_dt[tree_index], '-', color='blue')
plt.plot(fpr_dt[-1], tpr_dt[-1], '-', color='blue',
label='DT (AUC = %0.2f (+/- %0.2f))' % (np.mean(auc_dt), np.std(auc_dt)))
# Plot the optimized decision tree
plt.plot(fpr_dt_opt, tpr_dt_opt, color='orange', label='DT optimized (AUC=%0.2f)' % auc_dt_opt)
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curves', fontsize=16)
plt.legend(loc="lower right")
from sklearn import ensemble
# By default, the base estimator is a decision tree with default parameters
# TODO: Use clf = ensemble.BaggingClassifier(n_estimators=5)
def cross_validate_optimize(design_matrix, labels, classifier, cv_folds):
Perform a cross-validation and returns the predictions.
Parameters:
-----------
design_matrix: (n_samples, n_features) np.array
Design matrix for the experiment.
labels: (n_samples, ) np.array
Vector of labels.
classifier: sklearn GridSearchCV object
GridSearchCV instance; must have the following methods/attributes:
- fit(X, y) to train the classifier on the data X, y
- predict_proba(X) to apply the trained classifier to the data X and return probability estimates
cv_folds: sklearn cross-validation object
- best_params_ the best parameter dictionary
Cross-validation iterator.
Return:
-------
pred: (n_samples, ) np.array
Vector of predictions (same order as labels).
pred = np.zeros(labels.shape)
for tr, te in cv_folds:
# Restrict data to train/test folds
Xtr = design_matrix[tr, :]
ytr = labels[tr]
Xte = design_matrix[te, :]
#print Xtr.shape, ytr.shape, Xte.shape
# Fit classifier
classifier.fit(Xtr, ytr)
# Print best parameter
        print(classifier.best_params_)
# Predict probabilities (of belonging to +1 class) on test data
yte_pred = classifier.predict_proba(Xte)
index_of_class_1 = 1 - ytr[0] # 0 if the first sample is positive, 1 otherwise
pred[te] = yte_pred[:, index_of_class_1]
return pred
param_grid = {'n_estimators': [5, 15, 25, 50]}
# TODO
clf = ensemble.RandomForestClassifier(n_estimators=5)
# TODO
param_grid = {'n_estimators': [5, 15, 25, 50]}
# TODO
from sklearn import linear_model
param_grid = {'C':[1e-3, 1e-2, 1e-1, 1., 1e2, 1e3]}
clf = grid_search.GridSearchCV(linear_model.LogisticRegression(penalty='l1'),
param_grid, scoring='roc_auc')
ypred_l1 = cross_validate_optimize(X, y, clf, folds)
fpr_l1, tpr_l1, thresholds_l1 = metrics.roc_curve(y, ypred_l1, pos_label=1)
clf = grid_search.GridSearchCV(linear_model.LogisticRegression(penalty='l2'),
param_grid, scoring='roc_auc')
ypred_l2 = cross_validate_optimize(X, y, clf, folds)
fpr_l2, tpr_l2, thresholds_l2 = metrics.roc_curve(y, ypred_l2, pos_label=1)
# TODO
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curves', fontsize=16)
plt.legend(loc="lower right")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this guide, we'll take a look at building a discrete choice model using Larch.
Step2: The basic structure of a choice model in Larch is contained in the
Step3: Choices
Step4: After setting the choice definition, the loaded or computed choice array
Step5: Availability
Step6: After setting the availability definition, the loaded or computed availability array
Step7: Utility Functions
Step8: The computed values for the utility function can be accessed using
Step9: Data Preparation
Step10: Nesting Structures
Step11: Adding a nest can be accomplished with the new_node method,
Step12: The return value of new_node
Step13: Nothing in Larch prevents you from overloading the nesting structure with
Step14: Parameter Estimation
Step15: If we want to set certain parameters to be constrained to be certain values,
Step16: The default infinite bounds on the remaining parameters can be problematic
Step17: To actually develop maximum likelihood estimates for the remaining
Step18: In a Jupyter notebook, this method displays a live-updating view of the
Step19: Reporting
Step20: To save a model report to an Excel file, use the to_xlsx method.
|
<ASSISTANT_TASK:>
Python Code:
# TEST
import numpy as np
import pandas as pd
import larch.numba as lx
from pytest import approx
import larch.numba as lx
# HIDDEN
df_ca = pd.read_csv("example-data/tiny_idca.csv")
cats = df_ca['altid'].astype(pd.CategoricalDtype(['Car', 'Bus', 'Walk'])).cat
df_ca['altnum'] = cats.codes + 1
df_ca = df_ca.set_index(['caseid', 'altnum'])
data = lx.Dataset.construct.from_idca(df_ca.sort_index(), fill_missing=0)
data = data.drop_vars("_avail_")
data['ChosenCode'] = (data['Chosen'] * data['Chosen'].altnum).sum('altnum')
data.coords['alt_names'] = lx.DataArray(cats.categories, dims=('altnum'), coords={'altnum': data.altnum})
alts = dict(zip(data['altnum'].values, data['alt_names'].values))
for i, k in alts.items():
data[f'{k}Time'] = data['Time'].sel(altnum=i)
data
m = lx.Model(data)
m.choice_co_code = 'ChosenCode'
# TEST
ch = np.array([
[ 1., 0., 0., 1.],
[ 0., 1., 0., 1.],
[ 0., 0., 1., 1.],
[ 0., 0., 1., 1.],
])
assert m.loglike() == approx(np.log(1/3) * 4)
np.testing.assert_array_almost_equal(m._data_arrays.ch, ch)
m.choice_co_vars = {
1: 'ChosenCode == 1',
2: 'ChosenCode == 2',
3: 'ChosenCode == 3',
}
# TEST
assert m.loglike() == approx(np.log(1/3) * 4)
np.testing.assert_array_almost_equal(m._data_arrays.ch, ch)
m.choice_ca_var = 'Chosen'
# TEST
assert m.loglike() == approx(np.log(1/3) * 4)
np.testing.assert_array_almost_equal(m._data_arrays.ch, ch)
m.dataset['ch']
m.availability_ca_var = "Time > 0"
# TEST
assert m.loglike_casewise() == approx([-1.098612, -0.693147, -1.098612, -1.098612])
m.availability_co_vars = {
1: True,
2: 'BusTime > 0',
3: 'WalkTime > 0',
}
# TEST
assert m.availability_var is None
assert m.loglike_casewise() == approx([-1.098612, -0.693147, -1.098612, -1.098612])
m.dataset['av']
from larch import P, X
m.utility_ca = P.Time * X.Time + P.Cost * X.Cost
# TEST
assert m.loglike({'Time': -0.01, 'Cost': -0.02}) == approx(-4.040671968970389)
assert m.utility() == approx(np.array([
[-3.3 , -2.4 , -0.2 , -0.055162],
[-2.75 , -2.35 , -np.inf, -1.836985],
[-2.9 , -2. , -0.3 , -0.071306],
[-4.65 , -3.2 , -0.1 , -0.045875],
]), rel=1e-5)
m.utility_co = {
1: P.Income_Car * X.Income / 1000,
2: P.Income_Bus * X.Income / 1000,
}
# TEST
assert m.loglike({'Time': -0.01, 'Cost': -0.02, 'Income_Car': 0.1}) == approx(-6.089136524348358)
assert m.utility({
'Time': -0.01, 'Cost': -0.02, 'Income_Car': 0.1
}) == approx(np.array([
[-0.3 , -2.4 , -0.2 , 0.500937],
[ 0.25 , -2.35 , -np.inf , 0.321645],
[ 1.1 , -2. , -0.3 , 1.355918],
[ 0.35 , -3.2 , -0.1 , 0.860637],
]))
m.utility(
{'Time': -0.01, 'Cost': -0.02, 'Income_Car': 0.1},
return_format='dataarray',
)
m.datatree
m.dataset
m.graph
# TEST
assert sorted(m.graph.nodes) == [0, 1, 2, 3]
assert sorted(m.graph.edges) == [(0, 1), (0, 2), (0, 3)]
assert m.graph.standard_sort_names == ['Car', 'Bus', 'Walk', '_root_']
assert m.graph.standard_sort == (1, 2, 3, 0)
z = m.graph.new_node(parameter='Mu_Motorized', children=[1,2], name='Motorized')
m.graph
# TEST
assert sorted(m.graph.nodes) == [0, 1, 2, 3, 4]
assert sorted(m.graph.edges) == [(0, 3), (0, 4), (4, 1), (4, 2)]
assert m.graph.standard_sort_names == ['Car', 'Bus', 'Walk', 'Motorized', '_root_']
assert m.graph.standard_sort == (1, 2, 3, 4, 0)
m.graph.new_node(parameter='Mu_Omni', children=[z, 3], name='Omni')
m.graph
# TEST
assert sorted(m.graph.nodes) == [0, 1, 2, 3, 4, 5]
assert sorted(m.graph.edges) == [(0, 5), (4, 1), (4, 2), (5, 3), (5, 4)]
assert m.graph.standard_sort_names == [
'Car', 'Bus', 'Walk', 'Motorized', 'Omni', '_root_'
]
assert m.graph.standard_sort == (1, 2, 3, 4, 5, 0)
m.graph.remove_node(5)
m.graph
# TEST
assert sorted(m.graph.nodes) == [0, 1, 2, 3, 4]
assert sorted(m.graph.edges) == [(0, 3), (0, 4), (4, 1), (4, 2)]
assert m.graph.standard_sort_names == ['Car', 'Bus', 'Walk', 'Motorized', '_root_']
assert m.graph.standard_sort == (1, 2, 3, 4, 0)
m.pf
m.lock_value('Time', -0.01)
m.lock_value('Cost', -0.02)
m.pf
m.set_cap(100)
m.pf
m.maximize_loglike()
m.calculate_parameter_covariance()
m.pf
m.parameter_summary()
m.estimation_statistics()
m.most_recent_estimation_result
m.to_xlsx("/tmp/larch-demo.xlsx")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Initializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)}
word2idx
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
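# Sanity check (illustrative): the counts in a row should add up to the number of tokens
# from that review that made it into the 10000-word vocabulary.
print(word_vectors[0].sum(),
      sum(1 for w in reviews.iloc[0][0].split(' ') if w in word2idx))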
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 35, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.05,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=64, n_epoch=150)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
sentence = "MockingBot cannot make me satisfied, only fulfill my minor requirement"
test_sentence(sentence)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example, we will assume that the stimuli are patches of different motion directions. These stimuli span a 360-degree, circular feature space. We will build an encoding model that has 6 channels, or basis functions, which also span this feature space.
Step2: Now we'll generate synthetic data. Ideally, each voxel that we measure from is roughly tuned to some part of the feature space (see Sprague, Boynton, Serences, 2019). So we will generate data that has a receptive field (RF). We can define the RF along the same feature axis as the channels that we generated above.
Step3: Now let's generate some training data and look at it. This code will create a plot that depicts the response of an example voxel for different trials.
Step4: Using this synthetic training data, we can fit the IEM.
Step5: Calling the IEM fit method defines the channels, or the basis set, which span the feature domain. We can examine the channels and plot them to check that they look appropriate.
Step6: Now we can generate test data and see how well we can predict the test stimuli.
Step7: In addition to predicting the exact feature, we can examine the model-based reconstructions in the feature domain. That is, instead of getting single predicted values for each feature, we can look at a reconstructed function which peaks at the predicted feature.
Step8: For a sanity check, let's check how R^2 changes as the number of voxels increases. We can write a quick wrapper function to train and test on a given set of motion directions, as below.
Step9: We'll iterate through the list and look at the resulting R^2 values.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from brainiak.reconstruct import iem as IEM
import matplotlib.pyplot as plt
import numpy.matlib as matlib
import scipy.signal
# Set up parameters
n_channels = 6
cos_exponent = 5
range_start = 0
range_stop = 360
feature_resolution = 360
iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='circular', range_start=range_start,
range_stop=range_stop, channel_density=feature_resolution)
# You can also try the half-circular space. Here's the associated code:
# range_stop = 180 # since 0 and 360 degrees are the same, we want to stop shy of 360
# feature_resolution = 180
# iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='halfcircular', range_start=range_start,
# range_stop=range_stop, channel_density=feature_resolution, verbose=True)
stim_vals = np.linspace(0, feature_resolution - (feature_resolution/6), 6).astype(int)
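# With feature_resolution = 360 the six stimulus directions are evenly spaced around the
# circle: [0, 60, 120, 180, 240, 300] degrees.
print(stim_vals)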
# Generate synthetic data s.t. each voxel has a Gaussian tuning function
def generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=True, RF_noise=0.):
if random_tuning:
# Voxel selectivity is random
voxel_tuning = np.floor((np.random.rand(n_voxels) * range_stop) + range_start).astype(int)
else:
# Voxel selectivity is evenly spaced along the feature axis
voxel_tuning = np.linspace(range_start, range_stop, n_voxels+1)
voxel_tuning = voxel_tuning[0:-1]
voxel_tuning = np.floor(voxel_tuning).astype(int)
gaussian = scipy.signal.gaussian(feature_resolution, 15)
voxel_RFs = np.zeros((n_voxels, feature_resolution))
for i in range(0, n_voxels):
voxel_RFs[i, :] = np.roll(gaussian, voxel_tuning[i] - ((feature_resolution//2)-1))
voxel_RFs += np.random.rand(n_voxels, feature_resolution)*RF_noise # add noise to voxel RFs
voxel_RFs = voxel_RFs / np.max(voxel_RFs, axis=1)[:, None]
return voxel_RFs, voxel_tuning
def generate_voxel_data(voxel_RFs, n_voxels, trial_list, feature_resolution,
trial_noise=0.25):
one_hot = np.eye(feature_resolution)
# Generate trial-wise responses based on voxel RFs
if range_start > 0:
trial_list = trial_list + range_start
elif range_start < 0:
trial_list = trial_list - range_start
stim_X = one_hot[:, trial_list] #@ basis_set.transpose()
trial_data = voxel_RFs @ stim_X
trial_data += np.random.rand(n_voxels, trial_list.size)*(trial_noise*np.max(trial_data))
return trial_data
np.random.seed(100)
n_voxels = 50
n_train_trials = 120
training_stim = np.repeat(stim_vals, n_train_trials/6)
voxel_RFs, voxel_tuning = generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=False, RF_noise=0.1)
train_data = generate_voxel_data(voxel_RFs, n_voxels, training_stim, feature_resolution, trial_noise=0.25)
print(np.linalg.cond(train_data))
# print("Voxels are tuned to: ", voxel_tuning)
# Generate plots to look at the RF of an example voxel.
voxi = 20
f = plt.figure()
plt.subplot(1, 2, 1)
plt.plot(train_data[voxi, :])
plt.xlabel("trial")
plt.ylabel("activation")
plt.title("Activation over trials")
plt.subplot(1, 2, 2)
plt.plot(voxel_RFs[voxi, :])
plt.xlabel("degrees (motion direction)")
plt.axvline(voxel_tuning[voxi])
plt.title("Receptive field at {} deg".format(voxel_tuning[voxi]))
plt.suptitle("Example voxel")
plt.figure()
plt.imshow(train_data)
plt.ylabel('voxel')
plt.xlabel('trial')
plt.suptitle('Simulated data from each voxel')
# Fit an IEM
iem_obj.fit(train_data.transpose(), training_stim)
# Let's visualize the basis functions.
channels = iem_obj.channels_
feature_axis = iem_obj.channel_domain
print(channels.shape)
plt.figure()
plt.subplot(1, 2, 1)
for i in range(0, channels.shape[0]):
plt.plot(feature_axis, channels[i,:])
plt.title('Channels (i.e. basis functions)')
plt.subplot(1, 2, 2)
plt.plot(np.sum(channels, 0))
plt.ylim(0, 2.5)
plt.title('Sum across channels')
# Generate test data
n_test_trials = 12
test_stim = np.repeat(stim_vals, n_test_trials/len(stim_vals))
np.random.seed(330)
test_data = generate_voxel_data(voxel_RFs, n_voxels, test_stim, feature_resolution, trial_noise=0.25)
# Predict test stim & get R^2 score
pred_feature = iem_obj.predict(test_data.transpose())
R2 = iem_obj.score(test_data.transpose(), test_stim)
print("Predicted features are: {} degrees.".format(pred_feature))
print("Actual features are: {} degrees.".format(test_stim))
print("Test R^2 is {}".format(R2))
# Now get the model-based reconstructions, which are continuous
# functions that should peak at each test stimulus feature
recons = iem_obj._predict_feature_responses(test_data.transpose())
f = plt.figure()
for i in range(0, n_test_trials-1):
plt.plot(feature_axis, recons[:, i])
for i in stim_vals:
plt.axvline(x=i, color='k', linestyle='--')
plt.title("Reconstructions of {} degrees".format(np.unique(test_stim)))
iem_obj.verbose = False
def train_and_test(nvox, ntrn, ntst, rfn, tn):
vRFs, vox_tuning = generate_voxel_RFs(nvox, feature_resolution, random_tuning=True, RF_noise=rfn)
trn = np.repeat(stim_vals, ntrn/6).astype(int)
trnd = generate_voxel_data(vRFs, nvox, trn, feature_resolution, trial_noise=tn)
tst = np.repeat(stim_vals, ntst/6).astype(int)
tstd = generate_voxel_data(vRFs, nvox, tst, feature_resolution, trial_noise=tn)
iem_obj.fit(trnd.transpose(), trn)
recons = iem_obj._predict_feature_responses(tstd.transpose())
pred_ori = iem_obj.predict(tstd.transpose())
R2 = iem_obj.score(tstd.transpose(), tst)
return recons, pred_ori, R2, tst
np.random.seed(300)
vox_list = (5, 10, 15, 25, 50)
R2_list = np.zeros(len(vox_list))
for idx, nvox in enumerate(vox_list):
recs, preds, R2_list[idx], test_features = train_and_test(nvox, 120, 30, 0.1, 0.25)
print("The R2 values for increasing numbers of voxels: ")
print(R2_list)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specifying full supply and initial conditions
Step2: Releases
Step3: We can create each type of release mechanism.
|
<ASSISTANT_TASK:>
Python Code:
lva = v.model.node.storages.lva('IrrigationOnlyStorage')
lva
scaled_lva = lva * 2
scaled_lva
# v.model.node.storages.load_lva(scaled_lva) # Would load the same table into ALL storages
# v.model.node.storages.load_lva(scaled_lva,nodes=['StorageOnlyStorage','BothStorage']) # Will load into two storages
v.model.node.storages.load_lva(scaled_lva,nodes='IrrigationOnlyStorage')
v.model.node.storages.lva('IrrigationOnlyStorage')
v.model.node.storages.set_full_supply(95000,'Volume',nodes='IrrigationOnlyStorage')
v.model.node.storages.set_param_values('InitialVolume',50000,nodes='IrrigationOnlyStorage')
# OR
# v.model.node.storages.set_param_values('InitialStorageLevel',4.5,nodes='IrrigationOnlyStorage')
v.model.node.storages.outlets(nodes='IrrigationOnlyStorage')
v.model.node.storages.releases(nodes='IrrigationOnlyStorage',path=1)
v.model.node.storages.release_table('IrrigationOnlyStorage','Ungated Spillway #0')
release_table = [(0,0,0),(1,1,1),(5,10,12)]
release_table = pd.DataFrame(release_table,columns=['level','minimum','maximum'])
release_table
# To a particular outlet or a particular storage
#v.model.node.storages.add_valve(release_table,'IrrigationOnlyStorageOverflow',nodes='IrrigationOnlyStorage')
# Add the same valve to EVERY storage (on first outlet path)
#v.model.node.storages.add_valve(release_table)
# Optionally specify the name of the release
#v.model.node.storages.add_valve(release_table,name='my new valve')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unrolling
Step2: And a few more parameters in the Params object.
Step4: make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.
Step5: Testing make_system
Step7: Here's how we compute I as a function of r
Step8: When r is Rmin, I is small.
Step9: As r increases, so does I.
Step10: Exercises
Step11: Test slope_func with the initial conditions.
Step12: Run the simulation.
Step13: And look at the results.
Step14: Check the results to see if they seem plausible
Step15: Plot omega
Step16: Plot y
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
radian = UNITS.radian
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(Rmin = 0.02 * m,
Rmax = 0.055 * m,
Mcore = 15e-3 * kg,
Mroll = 215e-3 * kg,
L = 47 * m,
tension = 2e-4 * N,
t_end = 120 * s)
def make_system(params):
Make a system object.
params: Params with Rmin, Rmax, Mcore, Mroll,
L, tension, and t_end
returns: System with init, k, rho_h, Rmin, Rmax,
Mcore, Mroll, ts
L, Rmax, Rmin = params.L, params.Rmax, params.Rmin
Mroll = params.Mroll
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L)
area = pi * (Rmax**2 - Rmin**2)
rho_h = Mroll / area
k = (Rmax**2 - Rmin**2) / 2 / L / radian
return System(params, init=init, area=area, rho_h=rho_h, k=k)
system = make_system(params)
system.init
def moment_of_inertia(r, system):
Moment of inertia for a roll of toilet paper.
r: current radius of roll in meters
system: System object with Mcore, rho, Rmin, Rmax
returns: moment of inertia in kg m**2
Mcore, Rmin, rho_h = system.Mcore, system.Rmin, system.rho_h
Icore = Mcore * Rmin**2
Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
moment_of_inertia(system.Rmin, system)
moment_of_inertia(system.Rmax, system)
# Solution goes here
# Solution goes here
# Solution goes here
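# One possible solution sketch (illustrative, not the only valid answer). It assumes the
# paper leaves the roll at speed r*omega and that r follows from the definition of k in
# make_system, i.e. r**2 = Rmin**2 + 2*k*y*radian; it also assumes modsim's run_ode_solver.
import numpy as np

def slope_func(state, t, system):
    theta, omega, y = state
    k, Rmin, tension = system.k, system.Rmin, system.tension
    r = np.sqrt(2 * k * y * radian + Rmin**2)   # current radius of the roll
    I = moment_of_inertia(r, system)
    alpha = r * tension / I * radian            # angular acceleration = torque / I
    dydt = -r * omega / radian                  # unrolled paper shortens y
    return omega, alpha, dydt

# test the slope function with the initial conditions
slope_func(system.init, 0, system)

# run the simulation
results, details = run_ode_solver(system, slope_func)
details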
results.tail()
def plot_theta(results):
plot(results.theta, color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
plot_theta(results)
def plot_omega(results):
plot(results.omega, color='C2', label='omega')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
plot_omega(results)
def plot_y(results):
plot(results.y, color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
plot_y(results)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the data into hyperedges. We preserve order only insofar as the first element in each array is the sender. Email addresses may appear multiple times if they were included multiple times in the header. For example, there exist cases where a given address was included in both the cc and the bcc lines of the same message.
Step2: Descriptive Statistics
Step3: Number of hyperedges in email hypergraph
Step4: Number of nodes in email hypergraph
Step5: Next read the edges into Lelands library
Step6: Pawel has requested that these addresses be remapped to one-up integers and output as one array per line. To maximize code re-use, I'll write that as a function in the hypergraph library.
Step7: Recompute the edge size distribution
Step8: Now let's look at node size, which might be referred to as degree by some.
Step9: We can look at sender degree (number of emails sent by an individual) since the sender is currently coded as the first vertex in every hyperedge
Step10: It might be worth inducing the 2-section and overlaying the two degree distributions on each other.
Step11: Output in basic integer tuple format
|
<ASSISTANT_TASK:>
Python Code:
import re
import numpy as np
import pandas as pd
import email
#Plotting stuff
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
import hypergraph
import os
#for root, user, file in os.walk('/Users/jchealy/Downloads/maildir/'):
root = '/Users/jchealy/Downloads/maildir/'
count =0
edges = []
file_index = []
for user in os.listdir(root):
#print(user)
location = root+user+'/_sent_mail'
if(count>2):
break
if(os.path.isdir(location)):
#print(location)
for fname in os.listdir(location):
file= location+'/'+fname
#print(file)
with open(file) as f:
message = email.message_from_file(f)
edge = [message['from']]
if 'To' in message:
edge = edge+re.split(r'\s*,\s*', message['to'])
if 'Cc' in message:
edge = edge + re.split(r'\s*,\s*', message['cc'])
if 'Bcc' in message:
edge = edge + re.split(r'\s*,\s*', message['bcc'])
edges.append(edge)
file_index.append(file)
l = pd.Series([len(x) for x in edges])
d = l.value_counts().sort_index().reset_index()
d.columns = ['index', 'freq']
d['freq']=np.log10(d['freq'])
ax = d.plot(x='index', y='freq', kind='scatter', alpha=0.6, xlim=[-10,max(d['index'])+10])
ax.set_ylabel('Frequency (log10)')
ax.set_xlabel('Hyperedge Size')
len(edges)
flat = [item for sublist in edges for item in sublist]
len(flat)
nodes = np.unique(flat)
len(nodes)
edges[1:10]
hg = hypergraph.Hypergraph()
for i, edge in enumerate(edges):
hg.add_edge(i,edge)
if(i%10000 == 0):
print(i)
edge_size = pd.Series([hg.edge[x].size for x in hg.edge])
d = edge_size.value_counts().sort_index().reset_index()
d.columns = ['index', 'freq']
d['freq']=np.log10(d['freq'])
ax = d.plot(x='index', y='freq', kind='scatter', alpha=0.6, xlim=[-10,max(d['index'])+10])
ax.set_ylabel('Frequency (log10)')
ax.set_xlabel('Hyperedge Size')
node_size = pd.Series([hg.node[x].size for x in hg.node])
d = node_size.value_counts().sort_index().reset_index()
d.columns = ['index', 'freq']
d['freq']=np.log10(d['freq'])
d['index']=np.log10(d['index'])
ax = d.plot(x='index', y='freq', kind='scatter', alpha=0.6, xlim=[-1,max(d['index'])+1])
ax.set_ylabel('Frequency (log10)')
ax.set_xlabel('Vertex Size (log10)')
sender = pd.Series([hg.edge[x].labels[0] for x in hg.edge])
sender.head()
sender = pd.Series([hg.edge[x].labels[0] for x in hg.edge])
d = sender.value_counts().plot(kind='line', title='number of email sent by each user', logx=False, logy=True)
plt.xticks([])
plt.ylabel('outgoing hyperedges')
plt.xlabel('sender (decreasing order)')
pd.set_option('precision',2)
edge_size = pd.Series([hg.edge[x].size for x in hg.edge])
edge_card = pd.Series([hg.edge[x].cardinality for x in hg.edge])
#hg.edge[[edge_size!=edge_card]]
edge_difference_index = np.where(edge_size!=edge_card)[0]
print(str(len(edge_difference_index))+" out of " + str(len(edge_size)) + " ("+
"{:.2f}".format(len(edge_difference_index)/len(edge_size)*100)+
"%) edges differed between cardinality and size")
[hg.edge[x].labels for x in edge_difference_index][:5]
keys = hg.nodes
values = range(len(hg.nodes))
vertex_map = dict(zip(keys,values))
filename = "enronNumericHypergraphEdgelist.txt"
file = open(filename,'w')
@np.vectorize
def vertex_mapper(x):
return vertex_map[x]
for e in hg.edge:
tup = vertex_mapper(hg.edge[e].labels)
file.write(" ".join(str(elem) for elem in tup)+"\n")
#print(" ".join(str(elem) for elem in [vertex_map[x] for x in tup]))
file.close()
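# Quick check (illustrative): read back the first few remapped hyperedges we just wrote.
with open(filename) as f:
    for _ in range(3):
        print(f.readline().strip())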
hg.node['tim.belden@enron.com'].labels
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next
Step2: The standard name above refers to the CF Conventions standard name. By using this, other netCDF software tools can interpret the time variable, which unfortunately can only be represented as an integer index.
Step3: We see that our fake dem is now contained in the z variable of the netcdf. You can optionally also insert the elevation data into the alt variable, but mostly the alt variable remains unused and is not required for iSNOBAL.
|
<ASSISTANT_TASK:>
Python Code:
# first, define our isnobal spatiotemporal parameters
isnobal_params = dict(
# generate a 10x8x(n_timesteps) grid for each variable
nlines=10, nsamps=8,
# with a resolution of 1.0m each; samp is north-south, so it's negative
dline=1.0, dsamp=-1.0,
# set base fake origin (easting, northing) = (442, 88)
bline=442, bsamp=88,
# enter start time and timestep; janky, but need to use '01' and '00'
# to get proper ISO 8601 formatting if < 10
year=2008, month=10, day='22', hour='05', dt='hours'
)
# now generate our netcdf
import sys, os
sys.path.append('../../')
if os.path.exists('test.nc'):
os.remove('test.nc')
from vwpy.netcdf import ncgen_from_template
# don't need path to the template, that's already set to be 'vwpy/cdl'
nc = ncgen_from_template('ipw_in_template.cdl',
'test.nc', **isnobal_params)
print(isnobal_params)
# first, let's inspect what variables are available to us
nc.variables
# we can get metadata on any variable by inspecting the ncattrs
time = nc.variables['time']
time.ncattrs()
# this is where the date went
time.getncattr('standard_name')
# let's create a fake DEM with some random data
import numpy as np
dem = abs(np.random.rand(isnobal_params['nlines'], isnobal_params['nsamps']))
%matplotlib inline
import matplotlib.pyplot as plt
plt.matshow(dem)
np.shape(dem)
np.shape(nc.variables['z'])
# use [:] to unpack and assign te values from the dem
nc.variables['z'][:] = dem
z = nc.variables['z'][:]
plt.matshow(z)
fake_ta = abs(np.random.rand(5, isnobal_params['nlines'], isnobal_params['nsamps']))
ta = nc.variables['T_a'] # np.shape(ta) == (0, 10, 8) at this point
# ta is a reference to the actual netcdf variable, so we can assign to it
ta[:] = fake_ta # now np.shape(ta) == (5, 10, 8)
# double click the image below to enlarge and check they are different
f, axs = plt.subplots(1, 5, figsize=(15, 10))
for idx, ax in enumerate(axs):
ax.matshow(ta[idx])
ax.set_title('t = ' + str(idx))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Forward system
Step3: Inverse problem
Step4: View the inversion results
|
<ASSISTANT_TASK:>
Python Code:
# The usual, we need to load some libraries
from SimPEG import Mesh, Utils, Maps, PF
from SimPEG import mkvc, Regularization, DataMisfit, Optimization, InvProblem, Directives,Inversion
from SimPEG.Utils import mkvc
from SimPEG.Utils.io_utils import download
import numpy as np
import scipy as sp
import os
%pylab inline
# Download data from the cloud
url = "https://storage.googleapis.com/simpeg/kevitsa_synthetic/"
cloudfiles = [
'Mesh_global_100m_padded.msh','GravSim.dat',
'Kevitsa.topo', 'SimPEG_GRAV.inp'
]
keys = ['mesh', 'data', 'topo', 'input']
# Download to ./KevitsaGrav
files = download([url+f for f in cloudfiles], folder='./KevitsaGrav', overwrite=True)
files = dict(zip(keys, files)) # allows us to name the files
# Read in the input file which included all parameters at once (mesh, topo, model, survey, inv param, etc.)
inputFile = files['input'] # input file was the last downloaded
driver = PF.GravityDriver.GravityDriver_Inv()
driver.basePath = './KevitsaGrav'
# All the parameters in the input files can be access via the driver object
# For example, to get the survey:
obs = driver.readGravityObservations(files['data'])
mesh = Mesh.TensorMesh.readUBC(files['mesh'])
# The gridded data holds 20k+ observation points, too large for a quick inversion
# Let's grab a random subset
nD = 500
indx = randint(0,high=obs.dobs.shape[0],size=nD)
# Create a new downsampled survey
locXYZ = obs.srcField.rxList[0].locs[indx,:]
rxLoc = PF.BaseGrav.RxObs(locXYZ)
srcField = PF.BaseGrav.SrcField([rxLoc])
survey = PF.BaseGrav.LinearSurvey(srcField)
survey.dobs = obs.dobs[indx]
survey.std = obs.std[indx]
ph = PF.Gravity.plot_obs_2D(survey.srcField.rxList[0].locs, survey.dobs,'Observed Data')
# Create a mesh, we will start coarse. Feel free to change the
# the mesh, but make sure you have enough memory and coffee brakes...
dx = 200.
npad = 5
hxind = [(dx, npad, -1.3), (dx, 65), (dx, npad, 1.3)]
hyind = [(dx, npad, -1.3), (dx, 45), (dx, npad, 1.3)]
hzind = [(dx, npad, -1.3), (150, 15), (10, 10, -1.3), (10,5)]
# Create the mesh and move the location to the center of the data
mesh = Mesh.TensorMesh([hxind, hyind, hzind], 'CC0')
mesh._x0 += [np.mean(locXYZ[:,0]), np.mean(locXYZ[:,1]), np.max(locXYZ[:,2])-np.sum(mesh.hz)]
ax = mesh.plotGrid()
# We will get the topography from the input file
topo = np.genfromtxt(files['topo'], skip_header=1)
# Find the active cells
actv = Utils.surface2ind_topo(mesh, topo, 'N')
actv = np.asarray(
[inds for inds, elem in enumerate(actv, 1) if elem], dtype=int
) - 1
nC = len(actv)
print("Number of data points: " + str(nD))
print("Number of model cells: " + str(nC))
# Create active map to go from reduce set to full
actvMap = Maps.InjectActiveCells(mesh, actv, -100)
# Create reduced identity map
idenMap = Maps.IdentityMap(nP=nC)
mstart = np.ones(nC)*1e-4
# Create gravity problem
prob = PF.Gravity.GravityIntegral(mesh, rhoMap=idenMap, actInd=actv)
survey.pair(prob)
# Make depth weighting,
# this will also require the calculation of the forward operator ... time for coffee
wr = np.sum(prob.G**2., axis=0)**0.5
wr = (wr/np.max(wr))
# % Create inversion objects
reg = Regularization.Sparse(mesh, indActive=actv, mapping=idenMap)
reg.cell_weights = wr
reg.norms = [0,2,2,2]
opt = Optimization.ProjectedGNCG(maxIter=100, lower=-.5,upper=0.5, maxIterLS = 20, maxIterCG= 10, tolCG = 1e-3)
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.W = 1./survey.std
# This is where the misfit function and regularization are put together
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)
# Here are few directives to make the inversion work and apply sparsity.
# After the l2, beta is re-adjusted on the fly to stay near the target misfit
betaest = Directives.BetaEstimate_ByEig()
IRLS = Directives.Update_IRLS(f_min_change=1e-4, minGNiter=3)
update_Jacobi = Directives.Update_lin_PreCond()
inv = Inversion.BaseInversion(invProb, directiveList=[betaest, IRLS,
update_Jacobi])
# Run the inversion
mrec = inv.run(mstart)
# Here is a quick script to slice through the final model
import ipywidgets as widgets
def ModSlicer(mesh, model):
def plotIt(m, normal, panel, vmin, vmax):
ypanel = int(mesh.vnC[1]/2)
plt.figure(figsize=(10, 8))
ax = plt.subplot(211)
ph = mesh.plotSlice(model[m], ax=ax, normal=normal, ind=int(panel),
grid=True,
clim=(vmin,vmax), pcolorOpts={'cmap': 'jet', })
# Set default limits
if normal == 'X':
Xlim = [mesh.vectorNy.min(), mesh.vectorNy.max()]
Ylim = [mesh.vectorNz.min(), mesh.vectorNz.max()]
elif normal == 'Y':
Xlim = [mesh.vectorNx.min(), mesh.vectorNx.max()]
Ylim = [mesh.vectorNz.min(), mesh.vectorNz.max()]
else:
Xlim = [mesh.vectorNx.min(), mesh.vectorNx.max()]
Ylim = [mesh.vectorNy.min(), mesh.vectorNy.max()]
ax.set_xlim(Xlim)
ax.set_ylim(Ylim)
ax.set_aspect('equal')
plt.colorbar(ph[0])
plt.title('Plan lp-model.')
plt.gca().set_aspect('equal')
plt.ylabel('y')
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
out = widgets.interactive(plotIt,
m = widgets.ToggleButtons(
options=['l2', 'lp'],
description='Model:'),
normal = widgets.ToggleButtons(
options=['X', 'Y', 'Z'],
description='Normal:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description'),
panel = widgets.FloatSlider(min=0, max=mesh.vnC.max(), step=1,value=1, continuous_update=False),
vmin = widgets.FloatSlider(min=model['l2'][~np.isnan(model['l2'])].min(), max=model['l2'][~np.isnan(model['l2'])].max(), step=0.001,value=model['l2'][~np.isnan(model['l2'])].min(), continuous_update=False),
vmax = widgets.FloatSlider(min=model['l2'][~np.isnan(model['l2'])].min(), max=model['l2'][~np.isnan(model['l2'])].max(), step=0.001,value=model['l2'][~np.isnan(model['l2'])].max(), continuous_update=False),
)
return out
# Plot the result
m_lp = actvMap * mrec
m_lp[m_lp == -100] = np.nan
m_l2 = actvMap*IRLS.l2model
m_l2[m_l2 == -100] = np.nan
model={'l2':m_l2,'lp':m_lp}
# Execute the ploting function
ModSlicer(mesh, model)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: deep learning understanding
Step2: saliency
Step3: class saliency map extraction
Step4: class saliency map statistical uncertainties
Step5: reshaped to 1D class-saliency map with statistical uncertainties scaled by a factor of 10
Step6: 2D class saliency map with uncertainties shown in marginal plots
|
<ASSISTANT_TASK:>
Python Code:
%autosave 120
import numpy as np
np.random.seed(1337)
import datetime
import graphviz
from IPython.display import SVG
import keras
from keras import activations
from keras import backend as K
from keras.datasets import mnist
from keras.layers import (
concatenate,
Concatenate,
Conv1D,
Conv2D,
Dense,
Dropout,
Embedding,
Flatten,
Input,
MaxPooling1D,
MaxPooling2D)
from keras.models import load_model, Model, Sequential
from keras_tqdm import TQDMNotebookCallback
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
import math
import matplotlib
from matplotlib import gridspec
import matplotlib.pylab as plt
from matplotlib.ticker import NullFormatter, NullLocator, MultipleLocator
import pandas as pd
import random
from scipy import stats
import seaborn as sns
from sklearn.datasets import load_iris
import sklearn.ensemble
import sklearn.tree
from sklearn.metrics import (
auc,
confusion_matrix,
roc_curve,
precision_score)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import sqlite3
import sys
import talos as ta
import tensorflow as tf
from tensorflow.python.client.device_lib import list_local_devices
from tqdm import tqdm_notebook
import uuid
from vis.utils import utils
from vis.visualization import visualize_activation
from vis.visualization import visualize_saliency
import warnings
pd.set_option("display.max_columns", 500)
pd.set_option("display.max_rows", 500)
sns.set_palette('husl')
sns.set(style='ticks')
warnings.filterwarnings("ignore")
print('Python version:', sys.version)
print('Matplotlib version:', matplotlib.__version__)
print('NumPy version:', np.__version__)
print('Keras version:', keras.__version__)
print('TensorFlow version:', tf.__version__)
list_local_devices()
%matplotlib inline
plt.rcParams['figure.figsize'] = [10, 10]
# input dimensions
img_x = 28
img_y = 28
# Load MNIST data into training and testing datasets. The x data are the features and the y data are the labels.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
num_classes = 10
# Reshape the data into a 4D tensor (sample_number, x_img_size, y_img_size, num_channels).
# MNIST is greyscale, which corresponds to a single channel/dimension.
# Alternatively, color, for example RGB, would correspond to three channels/dimensions.
x_train = x_train.reshape(x_train.shape[0], img_x, img_y, 1)
x_test = x_test.reshape(x_test.shape[0], img_x, img_y, 1)
input_shape = (img_x, img_y, 1)
# Cast the data as type float32.
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = x_train / 255
x_test = x_test / 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices for use in the categorical_crossentropy loss.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
x_train.shape
input_shape
y_train[0] # labels
y_train.shape
for i in x_train[0].tolist():
print('\n', i)
plt.imshow(x_train[21].reshape(28, 28), cmap="Greys", interpolation="nearest");
model = Sequential()
model.add(Conv2D( # Add a 2D convolutional layer to process the 2D input (image) data.
32, # number of output channels
kernel_size = (3, 3), # kernel: 3 x 3 moving window
strides = (1, 1), # kernel strides in the x and y dimensions -- default: (1, 1)
activation = 'relu', # activation function: ReLU
input_shape = input_shape # input size/shape
))
model.add(MaxPooling2D( # Add a 2D max pooling layer.
pool_size = (2, 2), # size of the pooling in the x and y dimensions
strides = (2, 2) # strides in the x and y dimensions
))
# Add a convolutional layer. The input tensor for this layer is (batch_size, 28, 28, 32),
# where 28 x 28 corresponds to the input dimensions and 32 is the number of output channels from the previous layer.
model.add(Conv2D(
64, # number of output channels
(5, 5), # kernel: 5 x 5 moving window
strides = (1, 1), # kernel strides in x and y dimensions -- default: (1, 1)
activation = 'relu' # activation function: ReLU
))
model.add(Dropout(rate=0.5)) # Add a dropout layer.
model.add(MaxPooling2D( # Add a 2D max pooling layer.
pool_size = (2, 2) # size of the pooling in the x and y dimensions
))
# Flatten the output from convolutional layers to prepare them for input to fully-connected layers.
model.add(Flatten())
model.add(Dense( # Specify a fully-connected layer.
1000, # number of nodes
activation = 'relu' # activation function: ReLU
))
model.add(Dense( # Specify a fully-connected output layer.
num_classes, # number of classes
activation = 'softmax', # softmax classification
name = "preds"
))
#plot_model(model, to_file="model.png")
model.summary()
SVG(model_to_dot(model).create(prog='dot', format='svg'));
model.compile(
loss = 'categorical_crossentropy',
optimizer = 'nadam',
metrics = ['accuracy']
)
checkpoint = keras.callbacks.ModelCheckpoint(
filepath = 'best_model.{epoch:02d}-{val_loss:.2f}.h5',
monitor = 'val_loss',
save_best_only = True
)
%%time
out = model.fit(
x_train,
y_train,
batch_size = 512,
epochs = 5,
verbose = True,
validation_data = (x_test, y_test),
callbacks = [checkpoint]
)
score = model.evaluate(x_test, y_test, verbose=False)
print('test loss:', score[0])
print('test accuracy:', score[1])
plt.plot(out.history['acc'], label='train')
plt.plot(out.history['val_acc'], label='validate')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.show();
layer_idx = utils.find_layer_idx(model, 'preds')
model.layers[layer_idx].activation = activations.linear # Swap softmax for linear.
model = utils.apply_modifications(model)
filter_idx = 0
img = visualize_activation(model, layer_idx, filter_indices=filter_idx, verbose=False)
im = plt.imshow(img[..., 0])
plt.colorbar(im, fraction=0.0458, pad=0.04);
layer_idx = utils.find_layer_idx(model, 'preds')
model.layers[layer_idx].activation = activations.linear # Swap softmax for linear.
model = utils.apply_modifications(model)
filter_idx = 4
img = visualize_activation(model, layer_idx, filter_indices=filter_idx, verbose=False)
im = plt.imshow(img[..., 0])
plt.colorbar(im, fraction=0.0458, pad=0.04);
grads = visualize_saliency(model, layer_idx, filter_indices=filter_idx, seed_input=x_test[13], backprop_modifier='guided')
im = plt.imshow(grads, cmap='jet')
plt.colorbar(im, fraction=0.0458, pad=0.04);
# get indices in test dataset of instances of the class 0
y_test_non_categorical = np.argmax(y_test, axis=1)
indices = [i for i, j in enumerate(y_test_non_categorical) if j == 0]
# get the instances
x_test_0 = [x_test[i] for i in indices]
sample_size = 100
saliencies = []
for i in range(sample_size):
saliencies.append(visualize_saliency(model, layer_idx, filter_indices=filter_idx, seed_input=x_test_0[i], backprop_modifier='guided'))
im = plt.imshow(np.mean(saliencies, axis=0), cmap='jet')
plt.colorbar(im, fraction=0.0458, pad=0.04);
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.sem.html
im = plt.imshow(stats.sem(saliencies, axis=0), cmap='jet');
plt.colorbar(im, fraction=0.0458, pad=0.04);
mean = np.mean(saliencies, axis=0)
statistical_uncertainty = stats.sem(saliencies, axis=0)
x = list(range(0, len(mean.flatten())))
y = mean.flatten()
y_su = statistical_uncertainty.flatten()
plt.plot(x, y, 'k-')
plt.fill_between(x, y-10*y_su/2, y+10*y_su/2)
plt.show();
plt.rcParams["figure.figsize"] = [10, 10]
xi = 0.2; yi = 0.2; wi = 0.7; hi = 0.7 # image
xc = 0.91; yc = 0.2; wc = 0.05; hc = 0.7 # colorbar
xh = 0.2; yh = 0.0; wh = 0.7; hh = 0.2 # horizontal plot
xv = 0.0; yv = 0.2; wv = 0.2; hv = 0.7 # vertical plot
ax_i = plt.axes((xi, yi, wi, hi))
ax_h = plt.axes((xh, yh, wh, hh))
ax_v = plt.axes((xv, yv, wv, hv))
ax_c = plt.axes((xc, yc, wc, hc))
ax_i.xaxis.set_major_formatter(NullFormatter())
ax_i.yaxis.set_major_formatter(NullFormatter())
ax_h.yaxis.set_major_formatter(NullFormatter())
ax_v.xaxis.set_major_formatter(NullFormatter())
plt.axes(ax_i)
plt.imshow(mean, aspect='auto', cmap="jet")
ax_h.plot(list(range(0, 28)), mean.sum(axis=0), '-k', drawstyle='steps')
ax_h.plot(list(range(0, 28)), mean.sum(axis=0) + np.sum(statistical_uncertainty, axis=0)/2, '-k', drawstyle='steps', color='red')
ax_h.plot(list(range(0, 28)), mean.sum(axis=0) - np.sum(statistical_uncertainty, axis=0)/2, '-k', drawstyle='steps', color='blue')
ax_v.plot(mean.sum(axis=1), list(range(0, 28)), '-k', drawstyle='steps')
ax_v.plot(mean.sum(axis=1) + np.sum(statistical_uncertainty, axis=1)/2, list(range(0, 28)), '-k', drawstyle='steps', color='red')
ax_v.plot(mean.sum(axis=1) - np.sum(statistical_uncertainty, axis=1)/2, list(range(0, 28)), '-k', drawstyle='steps', color='blue')
cb = plt.colorbar(cax=ax_c)
#cb.set_label('intensity')
#ax_i.set_title('input')
#ax_h.set_xlabel('${x}$')
#ax_h.set_ylabel('intensity')
#ax_h.yaxis.set_label_position('right')
#ax_v.set_ylabel('${y}$')
#ax_v.set_xlabel('intensity')
#ax_v.xaxis.set_label_position('top')
plt.show();
plt.rcParams["figure.figsize"] = [10, 10]
xi = 0.2; yi = 0.2; wi = 0.7; hi = 0.7 # image
xc = 0.91; yc = 0.2; wc = 0.05; hc = 0.7 # colorbar
xh = 0.2; yh = 0.0; wh = 0.7; hh = 0.2 # horizontal plot
xv = 0.0; yv = 0.2; wv = 0.2; hv = 0.7 # vertical plot
ax_i = plt.axes((xi, yi, wi, hi))
ax_h = plt.axes((xh, yh, wh, hh))
ax_v = plt.axes((xv, yv, wv, hv))
ax_c = plt.axes((xc, yc, wc, hc))
ax_i.xaxis.set_major_formatter(NullFormatter())
ax_i.yaxis.set_major_formatter(NullFormatter())
ax_h.yaxis.set_major_formatter(NullFormatter())
ax_v.xaxis.set_major_formatter(NullFormatter())
plt.axes(ax_i)
plt.imshow(mean, aspect='auto', cmap="jet")
ax_h.plot(list(range(0, 28)), mean.sum(axis=0), '-k', drawstyle='steps')
#ax_h.plot(list(range(0, 28)), mean.sum(axis=0) + np.sum(statistical_uncertainty, axis=0)/2, '-k', drawstyle='steps', color='red')
#ax_h.plot(list(range(0, 28)), mean.sum(axis=0) - np.sum(statistical_uncertainty, axis=0)/2, '-k', drawstyle='steps', color='blue')
ax_h.fill_between(
list(range(0, 28)),
mean.sum(axis=0) + np.sum(statistical_uncertainty, axis=0)/2,
mean.sum(axis=0) - np.sum(statistical_uncertainty, axis=0)/2,
step = 'pre',
facecolor = 'red',
alpha = 0.5
)
ax_h.set_xlim(-1, 27)
ax_v.plot(mean.sum(axis=1), list(range(0, 28)), '-k', drawstyle='steps')
#ax_v.plot(mean.sum(axis=1) + np.sum(statistical_uncertainty, axis=1)/2, list(range(0, 28)), '-k', drawstyle='steps', color='red')
#ax_v.plot(mean.sum(axis=1) - np.sum(statistical_uncertainty, axis=1)/2, list(range(0, 28)), '-k', drawstyle='steps', color='blue')
ax_v.fill_betweenx(
list(range(1, 29)),
mean.sum(axis=1) + np.sum(statistical_uncertainty, axis=1)/2,
mean.sum(axis=1) - np.sum(statistical_uncertainty, axis=1)/2,
step = 'pre',
facecolor = 'red',
alpha = 0.5
)
ax_v.set_ylim(0, 28)
cb = plt.colorbar(cax=ax_c)
#cb.set_label('intensity')
#ax_i.set_title('input')
#ax_h.set_xlabel('${x}$')
#ax_h.set_ylabel('intensity')
#ax_h.yaxis.set_label_position('right')
#ax_v.set_ylabel('${y}$')
#ax_v.set_xlabel('intensity')
#ax_v.xaxis.set_label_position('top')
plt.show();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: In this activity we will implement a well-known metric for measuring dissimilarity between sets
Step8: Step 2.
Step9: Step 3.
|
<ASSISTANT_TASK:>
Python Code:
import numba
import numpy as np
import numexpr as ne
import matplotlib.pyplot as plt
def metric_python(x, y):
standard Euclidean distance
ret = x-y
ret *= ret
    return np.sqrt(ret.sum())
def inf_dist_python(x, Y):
inf distance between row x and array Y
m = Y.shape[0]
inf = np.inf
for i in range(m):
dist = metric_python(x, Y[i])
if dist < inf:
inf = dist
return inf
def hausdorff_python(X, Y):
Hausdorff distance between arrays X and Y
m = X.shape[0]
n = Y.shape[0]
sup1 = -1.
sup2 = -1.
for i in range(m):
inf1 = inf_dist_python(X[i], Y)
if inf1 > sup1:
sup1 = inf1
for i in range(n):
inf2 = inf_dist_python(Y[i], X)
if inf2 > sup2:
sup2 = inf2
return max(sup1, sup2)
@numba.jit('float64 (float64[:], float64[:])')
def metric_numba(x, y):
standard Euclidean distance
ret = x-y
ret *= ret
    return np.sqrt(ret.sum())
@numba.jit('float64 (float64[:], float64[:,:])', nopython=True)
def inf_dist_numba(x, Y):
inf distance between row x and array Y
m = Y.shape[0]
inf = np.inf
for i in range(m):
dist = metric_numba(x, Y[i])
if dist < inf:
inf = dist
return inf
@numba.jit('float64 (float64[:,:], float64[:,:])', nopython=True)
def hausdorff_numba(X, Y):
Hausdorff distance between arrays X and Y
m = X.shape[0]
n = Y.shape[0]
sup1 = -1.
sup2 = -1.
for i in range(m):
inf1 = inf_dist_numba(X[i], Y)
if inf1 > sup1:
sup1 = inf1
for i in range(n):
inf2 = inf_dist_numba(Y[i], X)
if inf2 > sup2:
sup2 = inf2
return max(sup1, sup2)
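# Sanity check (illustrative): the pure-Python and Numba implementations should agree
# on the same input before we start timing them.
Xs = np.random.random((50, 3))
Ys = np.random.random((50, 3))
assert np.isclose(hausdorff_python(Xs, Ys), hausdorff_numba(Xs, Ys))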
#nrows = [10**n for n in range(10)]
nrows = np.linspace(100,5000,10).astype(int)
for nrow in nrows:
X = np.random.random((nrow,3))
Y = np.random.random((nrow,3))
tp = %timeit -o hausdorff_python(X,Y)
tn = %timeit -o hausdorff_numba(X,Y)
print("Number of rows: {0}".format(nrow))
print("Best time in Python: {0}".format(tp.best))
print("Best time in Numba: {0} \n".format(tn.best))
del X,Y
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Find time to compute using pyephem.observer.next_pass()
Step2: Speedup factor and percentage reduction.
|
<ASSISTANT_TASK:>
Python Code:
def find_passes(duration):
#instrument function calls
sat_azel.calls = 0
# orbital period in seconds
period = 24.0 * 60.0 * 60.0 / sat._n
# coarse steps to find the points near (enough) to elevation peaks
time_coarse = np.arange(0, INTERVAL_SECONDS, period/STEPS_PER_ORBIT)
el_coarse = np.fromiter((sat_azel(dt)[1] for dt in time_coarse), np.float, time_coarse.size)
peak_coarse_idx = argrelmax(el_coarse)[0]
# find all local elevation maxima
def fneg(dt):
return -sat_azel(dt)[1]
def truepeak(t, tol=0.1):
r = minimize_scalar(
fneg,
bracket=(t-period/STEPS_PER_ORBIT, t+period/STEPS_PER_ORBIT),
tol = tol/t)
return (r.x, -r.fun)
peak_true_times = np.zeros((len(peak_coarse_idx),))
peak_el = np.zeros((len(peak_coarse_idx),))
for i, dt in enumerate(time_coarse[peak_coarse_idx]):
t, el = truepeak(dt)
peak_true_times[i] = t
peak_el[i] = el
# select the visible passes
visible_idx = np.where(peak_el > HORIZON)[0]
visible_el = peak_el[visible_idx]
visible_tca = peak_true_times[visible_idx]
# find AOS events
def f(t):
return sat_azel(t)[1]
def find_aos(t):
a = (t - period*ORBIT_FRACTION)
b = t
return brentq(f, a, b, xtol=TOL)
def find_los(t):
a = t
b = (t + period*ORBIT_FRACTION)
return brentq(f, a, b, xtol=TOL)
visible_aos = np.fromiter((find_aos(t) for t in visible_tca), np.float, len(visible_tca))
visible_los = np.fromiter((find_los(t) for t in visible_tca), np.float, len(visible_tca))
return (visible_aos, visible_tca, visible_los, visible_el, peak_true_times, peak_el)
(visible_aos, visible_tca, visible_los, visible_el, peak_true_times, peak_el) = find_passes(INTERVAL_SECONDS)
%timeit find_passes(INTERVAL_SECONDS)
print('N passes:', len(visible_tca))
print('N calls:', sat_azel.calls)
print('Avg calls per found pass:', sat_azel.calls / len(visible_tca))
plt.plot(peak_true_times, peak_el, '+')
plt.plot(visible_tca, visible_el, '+r')
plt.plot(visible_aos, np.zeros_like(visible_tca), '.r')
plt.plot(visible_los, np.zeros_like(visible_tca), '.r')
import sys
from satbazaar import db
observer = {
'name':gs.name,
'lat':gs.lat,
'lon':gs.lon,
'altitude':gs.elevation,
'min_horizon': 0.0
}
satellite = {
'name':line0,
'tle':[line0, line1, line2],
'norad_cat_id': int(line2.split()[1]),
}
passes = db.compute_passes_ephem([observer, satellite, start, len(visible_tca), INTERVAL_SECONDS/60])
%timeit -r1 -n1 passes = db.compute_passes_ephem([observer, satellite, start, len(visible_tca), INTERVAL_SECONDS/60])
# number of calls from an instrumented pyephem for these conditions
obj_cir_calls = 278228
100 * ((sat_azel.calls / obj_cir_calls) - 1)
1 / (sat_azel.calls / obj_cir_calls)
len(passes)
def td(time):
return (time - start).total_seconds()
plt.figure(figsize=(20, 6))
plt.interactive(False)
for p in passes:
aos = td(p.start)
tca = td(p.tca)
los = td(p.end)
el = p.max_el
plt.plot(aos, 0.0, '+b')
plt.plot(tca, el, '+b')
plt.plot(los, 0.0, '+b')
plt.plot(visible_aos, np.zeros_like(visible_aos), 'xr')
plt.plot(visible_los, np.zeros_like(visible_aos), 'xr')
plt.plot(visible_tca, visible_el, 'xr')
plt.show()
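# Rough numerical cross-check of the two methods (a sketch, not from the original
# notebook; it assumes both methods found the same passes, as the overlaid plot above
# suggests).
aos_pyephem = np.array(sorted(td(p.start) for p in passes))
if len(aos_pyephem) == len(visible_aos):
    print('Max AOS difference (s):', np.abs(aos_pyephem - np.sort(visible_aos)).max())
else:
    print('Different number of passes found:', len(aos_pyephem), 'vs', len(visible_aos))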
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We first create a constant velocity model, with one source in the middle
Step2: To output a movie, we have to set the input 'movout' to a number greater than zero. For movout=10, the movie will contain every 10 time steps.
Step3: The SeisCL Python wrapper contains a method to read the movie file.
Step4: This last variable contains a list of movies for all the output variables given by seisout. In our case, seisout=1, so the outputs are vx and vz. We can visualize the movie with the following code.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from SeisCL import SeisCL
import matplotlib.pyplot as plt
import numpy as np
seis = SeisCL()
# Constants for the modeling
seis.ND = 2
N = 200
seis.N = np.array([N, 2*N])
seis.dt = dt = 0.25e-03
seis.dh = dh = 2
seis.NT = NT = 1000
seis.seisout = 1
seis.f0 = 20
# Source and receiver positions
sx = seis.N[1]//2 * dh
sy = 0
sz = seis.N[0]//2 * dh
gx = np.arange(N//4 * dh, (N - N//4)*dh, dh)
gy = gx * 0
gz = gx * 0 + N//4*dh
gsid = gz * 0
gid = np.arange(0, len(gz))
seis.src_pos_all = np.stack([[sx], [sy], [sz], [0], [100]], axis=0)
seis.rec_pos_all = np.stack([gx, gy, gz, gsid, gid, gx * 0, gx * 0, gx * 0], axis=0)
# We start with a simple model
vp_a = np.zeros(seis.N) + 3500
vs_a = np.zeros(seis.N) + 2000
rho_a = np.zeros(seis.N) + 2000
seis.movout = 20
seis.set_forward(seis.src_pos_all[3, :], {"vp": vp_a, "rho": rho_a, "vs": vs_a}, withgrad=False)
stdout = seis.execute()
movs = seis.read_movie()
from matplotlib import animation
from IPython.display import HTML
toplot = movs[0][:,:,:,0]
fig = plt.figure(figsize=(6, 6))
im = plt.imshow(toplot[:,:,0], animated=True, vmin=np.min(toplot) / 10, vmax=np.max(toplot) / 10)
def init():
im.set_array(toplot[:,:,0])
return im,
def animate(t):
im.set_array(toplot[:,:,t])
return [im]
plt.close()
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=movs[0].shape[2]-1, interval=100, blit=True, repeat=True)
HTML(anim.to_html5_video())
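# Optionally write the animation to disk as well (a sketch; it assumes a working
# ffmpeg install, which matplotlib's mp4 writer relies on).
anim.save("movie.mp4", fps=10, dpi=100)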
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Slicing and indexing
Step2: But when we plot the red channel...
Step3: Obviously that's not red at all. The reason is that there's nothing to tell us that this array is supposed to be red
Step4: The shape of this array is the same as it would be for any gray-scale image.
Step5: Combining color-slices with row/column-slices
Step6: Histograms
Step7: That didn't work as expected. How would you fix the call above to make it work correctly?
Step8: Using this function, let's look at the histogram of a grayscale image
Step9: An image histogram shows the number of pixels at each intensity value (or range of intensity values, if values are binned). Low-intensity values are closer to black, and high-intensity values are closer to white.
Step10: As you can see, the intensity for each RGB channel is plotted separately. Unlike the previous histogram, these histograms almost look like Gaussian distributions that are shifted. This reflects the fact that intensity changes are relatively gradual in this picture
Step11: Notice the intensity values at the bottom. Since the image has a dtype of uint8, the values go from 0 to 255. Though you can see some pixels tail off toward 255, you can clearly see in the histogram, and in the image, that we're not using the high-intensity limits very well.
Step12: The contrast is visibly higher in the image, and the histogram is noticeably stretched. The sharp peak on the right is due to all the pixels greater than 180 (in the original image) that were piled into a single bin (i.e. 255).
Step13: Parameters for rescale_intensity
Step14: For each intensity value, the CDF gives the fraction of pixels below that intensity value.
Step15: The tightly distributed dark-pixels in the coat have been spread out, which reveals many details in the coat that were missed earlier. As promised, this more even distribution produces a CDF that approximates a straight line.
Step16: Functions in scikit-image allow any data-type as an input, but the output data-type may change depending on the algorithm. While uint8 is really efficient in terms of storage, we'll see in the next section that computations using uint8 images can be problematic in certain cases.
Step17: Contrasted-limited, adaptive histogram equalization
Step18: Compared to plain-old histogram equalization, the high contrast in the coat is maintained, but the contrast in the grass is also improved.
Step19: There's a bit more tweaking involved in using equalize_adapthist than in equalize_hist
Step20: Histograms and thresholding
Step21: Here the man and the tripod are fairly close to black, and the rest of the scene is mostly gray. But if you wanted to separate the two, how do you decide on a threshold value just based on the image? Looking at the histogram, it's pretty clear that a value of about 50 will separate the two large portions of this image.
Step22: Note that the histogram plotted here is for the image before thresholding.
Step23: Note that the features of the man's face are slightly better resolved in this case.
Step24: Color spaces
Step25: Here, we'll look at the LAB (a.k.a. CIELAB) color space (L = luminance, a and b define a 2D description of the actual color or "hue")
Step26: Converting to LAB didn't change the shape of the image at all. Let's try to plot it
Step27: Matplotlib expected an RGB array, and apparently, LAB arrays don't look anything like RGB arrays.
Step28: Lab gamut, showing only sRGB colors
Step29: Further reading
|
<ASSISTANT_TASK:>
Python Code:
from skimage import data
color_image = data.chelsea()
print(color_image.shape)
plt.imshow(color_image);
red_channel = color_image[:, :, 0] # or color_image[..., 0]
plt.imshow(red_channel);
red_channel.shape
from skimage import io
color_image = io.imread('../images/balloon.jpg')
import skdemo
#skdemo. # <TAB>
# This code is just a template to get you started.
red_image = np.zeros_like(color_image)
green_image = np.zeros_like(color_image)
blue_image = np.zeros_like(color_image)
skdemo.imshow_all(color_image, red_image, green_image, blue_image)
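# One possible way to fill in the template above (a sketch, not the official solution):
# keep only the corresponding channel in each otherwise-black copy.
red_image[:, :, 0] = color_image[:, :, 0]
green_image[:, :, 1] = color_image[:, :, 1]
blue_image[:, :, 2] = color_image[:, :, 2]
skdemo.imshow_all(color_image, red_image, green_image, blue_image)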
color_patches = color_image.copy()
# Remove green (1) & blue (2) from top-left corner.
color_patches[:100, :100, 1:] = 0
# Remove red (0) & blue (2) from bottom-right corner.
color_patches[-100:, -100:, (0, 2)] = 0
plt.imshow(color_patches);
#plt.hist(color_image)
skdemo.imshow_with_histogram?
image = data.camera()
skdemo.imshow_with_histogram(image);
cat = data.chelsea()
skdemo.imshow_with_histogram(cat);
image = data.camera()
skdemo.imshow_with_histogram(image);
from skimage import exposure
high_contrast = exposure.rescale_intensity(image, in_range=(10, 180))
skdemo.imshow_with_histogram(high_contrast);
exposure.rescale_intensity?
ax_image, ax_hist = skdemo.imshow_with_histogram(image)
skdemo.plot_cdf(image, ax=ax_hist.twinx())
equalized = exposure.equalize_hist(image)
ax_image, ax_hist = skdemo.imshow_with_histogram(equalized)
skdemo.plot_cdf(equalized, ax=ax_hist.twinx())
equalized.dtype
import skimage
#skimage.img_as # <TAB>
equalized = exposure.equalize_adapthist(image)
ax_image, ax_hist = skdemo.imshow_with_histogram(equalized)
skdemo.plot_cdf(equalized, ax=ax_hist.twinx())
equalized.dtype
exposure.equalize_adapthist?
skdemo.imshow_with_histogram(image);
threshold = 50
ax_image, ax_hist = skdemo.imshow_with_histogram(image)
# This is a bit of a hack that plots the thresholded image over the original.
# This just allows us to reuse the layout defined in `plot_image_with_histogram`.
ax_image.imshow(image > threshold)
ax_hist.axvline(threshold, color='red');
# Rename module so we don't shadow the builtin function
import skimage.filter as filters
threshold = filters.threshold_otsu(image)
print(threshold)
plt.imshow(image > threshold);
import skimage.filter as filters
# filters.threshold  # <TAB>
from skimage import color
#color.rgb2 # <TAB>
plt.imshow(color_image);
from skimage import color
lab_image = color.rgb2lab(color_image)
lab_image.shape
plt.imshow(lab_image);
skdemo.imshow_all(lab_image[..., 0], lab_image[..., 1], lab_image[..., 2],
titles=['L', 'a', 'b'])
# Your code here
%reload_ext load_style
%load_style ../themes/tutorial.css
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Engine
|
<ASSISTANT_TASK:>
Python Code:
# Yummly API example -- a single recipe record returned by the search endpoint:
#     "id": "Garlic-Cheese-Chicken-1041442",
#     "recipeName": "Garlic Cheese Chicken",
#     "ingredients": ["melted butter",
#                     "garlic cloves",
#                     "garlic powder",
#                     "salt",
#                     "corn flakes",
#                     "shredded cheddar cheese",
#                     "grated parmesan cheese",
#                     "pepper",
#                     "boneless skinless chicken breasts"],
#
# Food compound breakdown (recipe id -> compound ids):
#     'Garlic-Cheese-Chicken-1041442': [37, 79, 132, 165, 177, 243, 250, 260,
#                                       273, 295, 348, 361, 378, 393, 434, 442,
#                                       460, 519, 602, 620, 628, 632, 704, 737,
#                                       767, 778, 793, 867, 886, 957, 965, 971,
#                                       1012, 1021, 1069, 1092]
#
# Top 10 recipes with engine scores:
#     ('Mango-Pomegranate-Guacamole-967911', 0.0023206751054852333),
#     ('Sauteed-Swiss-Chard-984678', 0.0025110071546505233),
#     ('Flourless-Brownie-Muffins-770880', 0.0028129395218002813),
#     ('Simple-Guacamole-995370', 0.003872418387741506),
#     ('Leek_-Mushrooms-And-Tomatoes-Quiche-1021989', 0.005748945147679325),
#     ('Limoncello-Ice-Cream-Cake-940127', 0.00616682895163908),
#     ('Vegan-Buffalo-Cauliflower-With-Sweet-Potato-Noodles-_Vegan_-Gluten-Free_-1020823', 0.006724683544303797),
#     ('Poor-Irish-Soup-Allrecipes', 0.007497565725413826),
#     ('Dill-Dip-Ii-Allrecipes', 0.010594386351128235),
#     ('Eggplant-Rolls-with-Spicy-Tomato-Sauce-1049581', 0.012002161160852114)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This sets up the colors we want in our fractal image.
Step2: Let's use over Mandelbrot test function to check some examples of "c"
Step3: 3. Generate a Fractal Image
Step4: Compare to an animated fractal GIF from the web
Step5: Now we open up the value of c to be defined by us and let the pixel location relate to the value of $z_{n}$
Step6: The default Julia Set output looks a little like Romanesco Broccoli, eh?
Step7: For Barnsley's Fern
|
<ASSISTANT_TASK:>
Python Code:
#The Python Imaging Library (PIL)
from PIL import Image, ImageDraw
# Basic math and color tools
import math, colorsys, numpy
# Mathematical plotting
import matplotlib as mpl
from matplotlib import colors as mplcolors
import matplotlib.pyplot as plt
# Displaying real graphical images (pictures)
from IPython.display import Image as ipythonImage
import pickle,glob
# Graphical representation library PANDAS
import pandas as pd
# Imports a lot of matplotlib code to run inline
#%pylab inline
%matplotlib notebook
import ipywidgets as widgets
# Perform the Mandelbrot Set test.
divergence_test_value = 2.0
# The mandelbrot_test function takes a test number, "c", and checks if it is in the Mandelbrot set.
# If the Mandelbrot series diverges for "c", it's not in the Mandelbrot set. To help us draw the
# image, return the value of the interation (n) at which the divergence test fails. If it doesn't
# diverge, return -1 to indicate that. Try 100 iterations to see if it diverges; after 100, assume
# it converges.
global mandelbrot_max_iterations
mandelbrot_max_iterations = 100
def mandelbrot_test(c):
z_n = complex(0,0)
for n in range(0,mandelbrot_max_iterations):
z_n = z_n*z_n + c
if abs(z_n) > divergence_test_value:
return n
return -1
test_c = 1
diverges_at = mandelbrot_test(test_c)
print(f"The test number {test_c} diverges at {diverges_at} iterations.")
test_c = 0
diverges_at = mandelbrot_test(test_c)
print(f"The test number {test_c} diverges at {diverges_at}.")
test_c = 0.5
diverges_at = mandelbrot_test(test_c)
print(f"The test number {test_c} diverges at {diverges_at} iterations.")
test_c = -1
diverges_at = mandelbrot_test(test_c)
print(f"The test number {test_c} diverges at {diverges_at}.")
# Define the physical maximal width and height of the image we will make
x_max = 800 #pixels
y_max = 800 #pixels
# Image step size and grid of x,y coordinates
dy=1
dx=1
y, x = numpy.mgrid[slice(0, y_max + dy, dy),
slice(0, x_max + dx, dx)]
# Recenter the cool part of the image
offset=(1.8,1.0) # making these numbers bigger slides the image down and to the right
x_scale = 2.0 # the smaller this is, the more you zoom in on the x-axis
y_scale = 2.0 # the smaller this is, the more you zoom in on the y-axis
# Calculate the numbers we will test using the Mandelbrot Set tester function
z = (x*x_scale/x_max-offset[0]) + 1j*(y*y_scale/y_max-offset[1])
# Create an array of the function in memory to deploy across all z's and compute all results
print(f"Please be patient while we test all the pixels in the {x_max}x{y_max} grid to see if they are in the Mandelbrot Set...")
n = numpy.vectorize(mandelbrot_test)(z)
# Convert the test results to a color scale between 0 and 1 (1 means "in the Mandelbrot set,"
# while [0-1) represents the fraction of total iterations before it diverged!
def ColorValueFromTest(n, max):
#Checks if we are in the Mandelbrot Set
if n == -1:
v=1
#If not, compute the fraction of total iterations
else:
v=float(n)/float(max)
return v
# compute the color values from the Mandelbrot tests
v = numpy.vectorize(ColorValueFromTest)(n, mandelbrot_max_iterations)
v = v[:-1, :-1]
# Time to draw!
# Create a colormesh plot to make the final figure
figure, axes = plt.subplots(figsize=(8,8))
color_map = plt.get_cmap('PiYG')
color_levels = mpl.ticker.MaxNLocator(nbins=15).tick_values(v.min(), v.max())
color_scale = mpl.colors.BoundaryNorm(color_levels, ncolors=color_map.N, clip=True)
plot_mandelbrot = plt.pcolormesh(x, y, v, cmap=color_map, norm=color_scale)
plt.tight_layout()
print("Here is the final image object after testing each pixel:")
figure.canvas.draw()
plt.show()
print(f"Save the fractal image to disk...")
fractal_filename = "fractal.png"
figure.savefig(fractal_filename)
def zn_operator(c, z_n):
# Uncomment one of these to alter the function used in the set iteration - play around!
# zn_result = z_n*z_n +c # this is what we used in the Mandelbrot Set
zn_result = numpy.power(z_n,2) + c # vary the power to change the function, e.g. change 2 to 3, etc.
#zn_result = numpy.sin(z_n) + c
#zn_result = numpy.cos(z_n) + c
#zn_result = numpy.tan(z_n) + c
#zn_result = numpy.asin(z_n) + c
#zn_result = numpy.acos(z_n) + c
#zn_result = numpy.atan(z_n) + c
#zn_result = numpy.sinh(z_n) + c
#zn_result = numpy.cosh(z_n) + c
#zn_result = numpy.tanh(z_n) + c
#zn_result = numpy.asinh(z_n) + c
#zn_result = numpy.acosh(z_n) + c
#zn_result = numpy.atanh(z_n) + c
#zn_result = numpy.exp(z_n) + c
#zn_result = numpy.log(z_n) + c # natural logarithm
#zn_result = numpy.log(z_n,10) + c # logarithm base 10; change the base to your liking!
return zn_result
# Define the Julia Set test function, which employs the zn_operator() defined above
divergence_test_value = 2.0
julia_max_iterations = 100
def julia_test(c, z_n = -2.0**.5):
for n in range(0,julia_max_iterations):
z_n = zn_operator(c, z_n)
if abs(z_n) > divergence_test_value:
return n
return -1
# Define the physical maximal width and height of the image we will make
x_max = 800 #pixels
y_max = 800 #pixels
# Image step size and grid of x,y coordinates
dy=1
dx=1
y, x = numpy.mgrid[slice(0, y_max + dy, dy),
slice(0, x_max + dx, dx)]
# Recenter the cool part of the image
offset=(1.5,1.5) # making these numbers bigger slides the image down and to the right
x_scale = 3.0 # the smaller this is, the more you zoom in on the x-axis
y_scale = 3.0 # the smaller this is, the more you zoom in on the y-axis
# Calculate the numbers we will test using the Mandelbrot Set tester function
z = (x*x_scale/x_max-offset[0]) + 1j*(y*y_scale/y_max-offset[1])
# Create an array of the function in memory to deploy across all z's and compute all results
print(f"Please be patient while we test all the pixels in the {x_max}x{y_max} grid to see if they are in the Julia Set...")
# Set the number to test in the Julia Set - you can play with this to change the image!
c_julia = complex(-0.4, 0.6)
n = numpy.vectorize(julia_test)(c_julia, z)
# compute the color values from the Mandelbrot tests
v = numpy.vectorize(ColorValueFromTest)(n, julia_max_iterations)
v = v[:-1, :-1]
# Time to draw!
# Create a colormesh plot to make the final figure
figure, axes = plt.subplots(figsize=(8,8))
color_map = plt.get_cmap('summer')
color_levels = mpl.ticker.MaxNLocator(nbins=15).tick_values(v.min(), v.max())
color_scale = mpl.colors.BoundaryNorm(color_levels, ncolors=color_map.N, clip=True)
plot_julia = plt.pcolormesh(x, y, v, cmap=color_map, norm=color_scale)
plt.tight_layout()
print("Here is the final image object after testing each pixel:")
figure.canvas.draw()
plt.show()
print(f"Save the fractal image to disk...")
fractal_filename = "julia.png"
figure.savefig(fractal_filename)
# Define a function for drawing Barnsley's Fern
def BarnsleysFern(f,itt):
colname = ["percent","a","b","c","d","e","f"]
fern_structure_frame = pd.DataFrame(data=numpy.array(f), columns = colname)
print(fern_structure_frame)
if itt > 5000:
itt = 5000
    x, y = 0.5, 0.0
xypts=[]
print(f"Sum of percentages in your settings: {fern_structure_frame['percent'].sum()}")
if (1.0 - fern_structure_frame['percent'].sum()) > 1e-10:
print("Probabilities must sum to 1")
return
for i in range(itt):
rand = (numpy.random.random())
cond = 0.0
for j in range(len(f)):
if (cond <= rand) and (rand <= (cond+f[j][0])):
x = f[j][1]*x+f[j][2]*y+f[j][5]
y = f[j][3]*x+f[j][4]*y+f[j][6]
xypts.append((x,y))
cond = cond + f[j][0]
xmax,ymax = max(abs(numpy.transpose(xypts)[0])),max(abs(numpy.transpose(xypts)[1]))
figure, axes = plt.subplots(figsize=(6,6))
color = numpy.transpose([[abs(r)/xmax for r in numpy.transpose(xypts)[0]],[abs(g)/ymax for g in numpy.transpose(xypts)[1]],[b/itt for b in range(itt)]])
plt.scatter(numpy.transpose(xypts)[0],numpy.transpose(xypts)[1],alpha=0.5, facecolors=color, edgecolors='none', s=1)
plt.tight_layout()
figure.canvas.draw()
plt.show()
# Define the fern structure and draw one
fern_structure = \
((0.01,0.0,0.0,0.0,0.16,0.0,0.0),
(0.85,0.85,0.08,-0.08,0.85,0.0,1.60),
(0.07,0.20,-0.26,0.23,0.22,0.0,1.60),
(0.07,-0.15,0.28,0.26,0.24,0.0,0.44))
BarnsleysFern(fern_structure,5000)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This shows the distribution of prices for these
Step2: Density takes a value, x, and returns the
Step3: __init__ takes mu and sigma, which are the
Step4: __init__ takes a sample and computes a kernel density estimate. The
Step5: pdf is a Pdf object, estimated by KDE.
Step6: Again, we use the variance of diff to estimate the variance
Step7: pmf represents the prior distribution and
Step8: player and opponent are Player
|
<ASSISTANT_TASK:>
Python Code:
from price import *
import matplotlib.pyplot as plt
player1, player2 = MakePlayers(path='../code')
MakePrice1(player1, player2)
plt.legend();
class Pdf(object):
def Density(self, x):
raise UnimplementedMethodException()
def MakePmf(self, xs):
pmf = Pmf()
for x in xs:
pmf.Set(x, self.Density(x))
pmf.Normalize()
return pmf
class GaussianPdf(Pdf):
def __init__(self, mu, sigma):
self.mu = mu
self.sigma = sigma
def Density(self, x):
return scipy.stats.norm.pdf(x, self.mu, self.sigma)
class EstimatedPdf(Pdf):
def __init__(self, sample):
self.kde = scipy.stats.gaussian_kde(sample)
def Density(self, x):
return self.kde.evaluate(x)
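# A tiny illustration of the Pdf interface sketched above (values are arbitrary; this
# assumes thinkbayes ships the same GaussianPdf class shown here, as in the book's code).
demo_pdf = thinkbayes.GaussianPdf(mu=0.0, sigma=1.0)
print(demo_pdf.Density(0.0))
demo_pmf = demo_pdf.MakePmf(numpy.linspace(-3, 3, 61))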
data = ReadData(path='../code')
cols = zip(*data)
price1, price2, bid1, bid2, diff1, diff2 = cols
pdf = thinkbayes.EstimatedPdf(price1)
low, high = 0, 75000
n = 101
xs = numpy.linspace(low, high, n)
pdf.kde.evaluate([3, 3])
pmf = pdf.MakePmf(xs)
thinkplot.Pmfs([pmf])
MakePrice2(player1, player2)
class Price(thinkbayes.Suite):
def __init__(self, pmf, player):
thinkbayes.Suite.__init__(self, pmf)
self.player = player
def Likelihood(self, data, hypo):
price = hypo
guess = data
error = price - guess
like = self.player.ErrorDensity(error)
return like
class GainCalculator(object):
def __init__(self, player, opponent):
self.player = player
self.opponent = opponent
def ExpectedGains(self, low=0, high=75000, n=101):
bids = numpy.linspace(low, high, n)
gains = [self.ExpectedGain(bid) for bid in bids]
return bids, gains
def ExpectedGain(self, bid):
suite = self.player.posterior
total = 0
for price, prob in sorted(suite.Items()):
gain = self.Gain(bid, price)
total += prob * gain
return total
def Gain(self, bid, price):
# if you overbid, you get nothing
if bid > price:
return 0
# otherwise compute the probability of winning
diff = price - bid
prob = self.ProbWin(diff)
# if you are within 250 dollars, you win both showcases
if diff <= 250:
return 2 * price * prob
else:
return price * prob
def ProbWin(self, diff):
prob = (self.opponent.ProbOverbid() +
self.opponent.ProbWorseThan(diff))
return prob
player1.MakeBeliefs(20000)
player2.MakeBeliefs(40000)
calc1 = GainCalculator(player1, player2)
calc2 = GainCalculator(player2, player1)
bids, gains = calc1.ExpectedGains()
thinkplot.Plot(bids, gains, label='Player 1')
print('Player 1 optimal bid', max(zip(gains, bids)))
bids, gains = calc2.ExpectedGains()
thinkplot.Plot(bids, gains, label='Player 2')
plt.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fires in Nepal
Step2: Further subset using bounding box
|
<ASSISTANT_TASK:>
Python Code:
cmr=CMR("../cmr.cfg")
results = cmr.searchGranule(entry_title='MODIS/Aqua Near Real Time (NRT) Thermal Anomalies/Fire 5-Min L2 Swath 1km (C005)',
temporal="2016-04-11T12:00:00Z,2016-04-11T13:00:00Z")
for res in results:
print(res.getDownloadUrl())
results = cmr.searchGranule(entry_title='MODIS/Aqua Near Real Time (NRT) Thermal Anomalies/Fire 5-Min L2 Swath 1km (C005)',
temporal="2016-04-11T12:00:00Z,2016-04-11T13:00:00Z", bounding_box="30,26,90,31")
for res in results:
print(res.getDownloadUrl())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here are a few examples from the data set (samples $20000-20009$)
Step2: A common task then looks like this
Step3: Note that data matrices in Python are often constructed such that the single data vectors are stored in rows (like above). This is exactly the other way round from what you would expect from the mathematical notation, where vectors are usually written as columns. But it has other advantages, like the possibility of accessing the single data vectors by X[i] instead of X[i, :].
Step4: Calculate eigenvalues and eigenvectors of covariance matrix $C$ and project data on it
Step5: As the documentation tells us (call np.linalg.eigh?), the resulting eigenvalues and eigenvectors are not necessarily ordered. But there is a useful function to order them
Step6: Note that E[idc] uses a concept called advanced indexing, i.e., accessing elements in an array through a list of indices. For more details see the official documentation.
Step7: Now plot each class in the two-dimensional feature space
Step8: Indeed, quite a lot of information of the data set was preserved in the sense that the different classes can still be distinguished to some degree with only two dimensions (instead of $786$). As you may expect, classes become even more distinguishable when you add some more feature dimensions.
Step9: Now we have a NumPy array of $400$ faces, $64 \times 64 = 4096$ pixels each. Visualize the first 18 images!
Step10: As with the MNIST data set, calculating the covariance can take a few minutes. We therefore suggest doing this calculation in a separate block.
Step11: Then, again, the eigenvectors must be calculated and sorted...
Step12: With the result we do something a bit different from above. This time we want to visualize the eigenvectors themselves. Note that each eigenvector $\textbf{u}_i$ is a vector of $64 \times 64 = 4096$ dimensions, each component corresponding to one pixel in the image space. Thus, we can re-arrange each eigenvector into a $64 \times 64$ 2D NumPy array and visualize it with plt.imshow as if it were an image. The result makes clear why the approach was called Eigenfaces. It should look somewhat like this
Step13: Now these eigenfaces model how much variance there is in the pixels relative to the mean value. By calculating the mean face and then adding or subtracting the eigenfaces we can reconstruct proper faces and even visualize the meaning that the different components have.
Step14: Next we visualize how the faces vary in different directions. Take the mean face and visualize what it looks like when the first eigenface is added to or subtracted from it. Do the same for some more eigenfaces...
|
<ASSISTANT_TASK:>
Python Code:
# prepare some imports
import numpy as np
from sklearn.datasets import fetch_mldata
import matplotlib.pyplot as plt
%matplotlib inline
# plot samples from MNIST
mnist = fetch_mldata('MNIST original')
for i in range(9):
plt.subplot(3, 3, i+1)
sample = mnist.data[20000+i]
sample_2d = sample.reshape((28, 28))
plt.imshow(sample_2d, cmap='gray')
X = mnist.data
C = np.cov(X, rowvar=0) # slow! may take a while because the data set is large
print 'X.shape:', X.shape
print 'C.shape:', C.shape
np.cov?
E, U = np.linalg.eigh(C) # Eigenvalues and eigenvectors of C
idc = np.argsort(E)
print idc[:10] # first ten indices that would sort the eigenvalues (in increasing order)
print E[idc][:10] # first ten sorted eigenvalues
# sort eigenvalues and eigenvectors
E = E[idc]
U = U[:,idc] # sort columns
U = U[:,-2:] # consider the last two ("largest") eigenvectors
Y = np.dot(mnist.data, U) # project data on these eigenvectors
print Y.shape
for i in range(10):
mask = (mnist.target == i)
class_i = Y[mask,:]
plt.plot(class_i[:,0], class_i[:,1], '.')
import sklearn.datasets
download = sklearn.datasets.fetch_olivetti_faces()
faces = download['data']
print faces.shape
for i in range(18):
# your code here
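    # A possible loop body (a sketch, not the decrypted official solution): show the
    # first 18 faces in a 3x6 grid of grayscale images.
    plt.subplot(3, 6, i + 1)
    plt.imshow(faces[i].reshape((64, 64)), cmap='gray')
    plt.axis('off')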
# our solution
from solutions import *
decrypt_solution(solution_pca_1, passphrase='foo')
# calculate covariance for faces
# your code here
# our solution
from solutions import *
decrypt_solution(solution_pca_2, passphrase='foo')
# calculate eigenvectors of face covariance and sort them
# your code here
# our solution
from solutions import *
decrypt_solution(solution_pca_3, passphrase='foo')
# plot the 9 "largest" eigenvectors/eigenfaces
for i in range(9):
# your code here
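    # A possible loop body (a sketch; it assumes the previous exercise stored the face
    # covariance's eigenvectors, sorted by eigenvalue, column-wise in U, mirroring the
    # MNIST example above).
    plt.subplot(3, 3, i + 1)
    plt.imshow(U[:, -(i + 1)].reshape((64, 64)), cmap='gray')
    plt.axis('off')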
# our solution
from solutions import *
decrypt_solution(solution_pca_4, passphrase='foo')
# your code here
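# A sketch for this step (not the decrypted official solution): compute the mean face
# and display it.
mean_face = faces.mean(axis=0)
plt.imshow(mean_face.reshape((64, 64)), cmap='gray')
plt.title('mean face')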
# our solution
from solutions import *
decrypt_solution(solution_pca_5, passphrase='foo')
# your code here
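# A sketch for this step (assumes mean_face from the previous sketch and the sorted
# eigenvector matrix U as above; the scaling factor is an arbitrary choice).
scale = 3.0
for k in range(3):
    eigenface = U[:, -(k + 1)]
    variants = [mean_face - scale * eigenface, mean_face, mean_face + scale * eigenface]
    for j, face in enumerate(variants):
        plt.subplot(3, 3, 3 * k + j + 1)
        plt.imshow(face.reshape((64, 64)), cmap='gray')
        plt.axis('off')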
# our solution
from solutions import *
decrypt_solution(solution_pca_6, passphrase='foo')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) Document
Step2: 2) Tokenization
Step3: 3) Text corpus
Step4: 4) Stop words
Step5: 5) Vectorize
Step6: Bag of words
|
<ASSISTANT_TASK:>
Python Code:
# first, get some text:
import fileinput
try:
import ujson as json
except ImportError:
import json
documents = []
for line in fileinput.FileInput("example_tweets.json"):
documents.append(json.loads(line)["text"])
print("One document: \"{}\"".format(documents[0]))
from nltk.stem import porter
from nltk.tokenize import TweetTokenizer
# tokenize the documents
# find good information on tokenization:
# https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html
# find documentation on pre-made tokenizers and options here:
# http://www.nltk.org/api/nltk.tokenize.html
tknzr = TweetTokenizer(reduce_len = True)
# stem the documents
# find good information on stemming and lemmatization:
# https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html
# find documentation on available pre-implemented stemmers here:
# http://www.nltk.org/api/nltk.stem.html
stemmer = porter.PorterStemmer()
for doc in documents[0:10]:
tokenized = tknzr.tokenize(doc)
stemmed = [stemmer.stem(x) for x in tokenized]
print("Original document:\n{}\nTokenized result:\n{}\nStemmed result:\n{}\n".format(
doc, tokenized, stemmed))
# number of documents in the corpus
print("There are {} documents in the corpus.".format(len(documents)))
from nltk.corpus import stopwords
stopset = set(stopwords.words('english'))
print("The English stop words list provided by NLTK: ")
print(stopset)
stopset.update(["twitter"]) # add token
stopset.remove("i") # remove token
print("\nAdd or remove stop words form the set: ")
print(stopset)
# we're going to use the vectorizer functions that scikit learn provides
# define the tokenizer that we want to use
# must be a callable function that takes a document and returns a list of tokens
tknzr = TweetTokenizer(reduce_len = True)
stemmer = porter.PorterStemmer()
def myTokenizer(doc):
return [stemmer.stem(x) for x in tknzr.tokenize(doc)]
# choose the stopword set that we want to use
stopset = set(stopwords.words('english'))
stopset.update(["http","https","twitter","amp"])
# vectorize
# we're using the scikit learn CountVectorizer function, which is very handy
# documentation here:
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
vectorizer = CountVectorizer(tokenizer = myTokenizer, stop_words = stopset)
vectorized_documents = vectorizer.fit_transform(documents)
vectorized_documents
import matplotlib.pyplot as plt
%matplotlib inline
_ = plt.hist(vectorized_documents.todense().sum(axis = 1))
_ = plt.title("Number of tokens per document")
_ = plt.xlabel("Number of tokens")
_ = plt.ylabel("Number of documents with x tokens")
from numpy import logspace, ceil, histogram, array
# get the token frequency
token_freq = sorted(vectorized_documents.todense().astype(bool).sum(axis = 0).tolist()[0], reverse = False)
# make a histogram with log scales
bins = array([ceil(x) for x in logspace(0, 3, 5)])
widths = (bins[1:] - bins[:-1])
hist = histogram(token_freq, bins=bins)
hist_norm = hist[0]/widths
# plot (notice that most tokens only appear in one document)
plt.bar(bins[:-1], hist_norm, widths)
plt.xscale('log')
plt.yscale('log')
_ = plt.title("Number of documents in which each token appears")
_ = plt.xlabel("Number of documents")
_ = plt.ylabel("Number of tokens")
# documentation on this sckit-learn function here:
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html
tfidf_vectorizer = TfidfVectorizer(tokenizer = myTokenizer, stop_words = stopset)
tfidf_vectorized_documents = tfidf_vectorizer.fit_transform(documents)
tfidf_vectorized_documents
# you can look at two vectors for the same document, from 2 different vectorizers:
tfidf_vectorized_documents[0].todense().tolist()[0]
vectorized_documents[0].todense().tolist()[0]
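# A small illustration (not from the original notebook): map the TF-IDF weights of the
# first document back to tokens. Note that get_feature_names() is the older scikit-learn
# spelling; recent versions renamed it to get_feature_names_out().
import numpy as np
feature_names = np.array(tfidf_vectorizer.get_feature_names())
weights = tfidf_vectorized_documents[0].toarray().ravel()
top_idx = weights.argsort()[::-1][:5]
print(list(zip(feature_names[top_idx], weights[top_idx])))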
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The next code cell repeats the work that you did in the previous exercise.
Step3: Next, we define a couple functions that you'll use to test the encodings that you implement in this exercise.
Step4: Run this cell to get a baseline score.
Step5: 1) Categorical encodings and leakage
Step6: 2) Count encodings
Step7: Next, encode the categorical features ['ip', 'app', 'device', 'os', 'channel'] using the count of each value in the data set.
Step8: Run the next code cell to see how count encoding changes the results.
Step9: Count encoding improved our model's score!
Step10: 4) Target encoding
Step11: Run the next cell to see how target encoding affects your results.
Step12: 5) Try removing IP encoding
Step13: 6) CatBoost Encoding
Step14: Run the next code cell to see how the CatBoost encoder changes your results.
|
<ASSISTANT_TASK:>
Python Code:
# Set up code checking
# This can take a few seconds
from learntools.core import binder
binder.bind(globals())
from learntools.feature_engineering.ex2 import *
import numpy as np
import pandas as pd
from sklearn import preprocessing, metrics
import lightgbm as lgb
clicks = pd.read_parquet('../input/feature-engineering-data/baseline_data.pqt')
def get_data_splits(dataframe, valid_fraction=0.1):
"""Splits a dataframe into train, validation, and test sets.
First, orders by the column 'click_time'. Set the size of the
validation and test sets with the valid_fraction keyword argument."""
dataframe = dataframe.sort_values('click_time')
valid_rows = int(len(dataframe) * valid_fraction)
train = dataframe[:-valid_rows * 2]
# valid size == test size, last two sections of the data
valid = dataframe[-valid_rows * 2:-valid_rows]
test = dataframe[-valid_rows:]
return train, valid, test
def train_model(train, valid, test=None, feature_cols=None):
if feature_cols is None:
feature_cols = train.columns.drop(['click_time', 'attributed_time',
'is_attributed'])
dtrain = lgb.Dataset(train[feature_cols], label=train['is_attributed'])
dvalid = lgb.Dataset(valid[feature_cols], label=valid['is_attributed'])
param = {'num_leaves': 64, 'objective': 'binary',
'metric': 'auc', 'seed': 7}
num_round = 1000
bst = lgb.train(param, dtrain, num_round, valid_sets=[dvalid],
early_stopping_rounds=20, verbose_eval=False)
valid_pred = bst.predict(valid[feature_cols])
valid_score = metrics.roc_auc_score(valid['is_attributed'], valid_pred)
print(f"Validation AUC score: {valid_score}")
if test is not None:
test_pred = bst.predict(test[feature_cols])
test_score = metrics.roc_auc_score(test['is_attributed'], test_pred)
return bst, valid_score, test_score
else:
return bst, valid_score
print("Baseline model")
train, valid, test = get_data_splits(clicks)
_ = train_model(train, valid)
# Check your answer (Run this code cell to receive credit!)
q_1.solution()
import category_encoders as ce
cat_features = ['ip', 'app', 'device', 'os', 'channel']
train, valid, test = get_data_splits(clicks)
# Create the count encoder
count_enc = ____
# Learn encoding from the training set
____
# Apply encoding to the train and validation sets as new columns
# Make sure to add `_count` as a suffix to the new columns
train_encoded = ____
valid_encoded = ____
# Check your answer
q_2.check()
# Uncomment if you need some guidance
# q_2.hint()
# q_2.solution()
#%%RM_IF(PROD)%%
# Create the count encoder
count_enc = ce.CountEncoder(cols=cat_features)
# Learn encoding from the training set
count_enc.fit(train[cat_features])
# Apply encoding to the train and validation sets
train_encoded = train.join(count_enc.transform(train[cat_features]).add_suffix('_count'))
valid_encoded = valid.join(count_enc.transform(valid[cat_features]).add_suffix('_count'))
q_2.assert_check_passed()
# Train the model on the encoded datasets
# This can take around 30 seconds to complete
_ = train_model(train_encoded, valid_encoded)
# Check your answer (Run this code cell to receive credit!)
q_3.solution()
# Create the target encoder. You can find this easily by using tab completion.
# Start typing ce. the press Tab to bring up a list of classes and functions.
target_enc = ____
# Learn encoding from the training set. Use the 'is_attributed' column as the target.
____
# Apply encoding to the train and validation sets as new columns
# Make sure to add `_target` as a suffix to the new columns
train_encoded = ____
valid_encoded = ____
# Check your answer
q_4.check()
# Uncomment these if you need some guidance
#q_4.hint()
#q_4.solution()
#%%RM_IF(PROD)%%
target_enc = ce.TargetEncoder(cols=cat_features)
target_enc.fit(train[cat_features], train['is_attributed'])
train_encoded = train.join(target_enc.transform(train[cat_features]).add_suffix('_target'))
valid_encoded = valid.join(target_enc.transform(valid[cat_features]).add_suffix('_target'))
q_4.assert_check_passed()
_ = train_model(train_encoded, valid_encoded)
# Check your answer (Run this code cell to receive credit!)
q_5.solution()
# Remove IP from the encoded features
cat_features = ['app', 'device', 'os', 'channel']
# Create the CatBoost encoder
cb_enc = ce.CatBoostEncoder(cols=cat_features, random_state=7)
# Learn encoding from the training set
____
# Apply encoding to the train and validation sets as new columns
# Make sure to add `_cb` as a suffix to the new columns
train_encoded = ____
valid_encoded = ____
# Check your answer
q_6.check()
# Uncomment these if you need some guidance
#q_6.hint()
#q_6.solution()
#%%RM_IF(PROD)%%
cat_features = ['app', 'device', 'os', 'channel']
cb_enc = ce.CatBoostEncoder(cols=cat_features, random_state=7)
# Learn encodings on the train set
cb_enc.fit(train[cat_features], train['is_attributed'])
# Apply encodings to each set
train_encoded = train.join(cb_enc.transform(train[cat_features]).add_suffix('_cb'))
valid_encoded = valid.join(cb_enc.transform(valid[cat_features]).add_suffix('_cb'))
q_6.assert_check_passed()
_ = train_model(train_encoded, valid_encoded)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load amazon review dataset
Step2: Extract word counts and sentiments
Step3: Now, let's remember what the dataset looks like by taking a quick peek
Step4: Split data into training and test sets
Step5: Train a logistic regression classifier
Step6: Model Evaluation
Step7: Baseline
Step8: Quiz Question
Step9: Quiz Question
Step10: Computing the cost of mistakes
Step11: Precision and Recall
Step12: Quiz Question
Step13: Quiz Question
Step14: Quiz Question
Step15: Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
Step16: Quiz Question
Step17: Quiz Question (variant 1)
Step18: For each of the values of threshold, we compute the precision and recall scores.
Step19: Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
Step20: Quiz Question
Step21: Quiz Question
Step22: This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier.
Step23: Now, let's predict the probability of classifying these reviews as positive
Step24: Let's plot the precision-recall curve for the baby_reviews dataset.
Step25: Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below.
Step26: Quiz Question
Step27: Quiz Question
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
from __future__ import division
import numpy as np
graphlab.canvas.set_target('ipynb')
products = graphlab.SFrame('amazon_baby.gl/')
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
review_clean = products['review'].apply(remove_punctuation)
# Count words
products['word_count'] = graphlab.text_analytics.count_words(review_clean)
# Drop neutral sentiment reviews.
products = products[products['rating'] != 3]
# Positive sentiment to +1 and negative sentiment to -1
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products
train_data, test_data = products.random_split(.8, seed=1)
model = graphlab.logistic_classifier.create(train_data, target='sentiment',
features=['word_count'],
validation_set=None)
accuracy= model.evaluate(test_data, metric='accuracy')['accuracy']
print "Test Accuracy: %s" % accuracy
baseline = len(test_data[test_data['sentiment'] == 1])/len(test_data)
print "Baseline accuracy (majority class classifier): %s" % baseline
confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix']
confusion_matrix
false_positive = confusion_matrix['count'][(confusion_matrix['target_label'] == -1) & (confusion_matrix['predicted_label'] == +1)]
false_positive
false_positive_test = confusion_matrix[(confusion_matrix['target_label'] == -1) & (confusion_matrix['predicted_label'] == +1)]['count'][0]
false_negative_test = confusion_matrix[(confusion_matrix['target_label'] == +1) & (confusion_matrix['predicted_label'] == -1)]['count'][0]
cost_test = (100 * false_positive_test) + (1 * false_negative_test)
print cost_test
precision = model.evaluate(test_data, metric='precision')['precision']
print "Precision on test data: %s" % precision
false_positive/confusion_matrix['count'].sum()
recall = model.evaluate(test_data, metric='recall')['recall']
print "Recall on test data: %s" % recall
def apply_threshold(probabilities, threshold):
### YOUR CODE GOES HERE
# +1 if >= threshold and -1 otherwise.
ans = graphlab.SArray([+1 if x >= threshold else -1 for x in probabilities])
return ans
probabilities = model.predict(test_data, output_type='probability')
predictions_with_default_threshold = apply_threshold(probabilities, 0.5)
predictions_with_high_threshold = apply_threshold(probabilities, 0.9)
print "Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum()
print "Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum()
# Threshold = 0.5
precision_with_default_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_default_threshold)
recall_with_default_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_default_threshold)
# Threshold = 0.9
precision_with_high_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_high_threshold)
recall_with_high_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_high_threshold)
print "Precision (threshold = 0.5): %s" % precision_with_default_threshold
print "Recall (threshold = 0.5) : %s" % recall_with_default_threshold
print "Precision (threshold = 0.9): %s" % precision_with_high_threshold
print "Recall (threshold = 0.9) : %s" % recall_with_high_threshold
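# Illustrative addition (not in the original assignment): fold precision and recall
# into a single F1 score (their harmonic mean) to make the threshold trade-off easier to compare.
f1_default = 2 * precision_with_default_threshold * recall_with_default_threshold / (precision_with_default_threshold + recall_with_default_threshold)
f1_high = 2 * precision_with_high_threshold * recall_with_high_threshold / (precision_with_high_threshold + recall_with_high_threshold)
print "F1 (threshold = 0.5): %s" % f1_default
print "F1 (threshold = 0.9): %s" % f1_high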
threshold_values = np.linspace(0.5, 1, num=100)
print threshold_values
precision_all = []
recall_all = []
probabilities = model.predict(test_data, output_type='probability')
for threshold in threshold_values:
predictions = apply_threshold(probabilities, threshold)
precision = graphlab.evaluation.precision(test_data['sentiment'], predictions)
recall = graphlab.evaluation.recall(test_data['sentiment'], predictions)
precision_all.append(precision)
recall_all.append(recall)
import matplotlib.pyplot as plt
%matplotlib inline
def plot_pr_curve(precision, recall, title):
plt.rcParams['figure.figsize'] = 7, 5
plt.locator_params(axis = 'x', nbins = 5)
plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F')
plt.title(title)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.rcParams.update({'font.size': 16})
plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')
threshold_values[np.array(precision_all) >= 0.965].min()
threshold = 0.98
predictions = apply_threshold(probabilities, threshold)
confusion_matrix = graphlab.evaluation.confusion_matrix(test_data['sentiment'], predictions)
confusion_matrix
confusion_matrix[(confusion_matrix['target_label'] == +1) & (confusion_matrix['predicted_label'] == -1)]['count'][0]
baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]
probabilities = model.predict(baby_reviews, output_type='probability')
threshold_values = np.linspace(0.5, 1, num=100)
precision_all = []
recall_all = []
for threshold in threshold_values:
# Make predictions. Use the `apply_threshold` function
## YOUR CODE HERE
predictions = apply_threshold(probabilities, threshold)
# Calculate the precision.
# YOUR CODE HERE
precision = graphlab.evaluation.precision(baby_reviews['sentiment'], predictions)
# YOUR CODE HERE
recall = graphlab.evaluation.recall(baby_reviews['sentiment'], predictions)
# Append the precision and recall scores.
precision_all.append(precision)
recall_all.append(recall)
threshold_values[np.array(precision_all) >= 0.965].min()
plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's add a plot. Turn on the "pylab" environment
Step2: Good! Next collect some lists of data (slist and tlist) and use the "plot" function to display the values graphically.
Step3: Now, during the "in class" work time this week we'll pretty this up and compare to our analytical solution.
Step5: Below are two project options.
|
<ASSISTANT_TASK:>
Python Code:
#
# Simple python program to calculate s as a function of t.
# Any line that begins with a '#' is a comment.
# Anything in a line after the '#' is a comment.
#
lam=0.01 # define some variables: lam, dt, s, s0 and t. Set initial values.
dt=1.0
s=s0=100.0
t=0.0
def f_s(s,t): # define a function that describes the rate of change of 's'
return -lam*s
print ('t s') # print a header for the table of data
for i in range(11):
print (t,s) # iterate through 11 steps, starting at 0
ds = f_s(s,t)*dt # compute the change in 's' using the 'rule' that ds/dt = f(s,t)
s = s + ds # update s
t = t + dt # update t
%matplotlib inline
import numpy as np
import matplotlib.pyplot as pl
slist=[]
tlist=[]
lam=0.01
dt=1.0
s=s0=100.0
t=0.0
tlist.append(t)
slist.append(s)
print ('t s')
for i in range(11):
s += f_s(s,t)*dt
t += dt
tlist.append(t)
slist.append(s)
#plot(tlist, slist, 'b.', tlist, 100.0*exp(-lam*array(tlist)))
print ("tlist=", tlist)
print ("slist=", slist)
exact = s0*np.exp(-lam*np.array(tlist))
print ("exact", exact)
pl.title('Decay Results')
pl.xlabel('time (s)')
pl.ylabel('n (nuclei)')
pl.plot(tlist, slist, 'b.', tlist, exact, 'r-')
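# Optional sanity check (illustrative, not from the original notebook): how far does
# the simple Euler estimate drift from the exact exponential at the stored times?
err = np.abs(np.array(slist) - exact)
print ("max absolute error over these steps:", err.max())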
#
# Here is the raw data for the position of the muffin cup as a function of time. Use the "split" function to break it into
# a list of (possibly empty) strings.
#
data = """0.000000000E0 -2.688162330E0
3.336670003E-2 -4.301059729E0
6.673340007E-2 -5.376324661E0
1.001001001E-1 -6.989222059E0
1.334668001E-1 -1.129028179E1
1.668335002E-1 -1.451607658E1
2.002002002E-1 -2.043003371E1
2.335669002E-1 -2.526872591E1
2.669336003E-1 -3.118268303E1
3.003003003E-1 -3.870953756E1
3.336670003E-1 -4.623639208E1
3.670337004E-1 -5.430087907E1
4.004004004E-1 -6.236536606E1
4.337671004E-1 -7.150511799E1
4.671338005E-1 -8.010723744E1
5.005005005E-1 -8.924698937E1
5.338672005E-1 -9.892437376E1
5.672339006E-1 -1.080641257E2
6.006006006E-1 -1.177415101E2
6.339673006E-1 -1.274188945E2
6.673340007E-1 -1.370962788E2
7.007007007E-1 -1.467736632E2
7.340674007E-1 -1.575263126E2
7.674341008E-1 -1.672036969E2
8.008008008E-1 -1.768810813E2
8.341675008E-1 -1.865584657E2
8.675342009E-1 -1.973111150E2
9.009009009E-1 -2.075261319E2
9.342676009E-1 -2.182787812E2
9.676343010E-1 -2.284937981E2
""".splitlines() # split this string on the "newline" character.
print("We have", len(data), "data points.")
#
# Here we'll take the list of strings defined above and break it into actual numbers in reasonable units.
#
tlist = []
ylist = []
for s in data:
t,y = s.split() # break string in two
t=float(t) # convert time to float
y=float(y)/100.0 # convert distanct (in meters) to float
tlist.append(t)
ylist.append(y)
print ("tlist=",tlist)
print ("ylist=",ylist)
pl.plot(tlist, ylist)
pl.title("raw data")
pl.xlabel("time(s)")
pl.ylabel("y(m)")
pl.grid()
vlist = [] # Velocity list (computed velocities from experimental data)
tvlist = [] # time list (times for corresponding velocities)
for i in range(1,len(tlist)):
dy=ylist[i]-ylist[i-1]
dt=tlist[i]-tlist[i-1]
vlist.append(dy/dt)
tvlist.append((tlist[i]+tlist[i-1])/2.0)
pl.plot(tvlist,vlist,'g.')
pl.title("Velocity graph")
pl.xlabel("time(s)")
pl.ylabel("$v_y$ (m/s)")
pl.grid()
m=0.0035 # kg
g=9.8 # m/s
b=0.001 # total guess, need to improve
v=0.0 # start with zero velocity
dt = (tlist[-1]-tlist[0])/(len(tlist)-1) # time per frame in original video
t=0.0
vclist = [v]
tclist = [t]
def deriv(v, t):
return b*v**2/m - g
for i in range(len(tlist)):
dv = deriv(v,t)*dt
v += dv
t += dt
vclist.append(v)
tclist.append(t)
pl.title("Comparison of experimental and drag model")
pl.xlabel("time(s)")
pl.ylabel("velocity (m/s)")
pl.plot(tclist, vclist, 'r-',tvlist,vlist,'g.')
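# Illustrative aside (not in the original analysis): the drag model above implies a
# terminal speed where b*v**2/m balances g, i.e. |v_terminal| = sqrt(m*g/b).
# With b still a guess, this is only a rough consistency check on the fitted curve.
v_term = np.sqrt(m*g/b)
print("predicted terminal speed with the guessed b: %.1f m/s (downward)" % v_term)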
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's remove all the NaN values using the Pandas DataFrame.dropna function.
Step2: Now let's use the Pandas DataFrame.corr function to make a correlation matrix.
Step3: Finally, let's make a beautiful plot using Seaborn and Matplotlib.
Step4: Patterns Over Longer Periods of Time
Step5: There seems to be some correlation between humidity and radiation, especially in the long term! Let's try averaging over 2 weeks this time and see what the correlation looks like.
Step6: Currently, trying to broaden the timespan over which we are averaging data might lead to a loss of statistical significance, as the number of samples has an inverse relationship with the length of time that we sample over. Thus we are not going to try to find correlations over longer time intervals right now.
Step7: Interesting! At the $\alpha=0.10$ significance level, there is a moderately strong positive correlation between radiation and humidity! At that significance level, the correlation between radiation and temperature is very close to being statistically significant, and at the $\alpha=0.15$ significance level, it is statistically significant. Let's plot them against each other in order to visually appreciate the correlations!
Step8: Let's make the same plots as above, but with data averaged over 5 days instead of 14, so that we can see finer changes (and some more noise) in the measured quantities.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np # math
import pandas as pd # manipulating data
import matplotlib.pyplot as plt # graphing
import os # useful for handling filenames etc.
from scipy.stats import pearsonr # calculates the Pearson correlation coefficient and p-value
from datetime import datetime, date # working with date and time
import seaborn as sns # makes matplotlib beautiful
sns.set_style('darkgrid')
import matplotlib as mpl # control formatting
mpl.rcParams['axes.titlesize'] = 16
mpl.rcParams['axes.titleweight'] = 'semibold'
# interactive graphs
from bokeh.io import output_notebook, show, push_notebook
from bokeh.plotting import figure
from bokeh.layouts import row, column
from bokeh.models import DatetimeTickFormatter
output_notebook()
DATA_DIR = 'binned_data'
# load all the data
# `usecols=[1]` instructs pandas to only read the second column (index 1).
# This is useful because the first column is the timestamp, which is identical
# for all the .csv files, and also will not be useful for finding correlations
# (no one cares about the correlation between radiation and the number of seconds elapsed).
co2_data = pd.read_csv(os.path.join(DATA_DIR, 'data_co2_ppm_2400.csv'),
header=0, names=['unix_time', 'co2'], usecols=[1])
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_2400.csv'),
header=0, names=['unix_time', 'radiation'], usecols=[1])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_2400.csv'),
header=0, names=['unix_time', 'humidity'], usecols=[1])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_2400.csv'),
header=0, names=['unix_time', 'temperature'], usecols=[1])
pressure_data = pd.read_csv(os.path.join(DATA_DIR, 'data_pressure_2400.csv'),
header=0, names=['unix_time', 'pressure'], usecols=[1])
# make one awesome DataFrame object containing all the data
all_data = pd.concat([co2_data, radiation_data, humidity_data, temperature_data, pressure_data],
axis=1)
all_data.head()
all_data.dropna(axis=0, how='any', inplace=True)
all_data.head()
correlation_matrix = all_data.corr(method='pearson')
correlation_matrix
with sns.axes_style('dark'):
sns.heatmap(correlation_matrix, vmin=-1, vmax=1, annot=True, cbar=True, linecolor='#000000')
plt.xticks(rotation=45)
plt.yticks(rotation=45)
plt.title('Correlations Averaged Over 40 Minutes')
plt.show()
co2_data = pd.read_csv(os.path.join(DATA_DIR, 'data_co2_ppm_604800.csv'),
header=0, names=['unix_time', 'co2'], usecols=[1])
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_604800.csv'),
header=0, names=['unix_time', 'radiation'], usecols=[1])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_604800.csv'),
header=0, names=['unix_time', 'humidity'], usecols=[1])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_604800.csv'),
header=0, names=['unix_time', 'temperature'], usecols=[1])
pressure_data = pd.read_csv(os.path.join(DATA_DIR, 'data_pressure_604800.csv'),
header=0, names=['unix_time', 'pressure'], usecols=[1])
# make one awesome DataFrame object containing all the data
all_data = pd.concat([co2_data, radiation_data, humidity_data, temperature_data, pressure_data],
axis=1)
all_data.dropna(axis=0, how='any', inplace=True)
correlation_matrix = all_data.corr(method='pearson')
with sns.axes_style('dark'):
sns.heatmap(correlation_matrix, vmin=-1, vmax=1, annot=True, cbar=True, linecolor='#000000')
plt.xticks(rotation=45)
plt.yticks(rotation=45)
plt.title('Correlations Averaged Over 1 Week')
plt.show()
co2_data = pd.read_csv(os.path.join(DATA_DIR, 'data_co2_ppm_1209600.csv'),
header=0, names=['unix_time', 'co2'], usecols=[1])
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_1209600.csv'),
header=0, names=['unix_time', 'radiation'], usecols=[1])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_1209600.csv'),
header=0, names=['unix_time', 'humidity'], usecols=[1])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_1209600.csv'),
header=0, names=['unix_time', 'temperature'], usecols=[1])
pressure_data = pd.read_csv(os.path.join(DATA_DIR, 'data_pressure_1209600.csv'),
header=0, names=['unix_time', 'pressure'], usecols=[1])
# make one awesome DataFrame object containing all the data
all_data = pd.concat([co2_data, radiation_data, humidity_data, temperature_data, pressure_data],
axis=1)
all_data.dropna(axis=0, how='any', inplace=True)
correlation_matrix = all_data.corr(method='pearson')
with sns.axes_style('dark'):
sns.heatmap(correlation_matrix, vmin=-1, vmax=1, annot=True, cbar=True, linecolor='#000000')
plt.xticks(rotation=45)
plt.yticks(rotation=45)
plt.title('Correlations Averaged Over 2 Weeks')
plt.show()
# function to print correlations and their statistical significance
def corr_and_sig(col_1, col_2):
r, p = pearsonr(all_data[col_1], all_data[col_2])
print(f'Correlation between {col_1} and {col_2} is {r:.2f}, and its two-tailed p-value is {p:.2f}.\n')
for s in ['co2', 'humidity', 'temperature', 'pressure']:
corr_and_sig('radiation', s)
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_1209600.csv'),
header=0, names=['unix_time', 'radiation'])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_1209600.csv'),
header=0, names=['unix_time', 'humidity'], usecols=['humidity'])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_1209600.csv'),
header=0, names=['unix_time', 'temperature'], usecols=['temperature'])
rad_humid = pd.concat([radiation_data, humidity_data, temperature_data], axis=1)
rad_humid.dropna(axis=0, how='any', inplace=True)
sns.regplot(x='radiation', y='humidity', data=rad_humid)
plt.ylim(ymin=0)
plt.xlabel('Radiation (counts per minute)', fontdict={'fontsize': 12})
plt.ylabel('Humidity (percentage)', rotation=90, fontdict={'fontsize': 12})
plt.title('Radiation vs. Humidity', loc='left')
plt.plot()
sns.regplot(x='radiation', y='temperature', data=rad_humid)
plt.xlabel('Radiation (counts per minute)', fontdict={'fontsize': 12})
plt.ylabel('Temperature ($^oC$)', rotation=90, fontdict={'fontsize': 12})
plt.title('Radiation vs. Temperature', loc='left')
plt.show()
times = []
for idx, time, _, _, _ in rad_humid.itertuples():
times.append(datetime.fromtimestamp(int(time)))
dates = pd.Series(times)
pass
# humidity vs. time
time_humid = figure(plot_width=480, plot_height=320,
title='Humidity vs. Time', x_axis_type='datetime')
time_humid.line(dates, rad_humid['humidity'], line_width=2)
time_humid.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_humid.xaxis.major_label_orientation = 0.854 # 90 degrees in radians
time_humid.yaxis.axis_label = 'Humidity (percentage)'
# temperature vs. time
time_temp = figure(plot_width=480, plot_height=320,
title='Temperature vs. Time', x_axis_type='datetime')
time_temp.line(dates, rad_humid['temperature'], line_width=2)
time_temp.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_temp.xaxis.major_label_orientation = 0.854
time_temp.yaxis.axis_label = 'Temperature (degrees C)' # bokeh does not support LaTeX yet
# radiation vs. time
time_rad = figure(plot_width=480, plot_height=320,
title='Radiation vs. Time', x_axis_type='datetime')
time_rad.line(dates, rad_humid['radiation'], line_width=2, line_color='#EE00A0')
time_rad.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_rad.xaxis.major_label_orientation = 0.854
time_rad.yaxis.axis_label = 'Radiation (counts per minute)'
show(column(row(time_humid, time_temp), time_rad))
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_432000.csv'),
header=0, names=['unix_time', 'radiation'])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_432000.csv'),
header=0, names=['unix_time', 'humidity'], usecols=['humidity'])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_432000.csv'),
header=0, names=['unix_time', 'temperature'], usecols=['temperature'])
rad_humid = pd.concat([radiation_data, humidity_data, temperature_data], axis=1)
rad_humid.dropna(axis=0, how='any', inplace=True)
times = []
for idx, time, _, _, _ in rad_humid.itertuples():
times.append(datetime.fromtimestamp(int(time)))
dates = pd.Series(times)
# humidity vs. time
time_humid = figure(plot_width=480, plot_height=320,
title='Humidity vs. Time', x_axis_type='datetime')
time_humid.line(dates, rad_humid['humidity'], line_width=2)
time_humid.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_humid.xaxis.major_label_orientation = 0.854 # 90 degrees in radians
time_humid.yaxis.axis_label = 'Humidity (percentage)'
# temperature vs. time
time_temp = figure(plot_width=480, plot_height=320,
title='Temperature vs. Time', x_axis_type='datetime')
time_temp.line(dates, rad_humid['temperature'], line_width=2)
time_temp.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_temp.xaxis.major_label_orientation = 0.854
time_temp.yaxis.axis_label = 'Temperature (degrees C)' # bokeh does not support LaTeX yet
# radiation vs. time
time_rad = figure(plot_width=480, plot_height=320,
title='Radiation vs. Time', x_axis_type='datetime')
time_rad.line(dates, rad_humid['radiation'], line_width=2, line_color='#EE00A0')
time_rad.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_rad.xaxis.major_label_orientation = 0.854
time_rad.yaxis.axis_label = 'Radiation (counts per minute)'
show(column(row(time_humid, time_temp), time_rad))
SECONDS_IN_A_HALF_DAY = 43200 # 60 * 60 * 12
df_list = []
correlation_list = []
def get_corrs(interval):
co2_data = pd.read_csv(os.path.join(DATA_DIR, f'data_co2_ppm_{interval}.csv'),
header=0, names=['unix_time', 'co2'], usecols=[1])
radiation_data = pd.read_csv(os.path.join(DATA_DIR, f'data_cpm_{interval}.csv'),
header=0, names=['unix_time', 'radiation'], usecols=[1])
pgradiation_data = pd.read_csv(os.path.join(DATA_DIR, f'data_cpmpg_{interval}.csv'),
header=0, names=['unix_time', 'pgradiation'], usecols=[1])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, f'data_humidity_{interval}.csv'),
header=0, names=['unix_time', 'humidity'], usecols=[1])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, f'data_temperature_{interval}.csv'),
header=0, names=['unix_time', 'temperature'], usecols=[1])
pressure_data = pd.read_csv(os.path.join(DATA_DIR, f'data_pressure_{interval}.csv'),
header=0, names=['unix_time', 'pressure'], usecols=[1])
# make one awesome DataFrame object containing all the data
all_data = pd.concat([co2_data, radiation_data, humidity_data, temperature_data,
pressure_data, pgradiation_data],
axis=1)
all_data.dropna(axis=0, how='any', inplace=True)
correlation_matrix = all_data.corr(method='pearson')
return correlation_matrix.iloc[1]
for i in range(1, 29):
if i <= 2:
correlation_list.append(get_corrs(int(i * SECONDS_IN_A_HALF_DAY)).rename(f'{i/2:.1f} day'))
else:
correlation_list.append(get_corrs(int(i * SECONDS_IN_A_HALF_DAY)).rename(f'{i/2:.1f} days'))
correlation_df = pd.DataFrame(correlation_list)
# print(correlation_df)
with sns.axes_style('dark'):
plt.figure(figsize=(10, 9.5))
sns.heatmap(correlation_df, vmin=-1, vmax=1, annot=True, cbar=True, linecolor='#000000')
# plt.xticks(rotation=45)
# plt.yticks(rotation=45)
plt.title('Correlations with Radiation Using Data Averaged Over Different Time Intervals')
plt.show()
plt.figure(figsize=(12, 7))
plt.plot([i/2 for i in range(1, 29)], correlation_df['co2'],
[i/2 for i in range(1, 29)], correlation_df['radiation'],
[i/2 for i in range(1, 29)], correlation_df['humidity'],
[i/2 for i in range(1, 29)], correlation_df['temperature'],
[i/2 for i in range(1, 29)], correlation_df['pressure'],
[i/2 for i in range(1, 29)], correlation_df['pgradiation'])
plt.legend(['$CO_2$ concentration', 'Radiation', 'Humidity', 'Temperature', 'Pressure',
'Pocket Geiger Radiation'], loc=(1.05, 0.4))
plt.xlabel('Time Interval of Averaging (days)')
plt.ylabel("Pearson's Correlation Coefficient of Radiation vs. Quantity", rotation=90)
plt.title('Correlations with Radiation Using Data Averaged Over Different Time Intervals')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Feature Engineering Functions
Step5: Harris Corner Detector Histograms
Step8: Building Feature Vectors
Step9: Standardize and save
|
<ASSISTANT_TASK:>
Python Code:
# Enter your username:
YOUR_GMAIL_ACCOUNT = '******' # Whatever is before @gmail.com in your email address
# Libraries for this section:
import os
import cv2
import pickle
import numpy as np
from sklearn import preprocessing
# Directories:
PREPROC_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/')
TRAIN_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/training_small/') # Where the training dataset lives.
DEBUG_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/debugging_small/') # Where the debugging dataset lives.
def general_img_features(band):
"""Define a set of features that we can look at for each color band
Args:
band: array which is one of blue, green, or red
Returns:
features: unique colors, nonzero count, mean, standard deviation,
min, and max of the channel's pixel values"""
return [len(set(band.ravel())), np.count_nonzero(band),
np.mean(band), np.std(band),
band.min(), band.max()]
def concat_all_band_features(file, dir):
"""Extract features from a single image.
Args:
file - single image filename
dir - directory where the files are stored
Returns:
features - descriptive statistics for pixels"""
img = cv2.imread(os.path.join(dir, file))
features = []
blue = np.float32(img[:,:,0])
green = np.float32(img[:,:,1])
red = np.float32(img[:,:,2])
features.extend(general_img_features(blue)) # indices 0-4
features.extend(general_img_features(green)) # indices 5-9
features.extend(general_img_features(red)) # indices 10-14
return features
def harris_density(harris_img, square_size, threshold):
"""Apply Harris Corner Detection to image and get count of corners.
Args:
harris_img: image already processed by Harris Corner Detector (in cv2 package).
square_size: number of pixels per side of the window in which we detect corners.
threshold: indicates how "sharp" that corner must be to be detected.
Returns:
bins - counts in each bin of histogram."""
max_val = harris_img.max()
shape = harris_img.shape
bins = [0] * (square_size * square_size + 1)
for row in xrange(0, shape[0], square_size):
for col in xrange(0, shape[1], square_size):
bin_val = sum(sum(harris_img[row: row + square_size,
col: col + square_size] > threshold * max_val))
bins[int(bin_val)] += 1
return bins
def get_features(img_path):
"""Engineer the features and output feature vectors.
Args:
img_path: filepath to image file
Returns:
features: np array of features"""
img = cv2.imread(img_path)
# Get the channels
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
blue = np.float32(img[:, :, 0])
green = np.float32(img[:, :, 1])
red = np.float32(img[:, :, 2])
# Run general summarization on each
features = general_img_features(gray)
features.extend(general_img_features(blue))
features.extend(general_img_features(green))
features.extend(general_img_features(red))
# Get Harris corner detection output
gray = cv2.cornerHarris(gray, 2, 3, 0.04)
blue = cv2.cornerHarris(blue, 2, 3, 0.04)
green = cv2.cornerHarris(green, 2, 3, 0.04)
red = cv2.cornerHarris(red, 2, 3, 0.04)
# Get general stats on each Harris detector results
features.extend(general_img_features(gray))
features.extend(general_img_features(blue))
features.extend(general_img_features(green))
features.extend(general_img_features(red))
# Get density bins on Harris detector results
features.extend(harris_density(gray, 4, 0.05))
return features
def get_features_and_labels(dir):
"""Get preprocessed features and labels.
Args:
dir: directory containing image files
Returns:
features: np array of features
labels: 1-d np array of binary labels"""
i = 0
features = None
labels = []
print('\nImages processed (out of {:d})...'.format(len(os.listdir(dir))))
for filename in os.listdir(dir):
feature_row = np.array([get_features(os.path.join(dir, filename))])
if features is not None:
features = np.append(features, feature_row, axis=0)
else:
features = feature_row
split_filename = filename.split('_')
label = int(split_filename[-1].split('.')[0])
labels = np.append(labels, label)
i += 1
if i % 100 == 0:
print(features.shape[0])
print(features.shape[0])
return features, labels
# Use a limited set of images, this is computationally expensive:
training_features, training_labels = get_features_and_labels(TRAIN_DIR)
debugging_features, debugging_labels = get_features_and_labels(DEBUG_DIR)
print('\nDone!')
# Standardize features:
standardizer = preprocessing.StandardScaler().fit(training_features)
training_std = standardizer.transform(training_features)
debugging_std = standardizer.transform(debugging_features)
# Save features as pkl:
pickle.dump(training_std, open(os.path.join(PREPROC_DIR, 'training_std.pkl'), 'w'))
pickle.dump(debugging_std, open(os.path.join(PREPROC_DIR, 'debugging_std.pkl'), 'w'))
pickle.dump(training_labels, open(os.path.join(PREPROC_DIR, 'training_labels.pkl'), 'w'))
pickle.dump(debugging_labels, open(os.path.join(PREPROC_DIR, 'debugging_labels.pkl'), 'w'))
print ('\nFeature engineering is complete!')
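# Optional round-trip check (illustrative): reload the pickled arrays to confirm the
# shapes line up before moving on to model training.
loaded_features = pickle.load(open(os.path.join(PREPROC_DIR, 'training_std.pkl'), 'rb'))
loaded_labels = pickle.load(open(os.path.join(PREPROC_DIR, 'training_labels.pkl'), 'rb'))
print('features:', loaded_features.shape, 'labels:', loaded_labels.shape)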
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start with Metadata
Step2: Custom fields
|
<ASSISTANT_TASK:>
Python Code:
remote_yaml = 'https://raw.githubusercontent.com/teamdigitale/api-starter-kit/master/openapi/simple.yaml.src'
render_markdown(f'''
[Swagger Editor]({oas_editor_url(remote_yaml)}) is a simple webapp
for editing OpenAPI 3 language specs.
''')
render_markdown(f'''
1- open [this incomplete OAS3 spec](/edit/notebooks/oas3/ex-01-info.yaml).
2- copy its content in the [Swagger Editor Online]({oas_editor_url('')}) fixing all errors
and adding the missing information.
3- describe the first API we're going to implement: a service which returns the current
timestamp in [RFC5424](https://tools.ietf.org/html/rfc5424#section-6.2.3)
UTC (e.g. `2019-01-01T00:00:00Z`).
4- provide contact information and terms of service.
5- Feel free to add as many details as you want.
''')
### Basic fields
### Terms of Services
render_markdown(f'''
1- open [the previous OAS3 spec](/edit/notebooks/oas3/ex-01-info.yaml).
2- copy its content in the [Swagger Editor Online]({oas_editor_url('')}).
3- provide further information via custom fields: if you think of any interesting
labels, define them and comment on them properly using `#`
''')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install Pipeline SDK
Step2: Load the component using KFP SDK
Step3: Use the wordcount python sample
Step4: Example pipeline that uses the component
Step5: Submit the pipeline for execution
Step6: Inspect the output
|
<ASSISTANT_TASK:>
Python Code:
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
!python3 -m pip install 'kfp>=0.1.31' --quiet
import kfp.deprecated.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
!gsutil cat gs://ml-pipeline/sample-pipeline/word-count/wc.py
import kfp.deprecated as kfp
from kfp.deprecated import dsl, Client
import json
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = json.dumps(['--output', f'{staging_dir}/wc/wordcount.out']),
wait_interval = wait_interval)
Client().create_run_from_pipeline_func(pipeline, arguments={})
!gsutil cat $output/wc/wordcount.out
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a very small data set
Step2: Define variables -- the model parameters we'll learn -- and initialize them with "random" values
Step3: One single point at a time, define the error between the true label and the model's prediction of the label
Step4: Define optimizer as SSE-minimizing gradient descent
Step5: Define an operator that will initialize the graph with all available global variables
Step6: With the computational graph designed, we initialize a session to execute it
Step7: Calculate the predicted model outputs given the inputs xs
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.set_random_seed(42)
xs = [0., 1., 2., 3., 4., 5., 6., 7.] # feature (independent variable)
ys = [-.82, -.94, -.12, .26, .39, .64, 1.02, 1.] # labels (dependent variable)
fig, ax = plt.subplots()
_ = ax.scatter(xs, ys)
m = tf.Variable(-0.5)
b = tf.Variable(1.0)
total_error = 0.0
for x,y in zip(xs, ys):
y_model = m*x + b
total_error += (y-y_model)**2
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(total_error)
initializer_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(initializer_op)
n_epochs = 10
for iteration in range(n_epochs):
sess.run(optimizer_operation)
slope, intercept = sess.run([m, b])
slope
intercept
y_hat = slope*np.array(xs) + intercept
pd.DataFrame(list(zip(ys, y_hat)), columns=['y', 'y_hat'])
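# Quick sanity check (illustrative): mean squared error of the fitted line against
# the labels, using the slope and intercept recovered above.
mse = np.mean((np.array(ys) - y_hat)**2)
print('MSE after {} epochs: {:.4f}'.format(n_epochs, mse))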
fig, ax = plt.subplots()
ax.scatter(xs, ys)
x_min, x_max = ax.get_xlim()
y_min, y_max = intercept, intercept + slope*(x_max-x_min)
ax.plot([x_min, x_max], [y_min, y_max])
_ = ax.set_xlim([x_min, x_max])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (embedded image in the original notebook)
Step2: (embedded image in the original notebook)
|
<ASSISTANT_TASK:>
Python Code:
%%bash
date
system_profiler SPSoftwareDataType
bsmaploc="/Users/Shared/Apps/bsmap-2.74/"
cd /Volumes/web/halfshell/working-directory/
!ls -lh
mkdir 16-10-29
cd 16-10-29
# Genome
cd ../data
!curl -O http://owl.fish.washington.edu/halfshell/working-directory/16-10-24/Ostrea_lurida-Scaff-10k.fa
!ls
!curl -O http://owl.fish.washington.edu/nightingales/O_lurida/4_TGACCA_L001_R1_001.fastq.gz
!curl -O http://owl.fish.washington.edu/nightingales/O_lurida/1_ATCACG_L001_R1_001.fastq.gz
!mv 1_ATCACG_L001_R1_001.fastq.gz ../16-10-17/1_ATCACG_L001_R1_001.fastq.gz
!ls -lh ../16-10-17/*gz
pwd
for i in ("1_ATCACG","2_CGATGT","3_TTAGGC","5_ACAGTG","4_TGACCA", "6_GCCAAT", "7_CAGATC", "8_ACTTGA"):
!{bsmaploc}bsmap \
-a ../16-10-17/{i}_L001_R1_001.fastq.gz \
-d Ostrea_lurida-Scaff-10k.fa \
-o bsmap_out_{i}.sam \
-p 8 \
2> bsmap_out_{i}.stderr
!date
!ls bsmap
!tail *err
for i in ("1_ATCACG","2_CGATGT","3_TTAGGC","5_ACAGTG","4_TGACCA", "6_GCCAAT", "7_CAGATC", "8_ACTTGA"):
!{bsmaploc}bsmap \
-a ../16-10-17/{i}_L001_R1_001.fastq.gz \
-d Ostrea_lurida-Scaff-10k.fa \
-o bsmap_out_{i}-2.sam \
-p 8 \
2> bsmap_out_{i}-2.stderr
ls -lh bsmap*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TB Hamiltonian
Step2: Kwant routines
Step3: Apply parameters and extract the Hamiltonian
Step4: Dump Hamiltonian ordered along the position on the lattice and the list of sites
|
<ASSISTANT_TASK:>
Python Code:
import importlib
import os
import sys
def is_interactive():
import __main__ as main
return not hasattr(main, '__file__')
# defaults
shell_mode = not is_interactive()
plot_graphs = importlib.util.find_spec("matplotlib") is not None and not shell_mode
# Matplotlib
if (plot_graphs):
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator
from matplotlib import gridspec
import subprocess
def plt_show():
plt.show() if not shell_mode else None
if not shell_mode:
# Matplotlib defaults
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
%load_ext autoreload
%autoreload 2
font = {'family' : 'sans-serif', 'weight' : 'normal', 'size' : 16}
font2 = {'family' : 'helvetica', 'weight' : 'bold', 'size' : 24}
lines = {'markersize' : 0.5}
matplotlib.rc('font', **font)
matplotlib.rc('lines')
matplotlib.rc('text', usetex=True)
# Plot styles
mycolors=["#083E76","#0B4D92","#2165AC","#407BB8","#368B6B","#58A588","#f4a582","#d6604d","#b2182b","#393B3D","#D13648"]
mypts=["o","^","v","s","p","h","d","8","4"]
mylines=["-","--",":","-.","-","--",":","-.","-"]
myformats = [x + y for (x,y) in zip(mypts,mylines)]
stylepick = lambda x,l : int(round((len(mycolors)-2) / float(l)*x + 1,0))
# Parameters from command line parser
def is_number(s):
try:
float(s)
return True
except ValueError:
pass
try:
import unicodedata
unicodedata.numeric(s)
return True
except (TypeError, ValueError):
pass
return False
def args_to_params(args_in, mp):
import argparse
parser = argparse.ArgumentParser(description='m_proximity',formatter_class=argparse.ArgumentDefaultsHelpFormatter)
[parser.add_argument('--'+n, default = v) for n,v in mp.items()]
args = parser.parse_args(args_in)
p1 = vars(args)
p = dict((k, type(mp[k])(v)) for k, v in p1.items() if v!=None)
return p
# Output data
data_out = dict()
p = {
"t" : 1,
"M": -1.5,
"v" : 1,
"save_hdf5" : 1,
"h5path" : "",
"h5section" : "",
"h5file" : "kwant_lattice.h5",
}
import itertools
ns_in = [x for x in itertools.chain.from_iterable(["--"+n, str(v)] for n,v in {**p}.items())]
p=args_to_params(ns_in if not shell_mode else sys.argv[1:len(sys.argv)], p)
print("Params:\n",p)
p_names = {
"t" : "t",
"M" : "M",
"v" : "v",
}
fname_text = "".join(["{}{:1.3f}".format(x.replace("_",""),p[x]) for x in sorted(p_names.keys())])
print(fname_text)
import kwant
import sympy
import sympy.functions
import sympy.functions.elementary
import sympy.functions.elementary.trigonometric
import numpy as np
import tinyarray
from types import SimpleNamespace
sympy.init_printing(use_latex='mathjax')
import discretizer as dr
k_x, k_y, k_z = dr.momentum_operators
k = sympy.sqrt(k_x**2+k_y**2 +k_z**2)
x, y, z = dr.coordinates
sigma_x = sympy.physics.matrices.msigma(1)
sigma_y = sympy.physics.matrices.msigma(2)
sigma_z = sympy.physics.matrices.msigma(3)
sigma_0 = sigma_z * sigma_z
M,t,mu,v = sympy.symbols('M t mu v', commutative=True)
H1 = k_x * sigma_x + k_y * sigma_y + k_z*k_z / 2 * sigma_z - mu * sigma_z
H2 = v * (k_x * sigma_x + k_y * sigma_y) + (M - t + t/2*k**2)*sigma_z
Hs = H2
#print(sympy.latex(H1))
#print(sympy.latex(H2))
Hs
tb = dr.Discretizer(Hs, discrete_coordinates={'x', 'y', 'z'}, lattice_constant=1.0, verbose=False)
tb.symbolic_hamiltonian
dims = np.array([4, 4, 4])
dims_mult = np.array([dims[1]*dims[2], dims[1], 1])
def shape(pos):
return (0 <= pos[0] < dims[0]) and (0 <= pos[1] < dims[1]) and (0<=pos[2] < dims[2])
def lead_shape_z(pos):
return (0 <= pos[1] < dims[1]) and (0 <= pos[0] < dims[0])
def lead_shape_x(pos):
return (0 <= pos[1] < dims[1]) and (0 <= pos[2] < dims[2])
def lead_shape_y(pos):
return (0 <= pos[0] < dims[0]) and (0 <= pos[2] < dims[2])
syst_in = tb.build(shape, (0, 0, 0))
lead_x = tb.build(lead_shape_x, (0, 0, 0), symmetry=(-1, 0, 0))
lead_y = tb.build(lead_shape_y, (0, 0, 0), symmetry=(0, -1, 0))
lead_z = tb.build(lead_shape_z, (0, 0, 0), symmetry=(0, 0, -1))
syst_in.attach_lead(lead_x)
syst_in.attach_lead(lead_x.reversed())
syst_in.attach_lead(lead_y)
syst_in.attach_lead(lead_y.reversed())
syst_in.attach_lead(lead_z)
syst_in.attach_lead(lead_z.reversed())
syst = syst_in.finalized()
if plot_graphs:
kwant.plot(syst)
args = SimpleNamespace()
vars(args).update(p)
print(args)
H_Weyl = syst.hamiltonian_submatrix(args=[args], sparse=True)
if plot_graphs:
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
kwant.plotter.bands(syst.leads[0], args=[args], momenta=np.linspace(-np.pi, np.pi, 21, endpoint=True), show=False, ax = ax);
ax.set_xlabel(r"$k_x$")
ax.set_ylabel(r"$E$")
plt.show()
#plt.ylim(-0.5, 0.5)
# Order sites
orb_size = syst.hamiltonian(0,0, args).shape[0]
print("orb_size =", orb_size)
# The order of sites in kwant is not guaranteed to follow the lattice
pos_array = np.array([np.array(s.pos).astype(np.int_) for s in syst.sites])
pos_flat = np.sum(pos_array * dims_mult[::-1], axis = 1)
pos_order = np.argsort(pos_flat)
Nsites = len(pos_order)
print("Nsites =", Nsites)
#print(pos_array[pos_order])
# Convert pos_order to order of hamiltonian matrix elements
mat_order = np.column_stack(tuple([pos_order * orb_size + o for o in range(orb_size)])).flatten()
# Extract finite size Hamiltonian
H1 = H_Weyl.tolil()
rows,cols = H1.nonzero()
export_hopping = np.array([
[row,col, H1[row,col]] for row,col in zip(rows[mat_order],cols[mat_order])
])
export_coords = pos_array[pos_order]
data_out["hopping"] = export_hopping
data_out["coordinates"] = export_coords
data_out["dims"] = dims
data_out["norbitals"] = orb_size
# HDF5
import h5py
def save_hdf5(h5file, h5section, name, data_out):
write_on_top = h5section == "" or h5section == "/" or h5section == "''"
h5mode = "a" if os.path.exists(h5file) else "w" # and not write_on_top
#h5mode="w"
hdf5_out = h5py.File(h5file, h5mode)
if not write_on_top:
if h5section in hdf5_out:
del hdf5_out[h5section]
h5top = hdf5_out.create_group(h5section)
else:
h5top = hdf5_out
print("Saving to", h5file, "/" + h5section + "/" + name,end=": ")
sg = h5top.create_group(name)
for h,v in data_out.items():
print(h,end="; ")
v1 = v
is_complex = False
if isinstance(v, (list, tuple, np.ndarray)):
is_complex = (v.dtype == np.complex)
if is_complex:
v1 = v.astype(np.complex).view(np.float).reshape(np.append(v.shape,[2]))
dset = sg.create_dataset(h, data = v1)
if is_complex:
dset.attrs["__complex__"] = 1
hdf5_out.close()
print("\nDone.\n")
def read_hdf5(h5file, h5section, name):
read_on_top = h5section == "" or h5section == "/" or h5section == "''"
h5mode = "r"
success = os.path.exists(h5file)
dummy_out = (False,None)
if not success:
return dummy_out
hdf5_out = h5py.File(h5file, h5mode)
success = h5section in hdf5_out or read_on_top
if not success:
return dummy_out
print (h5file, h5section, name, read_on_top)
if not read_on_top:
h5top = hdf5_out[h5section]
else:
h5top = hdf5_out
print("Reading from", h5file, "/", h5section)
success = name in h5top
if not success:
return dummy_out
sg = h5top[name]
vnames = sg.keys()
data_in1 = dict()
for h in vnames:
data_in1[h] = sg.get(h).value
hdf5_out.close()
return (success, data_in1)
h5path = os.getcwd() if p["h5path"] == "" else p["h5path"]
h5file = p["h5file"]
p["h5file"] = h5file
h5file = h5path + os.path.sep + p["h5file"]
if p["save_hdf5"]:
os.remove(h5file) if os.path.exists(h5file) else None
save_hdf5(h5file,p["h5section"],"parameters",p)
save_hdf5(h5file,p["h5section"],"lattice",data_out)
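# Optional verification (illustrative, not in the original script): use the read_hdf5
# helper defined above to confirm the lattice group survived the round trip.
if p["save_hdf5"]:
    ok, lattice_in = read_hdf5(h5file, p["h5section"], "lattice")
    print("lattice read back:", ok, None if not ok else sorted(lattice_in.keys()))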
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Peeking at a MEASURE calibration
Step2: There are a few things to note about the above
Step3: An almost-trivial example
Step4: Raw capture results are by default represented as integers in the interval $[-2^{15}, 2^{15}]$. For many analyses you may prefer to normalize to the range $[-1,1]$.
Step5: Applying a PULSE before RAW-CAPTURE
Step6: Capturing an excited qubit
Step7: TODO
Step8: Number of samples in a job exceeds maximum
Step9: RAW-CAPTURE precludes the use of other capture operations
|
<ASSISTANT_TASK:>
Python Code:
from pyquil import Program, get_qc
qc = get_qc('Aspen-8')
cals = qc.compiler.calibration_program
from pyquil.quilatom import Qubit, Frame
from pyquil.quilbase import Pulse, Capture, DefMeasureCalibration
qubit = Qubit(0)
measure_defn = next(defn for defn in cals.calibrations
if isinstance(defn, DefMeasureCalibration) and defn.qubit == qubit)
print(measure_defn)
pulse = next(i for i in measure_defn.instrs if isinstance(i, Pulse))
print(pulse, "\n")
capture = next(i for i in measure_defn.instrs if isinstance(i, Capture))
print(capture, "\n")
frame = Frame([qubit], "ro_rx")
frame_defn = cals.frames[frame]
print(frame_defn)
from math import ceil
duration = capture.kernel.duration
sample_rate = frame_defn.sample_rate
memory_length = ceil(duration * sample_rate)
raw_capture_no_pulse = Program(
f'DECLARE raw REAL[{memory_length}]',
f'RAW-CAPTURE {frame} {duration} raw'
).wrap_in_numshots_loop(1000)
print(raw_capture_no_pulse)
exe = qc.compiler.native_quil_to_executable(raw_capture_no_pulse)
qc.run(exe)
raw_results_no_pulse = qc.qam.read_memory(region_name='raw')
print("shape", raw_results_no_pulse.shape)
print("data", raw_results_no_pulse)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
plt.gcf().set_size_inches(20.5, 10.5)
plt.plot(np.arange(len(raw_results_no_pulse[0,:]))/sample_rate, raw_results_no_pulse[0,:])
plt.show()
avg_results_no_pulse = raw_results_no_pulse.mean(axis=0) / (2**15)
plt.psd(avg_results_no_pulse, Fs=sample_rate)
plt.show()
raw_capture_pulse = Program(
f'DECLARE raw REAL[{memory_length}]',
pulse,
f'RAW-CAPTURE {frame} {duration} raw'
).wrap_in_numshots_loop(1000)
print(raw_capture_pulse)
exe = qc.compiler.native_quil_to_executable(raw_capture_pulse)
qc.run(exe)
raw_results_pulse = qc.qam.read_memory(region_name='raw')
avg_results_pulse = raw_results_pulse.mean(axis=0) / 2**15
plt.psd(avg_results_pulse, Fs=sample_rate)
plt.show()
raw_capture_excited = Program(
f'DECLARE raw REAL[{memory_length}]',
f'RX(pi) {qubit}',
pulse,
f'RAW-CAPTURE {frame} {duration} raw'
).wrap_in_numshots_loop(1000)
print(raw_capture_excited)
exe = qc.compiler.native_quil_to_executable(raw_capture_excited)
qc.run(exe)
raw_results_excited = qc.qam.read_memory(region_name='raw')
avg_results_excited = raw_results_excited.mean(axis=0) / 2**15
plt.psd(avg_results_excited, Fs=sample_rate)
plt.show()
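# Illustrative extra plot (not part of the original walkthrough): overlay the averaged
# readout traces captured with the qubit left in |0> versus prepared in |1>.
plt.figure()
plt.plot(avg_results_pulse, label='readout, qubit in |0>')
plt.plot(avg_results_excited, label='readout, qubit in |1>')
plt.legend()
plt.show()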
duration = 5e-6
samples = ceil(sample_rate*duration)
rrr = Program(
f'DECLARE raw REAL[{samples}]',
f'RAW-CAPTURE 0 "ro_rx" {duration} raw'
).wrap_in_numshots_loop(1)
try:
exe = qc.compiler.native_quil_to_executable(rrr)
except Exception as e:
print(e)
duration = 1e-06
samples = ceil(sample_rate*duration)
rrr = Program(
f'DECLARE raw REAL[{samples}]',
f'RAW-CAPTURE 0 "ro_rx" {duration} raw'
).wrap_in_numshots_loop(100000)
try:
exe = qc.compiler.native_quil_to_executable(rrr)
except Exception as e:
print(e)
duration = 1e-06
samples = ceil(sample_rate*duration)
rrr = Program(
f'DECLARE raw REAL[{samples}]',
'DECLARE ro BIT',
'MEASURE 1 ro',
f'RAW-CAPTURE 0 "ro_rx" {duration} raw'
)
try:
exe = qc.compiler.native_quil_to_executable(rrr)
except Exception as e:
print(e)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Non-RDD Example
Step2: DataProc - submit a job
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
spark_home = os.environ['SPARK_HOME'] = '/Users/ozimmer/GoogleDrive/berkeley/w261/spark-2.0.0-bin-hadoop2.6'
if not spark_home:
raise ValueError('SPARK_HOME environment variable is not set')
sys.path.insert(0,os.path.join(spark_home,'python'))
sys.path.insert(0,os.path.join(spark_home,'python/lib/py4j-0.9-src.zip'))
execfile(os.path.join(spark_home,'python/pyspark/shell.py'))
from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating
# Load and parse the data
data = sc.textFile("/Users/ozimmer/GoogleDrive/berkeley/w210/w210_vendor_recommendor/test_spark_1.csv")
header = data.first() #filter out the header
ratings = data.filter(lambda row: row != header)\
.map(lambda l: l.split(','))\
.map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2])))
# Build the recommendation model using Alternating Least Squares
rank = 10
numIterations = 10
model = ALS.train(ratings, rank, numIterations)
# Evaluate the model on training data
testdata = ratings.map(lambda p: (p[0], p[1]))
predictions = model.predictAll(testdata).map(lambda r: ((r[0], r[1]), r[2]))
ratesAndPreds = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
MSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1])**2).mean()
print("Mean Squared Error = " + str(MSE))
# Save and load model
model.save(sc, "target/tmp/myCollaborativeFilter")
sameModel = MatrixFactorizationModel.load(sc, "target/tmp/myCollaborativeFilter")
#Create a RDD for prediction
data = [(145, 895988), (143, 348288), (143, 795270), (143, 795221), (143, 306804)]
data_rdd = sc.parallelize(data)
#Paste the prediction results in the model
model.predictAll(data_rdd).collect()
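# Illustrative extra step (assumes user id 143 appears in the training ratings):
# MatrixFactorizationModel can also return the top-N products for a given user directly.
model.recommendProducts(143, 5)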
# The cells below follow Spark's DataFrame-based (non-RDD) ALS example; they need the
# DataFrame-API imports added here, which were missing from the original cell.
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.recommendation import ALS
from pyspark.sql import Row
lines = spark.read.text("data/mllib/als/sample_movielens_ratings.txt").rdd
parts = lines.map(lambda row: row.value.split("::"))
ratingsRDD = parts.map(lambda p: Row(userId=int(p[0]), movieId=int(p[1]),
rating=float(p[2]), timestamp=long(p[3])))
ratings = spark.createDataFrame(ratingsRDD)
(training, test) = ratings.randomSplit([0.8, 0.2])
# Build the recommendation model using ALS on the training data
# Note we set cold start strategy to 'drop' to ensure we don't get NaN evaluation metrics
als = ALS(maxIter=5, regParam=0.01, userCol="userId", itemCol="movieId", ratingCol="rating",
coldStartStrategy="drop")
model = als.fit(training)
# Evaluate the model by computing the RMSE on the test data
predictions = model.transform(test)
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(predictions)
print("Root-mean-square error = " + str(rmse))
# Generate top 10 movie recommendations for each user
userRecs = model.recommendForAllUsers(10)
# Generate top 10 user recommendations for each movie
movieRecs = model.recommendForAllItems(10)
!gsutil cp gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py .
!cat hello-world.py
!gcloud dataproc jobs submit pyspark --cluster test1 hello-world.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basics of Python
Step2: Basic data types
Step3: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Step4: Now let's look at the operations
Step5: Strings
Step6: String objects have a bunch of useful methods; for example
Step7: You can find a list of all string methods in the documentation.
Step8: As usual, you can find all the gory details about lists in the documentation.
Step9: Loops
Step10: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step11: List comprehensions
Step12: You can make this code simpler using a list comprehension
Step13: List comprehensions can also contain conditions
Step14: Dictionaries
Step15: You can find all you need to know about dictionaries in the documentation.
Step16: Dictionary comprehensions
Step17: Sets
Step18: Loops
Step19: Set comprehensions
Step20: Tuples
Step21: Functions
Step22: We will often define functions to take optional keyword arguments, like this
Step23: Classes
Step24: Numpy
Step25: Arrays
Step26: Numpy also provides many functions to create arrays
Step27: Array indexing
Step28: A slice of an array is a view into the same data, so modifying it will modify the original array.
Step29: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing
Step30: Two ways of accessing the data in the middle row of the array.
Step31: Integer array indexing
Step32: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix
Step33: Boolean array indexing
Step34: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Step35: You can read all about numpy datatypes in the documentation.
Step36: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step37: You can also use the @ operator which is equivalent to numpy's dot operator.
Step38: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum
Step39: You can find the full list of mathematical functions provided by numpy in the documentation.
Step40: Broadcasting
Step41: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this
Step42: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting
Step43: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Step44: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
Step45: By running this special iPython command, we will be displaying plots inline
Step46: Plotting
Step47: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels
Step48: Subplots
|
<ASSISTANT_TASK:>
Python Code:
!python --version
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksort(left) + middle + quicksort(right)
print(quicksort([3,6,8,10,1,2,1]))
x = 3
print(x, type(x))
print(x + 1) # Addition
print(x - 1) # Subtraction
print(x * 2) # Multiplication
print(x ** 2) # Exponentiation
x += 1
print(x)
x *= 2
print(x)
y = 2.5
print(type(y))
print(y, y + 1, y * 2, y ** 2)
t, f = True, False
print(type(t))
print(t and f) # Logical AND;
print(t or f) # Logical OR;
print(not t) # Logical NOT;
print(t != f) # Logical XOR;
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter
print(hello, len(hello))
hw = hello + ' ' + world # String concatenation
print(hw)
hw12 = '{} {} {}'.format(hello, world, 12) # string formatting
print(hw12)
s = "hello"
print(s.capitalize()) # Capitalize a string
print(s.upper()) # Convert a string to uppercase; prints "HELLO"
print(s.rjust(7)) # Right-justify a string, padding with spaces
print(s.center(7)) # Center a string, padding with spaces
print(s.replace('l', '(ell)')) # Replace all instances of one substring with another
print(' world '.strip()) # Strip leading and trailing whitespace
xs = [3, 1, 2] # Create a list
print(xs, xs[2])
print(xs[-1]) # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
print(xs)
xs.append('bar') # Add a new element to the end of the list
print(xs)
x = xs.pop() # Remove and return the last element of the list
print(x, xs)
nums = list(range(5)) # range is a built-in function that creates a list of integers
print(nums) # Prints "[0, 1, 2, 3, 4]"
print(nums[2:4]) # Get a slice from index 2 to 4 (exclusive); prints "[2, 3]"
print(nums[2:]) # Get a slice from index 2 to the end; prints "[2, 3, 4]"
print(nums[:2]) # Get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print(nums[:])         # Get a slice of the whole list; prints "[0, 1, 2, 3, 4]"
print(nums[:-1])    # Slice indices can be negative; prints "[0, 1, 2, 3]"
nums[2:4] = [8, 9] # Assign a new sublist to a slice
print(nums) # Prints "[0, 1, 8, 9, 4]"
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print(animal)
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
print('#{}: {}'.format(idx + 1, animal))
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
print(squares)
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print(squares)
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print(even_squares)
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print(d['cat']) # Get an entry from a dictionary; prints "cute"
print('cat' in d) # Check if a dictionary has a given key; prints "True"
d['fish'] = 'wet' # Set an entry in a dictionary
print(d['fish']) # Prints "wet"
print(d['monkey']) # KeyError: 'monkey' not a key of d
print(d.get('monkey', 'N/A')) # Get an element with a default; prints "N/A"
print(d.get('fish', 'N/A')) # Get an element with a default; prints "wet"
del d['fish'] # Remove an element from a dictionary
print(d.get('fish', 'N/A')) # "fish" is no longer a key; prints "N/A"
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
print('A {} has {} legs'.format(animal, legs))
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print(even_num_to_square)
animals = {'cat', 'dog'}
print('cat' in animals) # Check if an element is in a set; prints "True"
print('fish' in animals) # prints "False"
animals.add('fish') # Add an element to a set
print('fish' in animals)
print(len(animals)) # Number of elements in a set;
animals.add('cat') # Adding an element that is already in the set does nothing
print(len(animals))
animals.remove('cat') # Remove an element from a set
print(len(animals))
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
print('#{}: {}'.format(idx + 1, animal))
from math import sqrt
print({int(sqrt(x)) for x in range(30)})
d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys
t = (5, 6) # Create a tuple
print(type(t))
print(d[t])
print(d[(1, 2)])
t[0] = 1
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print(sign(x))
def hello(name, loud=False):
if loud:
print('HELLO, {}'.format(name.upper()))
else:
print('Hello, {}!'.format(name))
hello('Bob')
hello('Fred', loud=True)
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print('HELLO, {}'.format(self.name.upper()))
else:
print('Hello, {}!'.format(self.name))
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method; prints "Hello, Fred"
g.greet(loud=True) # Call an instance method; prints "HELLO, FRED!"
import numpy as np
a = np.array([1, 2, 3]) # Create a rank 1 array
print(type(a), a.shape, a[0], a[1], a[2])
a[0] = 5 # Change an element of the array
print(a)
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
print(b)
print(b.shape)
print(b[0, 0], b[0, 1], b[1, 0])
a = np.zeros((2,2)) # Create an array of all zeros
print(a)
b = np.ones((1,2)) # Create an array of all ones
print(b)
c = np.full((2,2), 7) # Create a constant array
print(c)
d = np.eye(2) # Create a 2x2 identity matrix
print(d)
e = np.random.random((2,2)) # Create an array filled with random values
print(e)
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print(b)
print(a[0, 1])
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print(a[0, 1])
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(a)
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print(row_r1, row_r1.shape)
print(row_r2, row_r2.shape)
print(row_r3, row_r3.shape)
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print(col_r1, col_r1.shape)
print()
print(col_r2, col_r2.shape)
a = np.array([[1,2], [3, 4], [5, 6]])
# An example of integer array indexing.
# The returned array will have shape (3,) and
print(a[[0, 1, 2], [0, 1, 0]])
# The above example of integer array indexing is equivalent to this:
print(np.array([a[0, 0], a[1, 1], a[2, 0]]))
# When using integer array indexing, you can reuse the same
# element from the source array:
print(a[[0, 0], [1, 1]])
# Equivalent to the previous integer array indexing example
print(np.array([a[0, 1], a[0, 1]]))
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print(a)
# Create an array of indices
b = np.array([0, 2, 0, 1])
# Select one element from each row of a using the indices in b
print(a[np.arange(4), b]) # Prints "[ 1 6 7 11]"
# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print(a)
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
print(bool_idx)
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print(a[bool_idx])
# We can do all of the above in a single concise statement:
print(a[a > 2])
x = np.array([1, 2]) # Let numpy choose the datatype
y = np.array([1.0, 2.0]) # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64) # Force a particular datatype
print(x.dtype, y.dtype, z.dtype)
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum; both produce the array
print(x + y)
print(np.add(x, y))
# Elementwise difference; both produce the array
print(x - y)
print(np.subtract(x, y))
# Elementwise product; both produce the array
print(x * y)
print(np.multiply(x, y))
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print(x / y)
print(np.divide(x, y))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print(np.sqrt(x))
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print(v.dot(w))
print(np.dot(v, w))
print(v @ w)
# Matrix / vector product; both produce the rank 1 array [29 67]
print(x.dot(v))
print(np.dot(x, v))
print(x @ v)
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print(x.dot(y))
print(np.dot(x, y))
print(x @ y)
x = np.array([[1,2],[3,4]])
print(np.sum(x)) # Compute sum of all elements; prints "10"
print(np.sum(x, axis=0)) # Compute sum of each column; prints "[4 6]"
print(np.sum(x, axis=1)) # Compute sum of each row; prints "[3 7]"
print(x)
print("transpose\n", x.T)
v = np.array([[1,2,3]])
print(v)
print("transpose\n", v.T)
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x) # Create an empty matrix with the same shape as x
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
print(y)
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
print(vv) # Prints "[[1 0 1]
# [1 0 1]
# [1 0 1]
# [1 0 1]]"
y = x + vv # Add x and vv elementwise
print(y)
import numpy as np
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print(y)
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
print(np.reshape(v, (3, 1)) * w)
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
print(x + v)
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
print((x.T + w).T)
# Another solution is to reshape w to be a row vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print(x + np.reshape(w, (2, 1)))
# Multiply a matrix by a constant:
# x has shape (2, 3). Numpy treats scalars as arrays of shape ();
# these can be broadcast together to shape (2, 3), producing the
# following array:
print(x * 2)
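# One more broadcasting illustration (the numbers here are made up): centering the
# columns of a matrix by subtracting the per-column means.
data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])
col_means = data.mean(axis=0)    # shape (3,)
print(data - col_means)          # (2, 3) minus (3,) broadcasts across rows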
import matplotlib.pyplot as plt
%matplotlib inline
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generally speaking, the procedure for scikit-learn is uniform across all machine-learning algorithms. Models are accessed via the various modules (ensemble, SVM, neighbors, etc), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, X, with rows representing individual sources and columns representing the corresponding feature values.$^\dagger$ In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array y.
Step2: You likely haven't encountered a scikit-learn Bunch before. It's essentially the same as a dictionary.
Step3: Most importantly, iris contains data and target values. These are all you need for scikit-learn, though the feature and target names and description are useful.
Step4: The data is a 2d array with shape 150 x 4.
Step5: Problem 1e
Step6: Problem 1f
Step7: An important lesson from this week - worry about the data!
Step8: We will return to the iris data set later in this notebook.
Step9: Problem 2b
Step10: The sklearn.linear_model has a LinearRegression class.
Step11: Problem 2d
Step12: Problem 2e
Step13: Problem 2f
Step14: The results are identical!
Step15: In the previous cell we have simulated 200 galaxies, most of which are normal with masses > 60. We use this information to define our class vector y, such that all normal galaxies have a class of 1 and all dwarf galaxies have a class of 0. Finally, we simulate our observations X, which adds Gaussian noise to the true mass measurements.
Step16: Problem 3b
Step17: As we can see in the above plot, every galaxy with $\mathrm{mass} \gtrsim 78$ is classified as normal.
Step18: This represents a slight improvement over the linear regression model. (Note - if you did the reverse and used the normal galaxies to determine the mass cut, you would get similar results)
Step19: write your answer here
Step20: Problem 3f
Step21: This model shows significant improvement over our alternative methods!
Step22: We can see that the logistic regression model is clearly superior to the linear regression model. The model is not perfect, but no model could be in the presence of noise.
|
<ASSISTANT_TASK:>
Python Code:
# execute example code here
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
iris = datasets.load_iris()
RFclf = RandomForestClassifier().fit(iris.data, iris.target)
# complete
# complete
print(np.shape(# complete
print(# complete
print( # complete
print(np.shape( # complete
# complete
# complete
fig, ax = plt.subplots()
ax.scatter(# complete
np.random.seed(2012)
n_obs = 25
x = np.random.uniform(0,100, n_obs)
y_true = 2.3*x + 14
y_obs = y_true + np.random.normal(0, 15, n_obs)
fig, ax = plt.subplots()
ax.plot(# complete
# complete
# complete
# complete
from sklearn.linear_model import LinearRegression
reg_model = # complete
X = # complete
reg_model.fit( # complete
print(f'The best fit is y = ' # complete
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
# complete
print('The best-fit model is: y = ' # complete
np.random.seed(1938)
n_obs = 200
mass = np.random.normal(65, 20, size=n_obs)
mass[mass > 70] *= (mass[mass > 70]-60)/60*5
y = (mass > 60).astype(bool)
obs_mass = mass + np.random.normal(0, 7, size=n_obs)
X = obs_mass.reshape(-1,1)
reg_model = # complete
reg_model.fit( # complete
y_pred = # complete
n_incorrect = np.sum((y_pred - y)**2)
accuracy = (n_obs - n_incorrect)/n_obs
print(f'This model has an accuracy of {accuracy:.3f}')
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
# complete
mass_cut = # complete
y_pred = # complete
# complete
# complete
# complete
# complete
x = np.linspace(-10,10,1000)
p_x = 1/(1 + np.exp(-x))
fig, ax = plt.subplots()
ax.plot(x, p_x)
ax.set_xlabel('x', fontsize=14)
ax.set_ylabel('p(x)', fontsize=14)
fig.tight_layout()
from sklearn.linear_model import LogisticRegression
logreg = # complete
logreg.fit( # complete
y_pred = # complete
# complete
# complete
# complete
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
from sklearn.metrics import accuracy_score
logreg = LogisticRegression( # complete
# complete
# complete
print('The accuracy of this model is {}' # complete
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The regressor class
Step2: Using the regressor
Step3: Revisiting the training process
Step4: Looking at the data we see that it is possible to separate the two clouds quite well, but there is a lot of noise so we can not hope to achieve zero loss. But we can get close to it. Let's set up a regressor. Here we will use a learning rate of 10, which is quite high.
Step5: You will probably even get an error message mentioning an overflow and it doesn't look like the regressor converged smoothly. This was a bumpy ride.
Step6: As you can see, the loss first went up quite significantly before then coming down. At multiple instances it moves up again. This is a clear sign that the learning rate is too large, let's try a lower one
Step7: This looks a bit smoother already, and you can see that the error is nearly ten times lower in the end. Let's try an even lower learning rate to see where we can take this.
Step8: This is a very smooth gradient descent but also a very slow one. The error is more than twice as high as before in the end. If we let this run for a few more epochs we could probably achieve a very good model, but at a very large computational expense.
Step9: Visualizing our regressor
Step10: To plot the boundary, we train a new regressor first.
Step11: And then we plot the boundary. Again, do not worry if you do not understand exactly what is going on here, as it is not part of the class.
Step12: As you can see, our logistic regressor seperates the two clouds with a simple line. This is appropriate for this case but might fail when the boundary is a more complex function. Let's try out a more complex function.
|
<ASSISTANT_TASK:>
Python Code:
# Numpy handles matrix multiplication, see http://www.numpy.org/
import numpy as np
# PyPlot is a matlab like plotting framework, see https://matplotlib.org/api/pyplot_api.html
import matplotlib.pyplot as plt
# This line makes it easier to plot PyPlot graphs in Jupyter Notebooks
%matplotlib inline
import sklearn
import sklearn.datasets
import matplotlib
# Slightly larger plot rendering
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
class LogisticRegressor:
# Here we are just setting up some placeholder variables
# This is the dimensionality of our input, that is how many features our input has
input_dim = 0
# This is the learning rate alpha
learning_rate = 0.1
# We will store the parameters of our model in a dictionary
model = {}
# The values calculated in the forward propagation will be stored in this dictionary
cache = {}
# The gradients that we calculate during back propagation will be stored in a dictionary
gradients = {}
# Init function of the class
def __init__(self,input_dim, learning_rate):
'''
Assigns the given hyper parameters and initializes the initial parameters.
'''
# Assign input dimensionality
self.input_dim = input_dim
# Assign learning rate
self.learning_rate = learning_rate
# Trigger parameter setup
self.init_parameters()
# Parameter setup function
def init_parameters(self):
'''
Initializes weights with random number between -1 and 1
Initializes bias with 0
Assigns weights and parameters to model
'''
# Randomly init weights
W1 = 2*np.random.random((self.input_dim,1)) - 1
# Set bias to 0
b1 = 0
# Assign to model
self.model = {'W1':W1,'b1':b1}
return
# Sigmoid function
def sigmoid(self,x):
'''
Calculates the sigmoid activation of a given input x
See: https://en.wikipedia.org/wiki/Sigmoid_function
'''
return 1/(1+np.exp(-x))
#Log Loss function
def log_loss(self,y,y_hat):
'''
Calculates the logistic loss between a prediction y_hat and the labels y
See: http://wiki.fast.ai/index.php/Log_Loss
We need to clip values that get too close to zero to avoid zeroing out.
Zeroing out is when a number gets so small that the computer replaces it with 0.
Therefore, we clip numbers to a minimum value.
'''
minval = 0.000000000001
m = y.shape[0]
l = -1/m * np.sum(y * np.log(y_hat.clip(min=minval)) + (1-y) * np.log((1-y_hat).clip(min=minval)))
return l
# Derivative of log loss function
def log_loss_derivative(self,y,y_hat):
'''
Calculates the gradient (derivative) of the log loss between point y and y_hat
See: https://stats.stackexchange.com/questions/219241/gradient-for-logistic-loss-function
'''
return (y_hat-y)
# Forward prop (forward pass) function
def forward_propagation(self,A0):
'''
Forward propagates through the model, stores results in cache.
See: https://stats.stackexchange.com/questions/147954/neural-network-forward-propagation
A0 is the activation at layer zero, it is the same as X
'''
# Load parameters from model
W1, b1 = self.model['W1'],self.model['b1']
# Do the linear step
z1 = A0.dot(W1) + b1
#Pass the linear step through the activation function
A1 = self.sigmoid(z1)
# Store results in cache
        self.cache = {'A0':A0,'z1':z1,'A1':A1}
return
# Backprop function
def backward_propagation(self,y):
'''
Backward propagates through the model to calculate gradients.
Stores gradients in grads dictionary.
See: https://en.wikipedia.org/wiki/Backpropagation
'''
# Load results from forward pass
A0, z1, A1 = self.cache['A0'],self.cache['z1'], self.cache['A1']
# Load model parameters
W1, b1 = self.model['W1'], self.model['b1']
# Read m, the number of examples
m = A0.shape[0]
# Calculate the gradient of the loss function
dz1 = self.log_loss_derivative(y=y,y_hat=A1)
# Calculate the derivative of the loss with respect to the weights W1
dW1 = 1/m*(A0.T).dot(dz1)
# Calculate the derivative of the loss with respect to the bias b1
db1 = 1/m*np.sum(dz1, axis=0, keepdims=True)
#Make sure the weight derivative has the same shape as the weights
assert(dW1.shape == W1.shape)
# Store gradients in gradient dictionary
self.grads = {'dW1':dW1,'db1':db1}
return
# Parameter update
def update_parameters(self):
'''
Updates parameters accoarding to gradient descent algorithm
See: https://en.wikipedia.org/wiki/Gradient_descent
'''
# Load model parameters
W1, b1 = self.model['W1'],self.model['b1']
# Load gradients
dW1, db1 = self.grads['dW1'], self.grads['db1']
# Update weights
W1 -= self.learning_rate * dW1
# Update bias
b1 -= self.learning_rate * db1
# Store new parameters in model dictionary
self.model = {'W1':W1,'b1':b1}
return
# Prediction function
def predict(self,X):
'''
Predicts y_hat as 1 or 0 for a given input X
'''
# Do forward pass
self.forward_propagation(X)
# Get output of regressor
regressor_output = self.cache['A1']
# Turn values to either 1 or 0
regressor_output[regressor_output > 0.5] = 1
regressor_output[regressor_output < 0.5] = 0
# Return output
return regressor_output
# Train function
def train(self,X,y, epochs):
'''
Trains the regressor on a given training set X, y for the specified number of epochs.
'''
# Set up array to store losses
losses = []
# Loop through epochs
for i in range(epochs):
# Forward pass
self.forward_propagation(X)
# Calculate loss
loss = self.log_loss(y,self.cache['A1'])
# Store loss
losses.append(loss)
# Print loss every 10th iteration
if (i%10 == 0):
print('Epoch:',i,' Loss:', loss)
# Do the backward propagation
self.backward_propagation(y)
# Update parameters
self.update_parameters()
# Return losses for analysis
return losses
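# Quick standalone sanity check of the two helper methods on tiny, made-up inputs
# (not required for training, just to confirm they behave as expected).
_tmp = LogisticRegressor(input_dim=1, learning_rate=1)
print(_tmp.sigmoid(0))                                # 0.5
print(_tmp.log_loss(np.array([1]), np.array([0.9])))  # -log(0.9), roughly 0.105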
#Seed the random function to ensure that we always get the same result
np.random.seed(1)
#Variable definition
#define X
X = np.array([[0,1,0],
[1,0,0],
[1,1,1],
[0,1,1]])
#define y
y = np.array([[0,1,1,0]]).T
# Define instance of class
regressor = LogisticRegressor(input_dim=3,learning_rate=1)
# Train classifier
losses = regressor.train(X,y,epochs=100)
# Plot the losses for analyis
plt.plot(losses)
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_blobs(n_samples=200,centers=2)
y = y.reshape(200,1)
plt.scatter(X[:,0], X[:,1], s=40, c=y.flatten(), cmap=plt.cm.Spectral)
# Define instance of class
# Learning rate = 10, deliberately set quite high
regressor = LogisticRegressor(input_dim=2,learning_rate=10)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
# Define instance of class
# Learning rate = 0.05
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
# Define instance of class
# Learning rate = 0.0005
regressor = LogisticRegressor(input_dim=2,learning_rate=0.0005)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
# Define instance of class
# Tweak learning rate here
regressor = LogisticRegressor(input_dim=2,learning_rate=1)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
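# Optional sketch: overlay the loss curves for a few example learning rates
# (the values below are illustrative, not prescribed) to make the trade-off visible.
for lr in [0.0005, 0.05, 1]:
    reg = LogisticRegressor(input_dim=2, learning_rate=lr)
    plt.plot(reg.train(X, y, epochs=100), label='learning rate {}'.format(lr))
plt.legend()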
# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates the boundary plot.
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole gid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y.flatten(), cmap=plt.cm.Spectral)
# Define instance of class
# Learning rate = 0.05
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
# Plot the decision boundary
plot_decision_boundary(lambda x: regressor.predict(x))
plt.title("Decision Boundary for logistic regressor")
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.1)
y = y.reshape(200,1)
plt.scatter(X[:,0], X[:,1], s=40, c=y.flatten(), cmap=plt.cm.Spectral)
# Define instance of class
# Learning rate = 0.05
y = y.reshape(200,1)
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
# Plot the decision boundary
plot_decision_boundary(lambda x: regressor.predict(x))
plt.title("Decision Boundary for hidden layer size 3")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We see above a recept column that has their reception status. We want to look at reviewed and random, just like Underwood and Sellers did
Step2: Great, now we've successfully subsetted our data!
Step3: We can use sklearn's DictVectorizer to turn this into a DTM
Step4: Feature Selection, Training, Prediction
Step5: Let's add in the docid as the index instead of just a counter for the row
Step6: Inputs and Outputs
Step7: Now we'll set up the regression model with sklearn.
Step8: We'll then write a function set_author_aside that sifts out the target author's works from the dataset. Recall that Underwood and Sellers trained a model for each author when they were taken out.
Step9: We also need to get out the most common words by their document frequency for each model. The function below, top_vocab_by_docfreq will do this each time we loop and create a new model
Step10: We then need to normalize the frequencies
Step11: Training and Prediction
Step12: So how long is this going to take us?
Step13: We don't have 100 minutes!
Step14: Let's read in a preprocessed vocabulary file. This contains only words that will be used in classification. This list was created by simply iterating through each model and observing the words that appeared in it.
Step15: Now let's select only columns for words in our pre-processed vocabulary. This will make our computation more efficient later
Step16: We'll use the unique IDs from our metadata to keep track of each text
Step17: Document Frequency
Step18: Parallel Processing
Step19: Evaluation
Step20: Classification
|
<ASSISTANT_TASK:>
Python Code:
metadata_tb = Table.read_table('data/poemeta.csv', keep_default_na=False)
metadata_tb.show(5)
reception_mask = (metadata_tb['recept']=='reviewed') + (metadata_tb['recept']=='random')
clf_tb = metadata_tb.where(reception_mask)
clf_tb.show(5)
# Create list that will contain a series of dictionaries
freqdict_list = []
# Iterate through texts in our spreadsheet
for _id in clf_tb['docid']:
# Each text will have its own dictionary
# Keys are terms and values are frequencies
termfreq_dict = {}
# Open the given text's spreadsheet
with open('data/poems/' + _id + '.poe.tsv', encoding='utf-8') as f:
filelines = f.readlines()
# Each line in the spreadsheet contains a unique term and its frequency
for line in filelines:
termfreq = line.split('\t')
# 'If' conditions throw out junk lines in the spreadsheet
        if len(termfreq) < 2 or len(termfreq) > 2:
continue
term, freq = termfreq[0], int(termfreq[1])
if len(term)>0 and term[0].isalpha():
# Create new entry in text's dictionary for the term
termfreq_dict[term] = freq
freqdict_list.append(termfreq_dict)
from sklearn.feature_extraction import DictVectorizer
dv = DictVectorizer()
dtm = dv.fit_transform(freqdict_list)
term_list = dv.feature_names_
dtm
dtm_df = pd.DataFrame(dtm.toarray(), columns = term_list)
dtm_df.head()
dtm_df.set_index(clf_tb['docid'], inplace=True)
dtm_df.head()
authors = list(set(clf_tb['author']))
probabilities = np.zeros([len(clf_tb['docid'])])
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C = 0.00007)
def set_author_aside(author, tb, df):
'''
Set aside each author's texts from training set
'''
train_ids = tb.where(tb['author']!=author).column('docid')
test_ids = tb.where(tb['author']==author).column('docid')
train_df_ = df.loc[train_ids]
test_df_ = df.loc[test_ids]
train_targets_ = tb.where(tb['author']!=author)['recept']=='reviewed'
return train_df_, test_df_, train_targets_
def top_vocab_by_docfreq(df, num_words):
'''
Retrieve the most common words (by document frequency) for a given model
'''
docfreq_df = df > 0
wordcolumn_sums = docfreq_df.sum()
words_by_freq = wordcolumn_sums.sort_values(ascending=False)
top_words = words_by_freq[:num_words]
top_words_list = top_words.index.tolist()
return top_words_list
def normalize_model(train_df_, test_df_, vocabulary):
'''
Normalize the model's term frequencies and put them into standard units
'''
# Select columns for only the most common words
train_df_ = train_df_[vocabulary]
test_df_ = test_df_[vocabulary]
# Normalize each value by the sum of all values in its row
train_df_ = train_df_.apply(lambda x: x/sum(x), axis=1)
test_df_ = test_df_.apply(lambda x: x/sum(x), axis=1)
# Get mean and stdev for each column
train_mean = np.mean(train_df_)
train_std = np.std(train_df_)
# Transform each value to standard units for its column
train_df_ = ( train_df_ - train_mean ) / train_std
test_df_ = ( test_df_ - train_mean ) / train_std
return train_df_, test_df_
import time
start = time.time()
for author in authors[:1]:
# Set aside each author's texts from training set
train_df, test_df, train_targets = set_author_aside(author, clf_tb, dtm_df)
# Retrieve the most common words (by document frequency) for a given model
vocab_list = top_vocab_by_docfreq(train_df, 3200)
# Normalize the model's term frequencies and put them into standard units
train_df, test_df = normalize_model(train_df, test_df, vocab_list)
# Learn the Logistic Regression over our model
clf.fit(train_df, train_targets)
# Some authors have more than one text in the corpus, so we retrieve all
for _id in test_df.index.tolist():
# Make prediction whether text was reviewed
text = test_df.loc[_id]
probability = clf.predict_proba([text])[0][1]
# Record predictions in same order as the metadata spreadsheet
_index = list(clf_tb.column('docid')).index(_id)
probabilities[_index] = probability
end = time.time()
print(end - start)
len(authors) * (end-start) / 60
len(term_list)
import pickle
with open('data/preprocessed_vocab.pickle', 'rb') as f:
pp_vocab = pickle.load(f)
len(pp_vocab)
dtm_df = dtm_df[pp_vocab]
dtm_df.set_index(clf_tb['docid'], inplace=True)
dtm_df.head()
# Create new DataFrame that simply lists whether a term appears in
# each document, so that we don't have to repeat this process evey iteration
term_in_doc_df = dtm_df>0
term_in_doc_df
# Re-write the model-building function
def set_author_aside(author, tb, dtm_df_, dfreq_df_):
train_ids = tb.where(tb['author']!=author).column('docid')
test_ids = tb.where(tb['author']==author).column('docid')
train_df_ = dtm_df_.loc[train_ids]
dfreq_df_ = dfreq_df_.loc[train_ids] # Include only term_in_doc values for texts in training set
test_df_ = dtm_df_.loc[test_ids]
train_targets_ = tb.where(tb['author']!=author)['recept']=='reviewed'
return train_df_, test_df_, train_targets_, dfreq_df_
# Re-write our vocabulary selection function
def top_vocab_by_docfreq(df, num_words):
# Removed the test of whether a term is in a given document (i.e. df>0)
    wordcolumn_sums = df.sum()
words_by_freq = wordcolumn_sums.sort_values(ascending=False)
top_words = words_by_freq[:num_words]
top_words_list = top_words.index.tolist()
return top_words_list
# Parallel Processing means running our script on multiple cores simultaneously
# This can be used in situations where we might otherwise use a 'FOR' loop
# (when it doesn't matter what order we go through the list of values!)
clf = LogisticRegression(C = 0.00007)
def master_function(author):
# Note: Our only input is the name of the author.
# Remember that we had iterated over the list of authors previously.
train_df, test_df, train_targets, dfreq_df = set_author_aside(author, clf_tb, dtm_df, term_in_doc_df)
vocab_list = top_vocab_by_docfreq(dfreq_df, 3200)
train_df, test_df = normalize_model(train_df, test_df, vocab_list)
clf.fit(train_df, train_targets)
# Create a list of each text's probability of review AND its index in the metadata table
index_probability_tuples = []
for _id in test_df.index.tolist():
text = test_df.loc[_id]
probability = clf.predict_proba([text])[0][1]
_index = list(clf_tb.column('docid')).index(_id)
index_probability_tuples.append( (_index, probability) )
return index_probability_tuples
# Multiprocessing enables Python to parallelize
import multiprocessing
# Return number of cores
multiprocessing.cpu_count()
# By default, the Pool contains one worker for each core
# Since we are working on a shared server, we'll set the number
# of workers to 4.
pool = multiprocessing.Pool(4, maxtasksperchild=1)
# Efficiently applies the master_function() to our list of authors
# Returns a list where each entry is an item returned by the function
# Timing the process again
start = time.time()
output = pool.map(master_function, authors)
end = time.time()
print(end-start)
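# Housekeeping sketch: release the worker processes now that the parallel map has finished.
pool.close()
pool.join()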
output[:10]
# In this case, each element in output is itself a list ('index_probability_tuples'),
# the length of which is the number of texts by a given author. We'll flatten it for
# ease of use.
flat_output = [tup for lst in output for tup in lst]
flat_output[:10]
# Use the indices returned with the output to arrange probabilities properly
probabilities = np.zeros([len(clf_tb['docid'])])
for tup in flat_output:
probabilities[tup[0]] = tup[1]
clf_tb['P(reviewed)'] = probabilities
clf_tb.select(['docid', 'firstpub','author', 'title', 'recept', 'P(reviewed)'])
# Visualize the probability each text was reviewed
colors = ['r' if recept=='reviewed' else 'b' for recept in clf_tb['recept']]
clf_tb.scatter('firstpub', 'P(reviewed)', c=colors, fit_line=True)
# Does the Logistic Regression Model think its likely each book was reviewed?
predictions = probabilities>0.5
predictions
from sklearn.metrics import accuracy_score
# Creates array where '1' indicates a reviewed book and '0' indicates not
targets = clf_tb['recept']=='reviewed'
print(accuracy_score(predictions, targets))
# Note: Often we prefer to evaluate accuracy based on the F1-score, which
# weighs the number of times we correctly predicted reviewed texts against
# the number of times we incorrectly predicted them as 'random'.
from sklearn.metrics import f1_score
print(f1_score(predictions, targets))
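# Optional sketch: a confusion matrix breaks the errors down further
# (rows = true classes, columns = predicted classes).
from sklearn.metrics import confusion_matrix
print(confusion_matrix(targets, predictions))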
## EX. Change the regularization parameter ('C') in our Logistic Regression function.
## How does this change the classifier's accuracy?
## EX. Reduce the size of the vocabulary used for classification. How does accuracy change?
## Q. Are there cases when we might not want to set the classification threshold
## to 50% likelihood? How certain are we that 51% is different from a 49% probability?
# Train model using full set of 'reviewed' and 'random' texts
# Use this to predict the probability that other prestigious texts
# (i.e. ones that we haven't trained on) might have been reviewed
# ...if they had been published! The new texts include, for example,
# Emily Dickinson and Gerard Manley Hopkins.
# Re-run script from scratch
%pylab inline
matplotlib.style.use('ggplot')
from datascience import *
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import DictVectorizer
corpus_path = 'poems/'
# Read metadata from spreadsheet
metadata_tb = Table.read_table('poemeta.csv', keep_default_na=False)
# We'll copy just our new texts into a separate table as well, for later
canon_tb = metadata_tb.where('recept','addcanon')
# Read Term Frequencies from files
freqdict_list = []
# Iterate through texts in our spreadsheet
for _id in metadata_tb['docid']:
# Each text will have its own dictionary
# Keys are terms and values are frequencies
termfreq_dict = {}
# Open the given text's spreadsheet
with open(corpus_path+_id+'.poe.tsv', encoding='utf-8') as file_in:
filelines = file_in.readlines()
# Each line in the spreadsheet contains a unique term and its frequency
for line in filelines:
termfreq = line.split('\t')
# 'If' conditions throw out junk lines in the spreadsheet
        if len(termfreq) < 2 or len(termfreq) > 2:
continue
term, freq = termfreq[0], int(termfreq[1])
if len(term)>0 and term[0].isalpha():
# Create new entry in text's dictionary for the term
termfreq_dict[term] = freq
freqdict_list.append(termfreq_dict)
# Create the Document-Term-Matrix
dv = DictVectorizer()
dtm = dv.fit_transform(freqdict_list)
term_list = dv.feature_names_
# Place the DTM into a Pandas DataFrame for further manipulation
dtm_df = pd.DataFrame(dtm.toarray(), columns = term_list)
dtm_df.set_index(metadata_tb['docid'], inplace=True)
# These are Feature Selection functions like the ones we originally defined,
# not their efficiency minded counterparts, since we only train once
# Set aside each canonic texts from training set
def set_canon_aside(tb, df):
train_ids = tb.where(tb['recept']!='addcanon').column('docid')
classify_ids = tb.where(tb['recept']=='addcanon').column('docid')
train_df_ = df.loc[train_ids]
classify_df_ = df.loc[classify_ids]
train_targets_ = tb.where(tb['recept']!='addcanon')['recept']=='reviewed'
return train_df_, classify_df_, train_targets_
# Retrieve the most common words (by document frequency) for a given model
def top_vocab_by_docfreq(df, num_words):
docfreq_df = df > 0
    wordcolumn_sums = docfreq_df.sum()
words_by_freq = wordcolumn_sums.sort_values(ascending=False)
top_words = words_by_freq[:num_words]
top_words_list = top_words.index.tolist()
return top_words_list
# Normalize the model's term frequencies and put them into standard units
def normalize_model(train_df_, classify_df_, vocabulary):
# Select columns for only the most common words
train_df_ = train_df_[vocabulary]
classify_df_ = classify_df_[vocabulary]
# Normalize each value by the sum of all values in its row
train_df_ = train_df_.apply(lambda x: x/sum(x), axis=1)
classify_df_ = classify_df_.apply(lambda x: x/sum(x), axis=1)
# Get mean and stdev for each column
train_mean = np.mean(train_df_)
train_std = np.std(train_df_)
# Transform each value to standard units for its column
train_df_ = ( train_df_ - train_mean ) / train_std
classify_df_ = ( classify_df_ - train_mean ) / train_std
return train_df_, classify_df_
# Train our Logistic Regression Model
clf = LogisticRegression(C = 0.00007)
model_df, classify_df, model_targets = set_canon_aside(metadata_tb, dtm_df)
vocab_list = top_vocab_by_docfreq(model_df, 3200)
model_df, classify_df = normalize_model(model_df, classify_df, vocab_list)
clf.fit(model_df, model_targets)
# Predict whether our new prestigious texts might have been reviewed
probabilities = numpy.zeros([len(canon_tb.column('docid'))])
for _id in classify_df.index.tolist():
text = classify_df.loc[_id]
probability = clf.predict_proba([text])[0][1]
_index = list(canon_tb.column('docid')).index(_id)
probabilities[_index] = probability
# Add this probability as a new column to our table of canonic texts
canon_tb['P(reviewed)'] = probabilities
# Visualize
canon_tb.scatter('firstpub','P(reviewed)', fit_line=True)
## Q. Two of the prestigious texts are assigned less than 50% probability
## that they were reviewed. How do we make sense of that?
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the example above, CC is a coordinating conjunction, RB an adverb, IN a preposition, NN a noun, and JJ an adjective. To see the full definition of any tag, look it up with nltk.help.upenn_tagset('RB').
Step2: The corpus also provides tagged sentences
Step3: Mapping Words to Properties
Step4: Default Dictionary
Step5: Inverting a Dictionary
Step6: A simpler approach
Step7: Summary Dictionary Methods
Step8: Default Tagger
Step9: Regular Expression Tagger
Step10: Unigram Tagger
Step11: The unigram tagger records the most frequent part of speech for each word, so the larger the training data, the more accurate it becomes. For words it has never seen, however, it returns None, which is why we set a backoff: when the unigram tagger cannot decide, another tagger steps in.
Step12: Two key points
Step13: Storing Taggers
Step14: Confusion Matrix
|
<ASSISTANT_TASK:>
Python Code:
import nltk
text = nltk.word_tokenize("And now for something completely different")
nltk.pos_tag(text)
nltk.tag.str2tuple('fly/NN')
# tagged_words() already returns the data as (word, tag) tuples
nltk.corpus.brown.tagged_words()
# the tagset='universal' argument switches to the simplified tagset
nltk.corpus.brown.tagged_words(tagset='universal')
# use FreqDist to count how often each part of speech occurs
tag_fd = nltk.FreqDist(tag for (word, tag) in nltk.corpus.brown.tagged_words(tagset='universal'))
tag_fd.most_common()
%matplotlib inline
tag_fd.plot()
tag_cd = nltk.ConditionalFreqDist(nltk.corpus.brown.tagged_words(tagset='universal'))
# look up the common POS tags of a given word
tag_cd['yield']
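# Quick sketch: the single most frequent tag for a word can be read off the FreqDist directly
tag_cd['yield'].max()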
nltk.corpus.brown.tagged_sents(tagset='universal')[0]
pos = {} # the simplest way to define a dictionary in Python
pos['hello'] = 'world'
pos['right'] = 'here'
pos
[w for w in pos] # iterating with for yields the keys
pos.keys()
pos.items()
pos.values()
pos = dict(hello = 'world', right = 'here') # another way to define a dictionary
pos
f = nltk.defaultdict(int)
f['color'] = 4
f
f['dream'] # 'dream' does not exist, but looking it up adds it automatically
f # after the lookup, 'dream' has been added to the dictionary
f = nltk.defaultdict(lambda: 'xxx')
f['hello'] = 'world'
f
f['here'] = f['here'] + 'comment'
f
old = dict(nltk.corpus.brown.tagged_words()[:100])
new = dict((value, key) for (key, value) in old.items())
new['JJ'] # the inversion works, but each tag only keeps the last word entered
new2 = nltk.defaultdict(list) # missing keys default to an empty list
for (key, value) in old.items():
new2[value].append(key)
new2['JJ']
new3 = nltk.Index((value, key) for (key, value) in old.items())
new3['JJ']
from nltk.corpus import brown
brown_tagged_sents = brown.tagged_sents(categories='news')
brown_sents = brown.sents(categories='news')
tags = [tag for (word, tag) in brown.tagged_words(categories='news')]
nltk.FreqDist(tags).max()
default_tagger = nltk.DefaultTagger('NN') # NN is the most frequent tag, so treat every unknown word as NN
default_tagger.tag(nltk.word_tokenize('i like my mother and dog'))
# the accuracy is of course poor, since only about 13% of the tokens are really NN
default_tagger.evaluate(brown_tagged_sents)
patterns = [
(r'.*ing$', 'VBG'),
(r'.*ed$', 'VBD'),
(r'.*es$', 'VBZ'),
(r'.*ould$', 'MD'),
(r'.*\'s$', 'NN$'),
(r'.*s$', 'NNS'),
(r'^-?[0-9]+(.[0-9]+)?$', 'CD'),
(r'.*', 'NN')
]
regexp_tagger = nltk.RegexpTagger(patterns)
regexp_tagger.tag(nltk.word_tokenize('i could be sleeping in 9 AM'))
regexp_tagger.evaluate(brown_tagged_sents)
unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:500])
unigram_tagger.tag(nltk.word_tokenize('i could be sleeping in 9 AM'))
unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:500],
backoff = regexp_tagger)
unigram_tagger.evaluate(brown_tagged_sents[500:])
unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:4000],
backoff = regexp_tagger)
unigram_tagger.evaluate(brown_tagged_sents[4000:])
bigram_tagger = nltk.BigramTagger(brown.tagged_sents(categories='news')[:4000])
bigram_tagger.tag(nltk.word_tokenize('i could be sleeping in 9 AM'))
bigram_tagger = nltk.BigramTagger(brown.tagged_sents(categories='news')[:4000],
backoff=unigram_tagger)
bigram_tagger.evaluate(brown_tagged_sents[4000:])
from cPickle import dump
output = open('t2.pkl', 'wb')
dump(bigram_tagger, output, -1)
output.close()
from cPickle import load
input = open('t2.pkl', 'rb')
tagger = load(input)
input.close()
tagger.evaluate(brown_tagged_sents[4000:])
brown_sents = brown.sents()
brown_tagged_sents = brown.tagged_sents(tagset = 'universal')
default_tagger = nltk.DefaultTagger('NOUN')
unigram_tagger = nltk.UnigramTagger(brown_tagged_sents[:4000], backoff=default_tagger)
bigram_tagger = nltk.BigramTagger(brown_tagged_sents[:4000], backoff=unigram_tagger)
unigram_tagger.tag(nltk.word_tokenize('I like your mother'))
test = [tag for sent in brown_sents[4000:] for (word, tag) in bigram_tagger.tag(sent)]
gold = [tag for sent in brown_tagged_sents[4000:] for (word, tag) in sent]
print nltk.ConfusionMatrix(gold, test)
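# Optional sketch: overall per-token accuracy computed directly from the two lists;
# float() keeps the division exact under Python 2 as well.
print(float(sum(g == t for g, t in zip(gold, test))) / len(gold))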
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This example features
Step2: Phase 1
Step3: Prepare hyperparameters
Step4: Instantiate client
Step5: Train models
Step6: Revisit Workflow
Step7: Train on full dataset
Step8: Calculate accuracy on full training set
Step9: Phase 2
Step10: Phase 3
Step11: Create and update an endpoint
Step12: Prepare "live" data
Step13: Query deployed model
|
<ASSISTANT_TASK:>
Python Code:
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification"
EXPERIMENT_NAME = "Logistic Regression"
WORKSPACE = "Demos"
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv"
train_data_filename = wget.detect_filename(train_data_url)
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv"
test_data_filename = wget.detect_filename(test_data_url)
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME, workspace=WORKSPACE, public_within_org=True)
expt = client.set_experiment(EXPERIMENT_NAME)
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
    model.fit(X_val_train, y_val_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, model.predict(X_train))
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api, custom_modules=[])
run.log_requirements(requirements)
# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
registered_model = client.get_or_create_registered_model(name="census", workspace=WORKSPACE, public_within_org=True)
registered_model.create_version_from_run(best_run.id, name="v0")
registered_model = client.get_registered_model(name="census", workspace=WORKSPACE)
model_version = registered_model.get_version(name="v0")
endpoint = client.get_or_create_endpoint(path="/census", workspace=WORKSPACE, public_within_org=True)
endpoint.update(model_version, wait=True)
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
model_version
deployed_model = endpoint.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step3: 4) What's the difference between the high and low temperatures for today?
Step4: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
|
<ASSISTANT_TASK:>
Python Code:
# Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)
import requests
!pip3 install requests
#new york
response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889")
data = response.json()
print(data)
# TA-COMMENT: (-0.5) You don't give us the current wind speed!
type(data)
data.keys()
print(data['currently'])
print(data['currently']['temperature']-data['currently']['apparentTemperature'])
print(data['daily'])
type(data['daily'])
data['daily'].keys()
print(data['daily']['data'][0])
type(data['daily']['data'])
print(data['daily']['data'][0]['moonPhase'])
weather_today = data['daily']['data'][0]
print(weather_today['temperatureMax']-weather_today['temperatureMin'])
print(data['daily']['data'])
daily_data = data['daily']['data']
weather_next_week = data['daily']['data']
for weather in weather_next_week:
print(weather['temperatureMax'])
if weather['temperatureMax'] > 84:
print("it's a hot day.")
    elif weather['temperatureMax'] > 74:
print("it's a warm day.")
else:
print("it's a cold day.")
import requests
response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/25.7738889, -80.1938889")
data = response.json()
print(data['hourly'])
data['hourly'].keys()
data['hourly']['data']
for cloudcover in data['hourly']['data']:
if cloudcover['cloudCover'] > 0.5:
print(cloudcover['temperature'], "and cloudy")
else:
print(cloudcover['temperature'])
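# Optional sketch: each hourly entry also carries a Unix 'time' field; converting it
# to a datetime makes the printout easier to follow (field name per the Forecast.io response).
from datetime import datetime
for hour in data['hourly']['data'][:3]:
    print(datetime.fromtimestamp(hour['time']), hour['temperature'])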
import requests
response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,346550400")
data = response.json()
print(data['currently']['temperature'])
response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,662083200")
data = response.json()
print(data['currently']['temperature'])
response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,977702400")
data = response.json()
print(data['currently']['temperature'])
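# Hedged refactor: the three Christmas-day lookups above can be collapsed into one loop over the
# same Unix timestamps (1980, 1990, 2000), reusing the URL and API key already used above.
for ts in [346550400, 662083200, 977702400]:
    url = "https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,{}".format(ts)
    print(requests.get(url).json()['currently']['temperature'])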
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's tackle the list of cafes first. In the cell below, write some code that creates a list of dictionaries with information about each cafe, assigning it to the variable cafe_list. I've written some of the code for you; you just need to fill in the rest. The list should end up looking like this
Step2: Great! In the following cell, write some code that creates a list of cats from the <table> tag on the page, storing them as a list of dictionaries in a variable called cat_list. Again, I've written a bit of the code for you. Expected output
Step3: Problem set #2
Step4: Here's a cell you can run if something goes wrong and you need to rollback the current query session
Step8: In the cell below, you're going to create three tables, necessary to represent the data you scraped above. I've given the basic framework of the Python code and SQL statements to create these tables. I've given the entire CREATE TABLE statement for the cafe table, but for the other two, you'll need to supply the field names and the data types for each column. If you're unsure what to call the fields, or what fields should be in the tables, consult the queries in "The Queries" below. Hints
Step9: After executing the above cell, issuing a \d command in psql should yield something that looks like the following
Step10: Issuing SELECT * FROM cafe in the psql client should yield something that looks like this
Step11: The dictionary maps the name of the cat cafe to its ID in the database. You'll need these values later when you're adding records to the linking table (cat_cafe).
Step12: Issuing a SELECT * FROM cat LIMIT 10 in psql should yield something that looks like this
Step14: In which zip codes can I find a lilac-colored tabby?
Step16: What's the average weight of cats currently residing at all locations?
Step18: Which location has the most cats with tortoiseshell coats?
|
<ASSISTANT_TASK:>
Python Code:
from bs4 import BeautifulSoup
from urllib.request import urlopen
html = urlopen("http://static.decontextualize.com/cats.html").read()
document = BeautifulSoup(html, "html.parser")
cafe_list = list()
cafe_table = document.find('table', {'class': 'cafe-list'})
tbody = cafe_table.find('tbody')
for tr_tag in tbody.find_all('tr'):
    # look up the name and zip cells once per row, so each cafe is appended exactly once
    cafe_name = tr_tag.find('td', {'class': 'name'}).string
    cafe_zip = tr_tag.find('td', {'class': 'zip'}).string
    cafe_dict = {'name': cafe_name, 'zip': cafe_zip}
    cafe_list.append(cafe_dict)
cafe_list
cat_list = list()
cat_table = document.find('table', {'class': 'cat-list'})
tbody = cat_table.find('tbody')
for tr_tag in tbody.find_all('tr'):
cat_dict = {}
cat_dict['birthdate'] = tr_tag.find('td', {'class': 'birthdate'}).string
cat_dict['color'] = tr_tag.find('td', {'class': 'color'}).string
cat_dict['locations'] = tr_tag.find('td', {'class': 'locations'}).string.split(",")
cat_dict['name'] = tr_tag.find('td', {'class': 'name'}).string
cat_dict['pattern'] = tr_tag.find('td', {'class': 'pattern'}).string
cat_dict['weight'] = float(tr_tag.find('td', {'class': 'weight'}).string)
cat_list.append(cat_dict)
cat_list
import pg8000
conn = pg8000.connect(database="catcafes")
conn.rollback()
cursor = conn.cursor()
cursor.execute("""
    CREATE TABLE cafe (
        id serial,
        name varchar(40),
        zip varchar(5)
    )
""")
cursor.execute("""
    CREATE TABLE cat (
        id serial,
        name varchar(50),
        birthdate varchar(50),
        weight float,
        color varchar(50),
        pattern varchar(50)
    )
""")
cursor.execute("""
    CREATE TABLE cat_cafe (
        cat_id int,
        cafe_id int,
        active bool
    )
""")
conn.commit()
cafe_name_id_map = {}
for item in cafe_list:
cursor.execute("INSERT INTO cafe (name, zip) VALUES (%s, %s) RETURNING id",
[str(item['name']), str(item['zip'])])
rowid = cursor.fetchone()[0]
cafe_name_id_map[str(item['name'])] = rowid
conn.commit()
cafe_name_id_map
conn.rollback()
cat_insert = "Insert into cat (name, birthdate, weight, color, pattern) values (%s, %s, %s, %s, %s) returning id"
cat_cafe_insert = "Insert into cat_cafe (cat_id, cafe_id, active) values (%s, %s, %s)"
for cat in cat_list:
cursor.execute(cat_insert, [str(cat['name']), str(cat['birthdate']), float(cat['weight']), str(cat['color']), str(cat['pattern'])])
catrowid = cursor.fetchone()[0]
for cafe in cat['locations']:
for place in cafe_name_id_map:
if place in cafe:
if '*' in cafe:
cursor.execute(cat_cafe_insert, [catrowid, cafe_name_id_map[place], True])
else:
cursor.execute(cat_cafe_insert, [catrowid, cafe_name_id_map[place], False])
conn.commit()
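# Optional sanity check (hedged): confirm the inserts landed by counting rows in each table,
# mirroring the psql SELECTs described in the steps above.
cursor.execute("SELECT count(*) FROM cat")
print("cats inserted:", cursor.fetchone()[0])
cursor.execute("SELECT count(*) FROM cat_cafe")
print("cat/cafe links inserted:", cursor.fetchone()[0])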
cursor.execute("SELECT max(birthdate) FROM cat")
birthdate = cursor.fetchone()[0]
cursor.execute("SELECT name FROM cat WHERE birthdate = %s", [birthdate])
print(cursor.fetchone()[0])
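# Equivalent single-round-trip version (hedged sketch): assuming the birthdate strings sort
# chronologically, as the max() query above already does, one ORDER BY ... LIMIT 1 suffices.
cursor.execute("SELECT name FROM cat ORDER BY birthdate DESC LIMIT 1")
print(cursor.fetchone()[0])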
cursor.execute("""
    SELECT DISTINCT(cafe.zip)
    FROM cat
    JOIN cat_cafe ON cat.id = cat_cafe.cat_id
    JOIN cafe ON cafe.id = cat_cafe.cafe_id
    WHERE cat.color = 'lilac' AND cat.pattern = 'tabby' AND cat_cafe.active = true
""")
print(', '.join([x[0] for x in cursor.fetchall()]))
cursor.execute("""
    SELECT cafe.name, avg(cat.weight)
    FROM cat
    JOIN cat_cafe ON cat.id = cat_cafe.cat_id
    JOIN cafe ON cafe.id = cat_cafe.cafe_id
    WHERE cat_cafe.active = true
    GROUP BY cafe.name
""")
for rec in cursor.fetchall():
print(rec[0]+":", "%0.2f" % rec[1])
cursor.execute("""
    SELECT cafe.name
    FROM cat
    JOIN cat_cafe ON cat.id = cat_cafe.cat_id
    JOIN cafe ON cafe.id = cat_cafe.cafe_id
    WHERE cat_cafe.active = true AND cat.pattern = 'tortoiseshell'
    GROUP BY cafe.name
    ORDER BY count(cat.name) DESC
    LIMIT 1
""")
print(cursor.fetchone()[0])
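# Tidy-up (hedged): once the queries are done, release the database resources.
cursor.close()
conn.close()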
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: results in a list of strings, we can loop over
Step2: We can ask Python to take different actions, depending on a condition, with an if statement
Step3: second line of code above uses keyword if to denote choice
Step4: we can also chain several tests together using elif, short for else if
Step5: NOTE
Step6: while or is true if at least one part is true
Step7: Challenge - making choices
Step8: Challenge - making choices 2
|
<ASSISTANT_TASK:>
Python Code:
import glob
import numpy

print(glob.glob('data/inflammation*.csv'))
# loop here
counter = 0
for filename in glob.glob('data/*.csv'):
#counter+=1
counter = counter + 1
print("number of files:", counter)
counter = 0
for filename in glob.glob('data/infl*.csv'):
#counter+=1
counter = counter + 1
print("number of files:", counter)
counter = 0
for filename in glob.glob('data/infl*.csv'):
#counter+=1
data = numpy.loadtxt(fname=filename, delimiter=',')
print(filename, "mean is: ", data.mean())
counter = counter + 1
print("number of files:", counter)
#We use an if statement to take different actions
#based on conditions
num = 37
if num > 100:
print('greater')
else:
print('not greater')
print('done')
num = 53
print('before conditional...')
if num > 100:
print('53 is greater than 100')
print('...after conditional')
num = -3
if num > 0:
print(num, "is positive")
elif num == 0:
print(num, "is zero")
else:
print(num, "is negative")
if (1 > 0) and (-1 > 0):
print('both parts are true')
else:
print('at least one part is false')
if (1 < 0) or (-1 < 0):
print('at least one test is true')
if 4 > 5:
print('A')
elif 4 == 5:
print('B')
elif 4 < 5:
print('C')
if '':
print('empty string is true')
if 'word':
print('word is true')
if []:
print('empty list is true')
if [1, 2, 3]:
print('non-empty list is true')
if 0:
print('zero is true')
if 1:
print('one is true')
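# Hedged sketch tying the glob loop and the conditionals together: flag inflammation files whose
# data looks suspicious (the data/inflammation*.csv files and the specific checks are assumptions
# based on the files used earlier in this notebook).
for filename in glob.glob('data/inflammation*.csv'):
    data = numpy.loadtxt(fname=filename, delimiter=',')
    if numpy.max(data, axis=0)[0] == 0 and numpy.max(data, axis=0)[20] == 20:
        print(filename, 'suspicious looking maxima!')
    elif numpy.sum(numpy.min(data, axis=0)) == 0:
        print(filename, 'minima add up to zero!')
    else:
        print(filename, 'seems OK')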
<END_TASK>
|