<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import nibabel as nib
from scipy import linalg
import mne
from mne.io.constants import FIFF
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
trans_fname = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
raw = mne.io.read_raw_fif(raw_fname)
trans = mne.read_trans(trans_fname)
src = mne.read_source_spaces(op.join(subjects_dir, 'sample', 'bem',
'sample-oct-6-src.fif'))
# load the T1 file and change the header information to the correct units
t1w = nib.load(op.join(data_path, 'subjects', 'sample', 'mri', 'T1.mgz'))
t1w = nib.Nifti1Image(t1w.dataobj, t1w.affine)
t1w.header['xyzt_units'] = np.array(10, dtype='uint8')
t1_mgh = nib.MGHImage(t1w.dataobj, t1w.affine)
fig = mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
subjects_dir=subjects_dir, surfaces='head-dense',
show_axes=True, dig=True, eeg=[], meg='sensors',
coord_frame='meg')
mne.viz.set_3d_view(fig, 45, 90, distance=0.6, focalpoint=(0., 0., 0.))
print('Distance from head origin to MEG origin: %0.1f mm'
% (1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3])))
print('Distance from head origin to MRI origin: %0.1f mm'
% (1000 * np.linalg.norm(trans['trans'][:3, 3])))
dists = mne.dig_mri_distances(raw.info, trans, 'sample',
subjects_dir=subjects_dir)
print('Distance from %s digitized points to head surface: %0.1f mm'
% (len(dists), 1000 * np.mean(dists)))
mne.viz.plot_alignment(raw.info, trans=None, subject='sample', src=src,
subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
src=src, subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
# the head surface is stored in "mri" coordinate frame
# (origin at center of volume, units=mm)
seghead_rr, seghead_tri = mne.read_surface(
op.join(subjects_dir, 'sample', 'surf', 'lh.seghead'))
# to put the scalp in the "head" coordinate frame, we apply the inverse of
# the precomputed `trans` (which maps head → mri)
mri_to_head = linalg.inv(trans['trans'])
scalp_pts_in_head_coord = mne.transforms.apply_trans(
mri_to_head, seghead_rr, move=True)
# to put the scalp in the "meg" coordinate frame, we use the inverse of
# raw.info['dev_head_t']
head_to_meg = linalg.inv(raw.info['dev_head_t']['trans'])
scalp_pts_in_meg_coord = mne.transforms.apply_trans(
head_to_meg, scalp_pts_in_head_coord, move=True)
# The "mri_voxel"→"mri" transform is embedded in the header of the T1 image
# file. We'll invert it and then apply it to the original `seghead_rr` points.
# No unit conversion necessary: this transform expects mm and the scalp surface
# is defined in mm.
vox_to_mri = t1_mgh.header.get_vox2ras_tkr()
mri_to_vox = linalg.inv(vox_to_mri)
scalp_points_in_vox = mne.transforms.apply_trans(
mri_to_vox, seghead_rr, move=True)
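The comments above chain several 4×4 homogeneous transforms. As a quick numpy-only sanity check (independent of MNE), applying such a transform is a matrix multiply on coordinates augmented with a 1, and composing a transform with its inverse is the identity, which is exactly how `mri_to_head` is derived from `trans`:

```python
import numpy as np

def apply_affine(trans, points):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    points = np.atleast_2d(points).astype(float)
    homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (homog @ trans.T)[:, :3]

# a pure translation by (1, 2, 3), a toy stand-in for a head -> MRI trans
trans = np.eye(4)
trans[:3, 3] = [1.0, 2.0, 3.0]
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
moved = apply_affine(trans, pts)
# applying the inverted transform recovers the original points
back = apply_affine(np.linalg.inv(trans), moved)
assert np.allclose(back, pts)
```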
def add_head(renderer, points, color, opacity=0.95):
renderer.mesh(*points.T, triangles=seghead_tri, color=color,
opacity=opacity)
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(600, 600), bgcolor='w', scene=False)
add_head(renderer, seghead_rr, 'gray')
add_head(renderer, scalp_pts_in_meg_coord, 'blue')
add_head(renderer, scalp_pts_in_head_coord, 'pink')
add_head(renderer, scalp_points_in_vox, 'green')
mne.viz.set_3d_view(figure=renderer.figure, distance=800,
focalpoint=(0., 30., 30.), elevation=105, azimuth=180)
renderer.show()
# get the nasion
nasion = [p for p in raw.info['dig'] if
p['kind'] == FIFF.FIFFV_POINT_CARDINAL and
p['ident'] == FIFF.FIFFV_POINT_NASION][0]
assert nasion['coord_frame'] == FIFF.FIFFV_COORD_HEAD
nasion = nasion['r'] # get just the XYZ values
# transform it from head to MRI space (recall that `trans` is head → mri)
nasion_mri = mne.transforms.apply_trans(trans, nasion, move=True)
# then transform to voxel space, after converting from meters to millimeters
nasion_vox = mne.transforms.apply_trans(
mri_to_vox, nasion_mri * 1e3, move=True)
# plot it to make sure the transforms worked
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(400, 400), bgcolor='w', scene=False)
add_head(renderer, scalp_points_in_vox, 'green', opacity=1)
renderer.sphere(center=nasion_vox, color='orange', scale=10)
mne.viz.set_3d_view(figure=renderer.figure, distance=600.,
focalpoint=(0., 125., 250.), elevation=45, azimuth=180)
renderer.show()
# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)
sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto')
src = mne.setup_volume_source_space(sphere=sphere, pos=10.)
mne.viz.plot_alignment(
raw.info, eeg='projected', bem=sphere, src=src, dig=True,
surfaces=['brain', 'outer_skin'], coord_frame='meg', show_axes=True)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: .. raw
Step2: Coordinate frame definitions
Step3: A good example
Step4: Visualizing the transformations
Step5: Now that we've transformed all the points, let's plot them. We'll use the
Step6: The relative orientations of the coordinate frames can be inferred by
Step7: Defining the head↔MRI trans using the GUI
Step8: Alignment without MRI
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import networkx as nx
G = nx.Graph() # G = nx.DiGraph() for a directed network
# add an (isolated) node
G.add_node("spam")
# add nodes and an edge
G.add_edge(1,2)
print(G.nodes())
print(G.edges())
# draw the network
nx.draw(G, with_labels = True)
G = nx.Graph()
n = 0
with open ('/Users/chengjun/bigdata/www.dat.gz.txt') as f:
for line in f:
n += 1
#if n % 10**4 == 0:
#flushPrint(n)
x, y = line.rstrip().split(' ')
G.add_edge(x,y)
nx.info(G)
G = nx.karate_club_graph()
clubs = [G.node[i]['club'] for i in G.nodes()]
colors = []
for j in clubs:
if j == 'Mr. Hi':
colors.append('r')
else:
colors.append('g')
nx.draw(G, with_labels = True, node_color = colors)
G.node[1] # attributes of node 1
G.edge.keys()[:3] # ids of the first three edges
nx.info(G)
G.nodes()[:10]
G.edges()[:3]
G.neighbors(1)
nx.average_shortest_path_length(G)
nx.diameter(G) # diameter of G (length of the longest shortest path)
nx.density(G)
nodeNum = len(G.nodes())
edgeNum = len(G.edges())
2.0*edgeNum/(nodeNum * (nodeNum - 1))
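The hand-computed value above is the density formula for an undirected simple graph (present edges over possible edges), which matches what `nx.density` returns. A tiny self-contained check using the karate club's well-known counts (34 nodes, 78 edges):

```python
def graph_density(num_nodes, num_edges):
    """Density of an undirected simple graph: edges / possible edges."""
    if num_nodes < 2:
        return 0.0
    return 2.0 * num_edges / (num_nodes * (num_nodes - 1))

# Zachary's karate club: 34 nodes, 78 edges
d = graph_density(34, 78)
print(round(d, 4))  # 0.139
```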
cc = nx.clustering(G)
cc.items()[:5]
plt.hist(cc.values(), bins = 15)
plt.xlabel('$Clustering \, Coefficient, \, C$', fontsize = 20)
plt.ylabel('$Frequency, \, F$', fontsize = 20)
plt.show()
# M. E. J. Newman, Mixing patterns in networks Physical Review E, 67 026126, 2003
nx.degree_assortativity_coefficient(G) # degree assortativity of the graph
Ge=nx.Graph()
Ge.add_nodes_from([0,1],size=2)
Ge.add_nodes_from([2,3],size=3)
Ge.add_edges_from([(0,1),(2,3)])
print(nx.numeric_assortativity_coefficient(Ge,'size'))
# plot degree correlation
from collections import defaultdict
import numpy as np
l=defaultdict(list)
g = nx.karate_club_graph()
for i in g.nodes():
k = []
for j in g.neighbors(i):
k.append(g.degree(j))
l[g.degree(i)].append(np.mean(k))
#l.append([g.degree(i),np.mean(k)])
x = l.keys()
y = [np.mean(i) for i in l.values()]
#x, y = np.array(l).T
plt.plot(x, y, 'r-o', label = '$Karate\;Club$')
plt.legend(loc=1,fontsize=10, numpoints=1)
plt.xscale('log'); plt.yscale('log')
plt.ylabel(r'$<k_{nn}(k)>$', fontsize = 20)
plt.xlabel('$k$', fontsize = 20)
plt.show()
dc = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness= nx.betweenness_centrality(G)
fig = plt.figure(figsize=(15, 4),facecolor='white')
ax = plt.subplot(1, 3, 1)
plt.hist(dc.values(), bins = 20)
plt.xlabel('$Degree \, Centrality$', fontsize = 20)
plt.ylabel('$Frequency, \, F$', fontsize = 20)
ax = plt.subplot(1, 3, 2)
plt.hist(closeness.values(), bins = 20)
plt.xlabel('$Closeness \, Centrality$', fontsize = 20)
ax = plt.subplot(1, 3, 3)
plt.hist(betweenness.values(), bins = 20)
plt.xlabel('$Betweenness \, Centrality$', fontsize = 20)
plt.tight_layout()
plt.show()
fig = plt.figure(figsize=(15, 8),facecolor='white')
for k in betweenness:
plt.scatter(dc[k], closeness[k], s = betweenness[k]*1000)
plt.text(dc[k], closeness[k]+0.02, str(k))
plt.xlabel('$Degree \, Centrality$', fontsize = 20)
plt.ylabel('$Closeness \, Centrality$', fontsize = 20)
plt.show()
from collections import defaultdict
import numpy as np
def plotDegreeDistribution(G):
degs = defaultdict(int)
for i in G.degree().values(): degs[i]+=1
items = sorted ( degs.items () )
x, y = np.array(items).T
y_sum = np.sum(y)
y = [float(i)/y_sum for i in y]
plt.plot(x, y, 'b-o')
plt.xscale('log')
plt.yscale('log')
plt.legend(['Degree'])
plt.xlabel('$K$', fontsize = 20)
plt.ylabel('$P_K$', fontsize = 20)
plt.title('$Degree\,Distribution$', fontsize = 20)
plt.show()
G = nx.karate_club_graph()
plotDegreeDistribution(G)
import networkx as nx
import matplotlib.pyplot as plt
RG = nx.random_graphs.random_regular_graph(3,200) # regular graph RG: 200 nodes, each with 3 neighbors
pos = nx.spectral_layout(RG) # pick a layout; spectral here (other layouts appear below; note how the pictures differ)
nx.draw(RG,pos,with_labels=False,node_size = 30) # draw; with_labels toggles node labels, node_size sets node diameter
plt.show() # show the figure
plotDegreeDistribution(RG)
import networkx as nx
import matplotlib.pyplot as plt
ER = nx.random_graphs.erdos_renyi_graph(200,0.05) # ER random graph: 200 nodes, edge probability 0.05
pos = nx.shell_layout(ER) # shell layout
nx.draw(ER,pos,with_labels=False,node_size = 30)
plt.show()
plotDegreeDistribution(ER)
import networkx as nx
import matplotlib.pyplot as plt
WS = nx.random_graphs.watts_strogatz_graph(200,4,0.3) # small-world network: 200 nodes, 4 nearest neighbors, rewiring probability 0.3
pos = nx.circular_layout(WS) # circular layout
nx.draw(WS,pos,with_labels=False,node_size = 30) # draw the graph
plt.show()
plotDegreeDistribution(WS)
nx.diameter(WS)
cc = nx.clustering(WS)
plt.hist(cc.values(), bins = 10)
plt.xlabel('$Clustering \, Coefficient, \, C$', fontsize = 20)
plt.ylabel('$Frequency, \, F$', fontsize = 20)
plt.show()
import numpy as np
np.mean(cc.values())
import networkx as nx
import matplotlib.pyplot as plt
BA= nx.random_graphs.barabasi_albert_graph(200,2) # BA scale-free network with n=200, m=2
pos = nx.spring_layout(BA) # spring layout
nx.draw(BA,pos,with_labels=False,node_size = 30) # draw the graph
plt.show()
plotDegreeDistribution(BA)
BA= nx.random_graphs.barabasi_albert_graph(20000,2) # BA scale-free network with n=20000, m=2
plotDegreeDistribution(BA)
import networkx as nx
import matplotlib.pyplot as plt
BA= nx.random_graphs.barabasi_albert_graph(500,1) # BA scale-free network with n=500, m=1
pos = nx.spring_layout(BA) # spring layout
nx.draw(BA,pos,with_labels=False,node_size = 30) # draw the graph
plt.show()
nx.degree_histogram(BA)[:3]
BA.degree().items()[:3]
plt.hist(BA.degree().values())
plt.show()
from collections import defaultdict
import numpy as np
def plotDegreeDistributionLongTail(G):
degs = defaultdict(int)
for i in G.degree().values(): degs[i]+=1
items = sorted ( degs.items () )
x, y = np.array(items).T
y_sum = np.sum(y)
y = [float(i)/y_sum for i in y]
plt.plot(x, y, 'b-o')
plt.legend(['Degree'])
plt.xlabel('$K$', fontsize = 20)
plt.ylabel('$P_K$', fontsize = 20)
plt.title('$Degree\,Distribution$', fontsize = 20)
plt.show()
BA= nx.random_graphs.barabasi_albert_graph(5000,2) # BA scale-free network with n=5000, m=2
plotDegreeDistributionLongTail(BA)
def plotDegreeDistribution(G):
degs = defaultdict(int)
for i in G.degree().values(): degs[i]+=1
items = sorted ( degs.items () )
x, y = np.array(items).T
y_sum = np.sum(y)
y = [float(i)/y_sum for i in y]
plt.plot(x, y, 'b-o')
plt.xscale('log')
plt.yscale('log')
plt.legend(['Degree'])
plt.xlabel('$K$', fontsize = 20)
plt.ylabel('$P_K$', fontsize = 20)
plt.title('$Degree\,Distribution$', fontsize = 20)
plt.show()
BA= nx.random_graphs.barabasi_albert_graph(50000,2) # BA scale-free network with n=50000, m=2
plotDegreeDistribution(BA)
Ns = [i*10 for i in [1, 10, 100, 1000]]
ds = []
for N in Ns:
print(N)
BA= nx.random_graphs.barabasi_albert_graph(N,2)
d = nx.average_shortest_path_length(BA)
ds.append(d)
plt.plot(Ns, ds, 'r-o')
plt.xlabel('$N$', fontsize = 20)
plt.ylabel('$<d>$', fontsize = 20)
plt.xscale('log')
plt.show()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: WWW Data download
Step2: Describing the network
Step3: Network diameter
Step4: Density
Step5: Exercise:
Step6: Spacing in Math Mode
Step7: Degree centrality measures.(度中心性)
Step8: 度分布
Step9: A brief introduction to network science
Step10: ER random networks
Step11: Small-world networks
Step12: BA networks
Step13: Exercise:
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
from ensae_teaching_cs.data import generate_sells
import pandas
df = pandas.DataFrame(generate_sells())
df.head()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
df.iloc[-30:].set_index('date').plot(ax=ax[0])
df.set_index('date').plot(ax=ax[1])
ax[0].set_title("chiffre d'affaire sur le dernier mois")
ax[1].set_title("chiffre d'affaire sur deux ans");
from statsmodels.tsa.tsatools import detrend
notrend = detrend(df.value, order=1)
df["notrend"] = notrend
df["trend"] = df['value'] - notrend
ax = df.plot(x="date", y=["value", "trend"], figsize=(14,4))
ax.set_title('tendance');
from statsmodels.tsa.stattools import acf
cor = acf(df.value)
cor
fig, ax = plt.subplots(1, 1, figsize=(14,2))
ax.plot(cor)
ax.set_title("Autocorrélogramme");
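Autocorrelation at lag k is simply the correlation of the series with a copy of itself shifted by k. A minimal numpy sketch (not statsmodels' exact estimator, but the same idea):

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation at a given lag (mean-removed),
    following the usual acf convention up to edge effects."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    if lag == 0:
        return 1.0
    return np.sum(x[lag:] * x[:-lag]) / np.sum(x * x)

# a pure 7-day cycle correlates almost perfectly with itself at lag 7
x = np.tile([5.0, 3.0, 2.0, 4.0, 6.0, 8.0, 1.0], 20)
print(round(autocorr(x, 7), 3))  # 0.95 (below 1 only because the ends don't overlap)
```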
from statsmodels.tsa.stattools import pacf
from statsmodels.graphics.tsaplots import plot_pacf
plot_pacf(df.value, lags=50);
df["weekday"] = df.date.dt.weekday
df.head()
df_nosunday = df[df.weekday != 6]
df_nosunday.head(n=10)
fig, ax = plt.subplots(1, 1, figsize=(14,2))
cor = acf(df_nosunday.value)
ax.plot(cor)
ax.set_title("Autocorrélogramme");
plot_pacf(df_nosunday.value, lags=50);
from statsmodels.tsa.seasonal import seasonal_decompose
res = seasonal_decompose(df_nosunday.value, freq=7)
res.plot();
plt.plot(res.seasonal[-30:])
plt.title("Saisonnalité");
cor = acf(res.trend[5:-5]);
plt.plot(cor);
res_year = seasonal_decompose(res.trend[5:-5], freq=25)
res_year.plot();
from statsmodels.tsa.stattools import kpss
kpss(res.trend[5:-5])
from numpy.random import randn
bruit = randn(1000)
kpss(bruit)
from numpy.random import randn
from numpy import arange
bruit = randn(1000) * 100 + arange(1000) / 10
kpss(bruit)
from statsmodels.tsa.tsatools import lagmat
lag = 8
X = lagmat(df_nosunday["value"], lag)
lagged = df_nosunday.copy()
for c in range(1,lag+1):
lagged["lag%d" % c] = X[:, c-1]
lagged.tail()
lagged["weekday"] = lagged.date.dt.weekday
X = lagged.drop(["date", "value", "notrend", "trend"], axis=1)
Y = lagged["value"]
X.shape, Y.shape
from numpy import corrcoef
corrcoef(X)
X.columns
from sklearn.linear_model import LinearRegression
clr = LinearRegression()
clr.fit(X, Y)
from sklearn.metrics import r2_score
r2_score(Y, clr.predict(X))
clr.coef_
for i in range(1, X.shape[1]):
print("X(t-%d)" % (i), r2_score(Y, X.iloc[:, i]))
n = X.shape[0]
X_train = X.iloc[:n * 2//3]
X_test = X.iloc[n * 2//3:]
Y_train = Y[:n * 2//3]
Y_test = Y[n * 2//3:]
from sklearn.model_selection import TimeSeriesSplit
tscv = TimeSeriesSplit(n_splits=5)
for train_index, test_index in tscv.split(lagged):
data_train, data_test = lagged.iloc[train_index, :], lagged.iloc[test_index, :]
print("TRAIN:", data_train.shape, "TEST:", data_test.shape)
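`TimeSeriesSplit` produces an expanding training window, each time followed by the next contiguous chunk as test, so the model is never evaluated on the past. The index pattern can be sketched without sklearn (this mimics the default behavior on evenly divisible sizes; sklearn also handles uneven remainders):

```python
def time_series_splits(n_samples, n_splits):
    """Yield (train_indices, test_indices) with an expanding train window,
    mimicking sklearn's TimeSeriesSplit on evenly divisible sizes."""
    fold = n_samples // (n_splits + 1)
    for i in range(1, n_splits + 1):
        train = list(range(0, i * fold))
        test = list(range(i * fold, min((i + 1) * fold, n_samples)))
        yield train, test

for train_idx, test_idx in time_series_splits(10, 4):
    print(len(train_idx), len(test_idx))  # 2 2, 4 2, 6 2, 8 2
```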
import warnings
from sklearn.ensemble import RandomForestRegressor
clr = RandomForestRegressor()
def train_test(clr, train_index, test_index):
data_train = lagged.iloc[train_index, :]
data_test = lagged.iloc[test_index, :]
clr.fit(data_train.drop(["value", "date", "notrend", "trend"],
axis=1),
data_train.value)
r2 = r2_score(data_test.value,
clr.predict(data_test.drop(["value", "date", "notrend",
"trend"], axis=1).values))
return r2
warnings.simplefilter("ignore")
last_test_index = None
for train_index, test_index in tscv.split(lagged):
r2 = train_test(clr, train_index, test_index)
if last_test_index is not None:
r2_prime = train_test(clr, last_test_index, test_index)
print(r2, r2_prime)
else:
print(r2)
last_test_index = test_index
from sklearn.metrics import r2_score
r2 = r2_score(data_test.value,
clr.predict(data_test.drop(["value", "date", "notrend",
"trend"], axis=1).values))
r2
for i in range(1, 9):
print(i, ":", r2_score(data_test.value, data_test["lag%d" % i]))
lagged[:5]
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
cols = ['lag1', 'lag2', 'lag3',
'lag4', 'lag5', 'lag6', 'lag7', 'lag8']
ct = ColumnTransformer(
[('pass', "passthrough", cols),
("dummies", OneHotEncoder(), ["weekday"])])
pred = ct.fit(lagged).transform(lagged[:5])
pred
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA, TruncatedSVD
cols = ['lag1', 'lag2', 'lag3',
'lag4', 'lag5', 'lag6', 'lag7', 'lag8']
model = make_pipeline(
make_pipeline(
ColumnTransformer(
[('pass', "passthrough", cols),
("dummies", make_pipeline(OneHotEncoder(),
TruncatedSVD(n_components=2)), ["weekday"])]),
LinearRegression()))
model.fit(lagged, lagged["value"])
from mlinsights.plotting import pipeline2dot
dot = pipeline2dot(model, lagged)
from jyquickhelper import RenderJsDot
RenderJsDot(dot)
r2_score(lagged['value'], model.predict(lagged))
from jinja2 import Template
template = Template('Hello {{ name }}!')
template.render(name='John Doe')
template = Template("""
{{ name }}
{{ "-" * len(name) }}
Possède :
{% for i in range(len(meubles)) %}
- {{meubles[i]}}{% endfor %}
""")
meubles = ['table', "tabouret"]
print(template.render(name='John Doe Doe', len=len,
meubles=meubles))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: First plots
Step3: The series has a rough trend; we can compute a trend of order 1, 2, ...
Step4: Autocorrelations...
Step5: The first seasonality shows up at 7, 14, 21... The partial autocorrelations confirm it: about 7 days.
Step6: Since nothing happens on Sundays, it is better to drop them. Keeping the zeros would rule out multiplicative models.
Step7: We decompose the series into trend + seasonality. Summers and Christmas stand out.
Step8: We now look for the seasonality of the series once its weekly trend is removed. The monthly seasonality reappears.
Step9: Stationarity test
Step10: Since this is not always easy to interpret, we simulate a Gaussian random variable, hence one with no trend.
Step11: And then a series with a strong trend.
Step12: A large value indicates a trend, and this series clearly has one.
Step13: We add (or overwrite) the day of the week, used as an extra feature.
Step14: Strange to see so many large values: the trend is too strong for meaningful correlations; it would be better to start over with the series $\Delta Y_t = Y_t - Y_{t-1}$. Anyway, moving on...
Step15: A linear regression, because linear models are always a good baseline, and since we know the simulated model we won't do much better.
Step16: We recover the seasonality: $Y_t$ and $Y_{t-6}$ are closely tied.
Step17: Previously (last year, in fact), I built two datasets, train and test, like this
Step18: Then scikit-learn came along with TimeSeriesSplit.
Step19: And we fit a random forest...
Step20: Two years cut into 5, i.e. roughly every 5 months; this split sometimes includes Christmas, sometimes summer, so performance is very sensitive to it.
Step21: We compare with the $r_2$ obtained when using $Y_{t-1}$, $Y_{t-2}$, ... $Y_{t-d}$ as the prediction.
Step22: The day of the week is really a categorical variable, so we create one column per day.
Step23: We put everything in a pipeline because it is prettier, and more practical too.
Step24: It is easier to see visually.
Step26: Templating
<ASSISTANT_TASK:>
Python Code:
w = "file.txt"
x = "r"
with open(w,x) as y:
z = y.read()
# "r"
# .read()
# .readlines()
# "w", "a"
# .write(stuff)
w = "file.txt"
x = "w"
with open(w,x) as y:
for i in range(3):
y.write(f"{i}")
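The mode strings listed in the comments above behave differently: "w" truncates, "a" appends, and .readlines() returns the file as a list of lines. A quick sketch (demo.txt is just a throwaway filename):

```python
with open("demo.txt", "w") as f:   # "w" creates/overwrites
    f.write("first\n")
with open("demo.txt", "a") as f:   # "a" appends to the end
    f.write("second\n")
with open("demo.txt", "r") as f:   # "r" reads
    lines = f.readlines()
print(lines)  # ['first\n', 'second\n']
```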
grades = ['A', 'B+','A','C+','B-']
grades[:2]
#List operators
x = [1,2,3]
y = [4,5,3]
z = x+y
z
items = ['a','b','c','d']
items.append('w')
items.remove('c')
items.insert(0,'x')
items.pop(1)
print(items)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A. w - variable that holds the filename
Step2: A. 0 1 2 3
Step3: A. ['A','B+','A']
Step4: Built-In List Functions
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import sklearn
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
ad_data = pd.read_csv('./advertising.csv')
ad_data.head()
ad_data.info()
ad_data.describe()
ad_data['Age'].plot(kind='hist', bins=40)
sns.jointplot(x='Age', y='Area Income', data=ad_data)
sns.jointplot(x='Age', y='Daily Time Spent on Site', data=ad_data, kind='kde', color='red')
sns.jointplot(x='Daily Time Spent on Site', y='Daily Internet Usage', data=ad_data, color='green')
sns.pairplot(data = ad_data, hue='Clicked on Ad')
from sklearn.model_selection import train_test_split
ad_data.info()
X = ad_data[['Daily Time Spent on Site','Age','Area Income','Daily Internet Usage','Male']]
y = ad_data['Clicked on Ad']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=33)
(X_train.shape, X_test.shape)
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train, y_train)
logmodel.intercept_
logmodel.intercept_scaling
logmodel.coef_
logmodel.coef_.round(4)
# coefficients after shuffling the train test dataset
logmodel.coef_.round(4)
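The fitted intercept_ and coef_ inspected above fully determine the model: the predicted probability is a sigmoid of the linear score. A hand-rolled sketch (the coefficient values below are made up for illustration, not the fitted ones):

```python
import math

def predict_proba(coefs, intercept, x):
    """Probability of the positive class: sigmoid of the linear score."""
    z = intercept + sum(c * v for c, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical (not fitted) coefficients for the five features used above
coefs = [-0.06, 0.27, -1.4e-5, -0.025, 0.01]
intercept = 10.0
x = [65.0, 35.0, 55000.0, 180.0, 1.0]
p = predict_proba(coefs, intercept, x)
assert 0.0 < p < 1.0
```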
train_predictions = logmodel.predict(X_train)
test_predictions = logmodel.predict(X_test)
from sklearn.metrics import classification_report
print("Training errors")
print(classification_report(y_true=y_train,
y_pred=train_predictions,
labels=[0,1],
target_names=['Not clicked on ad','Clicked on ad']))
print("Test errors")
print(classification_report(y_true=y_test,
y_pred=test_predictions,
labels=[0,1],
target_names=['Not clicked on ad','Clicked on ad']))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the Data
Step2: Check the head of ad_data
Step3: Use info and describe() on ad_data
Step4: Exploratory Data Analysis
Step5: Create a jointplot showing Area Income versus Age.
Step6: Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age.
Step7: Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'
Step8: Finally, create a pairplot with the hue defined by the 'Clicked on Ad' column feature.
Step9: Logistic Regression
Step10: Train and fit a logistic regression model on the training set.
Step11: Predictions and Evaluations
Step12: Create a classification report for the model.
<ASSISTANT_TASK:>
Python Code:
!pip install -U -q tensorflow
!pip install -U -q tensorflow_data_validation
!pip install -U -q pandas
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "sa-data-validation"
BUCKET = "sa-data-validation"
BQ_DATASET_NAME = 'prediction_logs'
BQ_VIEW_NAME = 'vw_covertype_classifier_logs_v1'
MODEL_NAME = 'covertype_classifier'
MODEL_VERSION = 'v1'
!gcloud config set project $PROJECT_ID
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
import os
import tensorflow as tf
import tensorflow_data_validation as tfdv
from tensorflow_metadata.proto.v0 import schema_pb2, statistics_pb2, anomalies_pb2
import apache_beam as beam
import pandas as pd
from datetime import datetime
import json
import numpy as np
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
print("TF version: {}".format(tf.__version__))
print("TFDV version: {}".format(tfdv.__version__))
print("Beam version: {}".format(beam.__version__))
WORKSPACE = './workspace'
DATA_DIR = os.path.join(WORKSPACE, 'data')
TRAIN_DATA = os.path.join(DATA_DIR, 'train.csv')
if tf.io.gfile.exists(WORKSPACE):
print("Removing previous workspace artifacts...")
tf.io.gfile.rmtree(WORKSPACE)
print("Creating a new workspace...")
tf.io.gfile.makedirs(WORKSPACE)
tf.io.gfile.makedirs(DATA_DIR)
!gsutil cp gs://workshop-datasets/covertype/data_validation/training/dataset.csv {TRAIN_DATA}
!wc -l {TRAIN_DATA}
sample = pd.read_csv(TRAIN_DATA).head()
sample.T
baseline_stats = tfdv.generate_statistics_from_csv(
data_location=TRAIN_DATA,
stats_options = tfdv.StatsOptions(
sample_count=10000
)
)
reference_schema = tfdv.infer_schema(baseline_stats)
# Set Soil_Type to be categorical
tfdv.set_domain(reference_schema, 'Soil_Type', schema_pb2.IntDomain(
name='Soil_Type', is_categorical=True))
# Set Cover_Type to be categorical
tfdv.set_domain(reference_schema, 'Cover_Type', schema_pb2.IntDomain(
name='Cover_Type', is_categorical=True))
baseline_stats = tfdv.generate_statistics_from_csv(
data_location=TRAIN_DATA,
stats_options=tfdv.StatsOptions(
schema=reference_schema,
sample_count=10000
)
)
reference_schema = tfdv.infer_schema(baseline_stats)
# Set Soil_Type to be categorical
tfdv.set_domain(reference_schema, 'Soil_Type', schema_pb2.IntDomain(
name='Soil_Type', is_categorical=True))
# Set Cover_Type to be categorical
tfdv.set_domain(reference_schema, 'Cover_Type', schema_pb2.IntDomain(
name='Cover_Type', is_categorical=True))
# Set max and min values for Elevation
tfdv.set_domain(reference_schema,
'Elevation',
tfdv.utils.schema_util.schema_pb2.IntDomain(
min=1000,
max=5000))
# Allow no missing values
tfdv.get_feature(reference_schema,
'Slope').presence.min_fraction = 1.0
# Set distribution skew detector for Wilderness_Area
tfdv.get_feature(reference_schema,
'Wilderness_Area').skew_comparator.infinity_norm.threshold = 0.05
tfdv.display_schema(
schema=reference_schema)
tfdv.visualize_statistics(baseline_stats)
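The skew_comparator.infinity_norm.threshold set in the schema above flags an anomaly when the L∞ distance between the baseline and serving value distributions of Wilderness_Area exceeds 0.05. The statistic itself is simple; a sketch of the idea (not TFDV's implementation, and the proportions below are invented):

```python
def linf_distance(p, q):
    """L-infinity distance between two discrete distributions
    given as dicts mapping category -> probability."""
    cats = set(p) | set(q)
    return max(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in cats)

baseline = {"Rawah": 0.45, "Commanche": 0.40, "Cache": 0.15}
serving = {"Rawah": 0.30, "Commanche": 0.50, "Cache": 0.20}
dist = linf_distance(baseline, serving)  # ~0.15, above the 0.05 threshold
```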
TARGET_FEATURE_NAME = 'Cover_Type'
FEATURE_NAMES = [feature.name for feature in reference_schema.feature
if feature.name != TARGET_FEATURE_NAME]
def generate_query(source, features, target, start_time, end_time):
query = """
SELECT
FORMAT_TIMESTAMP('%Y-%m-%d', time) AS time,
{},
predicted_class AS {}
FROM `{}`
WHERE time BETWEEN '{}' AND '{}'
;
""".format(features, target, source, start_time, end_time)
return query
start_time = '2020-05-01 00:00:00 UTC'
end_time = '2020-07-01 00:50:00 UTC'
source = "{}.{}".format(BQ_DATASET_NAME, BQ_VIEW_NAME)
features = ', '.join(FEATURE_NAMES)
query = generate_query(source, features, TARGET_FEATURE_NAME, start_time, end_time)
serving_data = pd.io.gbq.read_gbq(
query, project_id=PROJECT_ID)
print(len(serving_data.index))
serving_data.head(5).T
serving_data_file = os.path.join(DATA_DIR, 'serving.csv')
serving_data.to_csv(serving_data_file, index=False)
slice_fn = tfdv.get_feature_value_slicer(features={'time': None})
serving_stats_list = tfdv.generate_statistics_from_csv(
data_location=serving_data_file,
stats_options=tfdv.StatsOptions(
slice_functions=[slice_fn],
schema=reference_schema
)
)
slice_keys = sorted([dataset.name for dataset in serving_stats_list.datasets])
slice_keys
anomalies_list = []
for slice_key in slice_keys[1:]:
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
anomalies = tfdv.validate_statistics(
serving_stats,
schema=reference_schema,
previous_statistics=baseline_stats
)
anomalies_list.append(anomalies)
slice_key = slice_keys[1]
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
tfdv.visualize_statistics(
baseline_stats, serving_stats, 'baseline', 'current')
slice_key = slice_keys[-1]
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
tfdv.visualize_statistics(
baseline_stats, serving_stats, 'baseline', 'current')
for i, anomalies in enumerate(anomalies_list):
tfdv.utils.anomalies_util.remove_anomaly_types(
anomalies, [anomalies_pb2.AnomalyInfo.SCHEMA_NEW_COLUMN])
print("Anomalies for {}".format(slice_keys[i+1]), )
tfdv.display_anomalies(anomalies)
categorical_features = [
feature.steps()[0]
for feature in tfdv.utils.schema_util.get_categorical_features(
reference_schema)
]
baseline_means = dict()
for feature in baseline_stats.datasets[0].features:
if feature.path.step[0] == 'time': continue
if feature.path.step[0] not in categorical_features:
mean = feature.num_stats.mean
baseline_means[feature.path.step[0]] = mean
from collections import defaultdict
feature_means = defaultdict(list)
for slice_key in slice_keys[1:]:
ds = tfdv.get_slice_stats(serving_stats_list, slice_key).datasets[0]
for feature in ds.features:
if feature.path.step[0] == 'time': continue
if feature.path.step[0] not in categorical_features:
mean = feature.num_stats.mean
feature_means[feature.path.step[0]].append(mean)
import matplotlib.pyplot as plt
dataframe = pd.DataFrame(feature_means, index=slice_keys[1:])
num_features = len(feature_means)
ncolumns = 3
nrows = int(num_features // ncolumns) + 1
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(25, 25))
for i, col in enumerate(dataframe.columns[:num_features]):
r = i // ncolumns
c = i % ncolumns
p = dataframe[col].plot.line(ax=axes[r][c], title=col, rot=10)
p.hlines(baseline_means[col], xmin=0, xmax=len(dataframe.index), color='red')
p.text(0, baseline_means[col], 'baseline mean', fontsize=15)
categorical_feature_stats = dict()
for feature_name in categorical_features:
categorical_feature_stats[feature_name] = dict()
for slice_key in slice_keys[1:]:
categorical_feature_stats[feature_name][slice_key] = dict()
ds = tfdv.get_slice_stats(serving_stats_list, slice_key).datasets[0]
for feature in ds.features:
if feature.path.step[0] == feature_name:
val_freq = list(feature.string_stats.top_values)
for item in val_freq:
categorical_feature_stats[feature_name][slice_key][item.value] = item.frequency
break
num_features = len(categorical_features)
ncolumns = 2
nrows = int(num_features // ncolumns) + 1
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(25, 15))
for i, feature_name in enumerate(categorical_features):
dataframe = pd.DataFrame(
categorical_feature_stats[feature_name]).T
r = i // ncolumns
c = i % ncolumns
dataframe.plot.bar(ax=axes[r][c], stacked=True, rot=10)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configure Google Cloud environment settings
Step2: Authenticate your GCP account
Step3: Import libraries
Step4: Create a local workspace
Step5: Part 1
Step6: 2. Compute baseline statistics
Step7: 3. Generate reference schema
Step8: Display the reference schema
Step9: Visualize baseline statistics
Step10: Part 2
Step12: 2.1. Read serving data from BigQuery
Step13: 2.2. Save serving data to CSV
Step14: 3. Compute Statistics from Serving Data
Step15: 4. Validate Serving Statistics
Step16: Part 2
Step17: Visualize statistics for a time window with skewed data points
Step18: 2. Display Anomalies
Step19: 3. Analyze Statistics Change Over time
Step20: Get mean values from baseline statistics
Step21: 3.3. Categorical feature distribution over time
<ASSISTANT_TASK:>
Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
if 'COLAB_GPU' in os.environ: # this is always set on Colab, the value is 0 or 1 depending on whether a GPU is attached
from google.colab import auth
auth.authenticate_user()
# download "sidecar files" since on Colab, this notebook will be on Drive
!rm -rf txtclsmodel
!git clone --depth 1 https://github.com/GoogleCloudPlatform/training-data-analyst
!mv training-data-analyst/courses/machine_learning/deepdive/09_sequence/txtclsmodel/ .
!rm -rf training-data-analyst
# downgrade TensorFlow to the version this notebook has been tested with
!pip install --upgrade tensorflow==$TFVERSION
import tensorflow as tf
print(tf.__version__)
%load_ext google.cloud.bigquery
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 10
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)
query = """
SELECT source, LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title FROM
(SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
title
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
)
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
"""
df = bq.query(query + " LIMIT 5").to_dataframe()
df.head()
traindf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0").to_dataframe()
evaldf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0").to_dataframe()
traindf['source'].value_counts()
evaldf['source'].value_counts()
import os, shutil
DATADIR='data/txtcls'
shutil.rmtree(DATADIR, ignore_errors=True)
os.makedirs(DATADIR)
traindf.to_csv( os.path.join(DATADIR,'train.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
evaldf.to_csv( os.path.join(DATADIR,'eval.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
!head -3 data/txtcls/train.tsv
!wc -l data/txtcls/*.tsv
%%bash
source activate py2env # gcloud uses python2 by default
## Make sure we have the latest version of Google Cloud Storage package
pip install --upgrade google-cloud-storage
rm -rf txtcls_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
-- \
--output_dir=${PWD}/txtcls_trained \
--train_data_path=${PWD}/data/txtcls/train.tsv \
--eval_data_path=${PWD}/data/txtcls/eval.tsv \
--num_epochs=0.1
%%bash
gsutil cp data/txtcls/*.tsv gs://${BUCKET}/txtcls/
%%bash
OUTDIR=gs://${BUCKET}/txtcls/trained_fromscratch
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
--job-dir=$OUTDIR \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_data_path=gs://${BUCKET}/txtcls/train.tsv \
--eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \
--num_epochs=5
!gcloud ml-engine jobs describe txtcls_190209_224828
from google.datalab.ml import TensorBoard
TensorBoard().start('gs://{}/txtcls/trained_fromscratch'.format(BUCKET))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print('Stopped TensorBoard with pid {}'.format(pid))
%%bash
gsutil ls gs://${BUCKET}/txtcls/trained_fromscratch/export/exporter/
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="v1_fromscratch"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/txtcls/trained_fromscratch/export/exporter/ | tail -1)
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME} --quiet
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
techcrunch=[
'Uber shuts down self-driving trucks unit',
'Grover raises €37M Series A to offer latest tech products as a subscription',
'Tech companies can now bid on the Pentagon’s $10B cloud contract'
]
nytimes=[
'‘Lopping,’ ‘Tips’ and the ‘Z-List’: Bias Lawsuit Explores Harvard’s Admissions',
'A $3B Plan to Turn Hoover Dam into a Giant Battery',
'A MeToo Reckoning in China’s Workplace Amid Wave of Accusations'
]
github=[
'Show HN: Moon – 3kb JavaScript UI compiler',
'Show HN: Hello, a CLI tool for managing social media',
'Firefox Nightly added support for time-travel debugging'
]
import pickle
from tensorflow.python.keras.preprocessing import sequence
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
requests = techcrunch+nytimes+github
# Tokenize and pad sentences using same mapping used in the deployed model
tokenizer = pickle.load( open( "txtclsmodel/tokenizer.pickled", "rb" ) )
requests_tokenized = tokenizer.texts_to_sequences(requests)
requests_tokenized = sequence.pad_sequences(requests_tokenized,maxlen=50)
# JSON format the requests
request_data = {'instances':requests_tokenized.tolist()}
# Authenticate and call CMLE prediction API
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
parent = 'projects/%s/models/%s' % (PROJECT, 'txtcls') #version is not specified so uses default
response = api.projects().predict(body=request_data, name=parent).execute()
# Format and print response
for i in range(len(requests)):
print('\n{}'.format(requests[i]))
print(' github : {}'.format(response['predictions'][i]['dense_1'][0]))
print(' nytimes : {}'.format(response['predictions'][i]['dense_1'][1]))
print(' techcrunch: {}'.format(response['predictions'][i]['dense_1'][2]))
!gsutil cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt gs://$BUCKET/txtcls/
<END_TASK>
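The notebook splits train/eval rows with `ABS(MOD(FARM_FINGERPRINT(title), 4))`, so each title lands in the same bucket on every run. A sketch of the same repeatable-split idea in plain Python; note md5 is only a stand-in here, not the FARM_FINGERPRINT hash BigQuery actually uses:

```python
import hashlib

def hash_bucket(text, num_buckets=4):
    # Deterministic bucket for a title: same input always maps to the
    # same bucket, which keeps train/eval membership stable across runs.
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

titles = ["Uber shuts down self-driving trucks unit",
          "Show HN: Moon - 3kb JavaScript UI compiler",
          "A $3B Plan to Turn Hoover Dam into a Giant Battery"]
train = [t for t in titles if hash_bucket(t) > 0]    # ~75% of rows
evalset = [t for t in titles if hash_bucket(t) == 0]  # ~25% of rows
```

Hashing on the title (rather than random sampling) also means re-running the query never moves a row between splits.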
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.
Step2: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step4: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step5: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).
Step6: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.
Step7: Finally we will save our data, which is currently in-memory, to disk.
Step8: TensorFlow/Keras Code
Step9: Train on the Cloud
Step10: Change the job name appropriately. View the job in the console, and wait until the job is complete.
Step11: Monitor training with TensorBoard
Step12: Results
Step13: We will take the last export and deploy it as a REST API using Google Cloud Machine Learning Engine
Step14: Get Predictions
Step15: Our serving input function expects the already tokenized representations of the headlines, so we do that pre-processing in the code before calling the REST API.
Step16: How many of your predictions were correct?
|
6,107
|
<ASSISTANT_TASK:>
Python Code:
from six.moves import range
all_divides = lambda m, *numbers: all(m % n == 0 for n in numbers)
all_divides(2520, *range(1, 10))
# First we need a predicate to test
# if all elements of a list are equal
# There are a number of ways to do this
pairs = lambda lst: zip(lst[1:], lst[:-1])
all_equals = lambda lst: all(x == y for x, y in pairs(lst))
all_equals = lambda lst: lst[1:] == lst[:-1]
all_equals = lambda lst: len(set(lst)) < 2
# We'll also need argmin. Note that NumPy
# comes bundled with all of these, but
# they're trivial, why not implement them ourselves!
argmin = lambda lst: lst.index(min(lst))
def _lcm_recursive(nums, nums_new):
if all_equals(nums_new):
# return any element
# why not the first one
return nums_new[0]
k = argmin(nums_new)
nums_new[k] += nums[k]
    return _lcm_recursive(nums, nums_new)
def _lcm_iterative(nums):
nums_new = list(nums) # remember to use list for deep copy
while not all_equals(nums_new):
k = argmin(nums_new)
nums_new[k] += nums[k]
return nums_new[0]
# comment one out
lcm = lambda *nums: _lcm_recursive(list(nums), list(nums))
lcm = lambda *nums: _lcm_iterative(nums)
lcm(4, 7, 12, 21, 42)
lcm(*range(1, 10+1))
lcm(*range(1, 20))
%load_ext autoreload
%autoreload 2
from common.utils import prime_range, reconstruct
from collections import defaultdict, Counter
def _lcm_prime_divisors(nums):
divides_count = Counter()
for p in prime_range(max(nums)+1):
for n in nums:
tmp = 0
while n % p == 0:
tmp += 1
n /= p
if tmp > divides_count[p]:
divides_count[p] = tmp
return reconstruct(divides_count)
lcm = lambda *nums: _lcm_prime_divisors(nums)
lcm(4, 7, 12, 21, 42)
lcm(*range(1, 11))
lcm(*range(1, 21))
# TODO
<END_TASK>
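As a footnote to the approaches above, the usual closed-form route goes through the gcd, since lcm(a, b) = a·b / gcd(a, b). A minimal sketch folding that identity over all arguments:

```python
from functools import reduce
from math import gcd

def lcm(*nums):
    # lcm(a, b) = a * b // gcd(a, b), folded left-to-right over all arguments
    return reduce(lambda a, b: a * b // gcd(a, b), nums)

print(lcm(*range(1, 11)))  # 2520
print(lcm(*range(1, 21)))  # 232792560
```

This matches the prime-factorization result while avoiding both the slow increment loop and an explicit prime sieve.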
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The least common multiple of the numbers 1 to 10 is 2520. We are asked to find that of the numbers 1 to 20.
Step2: This is way too slow! Let's try something else!
Step3: MUCH better.
|
6,108
|
<ASSISTANT_TASK:>
Python Code:
from trappy.stats.Topology import Topology
from bart.sched.SchedMultiAssert import SchedMultiAssert
from bart.sched.SchedAssert import SchedAssert
import trappy
import os
import operator
import json
#Define a CPU Topology (for multi-cluster systems)
BIG = [1, 2]
LITTLE = [0, 3, 4, 5]
CLUSTERS = [BIG, LITTLE]
topology = Topology(clusters=CLUSTERS)
BASE_PATH = "/Users/kapileshwarsingh/AnalysisRawData/LPC/sched_deadline/"
THRESHOLD = 10.0
def between_threshold(a, b):
return abs(((a - b) * 100.0) / b) < THRESHOLD
TRACE_FILE = os.path.join(BASE_PATH, "yield")
ftrace = trappy.FTrace(TRACE_FILE, "cpuhog")
# Assert Period
s = SchedMultiAssert(ftrace, topology, execnames="periodic_yield")
if s.assertPeriod(30, between_threshold, rank=1):
print "PASS: Period"
print json.dumps(s.getPeriod(), indent=3)
print ""
# Assert DutyCycle
if s.assertDutyCycle(1, operator.lt, window=(0,4), rank=2):
print "PASS: DutyCycle"
print json.dumps(s.getDutyCycle(window=(0,4)), indent=3)
TRACE_FILE = os.path.join(BASE_PATH, "cpuhog")
ftrace = trappy.FTrace(TRACE_FILE, "cpuhog")
s = SchedMultiAssert(ftrace, topology, execnames="cpuhog")
s.plot().view()
# Assert DutyCycle
if s.assertDutyCycle(10, between_threshold, window=(0, 5), rank=1):
print "PASS: DutyCycle"
print json.dumps(s.getDutyCycle(window=(0, 5)), indent=3)
TRACE_FILE = os.path.join(BASE_PATH, "cancel_dl_timer")
ftrace = trappy.FTrace(TRACE_FILE, "cpuhog")
s = SchedAssert(ftrace, topology, execname="cpuhog")
s.plot().view()
NUM_PHASES = 10
PHASE_DURATION = 2
start = s.getStartTime()
DUTY_CYCLE_FACTOR = 10
for phase in range(NUM_PHASES + 1):
window = (start + (phase * PHASE_DURATION),
start + ((phase + 1) * PHASE_DURATION))
if phase % 2 == 0:
DUTY_CYCLE = (phase + 2) * DUTY_CYCLE_FACTOR / 2
else:
DUTY_CYCLE = 100
print "WINDOW -> [{:.2f}, {:.2f}]".format(window[0],
window[1])
if s.assertDutyCycle(DUTY_CYCLE, between_threshold, window=window):
print "PASS: Expected={} Actual={:.2f} THRESHOLD={}".format(DUTY_CYCLE,
s.getDutyCycle(window=window),
THRESHOLD)
else:
print "FAIL: Expected={} Actual={:.2f} THRESHOLD={}".format(DUTY_CYCLE,
s.getDutyCycle(window=window),
THRESHOLD)
print ""
<END_TASK>
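Every assertion above hinges on the percentage-tolerance predicate `between_threshold`. Its logic in isolation, as a small self-contained sketch:

```python
def within_pct(actual, expected, threshold_pct=10.0):
    # True when actual deviates from expected by less than threshold_pct
    # percent -- the same check the notebook's between_threshold performs.
    return abs((actual - expected) * 100.0 / expected) < threshold_pct

print(within_pct(29.0, 30.0))  # True: ~3.3% off the expected period
print(within_pct(40.0, 30.0))  # False: ~33% off
```

A relative tolerance like this is what lets one THRESHOLD value serve both the 30 ms period check and the per-phase duty-cycle checks, which span very different magnitudes.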
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Periodic Yield
Step2: CPU Hog
Step3: Changing Reservations
|
6,109
|
<ASSISTANT_TASK:>
Python Code:
from qiskit_aqua_chemistry import AquaChemistry
# Input dictionary to configure Qiskit Aqua Chemistry for the chemistry problem.
aqua_chemistry_dict = {
'problem': {'random_seed': 50},
'driver': {'name': 'PYSCF'},
'PYSCF': {'atom': 'O 0.0 0.0 0.0; H 0.757 0.586 0.0; H -0.757 0.586 0.0', 'basis': 'sto-3g'},
'operator': {'name': 'hamiltonian', 'freeze_core': True},
'algorithm': {'name': 'ExactEigensolver'}
}
solver = AquaChemistry()
result = solver.run(aqua_chemistry_dict)
print('Ground state energy: {}'.format(result['energy']))
for line in result['printable']:
print(line)
aqua_chemistry_dict['algorithm']['name'] = 'VQE'
aqua_chemistry_dict['optimizer'] = {'name': 'COBYLA', 'maxiter': 25000}
aqua_chemistry_dict['variational_form'] = {'name': 'UCCSD'}
aqua_chemistry_dict['initial_state'] = {'name': 'HartreeFock'}
solver = AquaChemistry()
result = solver.run(aqua_chemistry_dict)
print('Ground state energy: {}'.format(result['energy']))
for line in result['printable']:
print(line)
print('Actual VQE evaluations taken: {}'.format(result['algorithm_retvals']['eval_count']))
<END_TASK>
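The workflow above reuses one configuration dict, mutating a few keys between the ExactEigensolver and VQE runs. A hedged sketch of doing that non-destructively with a recursive merge (the key names below simply mirror the dict in this notebook):

```python
import copy

base_cfg = {
    "problem": {"random_seed": 50},
    "algorithm": {"name": "ExactEigensolver"},
}

# Overrides like the ones applied before the VQE run above
overrides = {
    "algorithm": {"name": "VQE"},
    "optimizer": {"name": "COBYLA", "maxiter": 25000},
}

def deep_update(cfg, overrides):
    # Recursively merge overrides into cfg, adding new sections as needed
    for key, val in overrides.items():
        if isinstance(val, dict) and isinstance(cfg.get(key), dict):
            deep_update(cfg[key], val)
        else:
            cfg[key] = val
    return cfg

vqe_cfg = deep_update(copy.deepcopy(base_cfg), overrides)
print(vqe_cfg["algorithm"]["name"])  # VQE
```

Deep-copying first keeps the reference ExactEigensolver configuration intact, so both runs can be repeated from the same base.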
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With the above input problem dictionary for water we now create an AquaChemistry object and call run on it passing in the dictionary to get a result. We use ExactEigensolver first as a reference.
Step2: The run method returns a result dictionary. Some notable fields include 'energy' which is the computed ground state energy.
Step3: There is also a 'printable' field containing a complete ready to print readable result
Step4: We update the dictionary, for VQE with UCCSD, and run the computation again.
|
6,110
|
<ASSISTANT_TASK:>
Python Code:
# Create a csv file
text = (
'col1,col2,col3\n'
'hello,5/4/82,1\n'
'one,1/1/15,2\n'
'happy,7/4/92,3\n')
f = StringIO(text)
f.seek(0)
# Load the file
records = io.read_csv(f)
# Records are an iterator over the rows
row = next(records)
row
# Replace first row so as not to loose any data
records = pr.prepend(records, row)
# Guess column types. Note: `detect_types` returns a new `records`
# generator since it consumes rows during type detection
records, result = pr.detect_types(records)
result['types']
# Now type cast the records. Note: most `meza.process` functions return
# generators, so lets wrap the result in a list to view the data
casted = list(pr.type_cast(records, **result))
casted[0]
# Cut out the first column of data
cut_recs = pr.cut(casted, ['col1'], exclude=True)
cut_recs
# merge the cut rows to and get the max value of the remaining columns.
# Note: since `merge` (by definition) will always contain just one row,
# it is returned as is (not wrapped in a generator)
merged = pr.merge(cut_recs, pred=bool, op=max)
merged
# Now write merged data back to a new csv file.
f = StringIO()
io.write(f, cv.records2csv([merged]))
f.getvalue()
# Read a file like object and de-duplicate the header
f = StringIO('col,col\nhello,world\n')
records = io.read_csv(f, dedupe=True)
# View the first row
next(records)
# Create some data in the same structure as what the various `read...`
# functions output
header = ['A', 'B', 'C', 'D']
data = [
[0.5607, 0.9338, 0.4769, 0.7804],
[0.8227, 0.2844, 0.8166, 0.7323],
[0.4627, 0.8633, 0.3283, 0.1909],
[0.3932, 0.5454, 0.9604, 0.6376],
[0.3685, 0.9166, 0.9457, 0.8066],
[0.7584, 0.6981, 0.5625, 0.3578],
[0.8959, 0.6932, 0.2565, 0.3378]]
df = [dict(zip(header, d)) for d in data]
df[0]
# Sort records by the value of column `B` --> df.sort_values(by='B')
next(pr.sort(df, 'B'))
# Select column `A` --> df['A']
next(pr.cut(df, ['A']))
# Select the first 3 rows of data --> df[0:3]
list(it.islice(df, 3))
# Select all data whose value for column `A` is less than 0.5 --> df[df.A < 0.5]
next(pr.tfilter(df, 'A', lambda x: x < 0.5))
# Calculate the mean of column `A` across all data --> df.mean()['A']
pr.aggregate(df, 'A', stats.mean)['A']
# Calculate the sum of each column across all data --> df.sum()
pr.merge(df, pred=bool, op=sum)
# First create a few simple csv files
f1 = StringIO('col_1,col_2,col_3\n1,dill,male\n2,bob,male\n3,jane,female\n')
f2 = StringIO('col_1,col_2,col_3\n4,tom,male\n5,dick,male\n6,jill,female\n')
f1.seek(0)
f2.seek(0)
# First, let's combine the files into one iterator --> csvstack file1.csv file2.csv
records = io.join(f1, f2, ext='csv')
next(records)
next(it.islice(records, 4, None))
# Now let's create a persistant records list so we can do some simple manipulations
f1.seek(0)
records = list(io.read_csv(f1))
# Sort records by the value of column `col_2` --> csvsort -c col_2 file1.csv
next(pr.sort(records, 'col_2'))
# Select column `col_2` --> csvcut -c col_2 file1.csv
next(pr.cut(records, ['col_2']))
# Select all data whose value for column `col_2` contains `jan`
# --> csvgrep -c col_2 -m jan file1.csv
next(pr.grep(records, [{'pattern': 'jan'}], ['col_2']))
# Convert a csv file to json --> csvjson -i 4 file1.csv
f_json = StringIO()
io.write(f_json, cv.records2json(records))
f_json.getvalue()
# First create a geojson file
f = StringIO(
'{"type": "FeatureCollection","features": ['
'{"type": "Feature", "id": 11, "geometry": '
'{"type": "Point", "coordinates": [10, 20]}},'
'{"type": "Feature", "id": 12, "geometry": '
'{"type": "Point", "coordinates": [5, 15]}}]}')
f.seek(0)
# Load the geojson file and peek at the results
records, peek = pr.peek(io.read_geojson(f))
peek[0]
# Split the records by feature ``id`` and select the first feature --> geojsplit -k id file.geojson
splits = pr.split(records, 'id')
feature_records, name = next(splits)
name
# Convert the feature records into a GeoJSON file-like object
geojson = cv.records2geojson(feature_records)
geojson.readline()
# First let's create a simple tsv file like object
f = StringIO('col1\tcol2\nhello\tworld\n')
f.seek(0)
# Next create a records list so we can reuse it
records = list(io.read_tsv(f))
records[0]
# Create a csv file like object
f_out = cv.records2csv(records)
f_out.readline().strip()
# Create a json file like object
f_out = cv.records2json(records)
f_out.readline()
# First create some records and types. Also, convert the records to a list
# so we can reuse them.
records = [{'a': 'one', 'b': 2}, {'a': 'five', 'b': 10, 'c': 20.1}]
records, result = pr.detect_types(records)
records, types = list(records), result['types']
types
# Convert records to a list of arrays
narray = cv.records2array(records, types, native=True)
narray
# Convert list of arrays back to records
next(cv.array2records(narray, native=True))
# Convert a 2-D array to records
arr = [(1, 4), (2, 5), (3, 6)]
data = [array('i', a) for a in arr]
data
next(cv.array2records(data, native=True))
<END_TASK>
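Much of the workflow above leans on `pr.detect_types` / `pr.type_cast` to turn CSV strings into typed values. A toy version of that inference, just to illustrate the try-progressively-looser-casts idea (meza's real implementation also handles dates, booleans, etc.):

```python
def guess_type(value):
    # Try progressively stricter-to-looser casts; fall back to str.
    for cast in (int, float):
        try:
            cast(value)
            return cast
        except ValueError:
            pass
    return str

def type_cast_record(record):
    # Cast every string value in one record to its guessed type
    return {k: guess_type(v)(v) for k, v in record.items()}

row = type_cast_record({"col2": "2", "col3": "20.1", "col1": "hello"})
print(row)  # {'col2': 2, 'col3': 20.1, 'col1': 'hello'}
```

Like meza, this keeps records as plain dicts, so typed rows flow straight into the same `cut`/`merge`/`sort` style of processing shown above.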
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: now run some operations on the type casted data
Step2: Reading data
Step3: Processing data
Step4: Note
Step5: Text processing (à la csvkit)
Step6: Now we can read the files, manipulate the data, convert the manipulated data to json, and write the json back to a new file. Also, note that since all readers return equivalent records iterators, you can use them interchangeably (in place of read_csv) to open any supported file. E.g., read_xls, read_sqlite, etc.
Step7: Geo processing (à la mapbox)
Step8: Now we can open the file, split the data by id, and finally convert the split data to a new geojson file-like object.
Step9: Note
Step10: Now we're ready to write the records data to file
Step11: Interoperability
|
6,111
|
<ASSISTANT_TASK:>
Python Code:
sample_docs = [
'The quick brown fox jumped over the lazy dog',
'The dog jumped over squirrel',
'Four score and seven years ago'
]
# First we'll vectorize our documents, as we did last week
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(sample_docs).toarray()
print features
# We'll use a helpful scikit-learn function to calculate their pairwise distances, starting with Euclidean
euclidean_distances = pairwise_distances(features, metric='euclidean')
print euclidean_distances
cosine_distances = pairwise_distances(features, metric='cosine')
print cosine_distances
emails = open('data/jeb_subjects.csv').read().split('\n')
print emails [:100]
# As usual, we'll vectorize our documents first
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails).toarray()
# Now we'll use k-means to try splitting them up into 20 groups
number_of_clusters = 20
kmeans = KMeans(n_clusters=number_of_clusters)
kmeans.fit(features)
print clusters_to_csv(kmeans.labels_, emails)
# TF-IDF is invoked in the same way as a count vectorizer
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(emails).toarray()
# Let's try 50 clusters
number_of_clusters = 50
kmeans = KMeans(n_clusters=number_of_clusters)
kmeans.fit(features)
print clusters_to_csv(kmeans.labels_, emails)
<END_TASK>
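The cosine distances that `pairwise_distances(features, metric='cosine')` reports can be reproduced by hand for a single pair of term-count vectors, which makes the metric's behavior concrete:

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity between two term-count vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

print(cosine_distance([1, 0], [1, 0]))  # 0.0 (same direction)
print(cosine_distance([1, 0], [0, 1]))  # 1.0 (orthogonal, no shared terms)
```

Because only the angle matters, scaling a vector (e.g. a document repeated five times) leaves the distance unchanged, which is why cosine tends to behave better than Euclidean for documents of different lengths.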
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll use those sample docs to start. Intuitively, you should be able to see that documents 0 and 1 have some similar elements ("dog," "jumped over," etc.) but document 2 is pretty different from the rest. Let's quantify that using two different distance measures
Step2: Euclidean distance
Step3: According to our Euclidean distance measure, document 0 and document 1 are 2.8 units apart, documents 0 and 2 are 4.1 units apart and documents 1 and 2 are 3.6 units apart. So this definitely captures the distances we're looking for. But in practice, there's another similarity measure that's more often used for looking at documents, known as cosine similarity.
Step4: In practice, either one of these metrics can work for document similarity tasks. For now it's mostly important to know that there's more than one definition of similarity. Usually I start with cosine distance and test other metrics to see which work best for the task at hand.
Step5: You'll see that some clusters here make sense and others don't, which is partly a result of the size of the dataset we're using and the fact that subject lines are often too short to provide much meaningful clustering information. But you should also be able to see some clusters of documents that appear to make sense.
|
6,112
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import cross_validation, grid_search, linear_model, metrics, pipeline, preprocessing
import numpy as np
import pandas as pd
%pylab inline
raw_data = pd.read_csv('bike_sharing_demand.csv', header = 0, sep = ',')
raw_data.head()
raw_data.datetime = raw_data.datetime.apply(pd.to_datetime)
raw_data['month'] = raw_data.datetime.apply(lambda x : x.month)
raw_data['hour'] = raw_data.datetime.apply(lambda x : x.hour)
train_data = raw_data.iloc[:-1000, :]
hold_out_test_data = raw_data.iloc[-1000:, :]
print raw_data.shape, train_data.shape, hold_out_test_data.shape
# training data
train_labels = train_data['count'].values
train_data = train_data.drop(['datetime', 'count', 'casual', 'registered'], axis = 1)
# test data
test_labels = hold_out_test_data['count'].values
test_data = hold_out_test_data.drop(['datetime', 'count', 'casual', 'registered'], axis = 1)
binary_data_columns = ['holiday', 'workingday']
binary_data_indices = np.array([(column in binary_data_columns) for column in train_data.columns], dtype = bool)
print binary_data_columns
print binary_data_indices
categorical_data_columns = ['season', 'weather', 'month']
categorical_data_indices = np.array([(column in categorical_data_columns) for column in train_data.columns], dtype = bool)
print categorical_data_columns
print categorical_data_indices
numeric_data_columns = ['temp', 'atemp', 'humidity', 'windspeed', 'hour']
numeric_data_indices = np.array([(column in numeric_data_columns) for column in train_data.columns], dtype = bool)
print numeric_data_columns
print numeric_data_indices
regressor = linear_model.SGDRegressor(random_state = 0, n_iter = 3, loss = 'squared_loss', penalty = 'l2')
estimator = pipeline.Pipeline(steps = [
('feature_processing', pipeline.FeatureUnion(transformer_list = [
#binary
('binary_variables_processing', preprocessing.FunctionTransformer(lambda data: data[:, binary_data_indices])),
#numeric
('numeric_variables_processing', pipeline.Pipeline(steps = [
('selecting', preprocessing.FunctionTransformer(lambda data: data[:, numeric_data_indices])),
('scaling', preprocessing.StandardScaler(with_mean = 0))
])),
#categorical
('categorical_variables_processing', pipeline.Pipeline(steps = [
('selecting', preprocessing.FunctionTransformer(lambda data: data[:, categorical_data_indices])),
('hot_encoding', preprocessing.OneHotEncoder(handle_unknown = 'ignore'))
])),
])),
('model_fitting', regressor)
]
)
estimator.fit(train_data, train_labels)
metrics.mean_absolute_error(test_labels, estimator.predict(test_data))
estimator.get_params().keys()
parameters_grid = {
    'model_fitting__alpha' : [0.0001, 0.001, 0.1],
'model_fitting__eta0' : [0.001, 0.05],
}
grid_cv = grid_search.GridSearchCV(estimator, parameters_grid, scoring = 'mean_absolute_error', cv = 4)
%%time
grid_cv.fit(train_data, train_labels)
print grid_cv.best_score_
print grid_cv.best_params_
test_predictions = grid_cv.best_estimator_.predict(test_data)
metrics.mean_absolute_error(test_labels, test_predictions)
print test_labels[:20]
print test_predictions[:20]
pylab.figure(figsize=(8, 6))
pylab.grid(True)
pylab.xlim(-100,1100)
pylab.ylim(-100,1100)
pylab.scatter(train_labels, grid_cv.best_estimator_.predict(train_data), alpha=0.5, color = 'red')
pylab.scatter(test_labels, grid_cv.best_estimator_.predict(test_data), alpha=0.5, color = 'blue')
from sklearn.ensemble import RandomForestRegressor
regressor = RandomForestRegressor(random_state = 0, max_depth = 20, n_estimators = 50)
estimator = pipeline.Pipeline(steps = [
('feature_processing', pipeline.FeatureUnion(transformer_list = [
#binary
('binary_variables_processing', preprocessing.FunctionTransformer(lambda data: data[:, binary_data_indices])),
#numeric
('numeric_variables_processing', pipeline.Pipeline(steps = [
('selecting', preprocessing.FunctionTransformer(lambda data: data[:, numeric_data_indices])),
('scaling', preprocessing.StandardScaler(with_mean = 0, with_std = 1))
])),
#categorical
('categorical_variables_processing', pipeline.Pipeline(steps = [
('selecting', preprocessing.FunctionTransformer(lambda data: data[:, categorical_data_indices])),
('hot_encoding', preprocessing.OneHotEncoder(handle_unknown = 'ignore'))
])),
])),
('model_fitting', regressor)
]
)
estimator.fit(train_data, train_labels)
metrics.mean_absolute_error(test_labels, estimator.predict(test_data))
test_labels[:10]
estimator.predict(test_data)[:10]
pylab.figure(figsize=(16, 6))
pylab.subplot(1,2,1)
pylab.grid(True)
pylab.xlim(-100,1100)
pylab.ylim(-100,1100)
pylab.scatter(train_labels, grid_cv.best_estimator_.predict(train_data), alpha=0.5, color = 'red')
pylab.scatter(test_labels, grid_cv.best_estimator_.predict(test_data), alpha=0.5, color = 'blue')
pylab.title('linear model')
pylab.subplot(1,2,2)
pylab.grid(True)
pylab.xlim(-100,1100)
pylab.ylim(-100,1100)
pylab.scatter(train_labels, estimator.predict(train_data), alpha=0.5, color = 'red')
pylab.scatter(test_labels, estimator.predict(test_data), alpha=0.5, color = 'blue')
pylab.title('random forest model')
<END_TASK>
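Both models above are scored with `metrics.mean_absolute_error`; the metric itself is simple enough to spell out in plain Python, which makes the reported numbers easy to sanity-check:

```python
def mean_absolute_error(y_true, y_pred):
    # Mean of absolute residuals -- the same quantity
    # sklearn's metrics.mean_absolute_error returns.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / float(len(y_true))

print(mean_absolute_error([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))  # 0.5
```

MAE keeps the error in the target's own units (bike rentals per hour here), which is why it reads more naturally than squared-error metrics when comparing the linear and random-forest models.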
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data
Step2: Data preprocessing
Step3: Pipeline
Step4: Parameter tuning
Step5: Evaluation on the hold-out test set
Step6: Another model
6,113
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/cifar/cifar-10-python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 15
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
import numpy
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
# TODO: Implement Function
x = numpy.array(x)
x_normed = (x - x.min(0)) / x.ptp(0)
return x_normed
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
nb_classes = 10
targets = numpy.array(x).reshape(-1)
one_hot_targets = numpy.eye(nb_classes)[targets]
return one_hot_targets
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
return tf.placeholder(tf.float32, shape=(None, image_shape[0],
image_shape[1], image_shape[2]), name="x")
def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
return tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
batch_size, in_width, in_height, in_depth = x_tensor.get_shape().as_list()
# small stddev keeps initial activations in a trainable range (truncated_normal defaults to stddev=1.0)
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1],
                                           in_depth, conv_num_outputs], stddev=0.05))
biases = tf.Variable(tf.zeros(conv_num_outputs))
conv = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1],
padding='SAME')
conv = tf.nn.bias_add(conv, biases)
conv = tf.nn.relu(conv)
filter_shape = [1, pool_ksize[0], pool_ksize[1], 1]
strides = [1, pool_strides[0], pool_strides[1], 1]
return tf.nn.max_pool(conv, filter_shape, strides, 'SAME')
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
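conv2d_maxpool above uses 'SAME' padding throughout, so the spatial output size depends only on the stride. A small sketch, assuming TensorFlow's documented rule output = ceil(input / stride):

```python
import math

def same_out(in_size, stride):
    # TensorFlow 'SAME' padding: output = ceil(input / stride)
    return math.ceil(in_size / stride)

# e.g. a 32x32 CIFAR-10 image through a stride-2 conv then a stride-2 pool
after_conv = same_out(32, 2)          # 16
after_pool = same_out(after_conv, 2)  # 8
```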
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
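tf.contrib.layers.flatten collapses every dimension except the batch dimension; the NumPy equivalent is a single reshape (illustrative sketch with made-up shapes):

```python
import numpy as np

batch = np.arange(2 * 4 * 4 * 3).reshape(2, 4, 4, 3)  # (batch, height, width, channels)
flat = batch.reshape(batch.shape[0], -1)              # (batch, height*width*channels)
# (2, 4, 4, 3) -> (2, 48)
```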
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
"""
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize = [2,2]
conv_strides = [1,1]
pool_ksize = [2,2]
pool_strides = [1,1]
conv_num_outputs = 16
x_tensor = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
conv_ksize = [3,3]
conv_strides = [2,2]
pool_ksize = [2,2]
pool_strides = [2,2]
conv_num_outputs = 40
# stack on the previous layer's output rather than the raw input
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
conv_num_outputs = 10
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
num_outputs = 60
x_tensor = fully_conn(x_tensor, num_outputs)
num_outputs = 40
x_tensor = fully_conn(x_tensor, num_outputs)
num_outputs = 20
x_tensor = fully_conn(x_tensor, num_outputs)
num_classes = 10
return output(x_tensor, num_classes)
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
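The accuracy tensor above compares the argmax of the logits against the argmax of the one-hot labels; the same computation in NumPy, with toy values, looks like this:

```python
import numpy as np

logits = np.array([[2.0, 0.1],
                   [0.3, 1.5],
                   [0.9, 0.2]])
labels = np.array([[1, 0],
                   [1, 0],
                   [0, 1]])  # one-hot
correct = np.argmax(logits, axis=1) == np.argmax(labels, axis=1)
acc = correct.mean()  # 1 of the 3 predictions matches -> 1/3
```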
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
# Evaluate without dropout (keep_prob=1.0); avoid shadowing the cost tensor
cost_val = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
train_acc = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('cost: {:.4f}'.format(cost_val))
print('training accuracy: {:.4f}'.format(train_acc))
print('validation accuracy: {:.4f}'.format(validation_accuracy))
# TODO: Tune Parameters
epochs = 70
batch_size = 256
keep_probability = 0.9
# DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
# DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""Test the saved model against the test dataset"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
<ASSISTANT_TASK:>
Python Code:
def generate_data():
# 2 layer model with some random error
ml = ModelMaq(kaq=[10, 20], z=[0, -20, -22, -42], c=[1000],
Saq=[0.0002, 0.0001], tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve()
t = np.logspace(-2, 1, 100)
h = ml.head(10, 0, t)
plt.figure()
r = 0.01 * np.random.randn(100)
n = np.zeros_like(r)
alpha = 0.8
for i in range(1, len(n)):
n[i] = 0.8 * n[i - 1] + r[i]
ho = h[0] + n
plt.plot(t, ho, '.')
data = np.zeros((len(ho), 2))
data[:, 0] = t
data[:, 1] = ho
#np.savetxt('pumpingtestdata.txt', data, fmt='%2.3f', header='time (d), head (m)')
return data
np.random.seed(11)
data = generate_data()
to = data[:, 0]
ho = data[:, 1]
def func(p, to=to, ho=ho, returnmodel=False):
k = p[0]
S = p[1]
ml = ModelMaq(kaq=k, z=[0, -20], Saq=S, tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
if returnmodel:
return ml
h = ml.head(10, 0, to)
return np.sum((h[0] - ho) ** 2)
from scipy.optimize import fmin
lsopt = fmin(func, [10, 1e-4])
print('optimal parameters:', lsopt)
print('rmse:', np.sqrt(func(lsopt) / len(ho)))
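fmin here minimizes the sum of squared residuals over the two aquifer parameters. The same pattern on a toy linear-fit problem (hypothetical data, just to illustrate the objective/optimizer wiring):

```python
import numpy as np
from scipy.optimize import fmin

x = np.linspace(0, 1, 20)
y = 3 * x + 1  # noiseless 'observations' with true parameters a=3, b=1

def sse(p):
    # sum of squared errors between the model a*x + b and the observations
    return np.sum((p[0] * x + p[1] - y) ** 2)

popt = fmin(sse, [1.0, 0.0], disp=False)  # recovers approximately [3, 1]
```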
ml = func(lsopt, returnmodel=True)
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=10, pmin=0.1, pmax=1000)
cal.set_parameter(name='Saq0', initial=1e-4, pmin=1e-5, pmax=1e-3)
cal.series(name='obs1', x=10, y=0, layer=0, t=to, h=ho)
cal.fit(report=False)
print('rmse:', cal.rmse())
cal.parameters.style.set_precision(3)
def func2(p, to=to, ho=ho, returnmodel=False):
k = p[0]
S = p[1]
c = p[2]
ml = ModelMaq(kaq=k, z=[2, 0, -20], Saq=S, c=c, topboundary='semi',
tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
if returnmodel:
return ml
h = ml.head(10, 0, to)
return np.sum((h[0] - ho) ** 2)
lsopt2 = fmin(func2, [10, 1e-4, 1000])
print('optimal parameters:', lsopt2)
print('rmse:', np.sqrt(func2(lsopt2) / len(ho)))
ml = func2(lsopt2, returnmodel=True)
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
ml = ModelMaq(kaq=10, z=[2, 0, -20], Saq=1e-4, c=1000, topboundary='semi', tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=10)
cal.set_parameter(name='Saq0', initial=1e-4)
cal.set_parameter(name='c0', initial=1000)
cal.series(name='obs1', x=10, y=0, layer=0, t=to, h=ho)
cal.fit(report=False)
cal.parameters.style.set_precision(5)
cal.rmse(), ml.aq.kaq
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model as semi-confined
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import sys
sys.path.append('utils/')
import os
os.environ['OMP_NUM_THREADS'] = str(1)
import matplotlib.pyplot as plt
% matplotlib inline
import scipy.stats as stats
import statsmodels.api as sm
import multiprocessing as mp
import sklearn.preprocessing as preprocessing
import sklearn.svm as svm
import statsmodels.sandbox.stats.multicomp as mc
# Import custom modules
import multregressionconnectivity as mreg
import model
import analysis
from matplotlib.colors import Normalize
# Code to generate a normalized midpoint for plt.imshow visualization function
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
# Specify the directory to read in provided data
# If data file was unzippd in current working directory this shouldn't need to be changed
datadir = 'ItoEtAl2017_Simulations/'
# Specify number of CPUs to process on (using multiprocessing module in python)
nproc = 10
# Output file to save generated figures (from this notebook)
outputdir = './figures/' # default with output in current working directory
if not os.path.exists(outputdir): os.makedirs(outputdir)
nsubjs = range(0,20) # number of simulations (i.e., subject numbers)
nblocks = 20 # number of blocks per task condition
#### Define the condition numbers associated with each task
# Conditions 1-4 are for top-down stimulation only (i.e., task 1)
topdown_only = range(1,5)
# Conditions 5-8 are simultaneous top-down (hub-network) and bottom-up (local-network) stimulation (i.e., task 2)
topdown_and_bottomup = range(5,9)
#### Set up subject networks ####
# Parameters for subject's networks
ncommunities = 5
innetwork_dsity = .35
outnetwork_dsity = .05
hubnetwork_dsity = .20
nodespercommunity = 50
totalnodes = nodespercommunity*ncommunities
##########
# Construct structural matrix
W = model.generateStructuralNetwork(ncommunities=ncommunities, innetwork_dsity=innetwork_dsity,
outnetwork_dsity=outnetwork_dsity, hubnetwork_dsity=hubnetwork_dsity,
nodespercommunity=nodespercommunity, showplot=False)
# Construct synaptic matrix
G = model.generateSynapticNetwork(W, showplot=False)
# Define community affiliation vector
Ci = np.repeat(np.arange(ncommunities),nodespercommunity)
# Plot figure
plt.figure()
# norm = MidpointNormalize(midpoint=0)
plt.imshow(G,origin='lower',interpolation='none')
plt.xlabel('Regions')
plt.ylabel('Regions')
plt.title('Synaptic Weight Matrix', y=1.04, fontsize=18)
plt.colorbar()
fcmat_pearson = np.zeros((totalnodes,totalnodes,len(nsubjs)))
fcmat_multreg = np.zeros((totalnodes,totalnodes,len(nsubjs)))
##########
# Load in subject FC data
scount = 0
for subj in nsubjs:
indir = datadir + '/restfc/'
# Load in pearson FC matrix
filename1 = 'subj' + str(subj) + '_restfc_pearson.txt'
fcmat_pearson[:,:,scount] = np.loadtxt(indir + filename1, delimiter=',')
# Loda in multreg FC matrix
filename2 = 'subj' + str(subj) + '_restfc_multreg.txt'
fcmat_multreg[:,:,scount] = np.loadtxt(indir + filename2, delimiter=',')
scount += 1
##########
# Plot group FC averages
plt.figure()
avg = np.mean(fcmat_pearson,axis=2)
np.fill_diagonal(avg,0)
plt.imshow(avg ,origin='lower',interpolation='none')#,vmin=0)
plt.xlabel('Regions')
plt.ylabel('Regions')
plt.title('Group Rest FC Matrix\nPearson FC', y=1.04, fontsize=18)
plt.colorbar()
plt.tight_layout()
plt.figure()
avg = np.mean(fcmat_multreg,axis=2)
np.fill_diagonal(avg,0)
plt.imshow(avg ,origin='lower',interpolation='none')#,vmin=-.08,vmax=.08)
plt.xlabel('Regions')
plt.ylabel('Regions')
plt.title('Group Rest FC Matrix\nMultiple Regression FC', y=1.04, fontsize=18)
plt.colorbar()
plt.tight_layout()
outofnet_intrinsicFC = np.zeros((ncommunities,len(nsubjs)))
indices = np.arange(nodespercommunity*ncommunities)
##########
# Calculate average out-of-network degree across subjects
scount = 0
for subj in nsubjs:
for net in range(0,ncommunities):
# if net == hubnet: continue
net_ind = np.where(Ci==net)[0]
net_ind.shape = (len(net_ind),1)
outofnet_ind = np.setxor1d(net_ind,indices)
outofnet_ind.shape = (len(outofnet_ind),1)
outofnet_intrinsicFC[net,scount] = np.mean(fcmat_multreg[net_ind, outofnet_ind.T, scount])
scount += 1
# Compute average stats
fcmean = np.mean(outofnet_intrinsicFC,axis=1)
fcerr = np.std(outofnet_intrinsicFC,axis=1)/np.sqrt(len(nsubjs))
##########
# Plot figure
fig = plt.bar(range(len(fcmean)), fcmean, yerr=fcerr)
# fig = plt.ylim([.09,0.10])
fig = plt.xticks(np.arange(.4,5.4,1.0),['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'],fontsize=14)
fig = plt.ylabel('Multiple Regression FC', fontsize=16)
fig = plt.xlabel('Networks', fontsize=16)
fig = plt.title("Average Out-Of-Network IntrinsicFC\nSimulated Resting-State Data", fontsize=18, y=1.02)
fig = plt.tight_layout()
# Empty variables for topdown task analysis
ite_topdown = np.zeros((ncommunities,ncommunities,len(nsubjs)))
# Empty variables for topdown and bottomup task analysis
ite_topdownbottomup = np.zeros((ncommunities,ncommunities,len(nsubjs)))
##########
# Run predicted-to-actual similarity for every network-to-network configuration (using RSA approach)
for i in range(ncommunities):
for j in range(ncommunities):
if i==j: continue
fromnet = i
net = j
nblocks = nblocks
## First run on topdown only task conditions
inputs = []
for subj in nsubjs: inputs.append((subj,net,fromnet,topdown_only,nblocks,Ci,nodespercommunity,datadir))
# Run multiprocessing across subjects
pool = mp.Pool(processes=nproc)
results_topdown = pool.map_async(analysis.predictedToActualRSA, inputs).get()
pool.close()
pool.join()
## Second run on topdown and bottomup task conditions
inputs = []
for subj in nsubjs: inputs.append((subj,net,fromnet,topdown_and_bottomup,nblocks,Ci,nodespercommunity,datadir))
# Run multiprocessing
pool = mp.Pool(processes=nproc)
results_topdownbottomup = pool.map_async(analysis.predictedToActualRSA, inputs).get()
pool.close()
pool.join()
## Get results and store in network X network X subjects matrix
scount = 0
for subj in nsubjs:
# Obtain topdown task results
ite = results_topdown[scount]
ite_topdown[i,j,scount] = ite
# Obtain topdown and bottom up task results
ite = results_topdownbottomup[scount]
ite_topdownbottomup[i,j,scount] = ite
scount += 1
# Instantiate empty result matrices
tmat_topdown = np.zeros((ncommunities,ncommunities))
pmat_topdown = np.ones((ncommunities,ncommunities))
tmat_topdownbottomup = np.zeros((ncommunities,ncommunities))
pmat_topdownbottomup = np.ones((ncommunities,ncommunities))
# Run t-tests for every network-to-network configuration
for i in range(ncommunities):
for j in range(ncommunities):
if i==j: continue
##########
## Run statistical test for first task (topdown only stim)
t, p = stats.ttest_1samp(ite_topdown[i,j,:],0)
tmat_topdown[i,j] = t
# Make p-value one-sided (for one-sided t-test)
if t > 0:
p = p/2.0
else:
p = 1-p/2.0
pmat_topdown[i,j] = p
##########
## Run statistical test for second task (topdown and bottomup stim)
t, p = stats.ttest_1samp(ite_topdownbottomup[i,j,:],0)
# Make p-value one-sided (for one-sided t-test)
tmat_topdownbottomup[i,j] = t
if t > 0:
p = p/2.0
else:
p = 1-p/2.0
pmat_topdownbottomup[i,j] = p
##########
# Run FDR correction on p-values (exclude diagonal values)
## TopDown Task
qmat_topdown = np.ones((ncommunities,ncommunities))
triu_ind = np.triu_indices(ncommunities,k=1)
tril_ind = np.tril_indices(ncommunities,k=-1)
all_ps = np.hstack((pmat_topdown[triu_ind],pmat_topdown[tril_ind]))
h, all_qs = mc.fdrcorrection0(all_ps)
# the first half of all qs belong to triu, second half belongs to tril
qmat_topdown[triu_ind] = all_qs[:len(triu_ind[0])]
qmat_topdown[tril_ind] = all_qs[len(tril_ind[0]):]
binary_mat_topdown = qmat_topdown < .05
## TopDown and BottomUp Task
qmat_topdownbottomup = np.ones((ncommunities,ncommunities))
triu_ind = np.triu_indices(ncommunities,k=1)
tril_ind = np.tril_indices(ncommunities,k=-1)
all_ps = np.hstack((pmat_topdownbottomup[triu_ind],pmat_topdownbottomup[tril_ind]))
h, all_qs = mc.fdrcorrection0(all_ps)
# the first half of all qs belong to triu, second half belongs to tril
qmat_topdownbottomup[triu_ind] = all_qs[:len(triu_ind[0])]
qmat_topdownbottomup[tril_ind] = all_qs[len(tril_ind[0]):]
binary_mat_topdownbottomup = qmat_topdownbottomup < .05
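The step above relies on statsmodels' fdrcorrection0. As a sketch of what Benjamini-Hochberg adjustment does under the hood (a simplified re-implementation, not the library routine):

```python
import numpy as np

def fdr_bh(pvals):
    # Benjamini-Hochberg adjusted p-values (q-values)
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downward
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.minimum(q, 1.0)
    return out

# e.g. fdr_bh([0.01, 0.04, 0.03, 0.005]) -> [0.02, 0.04, 0.04, 0.02]
```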
##########
# Plot figures for topdown task
# (Unthresholded plot)
plt.figure(figsize=(12,10))
plt.subplot(121)
norm = MidpointNormalize(midpoint=0)
plt.imshow(np.mean(ite_topdown,axis=2),norm=norm,origin='lower',interpolation='None',cmap='bwr')
plt.title('Network-to-Network ITE (using RSA) (Unthresholded)\nTopDown Tasks',fontsize=16, y=1.02)
plt.colorbar(fraction=.046)
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
# (Thresholded plot)
plt.subplot(122)
threshold_acc = np.multiply(binary_mat_topdown,np.mean(ite_topdown,axis=2))
norm = MidpointNormalize(midpoint=0)
plt.imshow(threshold_acc,norm=norm,origin='lower',interpolation='None',cmap='bwr')
plt.title('Network-to-Network ITE (using RSA) (Thresholded)\nTopDown Tasks',fontsize=16, y=1.02)
plt.colorbar(fraction=.046)
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
plt.tight_layout()
plt.savefig(outputdir + 'SFig_CompModel_RSA_topdownOnly.pdf')
##########
# Plot figures for topdown and bottomup task
# (Unthresholded plot)
plt.figure(figsize=(12,10))
plt.subplot(121)
norm = MidpointNormalize(midpoint=0)
plt.imshow(np.mean(ite_topdownbottomup,axis=2),origin='lower',interpolation='None',norm=norm,cmap='bwr')
plt.title('Network-to-Network ITE (using RSA) (Unthresholded)\nTopDownBottomUp Tasks',fontsize=16, y=1.02)
plt.colorbar(fraction=.046)
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
# (Thresholded plot)
plt.subplot(122)
threshold_acc = np.multiply(binary_mat_topdownbottomup,np.mean(ite_topdownbottomup,axis=2))
norm = MidpointNormalize(midpoint=0)
plt.imshow(threshold_acc,origin='lower',interpolation='None',norm=norm,cmap='bwr')
plt.title('Network-to-Network ITE (using RSA)(Thresholded)\nTopDownBottomUp Tasks',fontsize=16, y=1.02)
plt.colorbar(fraction=.046)
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
plt.tight_layout()
plt.savefig(outputdir + 'SFig_CompModel_RSA_topdownbottomup.pdf')
# Empty variables for topdown task analysis
svm_topdown = np.zeros((ncommunities,ncommunities,len(nsubjs)))
# Empty variables for topdown and bottomup task analysis
svm_topdownbottomup = np.zeros((ncommunities,ncommunities,len(nsubjs)))
##########
# Run predicted-to-actual similarity for every network-to-network configuration (using RSA approach)
for i in range(ncommunities):
for j in range(ncommunities):
if i==j: continue
fromnet = i
net = j
nblocks = nblocks
## First run on topdown only task conditions
inputs = []
for subj in nsubjs: inputs.append((subj,net,fromnet,topdown_only,nblocks,Ci,nodespercommunity,datadir))
# Run multiprocessing across subjects
pool = mp.Pool(processes=nproc)
results_topdown = pool.map_async(analysis.predictedToActualSVM, inputs).get()
pool.close()
pool.join()
## Second run on topdown and bottomup task conditions
inputs = []
for subj in nsubjs: inputs.append((subj,net,fromnet,topdown_and_bottomup,nblocks,Ci,nodespercommunity,datadir))
# Run multiprocessing
pool = mp.Pool(processes=nproc)
results_topdownbottomup = pool.map_async(analysis.predictedToActualSVM, inputs).get()
pool.close()
pool.join()
## Get results and store in network X network X subjects matrix
scount = 0
for subj in nsubjs:
# Obtain topdown task results
svm = results_topdown[scount]
svm_topdown[i,j,scount] = svm
# Obtain topdown and bottom up task results
svm = results_topdownbottomup[scount]
svm_topdownbottomup[i,j,scount] = svm
scount += 1
# Instantiate empty result matrices
tmat_topdown_svm = np.zeros((ncommunities,ncommunities))
pmat_topdown_svm = np.ones((ncommunities,ncommunities))
tmat_topdownbottomup_svm = np.zeros((ncommunities,ncommunities))
pmat_topdownbottomup_svm = np.ones((ncommunities,ncommunities))
# Perform accuracy decoding t-test against chance, which is 25% for a 4-way classification
chance = .25
for i in range(ncommunities):
for j in range(ncommunities):
if i==j: continue
# Run statistical test for first task (topdown only stim)
t, p = stats.ttest_1samp(svm_topdown[i,j,:],chance)
tmat_topdown_svm[i,j] = t
# Make p-value one-sided (for one-sided t-test)
if t > 0:
p = p/2.0
else:
p = 1-p/2.0
pmat_topdown_svm[i,j] = p
# Run statistical test for second task (topdown and bottomup stim)
t, p = stats.ttest_1samp(svm_topdownbottomup[i,j,:],chance)
tmat_topdownbottomup_svm[i,j] = t
# Make p-value one-sided (for one-sided t-test)
if t > 0:
p = p/2.0
else:
p = 1-p/2.0
pmat_topdownbottomup_svm[i,j] = p
## TopDown Tasks
# Run FDR correction on p-values (Don't get diagonal values)
qmat_topdown_svm = np.ones((ncommunities,ncommunities))
triu_ind = np.triu_indices(ncommunities,k=1)
tril_ind = np.tril_indices(ncommunities,k=-1)
all_ps = np.hstack((pmat_topdown_svm[triu_ind],pmat_topdown_svm[tril_ind]))
h, all_qs = mc.fdrcorrection0(all_ps)
# the first half of all qs belong to triu, second half belongs to tril
qmat_topdown_svm[triu_ind] = all_qs[:len(triu_ind[0])]
qmat_topdown_svm[tril_ind] = all_qs[len(tril_ind[0]):]
binary_mat_topdown_svm = qmat_topdown_svm < .05
## TopDown and BottomUp Tasks
# Run FDR correction on p-values (Don't get diagonal values)
qmat_topdownbottomup_svm = np.ones((ncommunities,ncommunities))
triu_ind = np.triu_indices(ncommunities,k=1)
tril_ind = np.tril_indices(ncommunities,k=-1)
all_ps = np.hstack((pmat_topdownbottomup_svm[triu_ind],pmat_topdownbottomup_svm[tril_ind]))
h, all_qs = mc.fdrcorrection0(all_ps)
# the first half of all qs belong to triu, second half belongs to tril
qmat_topdownbottomup_svm[triu_ind] = all_qs[:len(triu_ind[0])]
qmat_topdownbottomup_svm[tril_ind] = all_qs[len(tril_ind[0]):]
binary_mat_topdownbottomup_svm = qmat_topdownbottomup_svm < .05
####
## Plot figures for Top Down Task
# Unthresholded map
plt.figure(figsize=(12,10))
plt.subplot(121)
mat = np.mean(svm_topdown,axis=2)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,norm=norm,origin='lower',interpolation='None',cmap='bwr')
plt.title('Network-to-Network ITE (using SVMs) (Unthresholded)\nTopDown Tasks',fontsize=16, y=1.02)
plt.colorbar(fraction=0.046)
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
plt.tight_layout()
plt.savefig(outputdir + 'SFig_CompModel_SVM_topdownOnly_Unthresholded.pdf')
# Thresholded map
plt.subplot(122)
mat = np.mean(svm_topdown,axis=2)
mat = np.multiply(binary_mat_topdown_svm,mat)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,norm=norm,origin='lower',interpolation='None',cmap='bwr')
plt.title('Network-to-Network ITE (using SVMs) (Thresholded)\nTopDown Tasks',fontsize=16, y=1.02)
plt.colorbar(fraction=0.046)
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
plt.tight_layout()
plt.savefig(outputdir + 'SFig_CompModel_SVM_topdownOnly.pdf')
####
## Plot figures for Top Down AND Bottom Up Task
# Unthresholded map
plt.figure(figsize=(12,10))
plt.subplot(121)
mat = np.mean(svm_topdownbottomup,axis=2)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',interpolation='None',norm=norm,cmap='bwr')
plt.title('Network-to-Network ITE (using SVMs) (Unthresholded)\nTopDownBottomUp Tasks',fontsize=16, y=1.02)
plt.colorbar(fraction=0.046)
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
# Thresholded map
plt.subplot(122)
mat = np.mean(svm_topdownbottomup,axis=2)
mat = np.multiply(binary_mat_topdownbottomup_svm,mat)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',interpolation='None',norm=norm,cmap='bwr')
plt.title('Network-to-Network ITE (using SVMs) (Thresholded)\nTopDownBottomUp Tasks',fontsize=16, y=1.02)
plt.colorbar(fraction=0.046)
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
plt.tight_layout()
plt.savefig(outputdir + 'SFig_CompModel_SVM_topdownbottomup.pdf')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ESSENTIAL parameters to modify
Step2: Basic simulation parameters
Step3: 1.0 Construct sample network matrix and visualize group FC matrices
Step4: 1.2 Visualize group average resting-state FC from simulated data (analogous to Fig. 4B)
Step5: 2.0 Compute out-of-network intrinsic FC (analogous to Fig. 4D)
Step6: 3.0 Run group analysis on network-to-network information transfer mapping output using simulated data (Supplementary Fig. 3A-D)
Step7: 3.2 Statistical testing on results and plot
Step8: 4.0 Run group analysis on network-to-network information transfer mapping output using SVM decoding (as opposed to predicted-to-actual RSA analysis) (Supplementary Fig. 3E-H)
Step9: 4.2 Statistical testing on results and plot
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
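The `# Now HxWxC` comments above follow from `padding='same'`: the convolutions preserve spatial size, while each stride-2 pooling halves it, rounding up (which is why 7 pools down to 4, not 3). A minimal sketch of that arithmetic:

```python
import math

def same_pool_out(size, stride=2):
    # Output size of 'same'-padded pooling: ceil(input / stride)
    return math.ceil(size / stride)

# Trace the spatial size through the three pooling stages
sizes = [28]
for _ in range(3):
    sizes.append(same_pool_out(sizes[-1]))
print(sizes)  # [28, 14, 7, 4]
```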
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
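The noise model used above (additive Gaussian noise, then clipping back into [0, 1]) can be sanity-checked in isolation; the random array here is a stand-in batch, not MNIST data:

```python
import numpy as np

rng = np.random.RandomState(0)
imgs = rng.rand(4, 28, 28, 1)                    # stand-in batch in [0, 1]
noise_factor = 0.5
noisy_imgs = imgs + noise_factor * rng.randn(*imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)         # keep pixels in the valid range
print(noisy_imgs.shape, noisy_imgs.min() >= 0.0, noisy_imgs.max() <= 1.0)
```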
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network Architecture
Step2: Training
Step3: Denoising
Step4: Checking out the performance
|
6,117
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
# print(cancer.DESCR)
cancer.keys()
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the number of features of the breast cancer dataset, which is an integer.
# The assignment question description will tell you the general format the autograder is expecting
return len(cancer['feature_names'])
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
def answer_one():
# Your code here
columns = cancer['feature_names']
columns = np.append(columns, ["target"])
index = range(0, 569, 1)
cancerdf = pd.DataFrame(data=np.c_[cancer.data, cancer.target], columns=columns, index=index)
return cancerdf
answer_one()
def answer_two():
cancerdf = answer_one()
series = cancerdf['target']
malignant = series[series == 0]
benign = series[series == 1]
target = pd.Series(np.array([len(malignant), len(benign)]), index=['malignant', 'benign'])
return target
answer_two()
def answer_three():
cancerdf = answer_one()
X = cancerdf.iloc[:,:30]
y = cancerdf["target"]
return X, y
answer_three()
from sklearn.model_selection import train_test_split
def answer_four():
X, y = answer_three()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
return X_train, X_test, y_train, y_test
answer_four()
from sklearn.neighbors import KNeighborsClassifier
def answer_five():
X_train, X_test, y_train, y_test = answer_four()
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
return knn
answer_five()
def answer_six():
cancerdf = answer_one()
means = cancerdf.mean()[:-1].values.reshape(1, -1)
knn = answer_five()
score = knn.predict(means)
return score
answer_six()
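scikit-learn estimators expect a 2-D array even when predicting a single sample, which is why the per-feature means above are reshaped with `reshape(1, -1)` before `predict`. A numpy-only illustration of that reshape:

```python
import numpy as np

means = np.arange(30.0)        # stand-in for the 30 per-feature means
row = means.reshape(1, -1)     # one sample, n_features columns
print(row.shape)  # (1, 30)
```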
def answer_seven():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
np_val = []
for row in X_test.iterrows():
np_val.append(knn.predict(row[1].values.reshape(1,-1))[0])
return np_val
answer_seven()
def answer_eight():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
score = knn.score(X_test, y_test)
return score
answer_eight()
def accuracy_plot():
import matplotlib.pyplot as plt
%matplotlib notebook
X_train, X_test, y_train, y_test = answer_four()
# Find the training and testing accuracies by target value (i.e. malignant, benign)
mal_train_X = X_train[y_train==0]
mal_train_y = y_train[y_train==0]
ben_train_X = X_train[y_train==1]
ben_train_y = y_train[y_train==1]
mal_test_X = X_test[y_test==0]
mal_test_y = y_test[y_test==0]
ben_test_X = X_test[y_test==1]
ben_test_y = y_test[y_test==1]
knn = answer_five()
scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y),
knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)]
plt.figure()
# Plot the scores as a bar chart
bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868'])
# directly label the score onto the bars
for bar in bars:
height = bar.get_height()
plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2),
ha='center', color='w', fontsize=11)
# remove all the ticks (both axes), and tick labels on the Y axis
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on')
# remove the frame of the chart
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.xticks([0,1,2,3], ['Malignant\nTraining', 'Benign\nTraining', 'Malignant\nTest', 'Benign\nTest'], alpha=0.8);
plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8)
# Uncomment the plotting function to see the visualization,
# Comment out the plotting function when submitting your notebook for grading
accuracy_plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The object returned by load_breast_cancer() is a scikit-learn Bunch object, which is similar to a dictionary.
Step2: Question 0 (Example)
Step3: Question 1
Step4: Question 2
Step5: Question 3
Step6: Question 4
Step7: Question 5
Step8: Question 6
Step9: Question 7
Step10: Question 8
Step11: Optional plot
|
6,118
|
<ASSISTANT_TASK:>
Python Code:
# Imports / plotting configuration
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
plt.rcParams['image.cmap'] = 'viridis'
np.random.seed(13)
import json
import os
# Change this to `'tensorflow'` if you prefer
backend = 'theano'
config = {'image_dim_ordering': 'tf', 'epsilon': 1e-07,
'floatx': 'float32', 'backend': backend}
!mkdir -p ~/.keras
with open(os.path.expanduser('~/.keras/keras.json'), 'w') as f:
json.dump(config, f)
!pip install -q keras-tqdm # Install Jupyter-friendly progress bar
from keras.datasets import mnist
from keras.utils import np_utils
nrow = 28; ncol = 28; nb_classes = 10 # MNIST data parameters
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.astype('float32'); X_test = X_test.astype('float32') # int -> float
X_train = X_train.reshape(-1, nrow * ncol); X_test = X_test.reshape(-1, nrow * ncol) # flatten
X_train /= 255; X_test /= 255 # normalize pixels to between 0 and 1
# convert class vectors to binary class matrices (i.e., one-hot encoding)
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
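`np_utils.to_categorical` turns integer labels into one-hot rows; the same transform can be written as an identity-matrix lookup:

```python
import numpy as np

y = np.array([0, 2, 1])
nb_classes = 3
Y = np.eye(nb_classes)[y]      # row i of the identity is the one-hot vector for class i
print(Y.tolist())  # [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```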
fig, ax = plt.subplots(2, 5, figsize=(15, 8))
for i in range(10):
plt.sca(ax.ravel()[i])
plt.imshow(X_train[i].reshape(nrow, ncol))
# Define model architecture
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
model = Sequential()
model.add(Dense(8, input_dim=nrow * ncol))
model.add(Activation('relu'))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
# Fit model to training data and check accuracy
from keras_tqdm import TQDMNotebookCallback
batch_size = 128
nb_epoch = 20
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
history = model.fit(X_train, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
validation_data=(X_test, Y_test),
verbose=0, callbacks=[TQDMNotebookCallback()])
score, accuracy = model.evaluate(X_test, Y_test, verbose=0)
print('Test score: {}; test accuracy: {}'.format(score, accuracy))
# Examine misclassified examples here
pass
# Multi-layer classifier here
pass
# First we'll reshape the data back into two-dimensional form
X_train = X_train.reshape(X_train.shape[0], nrow, ncol, 1)
X_test = X_test.reshape(X_test.shape[0], nrow, ncol, 1)
input_shape = (nrow, ncol, 1) # only 1 channel since the images are black and white
from keras.layers import Conv2D, Flatten
# Convolutional network here
pass
from keras.layers import MaxPooling2D
# Convolutional network w/ pooling here
pass
from keras.layers import Dropout
from keras.regularizers import l2
# Convolutions + pooling + dropout/regularization
# First, reshape the data into one-dimensional sequences
X_train = X_train.reshape(X_train.shape[0], -1, 1)
X_test = X_test.reshape(X_test.shape[0], -1, 1)
input_shape = X_train.shape[1:]
from keras.layers import LSTM
# LSTM classifier here
pass
from keras.datasets import cifar10
# Repeat w/ cifar10 dataset
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: We'll start with our old friend the single-layer perceptron that we implemented in the "Basic Neural Network Exercise." The perceptron consists of a single fully-connected (a.k.a. dense) layer with some activation function, plus an output that we pass to the softmax function. An example of a keras implementation of such a network is given below; you'll want to use this as the template for the rest of your models in this exercise.
Step3: Part 1a
Step4: Part 1b
Step5: Part 2
Step6: Part 2a
Step7: Part 2b
Step8: Part 2c
Step9: Part 3
Step10: The most popular type of recurrent cell is called a "Long Short-Term Memory" cell, or LSTM. Follow the same structure as above and implement a simple recurrent neural network classifier using the LSTM layer.
Step11: Part 4
|
6,119
|
<ASSISTANT_TASK:>
Python Code:
import os
from IPython.display import HTML
from IPython.display import display, Image
from PIL import Image as PILImage
def files_at_relpath(rel_path):
return [os.path.join(rel_path, f) for f in os.listdir(rel_path)]
def display_images(img_path, **kwargs):
scale = kwargs.get('scale')
if scale is not None:
kwargs.pop('scale', None)
kwargs.pop("key", None)
files = files_at_relpath(img_path)
for filename in files:
try:
if scale is None:
display(Image(filename=filename, **kwargs))
else:
w, h = None, None
with PILImage.open(filename) as imgfile:
w, h = imgfile.size
kwargs['width'] = float(w) * scale
kwargs['height'] = float(h) * scale
display(Image(filename=filename, **kwargs))
except ValueError as e:
continue
display_images('images/ratebeer/profiles-MiP/', scale=0.7)
display_images('images/ratebeer/activity-MiP/')
# Example Dataset Review Entry
__ = {
'beer/ABV': 7.2,
'beer/beerId': '59261',
'beer/brewerId': '67',
'beer/name': 'Sierra Nevada Torpedo Extra IPA',
'beer/style': 'India Pale Ale (IPA)',
'review/appearance': 1.0,
'review/aroma': 0.8,
'review/overall': 0.9,
'review/palate': 0.8,
'review/taste': 1.0,
'review/text': 'Aroma is lemon and orange citrus, reddish orange hue, flavor is very well balanced between the malt and hop. this beer is very drinkable. It is not "over the top" like some of the double ipas. The alcohol content is well contained. The finish is hoppy as one would expect for this style. An excellent beer from the folks at Sierra Nevada!',
'review/timeStruct': {
'hour': 0,
'isdst': 0,
'mday': 17,
'min': 0,
'mon': 2,
'sec': 0,
'wday': 1,
'yday': 48,
'year': 2009
},
'review/timeUnix': 1234828800,
'user/profileName': 'Blazhock'
}
%matplotlib inline
import json
from itertools import groupby
from collections import defaultdict
from datetime import date, datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
import numpy as np
# Constants for data processing
SOURCE_FILE = 'data/reduced_data.txt'
SORTED_FILE = 'data/sorted_data.txt'
START_DATE = date(2008,5,1)
END_DATE = date(2010,5,1)
LAUNCH_DATE = date(2009,5,3)
# Helper to return a unicode string as ASCII with special characters removed.
def clean_unicode(instr):
return unicodedata.normalize('NFKD', instr).encode('ascii', 'ignore')
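`clean_unicode` leans on NFKD normalization: accented characters decompose into a base letter plus combining marks, and the ASCII encode with `'ignore'` drops whatever cannot be represented. A small illustration (Python 3 spelling, since `encode` returns bytes there):

```python
import unicodedata

s = 'K\xf6lsch Bi\xe8re'   # 'Kölsch Bière'
ascii_s = unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii')
print(ascii_s)  # Kolsch Biere
```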
# Helper to parse and yield data from file as JSON
def parse_json(filename, normalize=True):
with open(filename) as f:
for line in f:
out = json.loads(line, encoding='latin-1')
if normalize:
temp = {}
for key in out:
try:
key = clean_unicode(key)
except:
key = key
try:
temp[key] = clean_unicode(out[key])
except:
temp[key] = out[key]
out = temp
yield out
f.close()
# via http://matplotlib.org/examples/pylab_examples/finance_work2.html
def moving_average(x, n, type='simple'):
    """compute an n period moving average.

    type is 'simple' | 'exponential'
    """
x = np.asarray(x)
if type=='simple':
weights = np.ones(n)
else:
weights = np.exp(np.linspace(-1., 0., n))
weights /= weights.sum()
a = np.convolve(x, weights, mode='full')[:len(x)]
a[:n] = a[n]
return a
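A quick check of the convolution-based smoother: with `mode='full'` truncated to `len(x)` and the first `n` values backfilled, a constant series should pass through unchanged.

```python
import numpy as np

x = np.ones(10)
n = 3
weights = np.ones(n) / n                     # simple (unweighted) window
a = np.convolve(x, weights, mode='full')[:len(x)]
a[:n] = a[n]                                 # backfill the warm-up region
print(np.allclose(a, 1.0))  # True
```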
# Sorted data doesn't fit comfortably in memory. This function provides access to an iterator over the data.
def sorted_data_iter():
return parse_json(SORTED_FILE)
timestamp_keyfunc = lambda r: r.get('review/timeUnix')
date_keyfunc = lambda r: date.fromtimestamp(r.get('review/timeUnix'))
def groupby_daily(sorted_reviews):
reviews_by_date = {}
for key, group in groupby(sorted_reviews, date_keyfunc):
reviews_by_date[key] = len(list(group))
return reviews_by_date
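`groupby_daily` counts reviews per day with `itertools.groupby`, which only merges *adjacent* equal keys — hence the input must already be time-sorted, which the sorted file guarantees. A toy illustration of why the sort matters:

```python
from itertools import groupby

dates = ['2009-05-01', '2009-05-01', '2009-05-02', '2009-05-01']
counts = {k: len(list(g)) for k, g in groupby(sorted(dates))}
print(counts)  # {'2009-05-01': 3, '2009-05-02': 1}
```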
reviews_by_date = groupby_daily(sorted_data_iter())
dates = sorted(reviews_by_date.keys())
review_counts = [reviews_by_date[k] for k in dates]
# Moving average of review counts by day
ma_days = 45
ma = moving_average(review_counts, ma_days, 'exponential')
# Linear trendline of review counts by day
polynomial_order = 3
trend_line = np.polyfit(range(len(dates)), review_counts, polynomial_order)
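The trend line comes from `np.polyfit` (least-squares coefficients, highest power first) evaluated with `np.polyval`. A small check on synthetic linear data, where a higher-order fit should still recover the line:

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0                      # exactly linear data
coeffs = np.polyfit(x, y, 3)           # order-3 fit, as used for the daily counts
fitted = np.polyval(coeffs, x)
print(np.allclose(fitted, y))  # True
```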
# Groups reviews by month
user_keyfunc = lambda r: r.get('user/profileName')
def month_keyfunc(review):
# Return the first day of the month for any day in a month
review_date = date.fromtimestamp(review.get('review/timeUnix'))
return date(review_date.year, review_date.month, 1)
def groupby_monthly(sorted_reviews):
monthly_user_count = {}
for key, group in groupby(sorted_reviews, month_keyfunc):
user_sorted = sorted(group, key=user_keyfunc)
monthly_user_count[key] = len(list(groupby(user_sorted, user_keyfunc)))
return monthly_user_count
monthly_user_count = groupby_monthly(sorted_data_iter())
months = sorted(monthly_user_count.keys())
user_counts = [monthly_user_count[m] for m in months]
# Groups reviews by month, then by beer/style.
beer_style_keyfunc = lambda r: r.get(u'beer/style')
def groupby_styles_monthly(sorted_reviews):
monthly_beer_count = {}
all_styles = {}
for key_month, group in groupby(sorted_reviews, month_keyfunc):
style_sorted = sorted(group, key=beer_style_keyfunc)
monthly_beer_count[key_month] = {}
for key_style, subgroup in groupby(style_sorted, beer_style_keyfunc):
style_count = len(list(subgroup))
monthly_beer_count[key_month][key_style] = style_count
all_styles[key_style] = all_styles.setdefault(key_style, 0) + style_count
return monthly_beer_count, all_styles
monthly_beer_count, all_styles = groupby_styles_monthly(sorted_data_iter())
# Maintain detailed monthly review counts for the top N styles.
# Styles below this N threshold are aggregated into the new 'Other' category.
N_styles = 10
top_N_styles = sorted(all_styles.items(), key=lambda s: s[1], reverse=True)[0:N_styles]
top_styles = [s[0] for s in top_N_styles]
months = sorted(monthly_beer_count.keys())
style_counts = {}
for style in top_styles:
month_counts = []
for month in months:
month_counts.append(monthly_beer_count[month][style])
style_counts[style] = month_counts
# Aggregation of reviews for beers that are not in top_N_styles
other_counts = []
for i in range(len(months)):
month = months[i]
# Total of all reviews
total_reviews = sum(monthly_beer_count[month].values())
# Total count of reviews for top_styles beers
top_style_reviews = sum([style_counts[s][i] for s in top_styles])
other_counts.append(total_reviews - top_style_reviews)
# other_counts.insert(0, total_reviews - top_style_reviews)
style_counts['Other'] = other_counts
top_styles.insert(0, 'Other')
# Linear trendline of review counts by day
polynomial_order = 2
other_style_trend_line = np.polyfit(range(len(months)), other_counts, polynomial_order)
print other_style_trend_line
current_palette = sns.color_palette("Paired", 10)
def plot_reviews(dates, review_counts, ma, ma_days, trend_line, polynomial_order, current_palette):
fig, axes = plt.subplots(1, figsize=(16, 10))
# Review count plot
axes.plot(dates, review_counts, color=current_palette[1], lw=1, label='Daily reviews')
axes.fill_between(dates, 0, review_counts, facecolor=current_palette[0], alpha=0.8)
# Review count plot
axes.plot(dates, ma, color=current_palette[3], lw=4, alpha=0.75, label=('%d day moving avg.' % ma_days))
# N order linear model plot
axes.plot(dates, np.polyval(trend_line, range(len(dates))), color=current_palette[5], lw=4, alpha=0.75, label=('Order %d linear model' % polynomial_order))
axes.annotate("Profile redesign",
xy=(LAUNCH_DATE, 0),
xytext=(LAUNCH_DATE, 500),
arrowprops=dict(arrowstyle="->"),
fontsize='large'
)
# Labels and axes formatting
axes.set_title('RateBeer Reviews by Date')
axes.set_xlabel('Dates')
axes.set_ylabel('Reviews')
fig.autofmt_xdate()
axes.fmt_xdata = mdates.DateFormatter('%Y-%m-%d')
handles, labels = axes.get_legend_handles_labels()
axes.legend(handles, labels)
plt.show()
plot_reviews(dates, review_counts, ma, ma_days, trend_line, polynomial_order, current_palette)
def plot_users_monthly(months, user_counts, current_palette):
fig, axes = plt.subplots(1, figsize=(16, 10))
axes.plot(months, user_counts, color=current_palette[3], lw=1, label='Unique users')
axes.fill_between(months, 0, user_counts, facecolor=current_palette[2], alpha=0.8)
axes.annotate("Profile redesign",
xy=(LAUNCH_DATE, 0),
xytext=(LAUNCH_DATE, 500),
arrowprops=dict(arrowstyle="->"),
fontsize='large'
)
fig.autofmt_xdate()
axes.set_title('RateBeer Unique Users by Month')
axes.set_xlabel('Months')
axes.set_ylabel('Users')
axes.fmt_xdata = mdates.DateFormatter('%Y-%m')
plt.show()
plot_users_monthly(months, user_counts, current_palette)
users_html = ["""<table>
<tr>
<th>Date</th>
<th>User Counts</th>
</tr>"""]
for i in range(len(months)):
    users_html.append("<tr><td>%s</td><td>%s</td></tr>" % (months[i], "{:,}".format(user_counts[i])))
users_html.append("</table>")
h = HTML(''.join(users_html)); h
current_palette = sns.color_palette("hls", len(top_styles))
def plot_styles_monthly(months, style_counts, current_palette):
fig, axes = plt.subplots(1, figsize=(16, 10))
area_bottoms = [0 for i in range(len(months))]
cp_index = 0
for style in top_styles:
area_tops = [style_counts[style][j] + area_bottoms[j] for j in range(len(months))]
axes.plot(months, area_tops, color=current_palette[cp_index], lw=1, label=('%s' % style))
axes.fill_between(months, area_bottoms, area_tops, facecolor=current_palette[cp_index], alpha=0.8)
area_bottoms = list(area_tops)
cp_index += 1
axes.set_title('RateBeer Style Reviews by Date')
axes.set_xlabel('Dates')
axes.set_ylabel('Reviews')
fig.autofmt_xdate()
axes.fmt_xdata = mdates.DateFormatter('%Y-%m-%d')
handles, labels = axes.get_legend_handles_labels()
axes.legend(handles, labels, loc=8, ncol=2, frameon=True, markerscale=5.0, fontsize='large', title='Beer Styles')
plt.show()
plot_styles_monthly(months, style_counts, current_palette)
styles_html = ['<table><tr>']
styles_html.append('<th></th>') # Blank upper left corner
for month in months:
styles_html.append(
('<th>%s</th>' % month.strftime('%b %Y'))
)
styles_html.append('</tr>')
# First half
for style in top_styles:
styles_html.append(('<tr><td>%s</td>' % style))
for i in range(len(months)):
styles_html.append(('<td>%s</td>' % ('{:,}'.format(style_counts[style][i]))))
styles_html.append('</tr>')
styles_html.append('</table>')
h = HTML(''.join(styles_html)); h
def plot_other_styles_monthly(months, other_count, other_style_trend_line, current_palette, top_N_num):
fig, axes = plt.subplots(1, figsize=(16, 10))
axes.plot(months, other_count, color=current_palette[0], lw=1, label='Reviews')
axes.fill_between(months, 0, other_count, facecolor=current_palette[1], alpha=0.8)
axes.plot(months, np.polyval(other_style_trend_line, range(len(months))), color=current_palette[5], lw=4, alpha=0.75, label='Linear model')
axes.annotate("Profile redesign",
xy=(LAUNCH_DATE, 0),
xytext=(LAUNCH_DATE, 5000),
arrowprops=dict(arrowstyle="->"),
fontsize='large'
)
fig.autofmt_xdate()
axes.set_title(('RateBeer Reviews of "Other" Beer Styles Outside Top %d Styles' % (top_N_num)))
axes.set_xlabel('Months')
axes.set_ylabel('"Other" Style Reviews')
axes.fmt_xdata = mdates.DateFormatter('%Y-%m')
plt.show()
plot_other_styles_monthly(months, other_counts, other_style_trend_line, current_palette, N_styles)
# import statements
import unicodedata
import random
import calendar
import math
import operator
# A variety of helper methods to group data structures according to a key
def groupby_key(data, key_str):
key_map = {}
for datum in data:
key = datum.get(key_str)
key_map[key] = key_map.setdefault(key, [])
key_map[key].append(datum)
return key_map
def groupby_key_count(data, key_str, func=None):
key_map = {}
for datum in data:
key = datum.get(key_str)
try:
key = func(key)
except:
key = key
key_map[key] = key_map.setdefault(key, 0) + 1
return key_map
def group_time_split(user_array, key, func=None):
return [[groupby_key_count(time_per, key, func) for time_per in user] for user in user_array]
# Reservoir sampling given an iterable input and k for the number of items to be sampled
def reservoir_sample(iterator, k):
iterator = iter(iterator)
# fill the reservoir to start
result = [next(iterator) for _ in range(k)]
n = k
for item in iterator:
n += 1
        s = random.randint(0, n - 1)  # uniform over the n items seen so far
if s < k:
result[s] = item
return result
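A self-contained version of the sampler for experimentation — note the draw uses `randint(0, n - 1)` so that after seeing n items each has been kept with probability k/n:

```python
import random

random.seed(13)

def reservoir_sample(iterator, k):
    it = iter(iterator)
    result = [next(it) for _ in range(k)]      # fill the reservoir first
    n = k
    for item in it:
        n += 1
        s = random.randint(0, n - 1)           # uniform over the n items seen so far
        if s < k:
            result[s] = item
    return result

sample = reservoir_sample(range(1000), 5)
print(len(sample), all(0 <= v < 1000 for v in sample))
```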
# Calculates the average using dictionary keys as "values" and dictionary values as "counts"
def avg_from_map(keymap):
count, total, a = 0, 0, None
for key in keymap:
if (key):
count += keymap[key]
total += key * keymap[key]
if (count):
a = total / count
return a
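`avg_from_map` is a weighted mean over a `{value: count}` histogram; spelled out on a tiny example:

```python
keymap = {1.0: 2, 0.5: 2}                  # e.g. two perfect and two half ratings
count = sum(keymap.values())
total = sum(k * v for k, v in keymap.items())
avg = total / count
print(avg)  # 0.75
```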
# Average from user tuple array
def avg_from_user_array_tuple(array):
out = []
for user in array:
user_avg = []
for time in user:
count, total, a = 0, 0, None
for item in time:
count += item[1]
total += item[0] * item[1]
if count > 0:
                a = float(total) / count  # force float division (review lengths are ints)
user_avg.append(a)
out.append(user_avg)
return out
# Parse and read in dataset from reduced_data.txt (as produced by our parsing code)
parsed_red = parse_json('data/reduced_data.txt', normalize=True)
dset = [i for i in parsed_red]
# parse dataset, grouping reviews by username
users = groupby_key(dset,'user/profileName')
user_vals = users.values()
print 'number of users', len(user_vals)
# split each users' group of reviews into two buckets: before and after site changes
user_vals_split = []
split_timestamp = calendar.timegm(datetime(2009,5,1).timetuple())
for i , posts in enumerate(user_vals):
pre = [post for post in posts if post.get('review/timeUnix') < split_timestamp]
post = [post for post in posts if post.get('review/timeUnix') >= split_timestamp]
# Only look at users who already contributed in the prior period
if len(pre) > 0:
user_vals_split.append([pre, post])
# sort reviews according to the number of reviews users contributed before
user_vals_split = sorted(user_vals_split, key=lambda user: len(user[0]))
print 'number of users post split', len(user_vals_split)
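The pre/post split keys off a single UTC cutoff timestamp built with `calendar.timegm`; a minimal sketch of the bucketing with two synthetic reviews:

```python
import calendar
from datetime import datetime

split_ts = calendar.timegm(datetime(2009, 5, 1).timetuple())
reviews = [{'review/timeUnix': split_ts - 10}, {'review/timeUnix': split_ts + 10}]
pre = [r for r in reviews if r['review/timeUnix'] < split_ts]
post = [r for r in reviews if r['review/timeUnix'] >= split_ts]
print(len(pre), len(post))  # 1 1
```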
# sample "top" 5% and "low" 95% of users
users_top = user_vals_split[int(math.floor(len(user_vals_split)*0.95)):]
users_low = user_vals_split[:int(math.floor(len(user_vals_split)*0.95))]
# sample "random" user group to be equal to the "top" group
sample_size = len(users_top)
users_sampled = reservoir_sample(user_vals_split, sample_size)
# Plot utility to plot the users' plus-minus.
# Prints out the mean, median, and standard deviation for the plus-minus.
def plot_diff(user_list, func=lambda x: x, plot=True):
user_pre = [func(user[0]) for user in user_list]
user_post = [func(user[1]) for user in user_list]
np_pre = np.array(user_pre)
np_post = np.array(user_post)
np_pm = np_post - np_pre
print "Mean p/m: ", np.mean(np_pm)
print "Median p/m: ", np.median(np_pm)
print "Std Dev p/m: ", np.std(np_pm)
ind = np.arange(len(np_pm))
if (plot):
fig, ax = plt.subplots()
scatt = ax.scatter(ind, np_pm, c="grey",s=10,edgecolor='none')
mean = ax.axhline(y=np.mean(np_pm),xmin=0,xmax=ind[len(ind) - 1],c="blue",linewidth=0.9, linestyle = '-', zorder=1, label='mean')
median = ax.axhline(y=np.median(np_pm),xmin=0,xmax=ind[len(ind) - 1],c="red",linewidth=0.9,linestyle = '--', zorder=2, label='median')
plt.legend(loc=3, ncol=2)
# review counts
# all users stats
print 'all user review counts'
plot_diff(user_vals_split, lambda x: len(x))
plt.xlabel('User ID (sorted by pre-split contributions)')
plt.ylabel('review after - reviews before')
plt.title('Review Count Plus-Minus')
print
# top users stats
print 'top user review counts', len(users_top)
plot_diff(users_top, lambda x: len(x), False)
print
# low users stats
print 'low user review counts', len(users_low)
plot_diff(users_low, lambda x: len(x), False)
print
# sampled users stats
print 'sampled user review counts', len(users_sampled)
plot_diff(users_sampled, lambda x: len(x), False)
print
# remove users who did not post reviews after the split (for rest of metrics)
users_split_active = [user for user in user_vals_split if len(user[1]) > 0]
users_top_active = users_split_active[int(math.floor(len(users_split_active)*0.95)):]
users_low_active = users_split_active[:int(math.floor(len(users_split_active)*0.95))]
users_sampled_active = reservoir_sample(users_split_active, len(users_top_active))
# average number of styles reviewed before/after change
print 'all users style counts'
styles_all = group_time_split(users_split_active, 'beer/style')
# want number of styles
styles_all = map(lambda x: [len(x[0].keys()), len(x[1].keys())], styles_all)
plot_diff(styles_all)
plt.xlabel('User ID (sorted by pre-split contributions)')
plt.ylabel('avg beer styles after - avg beer styles before')
plt.title('Average Beer Style Count Plus-Minus')
print
print 'top users style counts'
styles_top = group_time_split(users_top_active, 'beer/style')
# want number of styles
styles_top = map(lambda x: [len(x[0].keys()), len(x[1].keys())], styles_top)
plot_diff(styles_top, lambda x: x, False)
print
print 'low users style counts'
styles_low = group_time_split(users_low_active, 'beer/style')
# want number of styles
styles_low = map(lambda x: [len(x[0].keys()), len(x[1].keys())], styles_low)
plot_diff(styles_low, lambda x: x, False)
print
print 'sample users style counts'
styles_samp = group_time_split(users_sampled_active, 'beer/style')
# want number of styles
styles_samp = map(lambda x: [len(x[0].keys()), len(x[1].keys())], styles_samp)
plot_diff(styles_samp, lambda x: x, False)
print
# average overall ratings before/after change
print 'all users rating avg'
rates_all = group_time_split(users_split_active, 'review/overall', lambda x: float(x))
rates_all = map(lambda x: [[(rating, x[0][rating]) for rating in x[0].keys()], [(rating, x[1][rating]) for rating in x[1].keys()]], rates_all)
avg_rates = avg_from_user_array_tuple(rates_all)
plot_diff(avg_rates)
plt.xlabel('User ID (sorted by pre-split contributions)')
plt.ylabel('avg overall rating after - avg overall rating before')
plt.title('Average Overall Rating Plus-Minus (Max Rating = 1)')
print
print 'top users rating avg'
rates_top = group_time_split(users_top_active, 'review/overall', lambda x: float(x))
rates_top = map(lambda x: [[(rating, x[0][rating]) for rating in x[0].keys()], [(rating, x[1][rating]) for rating in x[1].keys()]], rates_top)
avg_rates = avg_from_user_array_tuple(rates_top)
plot_diff(avg_rates, lambda x: x, False)
print
print 'low users rating avg'
rates_low = group_time_split(users_low_active, 'review/overall', lambda x: float(x))
rates_low = map(lambda x: [[(rating, x[0][rating]) for rating in x[0].keys()], [(rating, x[1][rating]) for rating in x[1].keys()]], rates_low)
avg_rates = avg_from_user_array_tuple(rates_low)
plot_diff(avg_rates, lambda x: x, False)
print
print 'sampled users rating avg'
rates_samp = group_time_split(users_sampled_active, 'review/overall', lambda x: float(x))
rates_samp = map(lambda x: [[(rating, x[0][rating]) for rating in x[0].keys()], [(rating, x[1][rating]) for rating in x[1].keys()]], rates_samp)
avg_rates = avg_from_user_array_tuple(rates_samp)
styles_plot = plot_diff(avg_rates, lambda x: x, False)
print
# average review lengths before/after change
print 'all users review length avg'
reviews_all = group_time_split(users_split_active, 'review/text', lambda x: float(x))
reviews_all = map(lambda x: [[(len(review.split()), x[0][review]) for review in x[0].keys()], [(len(review.split()), x[1][review]) for review in x[1].keys()]], reviews_all)
avg_review_len = avg_from_user_array_tuple(reviews_all)
styles_plot = plot_diff(avg_review_len)
plt.xlabel('User ID (sorted by pre-split contributions)')
plt.ylabel('avg review length after - avg review length before')
plt.title('Average Review Length Plus-Minus')
print
print 'top users review length avg'
reviews_top = group_time_split(users_top_active, 'review/text', lambda x: float(x))
reviews_top = map(lambda x: [[(len(review.split()), x[0][review]) for review in x[0].keys()], [(len(review.split()), x[1][review]) for review in x[1].keys()]], reviews_top)
avg_review_len = avg_from_user_array_tuple(reviews_top)
styles_plot = plot_diff(avg_review_len, lambda x: x, False)
print
print 'low users review length avg'
reviews_low = group_time_split(users_low_active, 'review/text', lambda x: float(x))
reviews_low = map(lambda x: [[(len(review.split()), x[0][review]) for review in x[0].keys()], [(len(review.split()), x[1][review]) for review in x[1].keys()]], reviews_low)
avg_review_len = avg_from_user_array_tuple(reviews_low)
styles_plot = plot_diff(avg_review_len, lambda x: x, False)
print
print 'sampled users review length avg'
reviews_samp = group_time_split(users_sampled_active, 'review/text', lambda x: float(x))
reviews_samp = map(lambda x: [[(len(review.split()), x[0][review]) for review in x[0].keys()], [(len(review.split()), x[1][review]) for review in x[1].keys()]], reviews_samp)
avg_review_len = avg_from_user_array_tuple(reviews_samp)
styles_plot = plot_diff(avg_review_len, lambda x: x, False)
print
# build feature vectors
x, y_after, y_diff = [], [], []
for user in user_vals_split:
pre_char = user[0]
style_map = groupby_key_count(pre_char, 'beer/style')
rating_map = groupby_key_count(pre_char, 'review/overall')
review_map = groupby_key_count(pre_char, 'review/text', lambda x : len(x.split()))
total_count = len(pre_char)
style_count = len(style_map.keys())
avg_rating = avg_from_map(rating_map)
rating_count = len(rating_map.keys())
avg_review = avg_from_map(review_map)
review_count = len(review_map.keys())
# throw away points that don't have rating / review
if (avg_rating is not None and avg_review is not None):
x.append([total_count, style_count, avg_rating, rating_count, avg_review, review_count])
y_after.append(len(user[1]))
y_diff.append(len(user[1])-len(user[0]))
data_pool = zip(x,y_diff)
def balance_pool(pool):
"""Given a pool of year-key formatted unbalanced outcome data, return a balanced set
where positive outcomes have an equal number of occurrences to negative outcomes.
The larger set is randomly sampled down to size using reservoir_sample."""
newpool = []
neg = []
pos = []
for user in pool:
if user[-1] < 0:
neg.append(user)
else:
pos.append(user)
minlen = min(len(pos), len(neg))
for elem in reservoir_sample(neg, minlen):
newpool.append(elem)
for elem in reservoir_sample(pos, minlen):
newpool.append(elem)
return newpool
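balance_pool relies on a reservoir_sample helper defined earlier in the notebook (not shown in this excerpt). A minimal sketch of what it presumably does — uniformly sample k items from a sequence — assuming the (items, k) signature used above:

```python
import random

def reservoir_sample(items, k):
    """Uniformly sample k items from a sequence (standard reservoir sampling)."""
    reservoir = []
    for i, item in enumerate(items, start=1):
        if len(reservoir) < k:
            reservoir.append(item)
        else:
            # Replace a kept item with probability k/i
            j = random.randint(0, i - 1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(100), 10)
```

Each item of the stream ends up in the sample with probability k/n, without needing to know n up front.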
def create_train_test(pool, test, train):
"""Randomly split the data pool created in create_data_pool into an 80/20 split
between training data and testing data.
Shuffles the pool and splits 80/20 between training and test.
Should only be run once, as it will return different results between runs."""
random.shuffle(pool)
ind = int(len(pool) * 0.8)
train += pool[:ind]
test += pool[ind:]
# calculate pearson correlation coefficient for each variable
# use to predict plus-minus and actual number of reviews after
import scipy.stats
np_x = np.array(x)
np_y_after = np.array(y_after)
np_y_diff = np.array(y_diff)
# Index to Pre-Split Feature Mapping
# 0 - total number of reviews
# 1 - number of styles reviewed
# 2 - average overall rating
# 3 - number of distinct ratings
# 4 - average review length
# 5 - number of distinct review lengths
print "Pearson Correlation Coefficients Against Post-Split Number of Reviews"
for i in xrange(len(np_x[0])):
print "pearson for char", i, ":", scipy.stats.pearsonr(np_x[:,i], np_y_after)
print
print "Pearson Correlation Coefficients Against Number of Reviews Plus-Minus"
for i in xrange(len(np_x[0])):
print "Difference -- pearson for char", i, ":", scipy.stats.pearsonr(np_x[:,i], np_y_diff)
slope, intercept, r_value, p_value, std_err= scipy.stats.linregress(np_x[:,0], np_y_after)
print "r-squared:", r_value**2
fit = np.polyfit(np_x[:,0],np_y_after,1)
fit_fn = np.poly1d(fit)
_ = plt.plot(np_x[:,0], np_y_after, 'b.', np_x[:,0], fit_fn(np_x[:,0]), '--k')
plt.ylim(0)
plt.xlabel('# of Pre-Split Reviews')
plt.ylabel('# of Post-Split Reviews')
plt.title('Linear Regression of Pre/Post Split Review Count')
_ = plt.plot(np_x[:,1], np_y_after, 'b.')
plt.ylim(0)
plt.xlabel('# of Pre-Split Styles Reviewed')
plt.ylabel('# of Post-Split Reviews')
plt.title('Correlation btwn Pre Styles Types and Post Review Count')
_= plt.plot(np_x[:,2], np_y_after, 'b.')
plt.ylim(0)
plt.xlabel('Average Pre-Split Overall Rating of Reviews')
plt.ylabel('# of Post-Split Reviews')
plt.title('Correlation btwn Pre Average Rating and Post Review Count')
_= plt.plot(np_x[:,4], np_y_after, 'b.')
plt.ylim(0)
plt.xlabel('Average Pre-Split Review Length (Word Count)')
plt.ylabel('# of Post-Split Reviews')
plt.title('Correlation btwn Average Review Length and Post Review Count')
bal_data_pool = balance_pool(data_pool)
test, train = [], []
create_train_test(bal_data_pool, test, train)
train = [[user[0] for user in train], [1 if user[1] > 0 else -1 for user in train]]
test = [[user[0] for user in test], [1 if user[1] > 0 else -1 for user in test]]
from sklearn.preprocessing import StandardScaler
from sklearn import svm, neighbors, qda, metrics, cross_validation
from sklearn.pipeline import Pipeline
clf = Pipeline([('Scaler', StandardScaler()),
# ('Log-Reg', linear_model.LogisticRegression(penalty='l2', dual=True))])
# ('Log-Reg', linear_model.LogisticRegression(penalty='l2', dual=False))])
# ('SVC-linear', svm.SVC(kernel='linear'))])
('SVC-rbf', svm.SVC(kernel='rbf'))])
cv = cross_validation.KFold(len(train[0]), n_folds=4, shuffle=True)
scores = cross_validation.cross_val_score(clf, train[0], train[1], cv=cv)
print "Cross Validation Scores:", scores
print "Average Cross Validation Score:", np.average(scores)
print
# performs test on selected model
clf = clf.fit(train[0], train[1])
predicted = clf.predict(test[0])
print "Model Accuracy:", metrics.accuracy_score(test[1], predicted)
print "Confusion Matrix"
print metrics.confusion_matrix(test[1], predicted)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Activity Stream and Top Styles
Step3: Data Source
Step5: Data Processing
Step9: Helper functions
Step13: RateBeer Reviews by Date
Step14: Site Wide Results
Step15: RateBeer Reviews over Time
Step19: RateBeer Unique Users by Month
Step20: RateBeer Beer Style Review Trends by Month
Step21: RateBeer Beer Style Review Trends by Month (cont.)
Step26: User Level Results
Step28: High-level Feature Trends
Step29: Review Counts
Step30: Average Number of Beer Styles Reviewed
Step31: Average Overall Rating
Step32: Average Review Length
Step35: Explore Potential Correlation
Step37: Pearson Correlation Coefficient
Step38: In order to visualize this relationship, we plotted these two features and fit a linear line against them. The result of this operation is shown below.
Step39: While performing the linear correlation analysis, we noticed several interesting visually-identifiable, non-linear trends when plotting the following variables against the # of post-split review count
Step40: Explore Binary Classifier
|
6,120
|
<ASSISTANT_TASK:>
Python Code:
import pg8000
conn = pg8000.connect(database="homework2")
conn.rollback()
cursor = conn.cursor()
statement = "SELECT movie_title, release_date FROM uitem WHERE scifi = 1 AND horror = 1 ORDER BY release_date DESC;"
cursor.execute(statement)
for row in cursor:
print(row[0])
cursor = conn.cursor()
statement = "SELECT count(*) from uitem WHERE musical = 1 or childrens =1;"
cursor.execute(statement)
for row in cursor:
print(row[0])
cursor = conn.cursor()
statement = "SELECT uuser.occupation, count(*) FROM uuser GROUP BY occupation HAVING count(*) > 50;"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
cursor = conn.cursor()
statement = "SELECT distinct(uitem.movie_title) FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE udata.rating = 5 AND uitem.documentary = 1 AND uitem.release_date < '1992-01-01';"
cursor.execute(statement)
for row in cursor:
print(row[0])
cursor = conn.cursor()
statement = "SELECT uitem.movie_title, avg(udata.rating) FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE uitem.horror = 1 GROUP BY uitem.movie_title HAVING count(udata.rating) >= 10 ORDER BY avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
cursor = conn.cursor()
statement = ""
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Step5: Problem set 3
Step6: Problem set 4
Step7: BONUS
|
6,121
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from numpy import linalg
#note z^2 doesn't affect our answer
a_matrix = [[6, 4,-1],\
[1, -1, 0],\
[2, -2, -1]]
b_matrix = [0, 6, -4]
#convert them to numpy arrays/matrices
np_a_matrix = np.array(a_matrix)
np_b_matrix = np.array(b_matrix).transpose()
#Solve the problem
np_a_inv = linalg.inv(np_a_matrix)
np_x_matrix = np_a_inv.dot(np_b_matrix)
#print the solution, making sure to use z
print(np_x_matrix[0], np_x_matrix[1], np.sqrt(np_x_matrix[2]))
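The solution above can be sanity-checked, and np.linalg.solve is a more direct (and numerically preferable) route than explicitly inverting the matrix; a quick verification sketch:

```python
import numpy as np

a = np.array([[6, 4, -1],
              [1, -1, 0],
              [2, -2, -1]])
b = np.array([0, 6, -4])

# solve() avoids forming the explicit inverse
x = np.linalg.solve(a, b)
```

Substituting x back in (a.dot(x) == b) confirms the answer, and x[2] is the value of z**2, so z = sqrt(x[2]) as in the cell above.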
c_mat = np.array([[-2, 1,1], [2, -4, 0], [0, 3, -1]])
e_l, e_v = linalg.eig(c_mat)
for i in range(3):
print(e_l[i], e_v[:, i])
print(c_mat.dot(e_v[:,2]))
#3.1 Answer
from ipywidgets import widgets
from IPython import display
button = widgets.Button(description="Print")
text = widgets.Text('')
def clear_and_print(b):
display.clear_output()
print(text.value)
button.on_click(clear_and_print)
display.display(text)
display.display(button)
#3.2 Answer
import random
def randint(b):
display.clear_output()
print(random.randint(0,10))
button = widgets.Button(description='random')
button.on_click(randint)
display.display(button)
#3.3 Answer
my_strings = ['(⊙ᗜ⊙)', '╚═╏ ⇀ ͜ر ↼ ╏═╝', 'ლ(́◉◞౪◟◉‵ლ)', 'ლ(ʘ̆〰ʘ̆)ლ']
def rand_string(b):
display.clear_output()
print(my_strings[random.randint(0, len(my_strings) - 1)])
button = widgets.Button(description='Raise your donger!')
button.on_click(rand_string)
display.display(button)
#3.4 Answer
mat = np.array([[3, 2, -6], [2, 6, 4], [3, 4, 0]])
e_l, e_v = linalg.eigh(mat)
def print_eig(i):
display.display(display.Latex('''
$$\\left[\\begin{{array}}{{c}}
{0:0.5}\\\\
{1:0.5}\\\\
{2:0.5}\\
\end{{array}}\\right]
$$'''.format(e_v[i, 0], e_v[i, 1], e_v[i, 2])))
widgets.interact(print_eig, i=(0,2,1))
#4.1 Answer
from scipy.integrate import quad
def fxn(x):
return np.sin(x)**2
ans, err = quad(fxn, 0, 1)
display.Latex('$$\int_0^1 \sin^2(x)\, dx = {0:.3}$$'.format(ans))
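Since the antiderivative of sin^2(x) is x/2 - sin(2x)/4, the quad result can be cross-checked against the closed form; a quick sketch:

```python
import numpy as np
from scipy.integrate import quad

ans, err = quad(lambda x: np.sin(x)**2, 0, 1)
closed_form = 0.5 - np.sin(2.0) / 4.0  # x/2 - sin(2x)/4 evaluated at x = 1
```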
#4.2 Answer
ans,_ = quad(lambda x: x**-2, 0, np.infty)
display.Latex('$$\int_0^\infty x^{{-2}}\, dx = {:.3}$$'.format(ans))
#4.3 Answer
def pdf(x, mu=-4, sig=2):
return 1 / np.sqrt(sig**2 * 2 * np.pi) * np.exp(- (x - mu)**2 / (2 * sig**2))
ans,_ = quad(pdf, -2, 2)
display.Latex('$$\\int_{{-2}}^{{2}} \\frac{{1}}{{\\sigma\\sqrt{{2\\pi}}}} e^{{-\\frac{{(x - \\mu)^2}}{{2\\sigma^2}}}}\\, dx = {:.3}$$'.format(ans))
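The hand-written Gaussian pdf can be cross-checked against scipy's normal distribution: the integral from -2 to 2 should equal the difference of CDFs. A quick sketch with the same mu = -4, sigma = 2:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu, sig = -4.0, 2.0
pdf = lambda x: np.exp(-(x - mu)**2 / (2 * sig**2)) / np.sqrt(2 * np.pi * sig**2)
ans, _ = quad(pdf, -2, 2)
# P(-2 < X < 2) for X ~ N(mu, sig)
expected = stats.norm.cdf(2, loc=mu, scale=sig) - stats.norm.cdf(-2, loc=mu, scale=sig)
```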
data_5_x = [0.0, 0.2857, 0.5714, 0.8571, 1.1429, 1.4286, 1.7143, 2.0, 2.2857, 2.5714, 2.8571, 3.1429, 3.4286, 3.7143, 4.0, 4.2857, 4.5714, 4.8571, 5.1429, 5.4286, 5.7143, 6.0, 6.2857, 6.5714, 6.8571, 7.1429, 7.4286, 7.7143, 8.0, 8.2857, 8.5714, 8.8571, 9.1429, 9.4286, 9.7143, 10.0, 10.2857, 10.5714, 10.8571, 11.1429, 11.4286, 11.7143, 12.0, 12.2857, 12.5714, 12.8571, 13.1429, 13.4286, 13.7143, 14.0]
data_5_y = [67.9925, 67.5912, 67.4439, 66.7896, 66.4346, 66.3176, 65.7527, 65.1487, 65.7247, 65.1831, 64.5981, 64.5213, 63.6746, 63.9106, 62.6127, 63.3892, 62.6511, 62.601, 61.9718, 60.5553, 61.5862, 61.3173, 60.5913, 59.7061, 59.6535, 58.9301, 59.346, 59.2083, 60.3429, 58.752, 57.6269, 57.5139, 59.0293, 56.7979, 56.2996, 56.4188, 57.1257, 56.1569, 56.3077, 55.893, 55.4356, 56.7985, 55.6536, 55.8353, 54.4404, 54.2872, 53.9584, 53.3222, 53.2458, 53.7111]
#5.1 Answer
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
deriv = []
for i in range(1, len(data_5_x) - 1):
forward = (data_5_y[i + 1] - data_5_y[i]) / (data_5_x[i + 1] - data_5_x[i])
backward = (data_5_y[i] - data_5_y[i - 1]) / (data_5_x[i] - data_5_x[i - 1])
deriv.append((forward + backward) / 2)
plt.plot(data_5_x[1:-1], deriv)
plt.show()
#5.2 Answer
x = np.array(data_5_x)
y = np.array(data_5_y)
forward = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
backward = forward
deriv = (forward[:-1] + backward[1:]) / 2.
plt.plot(x[1:-1], deriv)
plt.show()
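For uniformly spaced points, the averaged forward/backward differences above coincide with central differences, which is what np.gradient computes at interior points. A quick sanity check with stand-in data (the notebook's data_5_x is uniformly spaced; np.sin is used here in place of data_5_y):

```python
import numpy as np

x = np.linspace(0.0, 14.0, 50)   # uniformly spaced, like data_5_x
y = np.sin(x)                    # stand-in for data_5_y
forward = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
central = (forward[:-1] + forward[1:]) / 2.0
# np.gradient uses central differences at interior points
interior = np.gradient(y, x)[1:-1]
```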
#5.3 Answer
area = 0
for i in range(len(data_5_x) - 1):
width = data_5_x[i + 1] - data_5_x[i]
area += 0.5 * width * (data_5_y[i + 1] + data_5_y[i])
print(area)
#5.4 Answer
area = 0.5 * np.sum( (x[1:] - x[:-1]) * (y[1:] + y[:-1]) )
print(area)
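The vectorized trapezoid sum above is exactly what np.trapz computes; a quick equivalence check with stand-in arrays (a linear function, for which the trapezoid rule is exact):

```python
import numpy as np

x = np.linspace(0.0, 14.0, 50)  # stand-in for data_5_x
y = 70.0 - 1.05 * x             # stand-in for data_5_y
manual = 0.5 * np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]))
builtin = np.trapz(y, x)
```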
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Chemical Reaction (6 Points)
Step2: 2.4 Answer
Step3: 3. Python Practice (20 Points)
Step4: 4. Integration (12 Points)
Step5: 5. Numerical Integration/Differentiation (12 Points)
|
6,122
|
<ASSISTANT_TASK:>
Python Code:
% matplotlib inline
from pylab import *
import numpy as np
import scipy.stats as stats
import datetime
from netCDF4 import netcdftime
from netCDF4 import Dataset as netcdf # netcdf4-python module
from netcdftime import utime
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import matplotlib.dates as mdates
from matplotlib.dates import MonthLocator, WeekdayLocator, DateFormatter
import matplotlib.ticker as ticker
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 6
import warnings
warnings.simplefilter('ignore')
infile = r'data\V22_GPCP.1979-2010.nc'
ncset = netcdf(infile)
ncset.set_auto_mask(False)
lons = ncset['lon'][:]
lats = ncset['lat'][:]
nctime = ncset['time'][:]
t_unit = ncset['time'].units
pr = ncset['PREC'][:]
try :
t_cal = ncset['time'].calendar
except AttributeError : # Attribute doesn't exist
t_cal = u"gregorian" # or standard
undef = -99999.0
pr[pr==undef] = np.nan
nt,nlat,nlon = pr.shape
ngrd = nlat*nlon
utime = netcdftime.utime(t_unit, calendar = t_cal)
datevar = utime.num2date(nctime)
datevar[0:5]
pr_grd = pr.reshape((nt, ngrd), order='F')
pr_rate = np.empty((ngrd,1))
pr_rate[:,:] = np.nan
pr_val = np.empty((ngrd,1))
pr_val[:,:] = np.nan
for i in range(ngrd):
y = pr_grd[:,i]
y0 = y[~np.isnan(y)]
x = np.linspace(1,len(y0), len(y0))
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y0)
pr_rate[i,0] = slope*120.0
pr_val[i,0] = p_value
pr_rate = pr_rate.reshape((nlat,nlon), order='F')
pr_val = pr_val.reshape((nlat,nlon), order='F')
#pr_rate = np.ma.masked_array(pr_rate, mask=(pr_val<0.05))
m = Basemap(projection='cyl', llcrnrlon=min(lons), llcrnrlat=min(lats),
urcrnrlon=max(lons), urcrnrlat=max(lats))
x, y = m(*np.meshgrid(lons, lats))
clevs = np.linspace(-0.6, 0.6, 25)
cs = m.contourf(x, y, pr_rate.squeeze(), clevs, cmap=plt.cm.RdBu_r)
m.drawcoastlines()
cb = m.colorbar(cs)
plt.title('GPCP v2.2 Changing Rate (mm/day/decade)', fontsize=16)
pr_ann_clm = np.nanmean(pr, axis=0)
m = Basemap(projection='cyl', llcrnrlon=min(lons), llcrnrlat=min(lats),
urcrnrlon=max(lons), urcrnrlat=max(lats))
x, y = m(*np.meshgrid(lons, lats))
clevs = np.linspace(0.0, 12.0, 25)
cs = m.contourf(x, y, pr_ann_clm.squeeze(), clevs, cmap=plt.cm.RdBu_r)
m.drawcoastlines()
cb = m.colorbar(cs)
plt.title('GPCP v2.2 Annual Mean (mm/day)', fontsize=16)
lonx, latx = np.meshgrid(lons, lats)
weights = np.cos(latx * np.pi / 180.)
pr_glb_avg = np.zeros(nt)
for it in np.arange(nt):
pone = pr[it,:, :]
pone = np.ma.masked_array(pone, mask=np.isnan(pone))
pr_glb_avg[it] = np.average(pone, weights=weights)
glb_avg = np.mean(pr_glb_avg)
print(glb_avg)
fig, ax = plt.subplots(1, 1, figsize=(15,6))
ax.plot(datevar, pr_glb_avg, color='b', linewidth=2)
ax.axhline(glb_avg, linewidth=2, color='r', label="Global areal mean: " + str(np.round(glb_avg*100)/100))
ax.legend()
ax.set_title('GPCP areal mean (1979-2010)', fontsize=16)
ax.set_xlabel('Month/Year #', fontsize=12)
ax.set_ylabel('Preicipitation [mm/day]', fontsize=12)
ax.set_ylim(2.4, 2.9)
ax.minorticks_on()
# rotate and align the tick labels so they look better
fig.autofmt_xdate()
# use a more precise date string for the x axis locations in the toolbar
ax.fmt_xdata = mdates.DateFormatter('%Y')
pr_ann_clm = np.nanmean(pr, axis=0)
pr_ann_zonal = np.nanmean(pr_ann_clm, axis=1)
fig, ax = plt.subplots(1, 1, figsize=(15,6))
ax.plot(lats, pr_ann_zonal, color='b', linewidth=2)
ax.axhline(glb_avg, linewidth=2, color='r', label="Global areal mean: " + str(np.round(glb_avg*100)/100))
ax.legend()
ax.set_title('GPCP zonal mean (1979-2010)', fontsize=16)
ax.set_xlabel('Month/Year #', fontsize=12)
ax.set_ylabel('Preicipitation [mm/day]', fontsize=12)
ax.set_ylim(0.0, 6.0)
ax.set_xlim(-90, 90)
ax.minorticks_on()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read monthly precipitation data
Step2: 2.2 Parse times
Step3: 3. Trend analysis
Step4: 3.2 Visualize trend
Step5: 4. Climatological annual mean precipitation
Step6: 5. Areal weighted mean of precipitation rate
Step7: 5.2 Visualize areal weighted mean
Step8: 6. Zonal mean precipitation
|
6,123
|
<ASSISTANT_TASK:>
Python Code:
## Example of a simple python code cell
print "Hello little world"
a = 1
## The last statement in a cell prints its value
a
## (this is sometimes a little confusing - add a pass statement to get rid of this !)
#pass
print "Run number {}".format(a)
a += 1
## The simplest possible python program
print "You can run but you can't hide"
## This is a hidden cell !
## You can't usually see it but it still runs if you execute the notebook
print "Yes you can !"
print "You can hide and you can't run !"
# Try it !!
## Because b hasn't been defined.
try:
print "Run number {}".format(c)
except:
print "Run number 1"
c = 1
c += 1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When you run a code cell, it is just the same as typing all the code into the interpreter. If you run a cell twice it is the same as if you typed everything twice ... the notebook remembers ! You can run some of a notebook or all of it, and you can run cells out of order. This is great for experimenting and getting things right, but be careful, this can break things easily.
Step2: Hidden cells
Step3: Exercise cells
Step4: A more interesting exercise
|
6,124
|
<ASSISTANT_TASK:>
Python Code:
import os
import random
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow import keras
from learntools.core import binder; binder.bind(globals())
from learntools.embeddings.ex2_factorization import *
#_RM_
input_dir = '../input/movielens_preprocessed'
#_UNCOMMENT_
#input_dir = '../input/movielens-preprocessing'
df = pd.read_csv(os.path.join(input_dir, 'rating.csv'), usecols=['userId', 'movieId', 'rating', 'y'])
movies = pd.read_csv(os.path.join(input_dir, 'movie.csv'), index_col=0)
#_RM_
model_dir = '.'
#_UNCOMMENT_
#model_dir = '../input/matrix-factorization'
model_fname = 'factorization_model.h5'
model_path = os.path.join(model_dir, model_fname)
model = keras.models.load_model(model_path)
print("Setup complete!")
# Id of the user for whom we're predicting ratings
uid = 26556
candidate_movies = movies[
movies.title.str.contains('Naked Gun')
| (movies.title == 'The Sisterhood of the Traveling Pants')
| (movies.title == 'Lilo & Stitch')
].copy()
preds = model.predict([
[uid] * len(candidate_movies), # User ids
candidate_movies.index, # Movie ids
])
# Because our model was trained on a 'centered' version of rating (subtracting the mean, so that
# the target variable had mean 0), to get the predicted star rating on the original scale, we need
# to add the mean back in.
row0 = df.iloc[0]
offset = row0.rating - row0.y
candidate_movies['predicted_rating'] = preds + offset
candidate_movies.head()[ ['movieId', 'title', 'predicted_rating'] ]
def recommend(model, user_id, n=5):
Return a DataFrame with the n most highly recommended movies for the user with the
given id. (Where most highly recommended means having the highest predicted ratings
according to the given model).
The returned DataFrame should have a column for movieId and predicted_rating (it may also have
other columns).
pass
#_COMMENT_IF(PROD)_
part1.hint()
#_COMMENT_IF(PROD)_
part1.solution()
#%%RM_IF(PROD)%%
# Correct (solution code)
def recommend(model, user_id, n=5):
all_movie_ids = movies.index
preds = model.predict([
np.repeat(user_id, len(all_movie_ids)),
all_movie_ids,
])
# Add back the offset calculated earlier, to 'uncenter' the ratings, and get back to a [0.5, 5] scale.
movies.loc[all_movie_ids, 'predicted_rating'] = preds + offset
reccs = movies.sort_values(by='predicted_rating', ascending=False).head(n)
return reccs
part1.check()
recommend(model, 26556)
uid = 26556
user_ratings = df[df.userId==uid]
movie_cols = ['movieId', 'title', 'genres', 'year', 'n_ratings', 'mean_rating']
user_ratings.sort_values(by='rating', ascending=False).merge(movies[movie_cols], on='movieId')
#_COMMENT_IF(PROD)_
part2.solution()
#_COMMENT_IF(PROD)_
part3.solution()
def recommend_nonobscure(model, user_id, n=5, min_ratings=1000):
"""Return a DataFrame with the n movies which the given model assigns the highest
predicted ratings for the given user, *limited to movies with at least the given
threshold of ratings*."""
pass
#_COMMENT_IF(PROD)_
part4.hint()
#_COMMENT_IF(PROD)_
part4.solution()
#%%RM_IF(PROD)%%
# Correct (solution code)
def recommend_nonobscure(model, user_id, n=5, min_ratings=1000):
# Add predicted_rating column if we haven't already done so.
if 'predicted_rating' not in movies.columns:
all_movie_ids = df.movieId.unique()
preds = model.predict([
np.repeat(user_id, len(all_movie_ids)),
all_movie_ids,
])
# Add back the offset calculated earlier, to 'uncenter' the ratings, and get back to a [0.5, 5] scale.
movies.loc[all_movie_ids, 'predicted_rating'] = preds + offset
nonobscure_movie_ids = movies.index[movies.n_ratings >= min_ratings]
return movies.loc[nonobscure_movie_ids].sort_values(by='predicted_rating', ascending=False).head(n)
part4.check()
recommend_nonobscure(model, uid)
#_COMMENT_IF(PROD)_
part5.solution()
movie_embedding_size = user_embedding_size = 8
user_id_input = keras.Input(shape=(1,), name='user_id')
movie_id_input = keras.Input(shape=(1,), name='movie_id')
movie_r12n = keras.regularizers.l2(1e-6)
user_r12n = keras.regularizers.l2(1e-7)
user_embedded = keras.layers.Embedding(df.userId.max()+1, user_embedding_size,
embeddings_regularizer=user_r12n,
input_length=1, name='user_embedding')(user_id_input)
movie_embedded = keras.layers.Embedding(df.movieId.max()+1, movie_embedding_size,
embeddings_regularizer=movie_r12n,
input_length=1, name='movie_embedding')(movie_id_input)
dotted = keras.layers.Dot(2)([user_embedded, movie_embedded])
out = keras.layers.Flatten()(dotted)
l2_model = keras.Model(
inputs = [user_id_input, movie_id_input],
outputs = out,
)
#_RM_
model_dir = '.'
#_UNCOMMENT_
#model_dir = '../input/regularized-movielens-factorization-model'
model_fname = 'movie_svd_model_8_r12n.h5'
model_path = os.path.join(model_dir, model_fname)
l2_model = keras.models.load_model(model_path)
# Use the recommend() function you wrote earlier to get the 5 best recommended movies
# for user 26556, and assign them to the variable l2_reccs.
l2_reccs = []
l2_reccs
#_COMMENT_IF(PROD)_
part6.solution()
#%%RM_IF(PROD)%%
# Bad solution (wrong model)
l2_reccs = recommend(model, uid)
part6.check()
l2_reccs
#%%RM_IF(PROD)%%
# Correct (soln code)
l2_reccs = recommend(l2_model, 26556)
part6.check()
l2_reccs
uid = 26556
obscure_reccs = recommend(model, uid)
obscure_mids = obscure_reccs.index
preds = l2_model.predict([
np.repeat(uid, len(obscure_mids)),
obscure_mids,
])
recc_df = movies.loc[obscure_mids].copy()
recc_df['l2_predicted_rating'] = preds + offset
recc_df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step3: Suppose we're interested in the somewhat more open-ended problem of generating recommendations. i.e. given some user ID and some number k, we need to generate a list of k movies we think the user will enjoy.
Step4: Part 2
Step5: Do these recommendations seem sensible? If you'd like a reminder of user 26556's tastes, run the cell below to see all their ratings (in descending order).
Step6: Review our top-recommended movies. Are they reasonable? If not, where did we go wrong? You may also find it interesting to look at
Step7: Part 3
Step9: Part 4
Step10: Run the cell below to take a look at our new recommended movies. Did this fix our problem? Do we get better results with a different threshold?
Step11: Part 5
Step12: Part 6
Step13: Training this model for a decent number of iterations takes around 15 minutes, so to save some time, I have an already trained model you can load from disk by running the cell below.
Step14: (If you're curious, you can check out the kernel where I trained this model here. You may notice that, aside from whether the addition of regularization improves the subjective quality of our recommendations, it already has the benefit of improving our validation error, by reducing overfitting.)
Step15: What do you think this model's predicted scores will look like for the 'obscure' movies that our earlier model highly recommended?
|
6,125
|
<ASSISTANT_TASK:>
Python Code:
# Array
import numpy as np
x0 = np.array(12)
x0
x1 = np.array([12, 3, 6, 14])
x1
x1.ndim
x2 = np.array([[5, 78, 2, 34, 0],
[6, 79, 3, 35, 1],
[7, 80, 4, 36, 2]])
x2.ndim
x3 = np.array([[[5, 78, 2, 34, 0],
[6, 79, 3, 35, 1],
[7, 80, 4, 36, 2]],
[[5, 78, 2, 34, 0],
[6, 79, 3, 35, 1],
[7, 80, 4, 36, 2]],
[[5, 78, 2, 34, 0],
[6, 79, 3, 35, 1],
[7, 80, 4, 36, 2]]])
x3.ndim
print("Dimensión:", x3.ndim)
print("Forma", x3.shape)
print("Tipo de datos: ", x3.dtype)
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print(train_images.ndim)
print(train_images.shape)
print(train_images.dtype)
digit = train_images[9]
import matplotlib.pyplot as plt
plt.imshow(digit, cmap=plt.cm.binary)
plt.show()
my_slice = train_images[10:100]
print(my_slice.shape)
my_slice = train_images[10:100, :, :]
my_slice.shape
my_slice = train_images[10:100, 0:28, 0:28]
my_slice.shape
my_slice2 = train_images[10:100, 0:14, 0:14]
my_slice2.shape
batch1 = train_images[:128]
batch2 = train_images[128:256]
def Exp(a,b):
exp = a**b
print(exp)
return
Exp(2,2)
def Exp(a,b):
return a**b
def Exp2(a,b):
return a**b,a**b+1,a**b+2
print(Exp(2,2),type(Exp(2,2)))
print(Exp2(2,2),type(Exp2(2,2)))
def mayor(vecIn):
max=vecIn[0]
for i in vecIn:
max=i if i>max else max
return max
# return np.array(vecIn).max()
mayor([1,5,4,9,8,3,4,5,78])
class Auto():
marca = ''
color = ''
Mi_auto = Auto()
Mi_auto.marca = 'ferrari'
Mi_auto.color = 'rojo'
print(Mi_auto.marca)
print(Mi_auto.color)
class Persona():
nombre = ''
edad = ''
def print_nombre(self):
print('nombre:',self.nombre)
def print_edad(self):
print('edad:',self.edad)
Person1 = Persona()
Person1.nombre = 'carolina'
Person1.edad = 20
Person1.print_nombre()
Person1.print_edad()
class Persona:
nombre = ''
edad = ''
def print_informacion(self, nombre, edad):
print('nombre:',self.nombre)
print('edad:',self.edad)
Person2 = Persona()
Person2.nombre = 'carlos'
Person2.edad = '30'
Person2.print_informacion(Person2.nombre, Person2.edad)
class Person:
def __init__(self,n,e):
self.nombre = n
self.edad = e
def __str__(self):
return self.nombre
Person3 = Person('mario','55')
print('nombre:',Person3.nombre)
print('edad:',Person3.edad)
print(Person3)
from tensorflow import keras
from keras.datasets import fashion_mnist
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
X_train_full.shape
X_train_full.dtype
X_valid, X_train = X_train_full[:5000] / 255., X_train_full[5000:] / 255.
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
plt.imshow(X_train[0], cmap="binary")
plt.axis('off');
import numpy as np
np.unique(y_train)
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
"Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
class_names[y_train[0]]
X_valid.shape
X_test.shape
fig , ax = plt.subplots(3,10, figsize=(15,5))
for i , ax in enumerate(ax.flat):
ax.imshow(X_train[i], cmap='binary')
ax.set_axis_off()
ax.set_title(class_names[y_train[i]])
keras.backend.clear_session()
model = keras.models.Sequential();
model.add(keras.layers.Flatten(input_shape=[28, 28]));
model.add(keras.layers.Dense(300, activation="relu"));
model.add(keras.layers.Dense(100, activation="relu"));
model.add(keras.layers.Dense(10, activation="softmax"));
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer='sgd',
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
import pandas as pd
df=pd.DataFrame(history.history)
df.head()
df.plot(figsize=(8, 5))
model.evaluate(X_test,y_test)
X_new = X_test[:3]
y_proba = model.predict(X_new)
y_proba.round(2)
y_pred = model.predict_classes(X_new)
y_pred
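Note: predict_classes is no longer available in newer Keras/TensorFlow releases; the usual replacement is an argmax over the class probabilities returned by predict. A minimal sketch with stand-in probabilities:

```python
import numpy as np

# Stand-in for model.predict(X_new): one row of class probabilities per sample
proba = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1],
                  [0.2, 0.3, 0.5]])
pred = np.argmax(proba, axis=-1)  # same result predict_classes used to give
```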
np.array(class_names)[y_pred]
y_new = y_test[:3]
y_new
plt.figure(figsize=(7.2, 2.4))
for index, image in enumerate(X_new):
plt.subplot(1, 3, index + 1)
plt.imshow(image, cmap="binary", interpolation="nearest")
plt.axis('off')
plt.title(class_names[y_test[index]], fontsize=12)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Vectors (1D Tensors)
Step2: The vector above has 5 entries, so it is called a 5-dimensional vector. Dimensionality can denote either the number of entries along a specific axis or the number of axes in a tensor. In the latter case, and to avoid confusion with 5D tensors, it is better to refer to such a tensor as a tensor of rank 5.
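The rank-versus-dimensionality distinction above can be sketched with NumPy (an illustrative snippet, not part of the original notebook):

```python
import numpy as np

# A "5-dimensional vector" is still a rank-1 tensor: one axis, five entries.
x = np.array([12, 3, 6, 14, 7])
print(x.ndim)   # number of axes (the tensor's rank)
print(x.shape)  # number of entries along each axis
```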
Step3: Recall that the entries along the first axis are called rows, and the entries along the second axis are called columns.
Step4: By packing 3D tensors into an array we can create 4D tensors, and so on. In deep learning you will typically work with tensors of rank 0 to 4, and in cases such as video processing you may reach rank-5 tensors.
Step5: Let's look at a concrete example using the MNIST dataset from Keras.
Step6: The dataset's characteristics are summarized below.
Step7: Tensor manipulation in NumPy
Step8: The previous operation is equivalent to selecting all dimensions along the axes using
Step9: And the above is equivalent to
Step10: In general, we can select between any two indices along an axis of the tensor.
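A small sketch of slicing between two indices along an axis, using a hypothetical batch-shaped tensor (illustrative only):

```python
import numpy as np

# Hypothetical rank-3 tensor shaped like a small image batch.
batch = np.zeros((6, 28, 28))

# Select samples 2, 3 and 4 along the first axis:
subset = batch[2:5]

# Equivalent explicit forms that select everything along the other axes:
subset_b = batch[2:5, :, :]
subset_c = batch[2:5, 0:28, 0:28]
print(subset.shape)
```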
Step11: Click here if you have trouble with the solution.
Step12: The n-th batch would be
Step13: The return statement ends the function, optionally returning one or more values. A return statement with no arguments is the same as returning None (as in the case above).
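The implicit-None behavior can be seen in a tiny example (the function name is hypothetical):

```python
def greet(name):
    # No return statement: the function implicitly returns None.
    message = "Hola, " + name
    print(message)

result = greet("Ana")
print(result is None)
```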
Step14: Problem
Step15: Click this cell if you have trouble with the solution.
Step16: Let's create an object of the Auto class with some attributes.
Step17: Inside classes we can define functions, known as methods.
Step18: Methods need an argument conventionally named self, which refers to the object on which the method is called. We can pass more arguments if we wish.
Step19: The instantiation operation creates an empty object. Many classes like to create instances customized to a specific initial state. For that purpose, a class can define a special method called __init__(), like this:
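A minimal sketch of __init__ using the tutorial's Auto class (the attribute names here are hypothetical):

```python
class Auto:
    def __init__(self, marca, color):
        # __init__ runs automatically on instantiation,
        # setting the object's initial state.
        self.marca = marca
        self.color = color

mi_auto = Auto("Toyota", "rojo")
print(mi_auto.marca, mi_auto.color)
```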
Step20: Problem
Step24: Example
Step25: The training set contains 60,000 grayscale images, each 28x28 pixels.
Step26: Each pixel's intensity is represented as a byte (an integer from 0 to 255).
Step27: Now let's split the training dataset into a validation set and a (smaller) training set. We also scale the pixel intensities to the 0-1 range and convert them to floats by dividing them by 255.
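The scaling step can be sketched in isolation with a hypothetical uint8 batch (the real notebook applies the same division to the Fashion-MNIST arrays):

```python
import numpy as np

# Hypothetical uint8 image batch, shaped like what load_data() returns.
images = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

# Dividing by 255.0 promotes the integers to floats in the [0, 1] range.
scaled = images / 255.0
print(scaled.dtype, scaled.min(), scaled.max())
```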
Step28: Let's visualize the first image in the dataset using matplotlib.
Step29: The labels range from 0 to 9, so there are 10 different types of clothing.
Step30: The names of the classes are
Step31: So the first image in the training set is a coat.
Step32: The validation set contains 5,000 images, and the test set contains 10,000 images.
Step33: Now let's visualize some of the training images with their corresponding labels.
Step34: Using Keras to create a deep learning model
Step35: Let's go through this code line by line.
Step36: Compiling the model
Step37: Training and evaluating the model
Step38: And that's it! The neural network is trained. At each epoch during training, Keras displays the number of instances processed so far (along with a progress bar), the mean training time, and the loss and accuracy, both computed on the training set and on the validation set.
Step39: You can see that both the training and validation accuracy steadily increase during training, while the training and validation loss decrease. Moreover, the validation curves stay quite close to the training curves, which means there is not much overfitting.
Step40: Using the model to make predictions
Step41: As you can see, for each instance the model estimates one probability per class, from class 0 to class 9. If you only care about the class with the highest estimated probability (even if that probability is quite low), you can use the predict_classes() method instead.
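Note that predict_classes() has been removed in recent Keras versions; taking np.argmax over the predict() output gives the same result. A minimal sketch with hypothetical per-class probabilities (not produced by the actual model):

```python
import numpy as np

# Hypothetical probabilities for 3 instances over 10 classes,
# shaped like the output of model.predict().
y_proba = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.9, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0],
])

# argmax over the class axis recovers the predicted class index.
y_pred = np.argmax(y_proba, axis=1)
print(y_pred)
```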
|
6,126
|
<ASSISTANT_TASK:>
Python Code:
!date
import torch
import numpy as np
import math
import random
import torch.nn.functional as F
import matplotlib.pyplot as plt
from sklearn import gaussian_process
from sklearn.gaussian_process.kernels import Matern, WhiteKernel, ConstantKernel, RBF
from sklearn.utils import check_random_state
def add_prior_beam_energy(prior):
g = random_gaussian(mu=[30, 60], sigma=1.0)
add_prior(prior, g['mu'], g['sigma'])
def add_prior_fermi_constant(prior):
g = random_gaussian(mu=[0, 2], sigma=1.0)
add_prior(prior, g['mu'], g['sigma'])
def add_prior(prior, mu, sigma):
prior['mu'].append(mu)
prior['sigma'].append(sigma)
def random_gaussian(mu=[-1, 1], sigma=5.0):
return {'mu': np.random.uniform(mu[0], mu[1]),
'sigma': np.random.uniform(0.0, sigma)}
def draw_gaussian(d, num_samples, random_state=None):
num_parameters = len(d['mu'])
thetas = torch.zeros((num_samples, num_parameters))
mu = d['mu']
sigma = d['sigma'].exp()
for i in range(0, num_samples):
gaussian = torch.normal(mu, sigma)
thetas[i, :] = gaussian
return thetas
def real_experiment(theta, n_samples=100000):
return simulator(theta, n_samples)
def simulator(theta, n_samples, random_state=None):
rng = check_random_state(random_state)
samples = simulator_rej_sample_costheta(n_samples, theta, rng)
return torch.from_numpy(samples.reshape(-1, 1)).float()
def simulator_rej_sample_costheta(n_samples, theta, rng):
sqrtshalf = theta[0]
gf = theta[1]
ntrials = 0
samples = []
x = torch.linspace(-1, 1, steps=1000)
maxval = torch.max(simulator_diffxsec(x, sqrtshalf, gf))
while len(samples) < n_samples:
ntrials = ntrials + 1
xprop = rng.uniform(-1, 1)
ycut = rng.rand()
yprop = simulator_diffxsec(xprop, sqrtshalf, gf)[0]
if (yprop / maxval) < ycut:
continue
samples.append(xprop)
return np.array(samples)
def simulator_diffxsec(costheta, sqrtshalf, gf):
norm = 2. * (1. + 1. / 3.)
return ((1 + costheta ** 2) + simulator_a_fb(sqrtshalf, gf) * costheta) / norm
def simulator_a_fb(sqrtshalf, gf):
mz = 90
gf_nom = 0.9
sqrts = sqrtshalf * 2.
x = torch.FloatTensor([(sqrts - mz) / mz * 10])
a_fb_en = torch.tanh(x)
a_fb_gf = gf / gf_nom
return 2 * a_fb_en * a_fb_gf
def plot_observations(X_observed, theta, normed=True):
plt.grid(True)
plt.hist(X_observed, histtype="bar", range=(-1, 1), bins=100, density=normed)
plt.xlim([-1, 1])
if normed:
plt.ylim([0, 2])
plt.ylabel("Normalized Number of Events")
else:
plt.ylim([0, 3000])
plt.ylabel("Number of Events")
plt.title(r"Distribution of $\cos(A)$ in $e^-e^+ \rightarrow \mu^-\mu^+$ events." + "\n"
r"$E_{beam}}$ = " + str(theta[0]) + "GeV - " + r"$G_f$ = " + str(theta[1]))
plt.xlabel(r"$\cos(A)$")
plt.show()
def sample_real_data(p_r, batch_size=256):
samples = torch.zeros((batch_size, 1))
num_samples_p_r = len(p_r)
for index in range(0, batch_size):
random_index = random.randint(0, num_samples_p_r - 1)
samples[index, :] = p_r[random_index]
return torch.autograd.Variable(samples, requires_grad=True)
def sample_generated_data(proposal, batch_size=256):
# Sample `batch_size` thetas according to our proposal distribution.
thetas = draw_gaussian(proposal, batch_size)
# Obtain the individual Gaussians.
theta_beam_energy = thetas[:, 0]
theta_fermi_constant = thetas[:, 1]
# Sample according to the proposal distribution.
samples = torch.zeros((batch_size, 1))
for sample_index, theta in enumerate(thetas):
samples[sample_index, :] = simulator(theta, 1)
return torch.autograd.Variable(samples, requires_grad=True)
def compute_gradient_penalty(critic, real, fake, l=5.0):
# Compute x_hat and its output.
epsilon = torch.rand(real.size())
x_hat = epsilon * real + ((1. - epsilon) * fake)
x_hat = torch.autograd.Variable(x_hat, requires_grad=True)
y_hat = critic(x_hat)
# Compute the associated gradients.
gradients = torch.autograd.grad(outputs=y_hat, inputs=x_hat,
grad_outputs=torch.ones(y_hat.size()),
create_graph=True, retain_graph=True, only_inputs=True)[0]
# Prevent norm 0 causing NaN.
gradients = gradients + 1e-16
# Compute the gradient penalty.
gradient_penalty = l * ((gradients.norm(2, dim=1) - 1.) ** 2)
return gradient_penalty
class Critic(torch.nn.Module):
def __init__(self, num_hidden):
super(Critic, self).__init__()
self.fc_1 = torch.nn.Linear(1, num_hidden)
self.fc_2 = torch.nn.Linear(num_hidden, num_hidden)
self.fc_3 = torch.nn.Linear(num_hidden, 1)
def forward(self, x):
x = F.relu(self.fc_1(x))
x = F.relu(self.fc_2(x))
x = (self.fc_3(x))
return x
class CriticWithSigmoid(torch.nn.Module):
def __init__(self, num_hidden):
super(CriticWithSigmoid, self).__init__()
self.fc_1 = torch.nn.Linear(1, num_hidden)
self.fc_2 = torch.nn.Linear(num_hidden, num_hidden)
self.fc_3 = torch.nn.Linear(num_hidden, 1)
def forward(self, x):
x = F.relu(self.fc_1(x))
x = F.relu(self.fc_2(x))
x = F.sigmoid(self.fc_3(x))
return x
class CriticWithTanh(torch.nn.Module):
def __init__(self, num_hidden):
super(CriticWithTanh, self).__init__()
self.fc_1 = torch.nn.Linear(1, num_hidden)
self.fc_2 = torch.nn.Linear(num_hidden, num_hidden)
self.fc_3 = torch.nn.Linear(num_hidden, 1)
def forward(self, x):
x = F.relu(self.fc_1(x))
x = F.relu(self.fc_2(x))
x = F.tanh(self.fc_3(x))
return x
class CriticWithRelu(torch.nn.Module):
def __init__(self, num_hidden):
super(CriticWithRelu, self).__init__()
self.fc_1 = torch.nn.Linear(1, num_hidden)
self.fc_2 = torch.nn.Linear(num_hidden, num_hidden)
self.fc_3 = torch.nn.Linear(num_hidden, 1)
def forward(self, x):
x = F.relu(self.fc_1(x))
x = F.relu(self.fc_2(x))
x = F.tanh(self.fc_3(x))
return x
# Define the true parameterization of theta.
theta_true = [45., .9]
# This parameterization will produce the following distribution in the real experiment.
real_data = real_experiment(theta_true).view(-1)
plot_observations(real_data.numpy(), theta_true)
# Display the total number of observations available.
num_real_samples = len(real_data)
print("Total number of true experimental observations: " + str(num_real_samples))
# Initialize the proposal distribution.
proposal = {
'mu': torch.FloatTensor([40., .9]), # Mean of the proposal distribution.
'sigma': torch.FloatTensor([.1, .1]) # Uncertainty of the proposal distribution.
}
loss_f = torch.nn.MSELoss(reduction='sum')
# TODO BALANCE BOTH = GOAL -> FIX LOSS METHOD
def fit_critic(proposal, p_r, critic, optimizer, num_critic_iterations=100, batch_size=256):
ones = torch.autograd.Variable(torch.ones(batch_size, 1))
zeros = torch.autograd.Variable(torch.zeros(batch_size, 1))
# Fit the critic optimally.
for iteration in range(0, num_critic_iterations):
# Fetch the data batches.
x_r = sample_real_data(p_r, batch_size)
# Fetch the simulated data batch.
x_g = sample_generated_data(proposal, batch_size)
# Reset the gradients.
critic.zero_grad()
# Forward pass with real data.
y_r = critic(x_r)
loss_real = loss_f(y_r, zeros)
# Forward pass with generated data.
y_g = critic(x_g)
loss_generated = loss_f(y_g, ones)
# Compute the loss, and the accompanying gradients.
loss = (1 + (loss_generated + loss_real) ** 2).log()
loss.backward()
optimizer.step()
return loss.mean().item()
# Initialize the interpolation parameters.
p_start = 40.0
p_end = 50.0
p_interpolation_step = 1
batch_size = 1024
# Initialize the proposal distribution.
proposal['mu'][0] = p_start
# Array to keep track of the losses over different parameterizations.
X = []
Y = []
# Compute the error space.
while proposal['mu'][0] <= p_end:
# Check if we need to skip the current step.
if proposal['mu'][0] == 45.:
proposal['mu'][0] += p_interpolation_step
continue
# Initialize a new critic (don't bias critic under different parameterizations).
critic = Critic(100)
optimizer = torch.optim.Adam(critic.parameters(), lr=0.01)
# Train the critic optimally.
loss = fit_critic(proposal, real_data, critic, optimizer, 100, batch_size)
X.append(proposal['mu'][0])
Y.append(loss)
print("Loss for beam energy " + str(proposal['mu'][0]) + ": " + str(loss))
# Set the new parameterization of the proposal distribution.
proposal['mu'][0] += p_interpolation_step
X_ = np.array(X)[:, np.newaxis]
Y_ = np.array(Y)[:, np.newaxis]
#kernel = ConstantKernel() + Matern(length_scale=2, nu=1/2) + WhiteKernel(noise_level=1)
kernel = 1. * RBF(length_scale=.5, length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e+1))
gp = gaussian_process.GaussianProcessRegressor(kernel=kernel)
gp.fit(X_, Y_)
x = np.linspace(40, 50).reshape(-1,1)
y, s = gp.predict(x, return_std=True)
x = x.flatten()
y = y.flatten()
s = s.flatten()
y_min = y - s
y_max = y + s
plt.title("Estimated Loss over Beam Energy")
plt.plot(x, y)
plt.scatter(X, Y)
plt.fill_between(x, y_min, y_max, color='lightblue', alpha=0.5)
plt.xlim([40, 50])
plt.grid(True)
plt.show()
def find_max(gp, resolution=10000):
x = np.linspace(40, 50, resolution).reshape(-1, 1)
y, s = gp.predict(x, return_std=True)
best_index = np.argmax(y)
return x[best_index][0]
print("Best estimated fit for Beam Energy: " + str(find_max(gp)))
critic = Critic(100)
optimizer = torch.optim.Adam(critic.parameters(), lr=0.01)
proposal['mu'][0] = 45.1020408163
loss = fit_critic(proposal, real_data, critic, optimizer, 100, batch_size)
print("Loss at " + str(45.1020408163) + " GeV: " + str(loss))
# Append new data for GP esimation.
X.append(45.1020408163)
Y.append(loss)
# Apply new fitting.
X_ = np.array(X)[:, np.newaxis]
Y_ = np.array(Y)[:, np.newaxis]
gp = gaussian_process.GaussianProcessRegressor(kernel=kernel)
gp.fit(X_, Y_)
x = np.linspace(40, 50).reshape(-1,1)
y, s = gp.predict(x, return_std=True)
x = x.flatten()
y = y.flatten()
s = s.flatten()
y_min = y - s
y_max = y + s
plt.title("Estimated Loss over Beam Energy")
plt.plot(x, y)
plt.scatter(X, Y)
plt.fill_between(x, y_min, y_max, color='lightblue', alpha=0.5)
plt.xlim([40, 50])
plt.grid(True)
plt.show()
print("New maximum at: " + str(find_max(gp)) + " GeV")
def evaluate(theta, batch_size=1024, iterations=100):
# Initialize a proposal distribution.
proposal = {}
proposal['mu'] = torch.FloatTensor(theta)
proposal['sigma'] = torch.FloatTensor([.1, .1])
# Initialize a new critic to now bias the computation.
critic = Critic(100)
optimizer = torch.optim.Adam(critic.parameters(), lr=0.01)
# Train the critic optimally.
loss = fit_critic(proposal, real_data, critic, optimizer, iterations, batch_size)
return theta, loss
X = []
Y = []
# Set the search space.
min_beam_energy = 40.
max_beam_energy = 50.
min_fermi_constant = .5
max_fermi_constant = 1.5
# Set the number of sampling bounds.
num_samples_beam_energy = 4
num_samples_fermi_constant = 2
# Display the number of initial seeds.
print("Number of initial seed-points: " + str(num_samples_beam_energy * num_samples_fermi_constant))
# Obtain the individual samples.
samples_beam_energy = np.linspace(min_beam_energy, max_beam_energy, num_samples_beam_energy)
samples_fermi_constant = np.linspace(min_fermi_constant, max_fermi_constant, num_samples_fermi_constant)
# Compute the seed.
for beam_energy in samples_beam_energy:
for fermi_constant in samples_fermi_constant:
theta, loss = evaluate([beam_energy, fermi_constant])
print("Loss for " + str(theta) + ": " + str(loss))
X.append(theta)
Y.append(loss)
# Apply new fitting.
X_ = np.array(X)
Y_ = np.array(Y)
kernel = 1. * RBF(length_scale=.5, length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e+1))
gp = gaussian_process.GaussianProcessRegressor(kernel=kernel)
gp.fit(np.array(X), np.array(Y))
resolution = 100
x = np.linspace(min_beam_energy, max_beam_energy, resolution)
y = np.linspace(min_fermi_constant, max_fermi_constant, resolution)
z = np.zeros((len(x), len(y)))
for i in range(0, len(x)):
for j in range(0, len(y)):
input = np.array([x[i], y[j]]).reshape(1, -1)
z[i][j] = gp.predict(input)
extent = [min_fermi_constant, max_fermi_constant, \
min_beam_energy, max_beam_energy]
plt.clf()
#plt.imshow(z, extent=extent, origin='lower')
plt.imshow(z, origin='lower')
plt.colorbar()
plt.show()
best_i, best_j = np.unravel_index(np.argmax(z), z.shape)
max_theta = [x[best_i], y[best_j]]
print("Maximized theta: " + str(max_theta))
# Plot the generated distribution.
data = real_experiment(max_theta).view(-1)
plot_observations(data.numpy(), max_theta)
# Plot the real distribution.
plot_observations(real_data.numpy(), theta_true)
theta, loss = evaluate(max_theta)
loss
X.append(theta)
Y.append(loss)
gp.fit(np.array(X), np.array(Y))
resolution = 100
x = np.linspace(min_beam_energy, max_beam_energy, resolution)
y = np.linspace(min_fermi_constant, max_fermi_constant, resolution)
z = np.zeros((len(x), len(y)))
for i in range(0, len(x)):
for j in range(0, len(y)):
input = np.array([x[i], y[j]]).reshape(1, -1)
z[i][j] = gp.predict(input)
extent = [min_fermi_constant, max_fermi_constant, \
min_beam_energy, max_beam_energy]
plt.clf()
#plt.imshow(z, extent=extent, origin='lower')
plt.imshow(z, origin='lower')
plt.colorbar()
plt.show()
best_i, best_j = np.unravel_index(np.argmax(z), z.shape)
max_theta = [x[best_i], y[best_j]]
print("Maximized theta: " + str(max_theta))
# Plot the generated distribution.
data = real_experiment(max_theta).view(-1)
plot_observations(data.numpy(), max_theta)
# Plot the real distribution.
plot_observations(real_data.numpy(), theta_true)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Utility methods
Step3: Simulator
Step4: Core Idea
Step5: In order to find the optimal parameterization for the proposal distribution, we infer the error space using a Gaussian Process, where the input variables are the parameters of interest.
Step6: Sample new point.
Step7: TODO
Step8: First, we need to obtain a seed of the parameter space. Initially, this means we evaluate a grid.
Step9: Fit the multi-dimensional Gaussian Process.
|
6,127
|
<ASSISTANT_TASK:>
Python Code:
!ls -la $LISA_HOME/libs/utils/platforms/
!cat $LISA_HOME/libs/utils/platforms/hikey.json
# Check which Android devices are available
!adb devices
ADB_DEVICE = '00b43d0b08a8a4b8'
# Unified configuration dictionary
my_conf = {
# Target platform
"platform" : 'android',
# Location of external tools (adb, fastboot, systrace, etc)
# These properties can be used to override the environment variables:
# ANDROID_HOME and CATAPULT_HOME
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
# Boards specific settings can be collected into a JSON
# platform description file, to be placed under:
# LISA_HOME/libs/utils/platforms
"board" : 'hikey',
# If you have multiple Android device connected, here
# we can specify which one to target
"device" : ADB_DEVICE,
# Folder where all the results will be collected
"results_dir" : "ReleaseNotes_v16.09",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
from time import sleep
def sample_energy(energy_meter, time_s):
# Reset the configured energy counters
energy_meter.reset()
# Run the workload you want to measure
#
# In this simple example we just wait some time while the
# energy counters accumulate power samples
sleep(time_s)
# Read and report the measured energy (since last reset)
return energy_meter.report(te.res_dir)
!ls -la /dev/ttyACM*
ACM_DEVICE = '/dev/ttyACM1'
# Energy Meters Configuration for ARM Energy Probe
aep_conf = {
'conf' : {
# Value of the shunt resistor [Ohm] for each channel
'resistor_values' : [0.010],
# Device entry assigned to the probe on the host
'device_entry' : ACM_DEVICE,
},
'channel_map' : {
'BAT' : 'CH0'
}
}
from energy import AEP
ape_em = AEP(target, aep_conf, '/tmp')
nrg_report = sample_energy(ape_em, 2)
print nrg_report
!cat $nrg_report.report_file
my_conf = {
# Configure the energy meter to use
"emeter" : {
# Require usage of an AEP meter
"instrument" : "aep",
# Configuration parameters require by the AEP device
"conf" : {
# Value of the shunt resistor in Ohm
'resistor_values' : [0.099],
# Device entry assigned to the probe on the host
'device_entry' : ACM_DEVICE,
},
# Map AEP's channels to logical names (used to generate reports)
'channel_map' : {
'BAT' : 'CH0'
}
},
# Other target configurations
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
for i in xrange(1,11):
nrg_report = sample_energy(te.emeter, 1)
nrg_bat = float(nrg_report.channels['BAT'])
print "Sample {:2d}: {:.3f}".format(i, nrg_bat)
!ping -c1 baylibre-acme.local | grep '64 bytes'
# Energy Meters Configuration for BayLibre's ACME
acme_conf = {
"conf" : {
#'iio-capture' : '/usr/bin/iio-capture',
#'ip_address' : 'baylibre-acme.local',
},
"channel_map" : {
"Device0" : 0,
"Device1" : 1,
},
}
from energy import ACME
acme_em = ACME(target, acme_conf, '/tmp')
nrg_report = sample_energy(acme_em, 2)
print nrg_report
!cat $nrg_report.report_file
my_conf = {
# Configure the energy meter to use
"emeter" : {
# Require usage of an AEP meter
"instrument" : "acme",
"conf" : {
#'iio-capture' : '/usr/bin/iio-capture',
#'ip_address' : 'baylibre-acme.local',
},
'channel_map' : {
'Device0' : 0,
'Device1' : 1,
},
},
# Other target configurations
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
for i in xrange(1,11):
nrg_report = sample_energy(te.emeter, 1)
nrg_bat = float(nrg_report.channels['Device1'])
print "Sample {:2d}: {:.3f}".format(i, nrg_bat)
from android import System
print "Supported functions:"
for f in dir(System):
if "__" in f:
continue
print " ", f
# logging.getLogger().setLevel(logging.DEBUG)
# Example (use tab to complete)
System.
System.menu(target)
System.back(target)
youtube_apk = System.list_packages(target, 'YouTube')
if youtube_apk:
System.start_app(target, youtube_apk[0])
logging.getLogger().setLevel(logging.INFO)
from android import Screen
print "Supported functions:"
for f in dir(Screen):
if "__" in f:
continue
print " ", f
# logging.getLogger().setLevel(logging.DEBUG)
# Example (use TAB to complete)
Screen.
Screen.set_brightness(target, auto=False, percent=100)
Screen.set_orientation(target, auto=False, portrait=False)
# logging.getLogger().setLevel(logging.INFO)
# logging.getLogger().setLevel(logging.DEBUG)
from android import Workload
# Get the list of available workloads
wloads = Workload(te)
wloads.availables(target)
yt = Workload.get(te, name='YouTube')
# Playback big bug bunny for 15s starting from 1:20s
video_id = 'XSGBVzeBUbk'
video_url = "https://youtu.be/{}?t={}s".format(video_id, 80)
# Play video and measure energy consumption
results = yt.run(te.res_dir,
video_url, video_duration_s=16,
collect='energy')
results
framestats = results[0]
!cat $framestats
from android import Workload
ui = Workload.get(te, name='UiBench')
# Play video and measure energy consumption
results = ui.run(te.res_dir,
ui.test_GlTextureView,
duration_s=5,
collect='energy')
results
framestats = results[0]
!cat $framestats
# The following exanples uses an HiKey board
ADB_DEVICE = '607A87C400055E6E'
# logging.getLogger().setLevel(logging.DEBUG)
# Unified configuration dictionary
my_conf = {
# Tools required
"tools" : ['rt-app', 'trace-cmd'],
# RTApp calibration
#"modules" : ['cpufreq'],
"rtapp-calib" : {
"0": 254, "1": 252, "2": 252, "3": 251,
"4": 251, "5": 252, "6": 251, "7": 251
},
# FTrace configuration
"ftrace" : {
# Events to trace
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_wakeup_tracking",
"sched_stat_wait",
"sched_overutilized",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_filter",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_capacity",
"cpu_frequency",
"cpu_idle",
"walt_update_task_ravg",
"walt_update_history",
"walt_migration_update_sum",
],
# # Kernel functions to profile
# "functions" : [
# "pick_next_task_fair",
# "select_task_rq_fair",
# "enqueue_task_fair",
# "update_curr_fair",
# "dequeue_task_fair",
# ],
# Per-CPU buffer configuration
"buffsize" : 10 * 1024,
},
# Target platform
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
from wlgen import RTA,Ramp
# Let's run a simple RAMP task
rta = RTA(target, 'ramp')
rta.conf(
kind='profile',
params = {
'ramp' : Ramp().get()
}
);
te.ftrace.start()
target.execute("echo 'my_marker: label=START' > /sys/kernel/debug/tracing/trace_marker",
as_root=True)
rta.run(out_dir=te.res_dir)
target.execute("echo 'my_marker: label=STOP' > /sys/kernel/debug/tracing/trace_marker",
as_root=True)
te.ftrace.stop()
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
from trace import Trace
events_to_parse = my_conf['ftrace']['events']
events_to_parse += ['my_marker']
trace = Trace(trace_file, events_to_parse, te.platform)
trace.available_events
# Use TAB to complete
trace.data_frame.
rt_tasks = trace.data_frame.rt_tasks()
rt_tasks.head()
lat_df = trace.data_frame.latency_df('ramp')
lat_df.head()
custom_df = trace.data_frame.trace_event('my_marker')
custom_df
ctxsw_df = trace.data_frame.trace_event('sched_switch')
ctxsw_df.head()
# Use TAB to complete
trace.analysis.
trace.analysis.tasks.plotTasks(tasks='ramp',
signals=['util_avg', 'boosted_util',
'sched_overutilized', 'residencies'])
lat_data = trace.analysis.latency.plotLatency('ramp')
lat_data.T
trace.analysis.frequency.plotClusterFrequencies()
trace.analysis.frequency.plotClusterFrequencyResidency(pct=True, active=True)
trace.analysis.frequency.plotClusterFrequencyResidency(pct=True, active=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Single configuration dictionary
Step2: Energy Meters Support
Step3: Channels mapping support
Step4: Direct usage
Step5: Usage via TestEnv
Step6: BayLibre's ACME board (ACME)
Step7: Direct usage
Step8: Usage via TestEnv
Step9: Android Integration
Step10: Capturing main useful actions, for example
Step11: Screen control APIs
Step12: Workloads Execution
Step13: Benchmarks
Step14: Improved Trace Analysis support
Step15: DataFrame namespace
Step16: Analysis namespace
|
6,128
|
<ASSISTANT_TASK:>
Python Code:
import gcp.bigquery as bq
%%sql --module hn
DEFINE QUERY top_types
SELECT type, COUNT(*) c
FROM [fh-bigquery:hackernews.full_201510]
GROUP BY 1
ORDER BY 2
LIMIT 100
DEFINE QUERY counts
SELECT a.month month, stories, comments, comment_authors, story_authors
FROM (
SELECT STRFTIME_UTC_USEC(time_ts, '%Y-%m') month, COUNT(*) stories, EXACT_COUNT_DISTINCT(author) story_authors
FROM [fh-bigquery:hackernews.stories]
GROUP BY 1
) a
JOIN (
SELECT STRFTIME_UTC_USEC(time_ts, '%Y-%m') month, COUNT(*) comments, EXACT_COUNT_DISTINCT(author) comment_authors
FROM [fh-bigquery:hackernews.comments]
GROUP BY 1
) b
ON a.month=b.month
ORDER BY 1
DEFINE QUERY score_per_hour
SELECT HOUR(SEC_TO_TIMESTAMP(time-3600*7)) hour, COUNT(*) stories, AVG(score) avg_score, SUM(score>30)/COUNT(*) prob
FROM [fh-bigquery:hackernews.stories]
WHERE YEAR(time_ts)=2015
GROUP BY 1
ORDER BY 1
DEFINE QUERY comments_per_hour
SELECT HOUR(SEC_TO_TIMESTAMP(time-3600*7)) hour, COUNT(*) comments, AVG(ranking) avg_ranking, SUM(ranking=0)/COUNT(*) prob
FROM [fh-bigquery:hackernews.comments]
WHERE YEAR(time_ts)=2015
GROUP BY 1
ORDER BY 1
DEFINE QUERY top_comments_per_hour
SELECT HOUR(SEC_TO_TIMESTAMP(time-3600*7)) hour, COUNT(*) comments, AVG(ranking) avg_ranking, SUM(ranking=0)/COUNT(*) prob
FROM [fh-bigquery:hackernews.comments]
WHERE YEAR(time_ts)=2015
AND parent IN (SELECT id FROM [fh-bigquery:hackernews.stories] WHERE score>10)
GROUP BY 1
ORDER BY 1
bq.Query(hn.top_types).to_dataframe()
data=bq.Query(hn.counts).to_dataframe()[0:-1]
my_plot=data.set_index('month').plot(kind='line', title='Hacker News comments and stories count | October 2010 - September 2015')
my_plot.set_xlabel("Month")
import matplotlib.pyplot as plt
plt.figtext(.9,.14,'@felipehoffa reddit.com/r/bigquery', fontsize=8, ha='right')
scores=bq.Query(hn.score_per_hour).to_dataframe()
my_plot=scores.set_index('hour')['avg_score'].plot(kind='line', title='Average score of HN stories - 2015')
my_plot.set_xlabel("hour UTC-7")
my_plot.set_ylim(ymin=0)
plt.figtext(.9,.14,'@felipehoffa reddit.com/r/bigquery', fontsize=8, ha='right')
my_plot=scores.set_index('hour')['prob'].plot(kind='line', title='Probability of having a HN story with score>30 - 2015')
my_plot.set_xlabel("hour UTC-7")
my_plot.set_ylim(ymin=0)
plt.figtext(.9,.14,'@felipehoffa reddit.com/r/bigquery', fontsize=8, ha='right')
comments = bq.Query(hn.comments_per_hour).to_dataframe()
my_plot=comments.set_index('hour')['comments'].plot(kind='line', title='Number of comments by time posted- 2015')
my_plot.set_xlabel("hour UTC-7")
my_plot.set_ylim(ymin=0)
plt.figtext(.9,.14,'@felipehoffa reddit.com/r/bigquery', fontsize=8, ha='right')
my_plot=comments.set_index('hour')['avg_ranking'].plot(kind='line', title='Average ranking of comments by time posted (lower is better)- 2015')
my_plot.set_xlabel("hour UTC-7")
my_plot.set_ylim(ymin=0)
plt.figtext(.9,.14,'@felipehoffa reddit.com/r/bigquery', fontsize=8, ha='right')
my_plot=bq.Query(hn.top_comments_per_hour).to_dataframe().set_index('hour')['prob'].plot(kind='line', title='Probability of having a top comment by time posted - 2015')
my_plot.set_xlabel("hour UTC-7")
my_plot.set_ylim(ymin=0)
plt.figtext(.9,.14,'@felipehoffa reddit.com/r/bigquery', fontsize=8, ha='right')
Copyright 2015, Google, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's see what's our content, and how many of each we have
Step2: Nice start
Step3: Why is there a big drop on comments in 2014? I don't know, but I'd like to know. Also
Step4: So there are 2 peaks for top stories
Step5: According to this chart, the best time to post a story is at 8pm Pacific.
Step6: The peak is at 10AM Pacific. And what's the average ranking of these comments? (lower is better)
Step7: So let's see the probability of having a top comment
Step9: Time for you to play with this data!
|
6,129
|
<ASSISTANT_TASK:>
Python Code:
import ipcoal
import toytree
import ipyrad.analysis as ipa
import ipyparallel as ipp
# connect to a running client
ipyclient = ipp.Client()
# show number of engines
ipyclient.ids
# make a random tree
tree = toytree.rtree.unittree(ntips=5, treeheight=5e5, seed=1243)
tree.draw(ts='p');
# simulate loci and write to HDF5
model = ipcoal.Model(tree, Ne=1e5, nsamples=4)
model.sim_loci(100, 500)
model.write_loci_to_hdf5(name="test", outdir="/tmp", diploid=True)
# create an IMAP
IMAP = {
'r' + str(i): [j for j in model.alpha_ordered_names if int(j[1]) == i][:2]
for i in range(5)
}
IMAP
# init bpp tool.
bpp1 = ipa.bpp(
data="/tmp/test.seqs.hdf5",
name="test1",
workdir="/tmp",
guidetree=tree,
imap=IMAP,
maxloci=100,
burnin=1000,
nsample=5000,
)
bpp1.kwargs
# submit 2 jobs to ipyclient
bpp1._run(nreps=2, ipyclient=ipyclient, force=True, block=False, dry_run=False)
# submit X other jobs to ipyclient (e.g., using diff job name)
bpp2 = bpp1.copy("test2")
bpp2._run(nreps=4, ipyclient=ipyclient, force=True, block=False, dry_run=False)
# see the jobs that are submitted
bpp1.asyncs
bpp2.asyncs
# see outstanding jobs (optional, this does NOT BLOCK)
ipyclient.outstanding
# BLOCK until all jobs on ipyclient are finished (returns True when done)
ipyclient.wait()
res, mcmc = bpp1.summarize_results("00")
res
res, mcmc = bpp2.summarize_results("00")
res
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start an ipcluster instance
Step2: Simulate loci under a known scenario
Step3: Setup BPP
Step4: Submit BPP jobs to run on cluster (using ._run())
Step5: Submit more jobs on the same ipyclient
Step6: The asynchronous job objects
Step7: Block until jobs finish (or don't)
Step8: Summarize results (WHEN FINISHED)
|
6,130
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function, unicode_literals
[10.5, 5.2, 3.25, 7.0]
import numpy as np
video = np.array([10.5, 5.2, 3.25, 7.0])
video
video.size
video[2] # 3rd element
%matplotlib inline
import matplotlib.pyplot as plt
u = np.array([2, 5])
v = np.array([3, 1])
x_coords, y_coords = zip(u, v)
plt.scatter(x_coords, y_coords, color=["r","b"])
plt.axis([0, 9, 0, 6])
plt.grid()
plt.show()
def plot_vector2d(vector2d, origin=[0, 0], **options):
return plt.arrow(origin[0], origin[1], vector2d[0], vector2d[1],
head_width=0.2, head_length=0.3, length_includes_head=True,
**options)
plot_vector2d(u, color="r")
plot_vector2d(v, color="b")
plt.axis([0, 9, 0, 6])
plt.grid()
plt.show()
a = np.array([1, 2, 8])
b = np.array([5, 6, 3])
from mpl_toolkits.mplot3d import Axes3D
subplot3d = plt.subplot(111, projection='3d')
x_coords, y_coords, z_coords = zip(a,b)
subplot3d.scatter(x_coords, y_coords, z_coords)
subplot3d.set_zlim3d([0, 9])
plt.show()
def plot_vectors3d(ax, vectors3d, z0, **options):
for v in vectors3d:
x, y, z = v
ax.plot([x,x], [y,y], [z0, z], color="gray", linestyle='dotted', marker=".")
x_coords, y_coords, z_coords = zip(*vectors3d)
ax.scatter(x_coords, y_coords, z_coords, **options)
subplot3d = plt.subplot(111, projection='3d')
subplot3d.set_zlim([0, 9])
plot_vectors3d(subplot3d, [a,b], 0, color=("r","b"))
plt.show()
def vector_norm(vector):
squares = [element**2 for element in vector]
return sum(squares)**0.5
print("||", u, "|| =")
vector_norm(u)
import numpy.linalg as LA
LA.norm(u)
radius = LA.norm(u)
plt.gca().add_artist(plt.Circle((0,0), radius, color="#DDDDDD"))
plot_vector2d(u, color="red")
plt.axis([0, 8.7, 0, 6])
plt.grid()
plt.show()
print(" ", u)
print("+", v)
print("-"*10)
u + v
plot_vector2d(u, color="r")
plot_vector2d(v, color="b")
plot_vector2d(v, origin=u, color="b", linestyle="dotted")
plot_vector2d(u, origin=v, color="r", linestyle="dotted")
plot_vector2d(u+v, color="g")
plt.axis([0, 9, 0, 7])
plt.text(0.7, 3, "u", color="r", fontsize=18)
plt.text(4, 3, "u", color="r", fontsize=18)
plt.text(1.8, 0.2, "v", color="b", fontsize=18)
plt.text(3.1, 5.6, "v", color="b", fontsize=18)
plt.text(2.4, 2.5, "u+v", color="g", fontsize=18)
plt.grid()
plt.show()
t1 = np.array([2, 0.25])
t2 = np.array([2.5, 3.5])
t3 = np.array([1, 2])
x_coords, y_coords = zip(t1, t2, t3, t1)
plt.plot(x_coords, y_coords, "c--", x_coords, y_coords, "co")
plot_vector2d(v, t1, color="r", linestyle=":")
plot_vector2d(v, t2, color="r", linestyle=":")
plot_vector2d(v, t3, color="r", linestyle=":")
t1b = t1 + v
t2b = t2 + v
t3b = t3 + v
x_coords_b, y_coords_b = zip(t1b, t2b, t3b, t1b)
plt.plot(x_coords_b, y_coords_b, "b-", x_coords_b, y_coords_b, "bo")
plt.text(4, 4.2, "v", color="r", fontsize=18)
plt.text(3, 2.3, "v", color="r", fontsize=18)
plt.text(3.5, 0.4, "v", color="r", fontsize=18)
plt.axis([0, 6, 0, 5])
plt.grid()
plt.show()
print("1.5 *", u, "=")
1.5 * u
k = 2.5
t1c = k * t1
t2c = k * t2
t3c = k * t3
plt.plot(x_coords, y_coords, "c--", x_coords, y_coords, "co")
plot_vector2d(t1, color="r")
plot_vector2d(t2, color="r")
plot_vector2d(t3, color="r")
x_coords_c, y_coords_c = zip(t1c, t2c, t3c, t1c)
plt.plot(x_coords_c, y_coords_c, "b-", x_coords_c, y_coords_c, "bo")
plot_vector2d(k * t1, color="b", linestyle=":")
plot_vector2d(k * t2, color="b", linestyle=":")
plot_vector2d(k * t3, color="b", linestyle=":")
plt.axis([0, 9, 0, 9])
plt.grid()
plt.show()
plt.gca().add_artist(plt.Circle((0,0),1,color='c'))
plt.plot(0, 0, "ko")
plot_vector2d(v / LA.norm(v), color="k")
plot_vector2d(v, color="b", linestyle=":")
plt.text(0.3, 0.3, "$\hat{u}$", color="k", fontsize=18)
plt.text(1.5, 0.7, "$u$", color="b", fontsize=18)
plt.axis([-1.5, 5.5, -1.5, 3.5])
plt.grid()
plt.show()
def dot_product(v1, v2):
return sum(v1i * v2i for v1i, v2i in zip(v1, v2))
dot_product(u, v)
np.dot(u,v)
u.dot(v)
print(" ",u)
print("* ",v, "(NOT a dot product)")
print("-"*10)
u * v
def vector_angle(u, v):
cos_theta = u.dot(v) / LA.norm(u) / LA.norm(v)
return np.arccos(np.clip(cos_theta, -1, 1))
theta = vector_angle(u, v)
print("Angle =", theta, "radians")
print(" =", theta * 180 / np.pi, "degrees")
u_normalized = u / LA.norm(u)
proj = v.dot(u_normalized) * u_normalized
plot_vector2d(u, color="r")
plot_vector2d(v, color="b")
plot_vector2d(proj, color="k", linestyle=":")
plt.plot(proj[0], proj[1], "ko")
plt.plot([proj[0], v[0]], [proj[1], v[1]], "b:")
plt.text(1, 2, "$proj_u v$", color="k", fontsize=18)
plt.text(1.8, 0.2, "$v$", color="b", fontsize=18)
plt.text(0.8, 3, "$u$", color="r", fontsize=18)
plt.axis([0, 8, 0, 5.5])
plt.grid()
plt.show()
[
[10, 20, 30],
[40, 50, 60]
]
A = np.array([
[10,20,30],
[40,50,60]
])
A
A.shape
A.size
A[1,2] # 2nd row, 3rd column
A[1, :] # 2nd row vector (as a 1D array)
A[:, 2] # 3rd column vector (as a 1D array)
A[1:2, :] # rows 2 to 3 (excluded): this returns row 2 as a one-row matrix
A[:, 2:3] # columns 3 to 4 (excluded): this returns column 3 as a one-column matrix
np.diag([4, 5, 6])
D = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
])
np.diag(D)
np.eye(3)
B = np.array([[1,2,3], [4, 5, 6]])
B
A
A + B
B + A
C = np.array([[100,200,300], [400, 500, 600]])
A + (B + C)
(A + B) + C
2 * A
A * 2
2 * (3 * A)
(2 * 3) * A
2 * (A + B)
2 * A + 2 * B
D = np.array([
[ 2, 3, 5, 7],
[11, 13, 17, 19],
[23, 29, 31, 37]
])
E = A.dot(D)
E
40*5 + 50*17 + 60*31
E[1,2] # row 2, column 3
try:
D.dot(A)
except ValueError as e:
print("ValueError:", e)
F = np.array([
[5,2],
[4,1],
[9,3]
])
A.dot(F)
F.dot(A)
G = np.array([
[8, 7, 4, 2, 5],
[2, 5, 1, 0, 5],
[9, 11, 17, 21, 0],
[0, 1, 0, 1, 2]])
A.dot(D).dot(G) # (AB)G
A.dot(D.dot(G)) # A(BG)
(A + B).dot(D)
A.dot(D) + B.dot(D)
A.dot(np.eye(3))
np.eye(2).dot(A)
A * B # NOT a matrix multiplication
import sys
print("Python version: {}.{}.{}".format(*sys.version_info))
print("Numpy version:", np.version.version)
# Uncomment the following line if your Python version is ≥3.5
# and your NumPy version is ≥1.10:
#A @ D
A
A.T
A.T.T
(A + B).T
A.T + B.T
(A.dot(D)).T
D.T.dot(A.T)
D.dot(D.T)
u
u.T
u_row = np.array([u])
u_row
u[np.newaxis, :]
u[np.newaxis]
u[None]
u_row.T
u[:, np.newaxis]
P = np.array([
[3.0, 4.0, 1.0, 4.6],
[0.2, 3.5, 2.0, 0.5]
])
x_coords_P, y_coords_P = P
plt.scatter(x_coords_P, y_coords_P)
plt.axis([0, 5, 0, 4])
plt.show()
plt.plot(x_coords_P, y_coords_P, "bo")
plt.plot(x_coords_P, y_coords_P, "b--")
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
from matplotlib.patches import Polygon
plt.gca().add_artist(Polygon(P.T))
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
H = np.array([
[ 0.5, -0.2, 0.2, -0.1],
[ 0.4, 0.4, 1.5, 0.6]
])
P_moved = P + H
plt.gca().add_artist(Polygon(P.T, alpha=0.2))
plt.gca().add_artist(Polygon(P_moved.T, alpha=0.3, color="r"))
for vector, origin in zip(H.T, P.T):
plot_vector2d(vector, origin=origin)
plt.text(2.2, 1.8, "$P$", color="b", fontsize=18)
plt.text(2.0, 3.2, "$P+H$", color="r", fontsize=18)
plt.text(2.5, 0.5, "$H_{*,1}$", color="k", fontsize=18)
plt.text(4.1, 3.5, "$H_{*,2}$", color="k", fontsize=18)
plt.text(0.4, 2.6, "$H_{*,3}$", color="k", fontsize=18)
plt.text(4.4, 0.2, "$H_{*,4}$", color="k", fontsize=18)
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
H2 = np.array([
[-0.5, -0.5, -0.5, -0.5],
[ 0.4, 0.4, 0.4, 0.4]
])
P_translated = P + H2
plt.gca().add_artist(Polygon(P.T, alpha=0.2))
plt.gca().add_artist(Polygon(P_translated.T, alpha=0.3, color="r"))
for vector, origin in zip(H2.T, P.T):
plot_vector2d(vector, origin=origin)
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
P + [[-0.5], [0.4]] # same as P + H2, thanks to NumPy broadcasting
def plot_transformation(P_before, P_after, text_before, text_after, axis = [0, 5, 0, 4], arrows=False):
if arrows:
for vector_before, vector_after in zip(P_before.T, P_after.T):
plot_vector2d(vector_before, color="blue", linestyle="--")
plot_vector2d(vector_after, color="red", linestyle="-")
plt.gca().add_artist(Polygon(P_before.T, alpha=0.2))
plt.gca().add_artist(Polygon(P_after.T, alpha=0.3, color="r"))
plt.text(P_before[0].mean(), P_before[1].mean(), text_before, fontsize=18, color="blue")
plt.text(P_after[0].mean(), P_after[1].mean(), text_after, fontsize=18, color="red")
plt.axis(axis)
plt.grid()
P_rescaled = 0.60 * P
plot_transformation(P, P_rescaled, "$P$", "$0.6 P$", arrows=True)
plt.show()
U = np.array([[1, 0]])
U.dot(P)
def plot_projection(U, P):
U_P = U.dot(P)
axis_end = 100 * U
plot_vector2d(axis_end[0], color="black")
plt.gca().add_artist(Polygon(P.T, alpha=0.2))
for vector, proj_coordinate in zip(P.T, U_P.T):
proj_point = proj_coordinate * U
plt.plot(proj_point[0][0], proj_point[0][1], "ro")
plt.plot([vector[0], proj_point[0][0]], [vector[1], proj_point[0][1]], "r--")
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
plot_projection(U, P)
angle30 = 30 * np.pi / 180 # angle in radians
U_30 = np.array([[np.cos(angle30), np.sin(angle30)]])
plot_projection(U_30, P)
angle120 = 120 * np.pi / 180
V = np.array([
[np.cos(angle30), np.sin(angle30)],
[np.cos(angle120), np.sin(angle120)]
])
V
V.dot(P)
P_rotated = V.dot(P)
plot_transformation(P, P_rotated, "$P$", "$VP$", [-2, 6, -2, 4], arrows=True)
plt.show()
F_shear = np.array([
[1, 1.5],
[0, 1]
])
plot_transformation(P, F_shear.dot(P), "$P$", "$F_{shear} P$",
axis=[0, 10, 0, 7])
plt.show()
Square = np.array([
[0, 0, 1, 1],
[0, 1, 1, 0]
])
plot_transformation(Square, F_shear.dot(Square), "$Square$", "$F_{shear} Square$",
axis=[0, 2.6, 0, 1.8])
plt.show()
F_squeeze = np.array([
[1.4, 0],
[0, 1/1.4]
])
plot_transformation(P, F_squeeze.dot(P), "$P$", "$F_{squeeze} P$",
axis=[0, 7, 0, 5])
plt.show()
plot_transformation(Square, F_squeeze.dot(Square), "$Square$", "$F_{squeeze} Square$",
axis=[0, 1.8, 0, 1.2])
plt.show()
F_reflect = np.array([
[1, 0],
[0, -1]
])
plot_transformation(P, F_reflect.dot(P), "$P$", "$F_{reflect} P$",
axis=[-2, 9, -4.5, 4.5])
plt.show()
F_inv_shear = np.array([
[1, -1.5],
[0, 1]
])
P_sheared = F_shear.dot(P)
P_unsheared = F_inv_shear.dot(P_sheared)
plot_transformation(P_sheared, P_unsheared, "$P_{sheared}$", "$P_{unsheared}$",
axis=[0, 10, 0, 7])
plt.plot(P[0], P[1], "b--")
plt.show()
F_inv_shear = LA.inv(F_shear)
F_inv_shear
plt.plot([0, 0, 1, 1, 0, 0.1, 0.1, 0, 0.1, 1.1, 1.0, 1.1, 1.1, 1.0, 1.1, 0.1],
[0, 1, 1, 0, 0, 0.1, 1.1, 1.0, 1.1, 1.1, 1.0, 1.1, 0.1, 0, 0.1, 0.1],
"r-")
plt.axis([-0.5, 2.1, -0.5, 1.5])
plt.show()
F_project = np.array([
[1, 0],
[0, 0]
])
plot_transformation(P, F_project.dot(P), "$P$", "$F_{project} \cdot P$",
axis=[0, 6, -1, 4])
plt.show()
try:
LA.inv(F_project)
except LA.LinAlgError as e:
print("LinAlgError:", e)
angle30 = 30 * np.pi / 180
F_project_30 = np.array([
[np.cos(angle30)**2, np.sin(2*angle30)/2],
[np.sin(2*angle30)/2, np.sin(angle30)**2]
])
plot_transformation(P, F_project_30.dot(P), "$P$", "$F_{project\_30} \cdot P$",
axis=[0, 6, -1, 4])
plt.show()
LA.inv(F_project_30)
F_shear.dot(LA.inv(F_shear))
LA.inv(LA.inv(F_shear))
F_involution = np.array([
[0, -2],
[-1/2, 0]
])
plot_transformation(P, F_involution.dot(P), "$P$", "$F_{involution} \cdot P$",
axis=[-8, 5, -4, 4])
plt.show()
F_reflect.dot(F_reflect.T)
M = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 0]
])
LA.det(M)
LA.det(F_project)
LA.det(F_project_30)
LA.det(F_shear)
F_scale = np.array([
[0.5, 0],
[0, 0.5]
])
plot_transformation(P, F_scale.dot(P), "$P$", "$F_{scale} \cdot P$",
axis=[0, 6, -1, 4])
plt.show()
LA.det(F_scale)
LA.det(F_reflect)
P_squeezed_then_sheared = F_shear.dot(F_squeeze.dot(P))
P_squeezed_then_sheared = (F_shear.dot(F_squeeze)).dot(P)
F_squeeze_then_shear = F_shear.dot(F_squeeze)
P_squeezed_then_sheared = F_squeeze_then_shear.dot(P)
LA.inv(F_shear.dot(F_squeeze)) == LA.inv(F_squeeze).dot(LA.inv(F_shear))
U, S_diag, V_T = LA.svd(F_shear) # note: in python 3 you can rename S_diag to Σ_diag
U
S_diag
S = np.diag(S_diag)
S
U.dot(np.diag(S_diag)).dot(V_T)
F_shear
plot_transformation(Square, V_T.dot(Square), "$Square$", "$V^T \cdot Square$",
axis=[-0.5, 3.5 , -1.5, 1.5])
plt.show()
plot_transformation(V_T.dot(Square), S.dot(V_T).dot(Square), "$V^T \cdot Square$", "$\Sigma \cdot V^T \cdot Square$",
axis=[-0.5, 3.5 , -1.5, 1.5])
plt.show()
plot_transformation(S.dot(V_T).dot(Square), U.dot(S).dot(V_T).dot(Square),"$\Sigma \cdot V^T \cdot Square$", "$U \cdot \Sigma \cdot V^T \cdot Square$",
axis=[-0.5, 3.5 , -1.5, 1.5])
plt.show()
eigenvalues, eigenvectors = LA.eig(F_squeeze)
eigenvalues # [λ0, λ1, …]
eigenvectors # [v0, v1, …]
eigenvalues2, eigenvectors2 = LA.eig(F_shear)
eigenvalues2 # [λ0, λ1, …]
eigenvectors2 # [v0, v1, …]
D = np.array([
[100, 200, 300],
[ 10, 20, 30],
[ 1, 2, 3],
])
np.trace(D)
np.trace(F_project)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Vectors
Step2: Since we plan to do quite a lot of scientific calculations, it is much better to use NumPy's ndarray, which provides a lot of convenient and optimized implementations of essential mathematical operations on vectors (for more details about NumPy, check out the NumPy tutorial). For example
Step3: The size of a vector can be obtained using the size attribute
Step4: The $i^{th}$ element (also called entry or item) of a vector $\textbf{v}$ is noted $\textbf{v}_i$.
Step5: Plotting vectors
Step6: 2D vectors
Step7: These vectors each have 2 elements, so they can easily be represented graphically on a 2D graph, for example as points
Step8: Vectors can also be represented as arrows. Let's create a small convenience function to draw nice arrows
Step9: Now let's draw the vectors u and v as arrows
Step10: 3D vectors
Step11: Now let's plot them using matplotlib's Axes3D
Step12: It is a bit hard to visualize exactly where in space these two points are, so let's add vertical lines. We'll create a small convenience function to plot a list of 3d vectors with vertical lines attached
Step13: Norm
Step14: However, it is much more efficient to use NumPy's norm function, available in the linalg (Linear Algebra) module
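As a quick sanity check, here is a minimal, self-contained sketch comparing the hand-rolled `vector_norm` defined above with NumPy's optimized `LA.norm` on the same vector:

```python
import numpy as np
import numpy.linalg as LA

def vector_norm(vector):
    # sum of squared elements, then square root: the Euclidean norm
    squares = [element**2 for element in vector]
    return sum(squares)**0.5

u = np.array([2, 5])
print(vector_norm(u))  # manual implementation
print(LA.norm(u))      # NumPy version, same value (sqrt(29) = 5.385...)
```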
Step15: Let's plot a little diagram to confirm that the length of vector $\textbf{u}$ is indeed $\approx5.4$
Step16: Looks about right!
Step17: Let's look at what vector addition looks like graphically
Step18: Vector addition is commutative, meaning that $\textbf{u} + \textbf{v} = \textbf{v} + \textbf{u}$. You can see it on the previous image
Step19: Finally, subtracting a vector is like adding the opposite vector.
Step20: Graphically, scalar multiplication results in changing the scale of a figure, hence the name scalar. The distance from the origin (the point at coordinates equal to zero) is also multiplied by the scalar. For example, let's scale up by a factor of k = 2.5
Step21: As you might guess, dividing a vector by a scalar is equivalent to multiplying by its inverse
Step22: Dot product
Step23: But a much more efficient implementation is provided by NumPy with the dot function
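To make the equivalence concrete, here is a minimal sketch of the hand-rolled `dot_product` alongside `np.dot`, using the same `u` and `v` as above:

```python
import numpy as np

def dot_product(v1, v2):
    # elementwise products, summed up
    return sum(x * y for x, y in zip(v1, v2))

u = np.array([2, 5])
v = np.array([3, 1])
print(dot_product(u, v))  # 2*3 + 5*1 = 11
print(np.dot(u, v))       # same result, computed in optimized C code
```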
Step24: Equivalently, you can use the dot method of ndarrays
Step25: Caution
Step26: Main properties
Step27: Note
Step28: Matrices
Step29: A much more efficient way is to use the NumPy library which provides optimized implementations of many matrix operations
Step30: By convention matrices generally have uppercase names, such as $A$.
Step31: Caution
Step32: Element indexing
Step33: The $i^{th}$ row vector is sometimes noted $M_i$ or $M_{i,}$, but again there is no standard notation so people often prefer to explicitly define their own names, for example
Step34: Similarly, the $j^{th}$ column vector is sometimes noted $M^j$ or $M_{,j}$, but there is no standard notation. We will use $M_{,j}$. For example, to access $A_{*,3}$ (ie. $A$'s 3rd column vector)
Step35: Note that the result is actually a one-dimensional NumPy array
Step36: Square, triangular, diagonal and identity matrices
Step37: If you pass a matrix to the diag function, it will happily extract the diagonal values
Step38: Finally, the identity matrix of size $n$, noted $I_n$, is a diagonal matrix of size $n \times n$ with $1$'s in the main diagonal, for example $I_3$
Step39: The identity matrix is often noted simply $I$ (instead of $I_n$) when its size is clear given the context. It is called the identity matrix because multiplying a matrix with it leaves the matrix unchanged as we will see below.
Step40: Addition is commutative, meaning that $A + B = B + A$
Step41: It is also associative, meaning that $A + (B + C) = (A + B) + C$
Step42: Scalar multiplication
Step43: Scalar multiplication is also defined on the right hand side, and gives the same result
Step44: This makes scalar multiplication commutative.
Step45: Finally, it is distributive over addition of matrices, meaning that $\lambda (Q + R) = \lambda Q + \lambda R$
Step46: Matrix multiplication
Step47: Let's check this result by looking at one element, just to be sure
Step48: Looks good! You can check the other elements until you get used to the algorithm.
Step49: This illustrates the fact that matrix multiplication is NOT commutative
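A minimal sketch making the non-commutativity concrete with two small matrices (chosen purely for illustration): multiplying by the permutation matrix `R` on the right swaps `Q`'s columns, while multiplying on the left swaps its rows, so the two products differ.

```python
import numpy as np

Q = np.array([[1, 2],
              [3, 4]])
R = np.array([[0, 1],
              [1, 0]])  # a permutation matrix

print(Q.dot(R))  # [[2, 1], [4, 3]]: columns of Q swapped
print(R.dot(Q))  # [[3, 4], [1, 2]]: rows of Q swapped
print(np.array_equal(Q.dot(R), R.dot(Q)))  # False
```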
Step50: On the other hand, matrix multiplication is associative, meaning that $Q(RS) = (QR)S$. Let's create a $4 \times 5$ matrix $G$ to illustrate this
Step51: It is also distributive over addition of matrices, meaning that $(Q + R)S = QS + RS$. For example
Step52: The product of a matrix $M$ by the identity matrix (of matching size) results in the same matrix $M$. More formally, if $M$ is an $m \times n$ matrix, then
Step53: Caution
Step54: The @ infix operator
Step55: Note
Step56: As you might expect, transposing a matrix twice returns the original matrix
Step57: Transposition is distributive over addition of matrices, meaning that $(Q + R)^T = Q^T + R^T$. For example
Step58: Moreover, $(Q \cdot R)^T = R^T \cdot Q^T$. Note that the order is reversed. For example
Step59: A symmetric matrix $M$ is defined as a matrix that is equal to its transpose
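A minimal helper for testing the definition. Note that $M M^T$ is always symmetric, which is why `D.dot(D.T)` above came out symmetric; the names below are illustrative.

```python
import numpy as np

def is_symmetric(M):
    # symmetric means equal to its own transpose
    return np.array_equal(M, M.T)

S = np.array([[1, 2],
              [2, 3]])
A = np.array([[1, 2],
              [0, 3]])
print(is_symmetric(S))           # True
print(is_symmetric(A))           # False
print(is_symmetric(A.dot(A.T)))  # True: a matrix times its transpose is always symmetric
```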
Step60: Converting 1D arrays to 2D arrays in NumPy
Step61: We want to convert $\textbf{u}$ into a row vector before transposing it. There are a few ways to do this
Step62: Notice the extra square brackets
Step63: This is quite explicit
Step64: This is equivalent, but a little less explicit.
Step65: This is the shortest version, but you probably want to avoid it because it is unclear. The reason it works is that np.newaxis is actually equal to None, so this is equivalent to the previous version.
Step66: Great! We now have a nice column vector.
Step67: Plotting a matrix
Step68: Of course we could also have stored the same 4 vectors as row vectors instead of column vectors, resulting in a $4 \times 2$ matrix (the transpose of $P$, in fact). It is really an arbitrary choice.
Step69: Or you can represent it as a polygon
Step70: Geometric applications of matrix operations
Step71: If we add a matrix full of identical vectors, we get a simple geometric translation
Step72: Although matrices can only be added together if they have the same size, NumPy allows adding a row vector or a column vector to a matrix
Step73: Scalar multiplication
Step74: Matrix multiplication – Projection onto an axis
Step75: Now let's look at the dot product $P \cdot U$
Step76: These are the horizontal coordinates of the vectors in $P$. In other words, we just projected $P$ onto the horizontal axis
Step77: We can actually project on any other axis by just replacing $U$ with any other unit vector. For example, let's project on the axis that is at a 30° angle above the horizontal axis
Step78: Good! Remember that the dot product of a unit vector and a matrix basically performs a projection on an axis and gives us the coordinates of the resulting points on that axis.
Step79: Let's look at the product $VP$
Step80: The first row is equal to $V_{1,} P$, which is the coordinates of the projection of $P$ onto the 30° axis, as we have seen above. The second row is $V_{2,} P$, which is the coordinates of the projection of $P$ onto the 120° axis. So basically we obtained the coordinates of $P$ after rotating the horizontal and vertical axes by 30° (or equivalently after rotating the polygon by -30° around the origin)! Let's plot $VP$ to see this
Step81: Matrix $V$ is called a rotation matrix.
Step82: Let's look at how this transformation affects the unit square
Step83: Now let's look at a squeeze mapping
Step84: The effect on the unit square is
Step85: Let's show a last one
Step86: Matrix inverse
Step87: We applied a shear mapping on $P$, just like we did before, but then we applied a second transformation to the result, and lo and behold this had the effect of coming back to the original $P$ (we plotted the original $P$'s outline to double check). The second transformation is the inverse of the first one.
Step88: Only square matrices can be inverted. This makes sense when you think about it
Step89: Looking at this image, it is impossible to tell whether this is the projection of a cube or the projection of a narrow rectangular object. Some information has been lost in the projection.
Step90: This transformation matrix performs a projection onto the horizontal axis. Our polygon gets entirely flattened out so some information is entirely lost and it is impossible to go back to the original polygon using a linear transformation. In other words, $F_{project}$ has no inverse. Such a square matrix that cannot be inverted is called a singular matrix (aka degenerate matrix). If we ask NumPy to calculate its inverse, it raises an exception
Step91: Here is another example of a singular matrix. This one performs a projection onto the axis at a 30° angle above the horizontal axis
Step92: But this time, due to floating point rounding errors, NumPy manages to calculate an inverse (notice how large the elements are, though)
Step93: As you might expect, the dot product of a matrix by its inverse results in the identity matrix
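To verify this numerically, compare the product against the identity with `np.allclose` rather than exact equality, since floating-point rounding can leave tiny residues off the diagonal:

```python
import numpy as np
import numpy.linalg as LA

F_shear = np.array([[1, 1.5],
                    [0, 1.0]])
product = F_shear.dot(LA.inv(F_shear))
print(product)
print(np.allclose(product, np.eye(2)))  # True
```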
Step94: Another way to express this is that the inverse of the inverse of a matrix $M$ is $M$ itself
Step95: Also, the inverse of scaling by a factor of $\lambda$ is of course scaling by a factor of $\frac{1}{\lambda}$
Step96: Finally, a square matrix $H$ whose inverse is its own transpose is an orthogonal matrix
Step97: Determinant
Step98: One of the main uses of the determinant is to determine whether a square matrix can be inverted or not
Step99: That's right, $F_{project}$ is singular, as we saw earlier.
Step100: This determinant is suspiciously close to 0
Step101: Perfect! This matrix can be inverted as we saw earlier. Wow, math really works!
Step102: We rescaled the polygon by a factor of 1/2 on both vertical and horizontal axes so the surface area of the resulting polygon is 1/4$^{th}$ of the original polygon. Let's compute the determinant and check that
Step103: Correct!
Step104: Composing linear transformations
Step105: Since the dot product is associative, the following code is equivalent
Step106: Note that the order of the transformations is the reverse of the dot product order.
Step107: From now on we can perform both transformations in just one dot product, which can lead to a very significant performance boost.
Step108: Singular Value Decomposition
Step109: Note that this is just a 1D array containing the diagonal values of Σ. To get the actual matrix Σ, we can use NumPy's diag function
Step110: Now let's check that $U \cdot \Sigma \cdot V^T$ is indeed equal to F_shear
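A minimal self-contained version of that check: decompose the shear matrix with `LA.svd`, rebuild $U \cdot \Sigma \cdot V^T$, and confirm it matches the original up to rounding.

```python
import numpy as np
import numpy.linalg as LA

F_shear = np.array([[1, 1.5],
                    [0, 1.0]])
U, S_diag, V_T = LA.svd(F_shear)       # S_diag holds the singular values
reconstructed = U.dot(np.diag(S_diag)).dot(V_T)
print(np.allclose(reconstructed, F_shear))  # True
```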
Step111: It worked like a charm. Let's apply these transformations one by one (in reverse order) on the unit square to understand what's going on. First, let's apply the first rotation $V^T$
Step112: Now let's rescale along the vertical and horizontal axes using $\Sigma$
Step113: Finally, we apply the second rotation $U$
Step114: And we can see that the result is indeed a shear mapping of the original unit square.
Step115: Indeed the horizontal vectors are stretched by a factor of 1.4, and the vertical vectors are shrunk by a factor of 1/1.4=0.714…, so far so good. Let's look at the shear mapping matrix $F_{shear}$
Step116: Wait, what!? We expected just one unit eigenvector, not two. The second vector is almost equal to $\begin{pmatrix}-1 \ 0 \end{pmatrix}$, which is on the same line as the first vector $\begin{pmatrix}1 \ 0 \end{pmatrix}$. This is due to floating point errors. We can safely ignore vectors that are (almost) colinear (ie. on the same line).
Step117: The trace does not have a simple geometric interpretation (in general), but it has a number of properties that make it useful in many areas
|
6,131
|
<ASSISTANT_TASK:>
Python Code:
x = (1,2,3,0,2,1) # Declare a tuple of numeric values
x # Print the tuple
x = (0, 'Hola', (1,2)) # Declare a tuple with different data types
x[1] # Print the contents of position 1
id(x)
x = (0, 'Cambio', (1,2))
id(x)
x
x = [1,2,3] # Declare a list
x.append('Nuevo valor') # Append a new value
x # Print the full list
x.insert(2, 'Valor Intermedio') # Insert another value
x
import timeit
timeit.timeit('x = (1,2,3,4,5,6)') # Time the creation of a tuple
timeit.timeit('x = [1,2,3,4,5,6]') # Time the creation of a list
x = [1,2,3] # Assignment
y = [0, x] # Reference
y
x[0] = -1 # Mutate the first element of x
y # changing the value inside x also changed y, because y holds a reference to x
dir_tel = {'juan':5512345, 'pedro':5554321, 'itam':'is fun'} # Define a dictionary
dir_tel['juan'] # Get the value stored under the key 'juan'
dir_tel.keys() # Get the dictionary's keys
dir_tel.values() # Get the dictionary's values
A = set([1,2,3]) # Define 2 sets
B = set([2,3,4])
A | B # Union
A & B # Intersection
A - B # Set difference
A ^ B # Symmetric difference
range(1000)
for i in range(5):
print(i)
for i in range(10):
if i % 2 == 0:
print(str(i) + ' Par')
else:
print(str(i) + ' Impar')
i = 0
while i < 10:
print(i)
i = i + 1
# Class definition:
class Person:
    def __init__(self, first, last): # Constructor
        self.first = first
        self.last = last
    def greet(self, add_msg = ''): # Method
        print('Hello ' + self.first + ' ' + add_msg)
juan = Person('juan', 'dominguez') # Create an object of type Person
juan.first # Get the value of the first attribute
juan.greet() # Call the method
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tuples are immutable.
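Besides comparing `id()` values, immutability can be shown directly: item assignment on a tuple raises a `TypeError`, while the same operation on a list succeeds. A small self-contained sketch (values are illustrative):

```python
x = (1, 2, 3)
try:
    x[0] = 99          # tuples do not support item assignment
except TypeError as err:
    print("TypeError:", err)

y = [1, 2, 3]
y[0] = 99              # lists, by contrast, are mutable
print(y)               # [99, 2, 3]
```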
Step2: Lists
Step3: Which is faster?
Step4: Reference / assignment
Step5: Dictionaries
Step6: Sets
Step7: Conditionals and loops: for, while, if, elif
Step8: Classes
|
6,132
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
dta.index = pd.period_range(start='1959Q1', end='2009Q3', freq='Q')
class LocalLevel(sm.tsa.statespace.MLEModel):
_start_params = [1., 1.]
_param_names = ['var.level', 'var.irregular']
def __init__(self, endog):
super(LocalLevel, self).__init__(endog, k_states=1, initialization='diffuse')
self['design', 0, 0] = 1
self['transition', 0, 0] = 1
self['selection', 0, 0] = 1
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, unconstrained):
return unconstrained**0.5
def update(self, params, **kwargs):
params = super(LocalLevel, self).update(params, **kwargs)
self['state_cov', 0, 0] = params[0]
self['obs_cov', 0, 0] = params[1]
mod = LocalLevel(dta.infl)
res = mod.fit()
print(res.summary())
print(res.mle_retvals)
class LocalLevelConcentrated(sm.tsa.statespace.MLEModel):
_start_params = [1.]
_param_names = ['ratio.irregular']
def __init__(self, endog):
super(LocalLevelConcentrated, self).__init__(endog, k_states=1, initialization='diffuse')
self['design', 0, 0] = 1
self['transition', 0, 0] = 1
self['selection', 0, 0] = 1
self['state_cov', 0, 0] = 1
self.ssm.filter_concentrated = True
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, unconstrained):
return unconstrained**0.5
def update(self, params, **kwargs):
params = super(LocalLevelConcentrated, self).update(params, **kwargs)
self['obs_cov', 0, 0] = params[0]
mod_conc = LocalLevelConcentrated(dta.infl)
res_conc = mod_conc.fit()
print(res_conc.summary())
print(res_conc.mle_retvals)
print('Original model')
print('var.level = %.5f' % res.params[0])
print('var.irregular = %.5f' % res.params[1])
print('\nConcentrated model')
print('scale = %.5f' % res_conc.scale)
print('h * scale = %.5f' % (res_conc.params[0] * res_conc.scale))
# Typical approach
mod_ar = sm.tsa.SARIMAX(dta.cpi, order=(1, 0, 0), trend='ct')
res_ar = mod_ar.fit()
# Estimating the model with the scale concentrated out
mod_ar_conc = sm.tsa.SARIMAX(dta.cpi, order=(1, 0, 0), trend='ct', concentrate_scale=True)
res_ar_conc = mod_ar_conc.fit()
print('Loglikelihood')
print('- Original model: %.4f' % res_ar.llf)
print('- Concentrated model: %.4f' % res_ar_conc.llf)
print('\nParameters')
print('- Original model: %.4f, %.4f, %.4f, %.4f' % tuple(res_ar.params))
print('- Concentrated model: %.4f, %.4f, %.4f, %.4f' % (tuple(res_ar_conc.params) + (res_ar_conc.scale,)))
print('Optimizer iterations')
print('- Original model: %d' % res_ar.mle_retvals['iterations'])
print('- Concentrated model: %d' % res_ar_conc.mle_retvals['iterations'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: There are two parameters in this model that must be chosen
Step3: We can look at the results from the numerical optimizer in the results attribute mle_retvals
Step4: Concentrating out the scale
Step5: Again, we can use the built-in fit method to find the maximum likelihood estimate of $h$.
Step6: The estimate of $h$ is provided in the middle table of parameters (ratio.irregular), while the estimate of the scale is provided in the upper table. Below, we will show that these estimates are consistent with those from the previous approach.
Step7: Comparing estimates
Step8: Example
Step9: These two approaches produce about the same loglikelihood and parameters, although the model with the concentrated scale was able to improve the fit very slightly
Step10: This time, about 1/3 fewer iterations of the optimizer are required under the concentrated approach
|
6,133
|
<ASSISTANT_TASK:>
Python Code:
fruit = "pinapple"
letter = fruit[1]
print(letter)
letter = fruit[0]
print(letter)
letter = fruit[1.5]
fruit = 'banana'
len(fruit)
length = len(fruit)
fruit[length]
fruit[length-1]
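Negative indices, which the description below mentions, are never demonstrated in the original notebook; this small sketch is an illustrative addition:

```python
# Negative indices count backward from the end of the string
# (illustrative addition; fruit is 'banana' here, as above).
fruit = 'banana'
print(fruit[-1])  # last letter: 'a'
print(fruit[-2])  # second to last: 'n'
```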
fruit = 'pinapple'
index = 0
while index < len(fruit):
letter = fruit[index]
print(letter)
index = index + 1
for char in fruit:
print(char)
prefixes = 'JKLMNOPQ'
suffix = 'ack'
for letter in prefixes:
print(letter + suffix)
s = 'Monty Python'
print(s[0:5])
print(s[6:12])
fruit = 'banana'
fruit[:3]
fruit[3:]
fruit = 'banana'
fruit[3:3]
greeting = 'Hello, world!'
greeting[0] = 'J'
greeting = 'Hello, world!'
new_greeting = 'J' + greeting[1:]
print(new_greeting)
word = 'banana'
word.upper()
word = 'banana'
word.find('a')
word.find('na')
word.find('na', 3)
name = 'bob'
name.find('b', 1, 2)
'a' in 'Banana'
'seed' in 'banana'
def in_both(word1, word2):
for letter in word1:
if letter in word2:
print(letter)
in_both("apples", "oranges")
if word == 'banana':
print('All right, bananas.')
if word < 'banana':
print('Your word,' + word + ', comes before banana.')
elif word > 'banana':
print('Your word,' + word + ', comes after banana.')
else:
print('All right, bananas.')
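String comparison is case-sensitive, so 'Banana' would not match 'banana' above. A common convention is to normalise case before comparing; this sketch is an addition, not original notebook code:

```python
# Normalise case before comparing (illustrative addition).
word = 'Banana'
if word.lower() == 'banana':
    print('match, ignoring case')
```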
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The second statement selects character number 1 from fruit and assigns it to letter.
Step2: For most people, the first letter of 'pinapple' is p, not i. But for computer scientists, the index is an offset from the beginning of the string, and the offset of the first letter is zero.
Step3: So p is the 0th letter (“zero-eth”) of 'pinapple', i is the 1th letter (“one-eth”), and n is the 2th (“two-eth”) letter.
Step4: The len function
Step5: If you try to get the last letter of fruit with fruit[length], you will get an index out of range error; because indexing starts at zero, the last letter is at index length-1.
Step6: Alternatively, you can use negative indices, which count backward from the end of the string. The expression fruit[-1] yields the last letter, fruit[-2] yields the second to last, and so on.
Step7: This loop traverses the string and displays each letter on a line by itself. The loop condition is index < len(fruit), so when index is equal to the length of the string, the condition is false, and the body of the loop is not executed. The last character accessed is the one with the index len(fruit)-1, which is the last character in the string.
Step8: Each time through the loop, the next character in the string is assigned to the variable char. The loop continues until no characters are left.
Step9: Of course, that’s not quite right because “Ouack” and “Quack” are misspelled.
Step10: The operator [n:m] returns the part of the string from the “n-eth” character to the “m-eth” character, including the first but excluding the last.
Step11: If the first index is greater than or equal to the second the result is an empty string, represented by two quotation marks
Step12: An empty string contains no characters and has length 0, but other than that, it is the same as any other string.
Step13: The “object” in this case is the string and the “item” is the character you tried to assign. For now, an object is the same thing as a value, but we will refine that definition later. An item is one of the values in a sequence.
Step14: Searching
Step15: This form of dot notation specifies the name of the method, upper, and the name of the string to apply the method to, word. The empty parentheses indicate that this method takes no argument.
Step16: In this example, we invoke find on word and pass the letter we are looking for as a parameter. Actually, the find method is more general than our function; it can find substrings, not just characters
Step17: It can take as a second argument the index where it should start
Step18: And as a third argument the index where it should stop
Step19: This search fails because b does not appear in the index range from 1 to 2 (not including 2).
Step20: For example, the following function prints all the letters from word1 that also appear in word2
Step21: String Comparison
Step22: Other relational operations are useful for putting words in alphabetical order
|
6,134
|
<ASSISTANT_TASK:>
Python Code:
# Plotting library.
import matplotlib.pyplot as plt
# For some math we need to do.
import numpy as np
# The HTRU 2 profile data is split - one file containing the real pulsar
# profiles, one file containing noise/interference profiles. We load both
# these data sources here. First we construct relative paths to the files.
data_dir = 'data/HTRU2'
pulsar_file = data_dir + '/HTRU2_pulsar.csv'
nonpulsar_file = data_dir + '/HTRU2_nonpulsar.csv'
# Now simply load the data.
pulsar_data = np.genfromtxt(pulsar_file, dtype=int, delimiter=',')  # note: np.int was removed in recent NumPy; use the builtin int
non_pulsar_data = np.genfromtxt(nonpulsar_file, dtype=int, delimiter=',')
# Print overview details.
print ('\n\nTotal number of pulsar profiles: ', len(pulsar_data))
print ('Total number of noise/RFI profiles: ', len(non_pulsar_data))
plt.figure(1)
plt.plot(pulsar_data[7], 'r')
plt.xlabel('Bin')
plt.ylabel('Normalised Intensity')
plt.title('Example Integrated Profile for a pulsar')
plt.show()
plt.figure(2)
plt.plot(non_pulsar_data[0], 'b')
plt.xlabel('Bin')
plt.ylabel('Normalised Intensity')
plt.title('Example Integrated Profile for a non-pulsar')
plt.show()
import operator
def centre_on_peak(data):
Centre the data such that the maximum y-axis value is in the
centre of the data.
Parameters
----------
:param data: the data to be centred.
Returns
----------
:return: the centred data array.
# Stores the centred data.
centred_data = []
# Get the index of the maximum value.
index, value = max(enumerate(data), key=operator.itemgetter(1))
# Find midpoint of the data.
midpoint = int(len(data)/2)
# Figure out the shift required to centre the data (put max value in centre bin).
n = midpoint - index # N gives the number of bins the data should be shifted.
a = n % len(data)
# Apply the correction.
centred_data = np.concatenate([data[-a:],data[:-a]])
return centred_data
# Here we simply loop over each item in the data arrays,
# and update their values.
for i in range(0, len(pulsar_data)):
pulsar_data[i] = centre_on_peak(pulsar_data[i])
for i in range(0, len(non_pulsar_data)):
non_pulsar_data[i] = centre_on_peak(non_pulsar_data[i])
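As a cross-check on centre_on_peak, numpy's roll performs the same cyclic shift that the function builds with concatenate; this snippet is an illustrative addition, not part of the original pipeline:

```python
import numpy as np

# Cyclic shift so the maximum lands in the centre bin, mirroring
# centre_on_peak above (illustrative cross-check only).
data = np.array([5, 1, 0, 1, 0])
peak = int(np.argmax(data))
centred = np.roll(data, len(data) // 2 - peak)
print(centred)  # [1 0 5 1 0] -- peak now sits in the centre bin
```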
plt.figure(3)
plt.plot(pulsar_data[7], 'r')
plt.xlabel('Bin')
plt.ylabel('Normalised Intensity')
plt.title('Example Integrated Profile for a pulsar - Centred')
plt.show()
plt.figure(4)
plt.plot(non_pulsar_data[0], 'b')
plt.xlabel('Bin')
plt.ylabel('Normalised Intensity')
plt.title('Example Integrated Profile for a non-pulsar - Centred')
plt.show()
def compute_features(data):
Computes machine learning feature values for the supplied data array.
Parameters
----------
:param data: a data array.
Returns
----------
:return: the computed machine learning features as a list [mean, stdev, shew, kurtosis].
if data is not None: # Check data is not empty
if len(data) > 0:
# Sums computed during calculation.
mean_sum = 0
mean_subtracted_sum_power_2 = 0
mean_subtracted_sum_power_3 = 0
mean_subtracted_sum_power_4 = 0
# The number of data points in the array.
n = len(data)
# Necessary first loop to calculate the sum, min and max
for d in data:
mean_sum += float(d)
if mean_sum > 0 or mean_sum < 0: # If the mean is less than or greater than zero (should be)
# Update the mean value.
mean_value = mean_sum / float(n)
# Now try to compute the standard deviation, using
# the mean computed above... we also compute values in
# this loop required to compute the excess Kurtosis and
# standard deviation.
for d in data:
mean_subtracted_sum_power_2 += np.power((float(d) - mean_value), 2.0)
# Used to compute skew
mean_subtracted_sum_power_3 += np.power((float(d) - mean_value), 3.0)
# Used to compute Kurtosis
mean_subtracted_sum_power_4 += np.power((float(d) - mean_value), 4.0)
# Update the standard deviation value.
stdev = np.sqrt(mean_subtracted_sum_power_2 / (n - 1.0))
# Next try to calculate the excess Kurtosis and skew using the
# information gathered above.
one_over_n = 1.0 / n # Used multiple times...
kurt = ((one_over_n * mean_subtracted_sum_power_4) / np.power((one_over_n * mean_subtracted_sum_power_2), 2) ) - 3
skew = (one_over_n * mean_subtracted_sum_power_3) / np.power(np.sqrt(one_over_n * mean_subtracted_sum_power_2), 3)
return [mean_value, stdev, skew, kurt]
else: # Data sums to zero, i.e. no data!
return [0,0,0,0]
else: # Data empty for some reason...
return [0,0,0,0]
import random as rnd
from scipy.stats import skew
from scipy.stats import kurtosis
# Now generate some random data, and test the extracted values.
gaussian_data = []
for i in range(0, 100000):
gaussian_data.append(rnd.gauss(0.0, 1.0))
# Get the feature data
[mean_,stdev_,skew_,kurt_] = compute_features(gaussian_data)
# Check the results
print ('Test 1 for Gaussian Distribution: Our computed values vs. theoretical values\n')
print ('\tGaussian data mean: ' , str('%.12f' % mean_) , '\t\texpected: 0.0')
print ('\tGaussian data stdev:' , str('%.12f' % stdev_), '\t\texpected: 1.0')
print ('\tGaussian data skew: ' , str('%.12f' % skew_) , '\t\texpected: 0.0')
print ('\tGaussian data kurt: ' , str('%.12f' % kurt_) , '\t\texpected: 0.0\n\n')
# Check the results
print ('Test 2 for Gaussian Distribution: Our computed values vs. numpy function values\n')
print ('\tGaussian data mean: ' , str('%.12f' % mean_) , '\t\tnumpy: ' , str('%.12f' % np.mean(gaussian_data)))
print ('\tGaussian data stdev:' , str('%.12f' % stdev_) , '\t\tnumpy: ' , str('%.12f' % np.std(gaussian_data,ddof=1)))
print ('\tGaussian data skew: ' , str('%.12f' % skew_) , '\t\tnumpy: ' , str('%.12f' % skew(gaussian_data)))
print ('\tGaussian data kurt: ' , str('%.12f' % kurt_) , '\t\tnumpy: ' , str('%.12f' % kurtosis(gaussian_data)), '\n\n')
# Now generate some random data, and test the extracted values.
uniform_data = []
for i in range(0, 100000):
uniform_data.append(rnd.uniform(0.0, 1.0))
[mean_,stdev_,skew_,kurt_] = compute_features(uniform_data)
# Standard deviation of uniform distribution is given by:
#
# Sqrt((1/12) (b-a)^2)
#
# where a is the lower limit, and b the upper limit. So...
expected_std = np.sqrt((1.0/12.0) * np.power((1.0-0.0), 2))
# Kurtosis of uniform distribution is given by:
#
# -(6.0/5.0)
expected_kurt = -(6.0/5.0)
# Skew of uniform distribution is given by:
#
# 0
expected_skew = 0
# See this site for details on these computations:
# http://mathworld.wolfram.com/UniformDistribution.html
# Check the results
print ('Test 1 for Uniform Distribution: Our computed values vs. theoretical values\n')
print ('\tUniform data mean: ' , str('%.12f' % mean_) , '\t\texpected: 0.5')
print ('\tUniform data stdev:' , str('%.12f' % stdev_), '\t\texpected: ' , str(expected_std))
print ('\tUniform data skew: ' , str('%.12f' % skew_) , '\t\texpected: ' , str(expected_skew))
print ('\tUniform data kurt: ' , str('%.12f' % kurt_) , '\t\texpected: ' , str(expected_kurt) , '\n\n')
# Check the results
print ('Test 2 for Uniform Distribution: Our computed values vs. numpy function values\n')
print ('\tUniform data mean: ' , str('%.12f' % mean_) , '\t\tnumpy: ' , str('%.12f' % np.mean(uniform_data)))
print ('\tUniform data stdev:' , str('%.12f' % stdev_), '\t\tnumpy: ' , str('%.12f' % np.std(uniform_data,ddof=1)))
print ('\tUniform data skew: ' , str('%.12f' % skew_) , '\t\tnumpy: ' , str('%.12f' % skew(uniform_data)))
print ('\tUniform data kurt: ' , str('%.12f' % kurt_) , '\t\tnumpy: ' , str('%.12f' % kurtosis(uniform_data)))
def scale(data,new_min, new_max):
Scales data to within the range [new_min,new_max].
Parameters
----------
:param data: the data to scale.
:param new_min: the new minimum value for the data range.
:param new_max: the new maximum value for the data range.
Returns
----------
:return: A new array with the data scaled to within the range [new_min,new_max].
min_ = min(data)
max_ = max(data)
new_data = []
for n in range(len(data)):
value = data[n]
x = (new_min * (1-( (value-min_) /( max_- min_ )))) + (new_max * ( (value-min_) /( max_- min_ ) ))
new_data.append(x)
return new_data
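For the common case of rescaling to [0, 1], the formula in scale() reduces to plain min-max normalisation; this quick sanity check is an illustrative addition, not original code:

```python
# Min-max rescaling to [0, 1] -- the behaviour scale(data, 0.0, 1.0)
# above is intended to have (illustrative addition).
data = [2.0, 4.0, 6.0]
lo, hi = min(data), max(data)
rescaled = [(x - lo) / (hi - lo) for x in data]
print(rescaled)  # [0.0, 0.5, 1.0]
```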
from sklearn.model_selection import train_test_split
X = [] # Stores the feature data.
Y = [] # Stores the class labels.
# Add pulsar examples.
for i in range(0, len(pulsar_data)):
# Now here we extract the features with the call
# to compute_features(). We also scale each profile
# so that its values fall in the range [0,1]. This is
# done via the call to scale(...).
X.append(compute_features(scale(pulsar_data[i],0.0,1.0)))
Y.append(1)
# Add non-pulsar examples.
for i in range(0, len(non_pulsar_data)):
# Now here we extract the features with the call
# to compute_features(). We also scale each profile
# so that its values fall in the range [0,1]. This is
# done via the call to scale(...).
X.append(compute_features(scale(non_pulsar_data[i],0.0,1.0)))
Y.append(0)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.999)
print ('\nExamples in training set: ' , str(len(x_train)))
print ('Examples in testing set: ' , str(len(x_test)))
# There should be 4 features per example. Lets just check this is
# the case.
print ('Dimensions of training set: ' , str(np.asarray(x_train).shape))
print ('Dimensions of testing set: ' , str(np.asarray(x_test).shape))
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
# First train the classifier with call to fit.
classifier.fit(x_train, y_train)
# Now obtain the classifiers 'score'
accuracy = classifier.score(x_test, y_test)
print ("Naive Bayes Classifier accuracy: ", (100* accuracy), "%.")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we plot a single example of both classes, to show what the data looks like. First the pulsar example.
Step2: It is clear that the peak is not in the centre. For most examples it is, but not for all. How about for the non-pulsar examples?
Step4: The non-pulsar example doesn't appear to be correctly centred either. So we centre the data using a simple function. We define this function below
Step5: Now we execute this centering function.
Step7: Now that the data is correctly loaded and centred, we can move on. How about we compute some machine learning features from the data? We can use the features devised by Lyon et al. 2016. The code provided below will allow us to extract these.
Step8: Feature Extraction Unit Test
Step9: It's clear that the function is producing values very close to those expected from the theory. It is also clear that our function is giving the same answers to the numpy function. So it appears to be working well. Now for another test, this time on the uniform distribution.
Step11: It's clear that the function is producing values very close to those expected from the theory. It is also clear that our function is giving the same answers to the numpy function. Based on these results, I trust that the function is correct.
Step12: Pulsar Classification
Step13: Now let's create a test/train data set split.
Step14: Now we build and test the classifier. We'll just use a basic calssfier here to keep things simple.
|
6,135
|
<ASSISTANT_TASK:>
Python Code:
from openpathsampling.ensemble import SlicedTrajectoryEnsemble, SequentialEnsemble, AllInXEnsemble, AllOutXEnsemble, LengthEnsemble
from openpathsampling.collectivevariable import FunctionCV
from openpathsampling.volume import CVDefinedVolume
from openpathsampling.engines import Trajectory
# This is a hack to easily create test sequences that act as "trajectories" for us
from openpathsampling.tests.test_helpers import CallIdentity
op = CallIdentity()
vol = CVDefinedVolume(op, -0.5, 0.5)
ens = SequentialEnsemble([
AllInXEnsemble(vol),
AllOutXEnsemble(vol),
AllInXEnsemble(vol) & LengthEnsemble(1)
])
inV = 0.0
outV = 1.0
even_slice = slice(None,None,2)
total_sliced_whole = SlicedTrajectoryEnsemble(ens, even_slice)
subtraj_sliced_member = SequentialEnsemble([
AllInXEnsemble(vol),
SlicedTrajectoryEnsemble(AllOutXEnsemble(vol), even_slice),
AllInXEnsemble(vol) & LengthEnsemble(1)
])
# TODO: We have yet to implement SlicedSequentialEnsemble.
# This would be nice for completeness, but it really shouldn't be a priority.
traj = {}
traj[0] = Trajectory([inV, outV, inV])
traj[1] = Trajectory([inV, inV, outV, outV, inV, inV])
traj[2] = Trajectory([inV, outV, outV, outV, inV])
traj[3] = Trajectory([inV, outV, inV, outV, inV, outV, inV, inV])
traj[4] = Trajectory([inV, outV, outV, inV, outV, inV, outV, inV, inV])
traj[5] = Trajectory([inV, outV, outV, inV, outV, inV, outV, inV])
traj[6] = Trajectory([inV, outV, inV, outV, inV, inV])
traj[7] = Trajectory([inV, outV, outV, outV, inV, outV])
for i in range(len(traj)):
print "ens(traj["+str(i)+"]) ==", ens(traj[i])
for i in range(len(traj)):
print "total_sliced_whole(traj["+str(i)+"]) ==", total_sliced_whole(traj[i])
for i in range(len(traj)):
print "subtraj_sliced_member(traj["+str(i)+"]) ==", subtraj_sliced_member(traj[i])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Slicing the global trajectory for the whole SequentialEnsemble
Step2: Slicing the subtrajectory for a member of the SequentialEnsemble
Step3: Slicing the total trajectory for a member of the SequentialEnsemble
Step4: Example trajectories for these ensembles
Step5: First, let's use the original ensemble ens
Step6: What about when we slice the whole trajectory into its even components as with total_sliced_whole? Remembering that Python counts from 0, that means a trajectory like traj[4] becomes "in, out, out, out, in", which satifies the original ensemble. What happens with the other ensembles?
Step7: Notice that traj[0] no longer works
|
6,136
|
<ASSISTANT_TASK:>
Python Code:
from kaggle_environments import make, evaluate
# Create the game environment
# Set debug=True to see the errors if your agent refuses to run
env = make("connectx", debug=True)
# List of available default agents
print(list(env.agents))
# Two random agents play one game round
env.run(["random", "random"])
# Show the game
env.render(mode="ipython")
#$HIDE_INPUT$
import random
import numpy as np
# Selects random valid column
def agent_random(obs, config):
valid_moves = [col for col in range(config.columns) if obs.board[col] == 0]
return random.choice(valid_moves)
# Selects middle column
def agent_middle(obs, config):
return config.columns//2
# Selects leftmost valid column
def agent_leftmost(obs, config):
valid_moves = [col for col in range(config.columns) if obs.board[col] == 0]
return valid_moves[0]
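For symmetry with agent_leftmost, a rightmost variant is easy to sketch; it is an illustrative addition (obs and config follow the same structure assumed by the agents above):

```python
# Selects rightmost valid column (illustrative addition, not one of the
# original three agents).
def agent_rightmost(obs, config):
    valid_moves = [col for col in range(config.columns) if obs.board[col] == 0]
    return valid_moves[-1]
```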
# Agents play one game round
env.run([agent_leftmost, agent_random])
# Show the game
env.render(mode="ipython")
#$HIDE_INPUT$
def get_win_percentages(agent1, agent2, n_rounds=100):
# Use default Connect Four setup
config = {'rows': 6, 'columns': 7, 'inarow': 4}
# Agent 1 goes first (roughly) half the time
outcomes = evaluate("connectx", [agent1, agent2], config, [], n_rounds//2)
# Agent 2 goes first (roughly) half the time
outcomes += [[b,a] for [a,b] in evaluate("connectx", [agent2, agent1], config, [], n_rounds-n_rounds//2)]
print("Agent 1 Win Percentage:", np.round(outcomes.count([1,-1])/len(outcomes), 2))
print("Agent 2 Win Percentage:", np.round(outcomes.count([-1,1])/len(outcomes), 2))
print("Number of Invalid Plays by Agent 1:", outcomes.count([None, 0]))
print("Number of Invalid Plays by Agent 2:", outcomes.count([0, None]))
get_win_percentages(agent1=agent_middle, agent2=agent_random)
get_win_percentages(agent1=agent_leftmost, agent2=agent_random)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The "random" agent selects (uniformly) at random from the set of valid moves. In Connect Four, a move is considered valid if there's still space in the column to place a disc (i.e., if the board has seven rows, the column has fewer than seven discs).
Step2: You can use the player above to view the game in detail
Step3: So, what are obs and config, exactly?
Step4: The outcome of a single game is usually not enough information to figure out how well our agents are likely to perform. To get a better idea, we'll calculate the win percentages for each agent, averaged over multiple games. For fairness, each agent goes first half of the time.
Step5: Which agent do you think performs better against the random agent?
|
6,137
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to int, before using it below
sales['floors'] = sales['floors'].astype(int)
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
X = np.array([[3.,5.,8.],[4.,12.,15.]])
print X
norms = np.linalg.norm(X, axis=0) # gives [norm(X[:,0]), norm(X[:,1]), norm(X[:,2])]
print norms
print X / norms # gives [X[:,0]/norm(X[:,0]), X[:,1]/norm(X[:,1]), X[:,2]/norm(X[:,2])]
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis = 0)
normalized_features = feature_matrix / norms
return (normalized_features, norms)
features, norms = normalize_features(np.array([[3.,6.,9.],[4.,8.,12.]]))
print features
# should print
# [[ 0.6 0.6 0.6]
# [ 0.8 0.8 0.8]]
print norms
# should print
# [5. 10. 15.]
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
simple_feature_matrix, norms = normalize_features(simple_feature_matrix)
weights = np.array([1., 4., 1.])
prediction = predict_output(simple_feature_matrix, weights)
ro = [ np.dot(simple_feature_matrix[:,i], output - prediction + weights[i]*simple_feature_matrix[:,i]) for i in range(len(weights))]
[x * 2 for x in ro]
def lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty):
# compute prediction
prediction = predict_output(feature_matrix, weights)
feature_i = feature_matrix[:,i]
# compute ro[i] = SUM[ [feature_i]*(output - prediction + weight[i]*[feature_i]) ]
ro_i = np.dot(feature_i, output - prediction + weights[i]*feature_i)
if i == 0: # intercept -- do not regularize
new_weight_i = ro_i
elif ro_i < -l1_penalty/2.:
new_weight_i = ro_i + l1_penalty/2.
elif ro_i > l1_penalty/2.:
new_weight_i = ro_i - l1_penalty/2
else:
new_weight_i = 0.
return new_weight_i
# should print 0.425558846691
import math
print lasso_coordinate_descent_step(1, np.array([[3./math.sqrt(13),1./math.sqrt(10)],[2./math.sqrt(13),3./math.sqrt(10)]]),
np.array([1., 1.]), np.array([1., 4.]), 0.1)
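The branching update above is the soft-thresholding operator; a vectorised form makes this explicit. This is an illustrative addition, not part of the original assignment code:

```python
import numpy as np

# Soft-thresholding: shrink ro toward zero by l1_penalty/2, clipping at zero.
# Equivalent to the non-intercept branches of lasso_coordinate_descent_step.
def soft_threshold(rho, l1_penalty):
    return np.sign(rho) * np.maximum(np.abs(rho) - l1_penalty / 2.0, 0.0)

print(soft_threshold(np.array([-3.0, 0.02, 3.0]), 0.1))  # [-2.95  0.    2.95]
```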
from math import *
def lasso_cyclical_coordinate_descent(feature_matrix, output, initial_weights, l1_penalty, tolerance):
weights = list(initial_weights)
stop = False
while not stop:
maxChange = 0
for i in range(len(weights)):
old_weights_i = weights[i]
weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)
change = abs(weights[i] - old_weights_i)
if change > maxChange:
maxChange = change
if maxChange < tolerance:
stop = True
return weights
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
initial_weights = np.zeros(3)
l1_penalty = 1e7
tolerance = 1.0
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
(normalized_simple_feature_matrix, simple_norms) = normalize_features(simple_feature_matrix) # normalize features
weights = lasso_cyclical_coordinate_descent(normalized_simple_feature_matrix, output,
initial_weights, l1_penalty, tolerance)
def rss(predictions, output):
df = predictions - output
return np.dot(df, df)
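A quick sanity check of the RSS definition, written self-contained as an illustrative addition: for predictions [1, 2] against targets [0, 0], RSS = 1^2 + 2^2 = 5.

```python
import numpy as np

# RSS = sum of squared residuals = (pred - y) . (pred - y)
pred = np.array([1.0, 2.0])
y = np.array([0.0, 0.0])
print(np.dot(pred - y, pred - y))  # 5.0
```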
print rss(predict_output(normalized_simple_feature_matrix, weights), output)
print weights
train_data,test_data = sales.random_split(.8,seed=0)
all_features = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated']
(feature_matrix, output) = get_numpy_data(train_data, all_features, my_output)
(normalized_feature_matrix, norms) = normalize_features(feature_matrix) # normalize features
tolerance = 1
l1_penalty = 1e7
initial_weights = np.zeros(1+len(all_features))
weights1e7 = lasso_cyclical_coordinate_descent(normalized_feature_matrix, output,
initial_weights, l1_penalty, tolerance)
features_inc_constant = all_features[:]
features_inc_constant.insert(0, 'constant')
print initial_weights
print weights1e7
for i in range(len(weights1e7)):
    if weights1e7[i] != 0:
        print features_inc_constant[i]
l1_penalty = 1e8
tolerance=1
weights1e8 = lasso_cyclical_coordinate_descent(normalized_feature_matrix, output,
initial_weights, l1_penalty, tolerance)
print weights1e8
for i in range(len(weights1e8)):
    if weights1e8[i] != 0:
        print features_inc_constant[i]
l1_penalty = 1e4
tolerance=5e5
weights1e4 = lasso_cyclical_coordinate_descent(normalized_feature_matrix, output,
initial_weights, l1_penalty, tolerance)
print weights1e4
for i in range(len(weights1e4)):
    if weights1e4[i] != 0:
        print features_inc_constant[i]
normalized_weights1e7 = weights1e7 / norms
normalized_weights1e8 = weights1e8 / norms
normalized_weights1e4 = weights1e4 / norms
print normalized_weights1e7[3]
print normalized_weights1e7
print normalized_weights1e8
print normalized_weights1e4
(test_feature_matrix, test_output) = get_numpy_data(test_data, all_features, 'price')
def cal_rss(feature_matrix, weights, output):
return rss(predict_output(feature_matrix, weights), output)
cal_rss(test_feature_matrix, normalized_weights1e7, test_output)
cal_rss(test_feature_matrix, normalized_weights1e8, test_output)
cal_rss(test_feature_matrix, normalized_weights1e4, test_output)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step4: Normalize features
Step5: Numpy provides a shorthand for computing 2-norms of each column
Step6: To normalize, apply element-wise division
Step7: Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data.
Step8: To test the function, run the following
Step9: Implementing Coordinate Descent with normalized features
Step10: Don't forget to normalize features
Step11: We assign some random set of initial weights and inspect the values of ro[i]
Step12: Use predict_output() to make predictions on this data.
Step13: Compute the values of ro[i] for each feature in this simple model, using the formula given above, using the formula
Step14: QUIZ QUESTION
Step15: QUIZ QUESTION
Step16: To test the function, run the following cell
Step17: Cyclical coordinate descent
Step18: Using the following parameters, learn the weights on the sales dataset.
Step19: First create a normalized version of the feature matrix, normalized_simple_feature_matrix
Step20: Then, run your implementation of LASSO coordinate descent
Step21: QUIZ QUESTIONS
Step22: Let us consider the following set of features.
Step23: First, create a normalized feature matrix from the TRAINING data with these features. (Make you store the norms for the normalization, since we'll use them later)
Step24: First, learn the weights with l1_penalty=1e7, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e7, you will need them later.
Step25: QUIZ QUESTION
Step26: QUIZ QUESTION
Step27: QUIZ QUESTION
Step28: To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then
Step29: Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix
|
6,138
|
<ASSISTANT_TASK:>
Python Code:
b = True
if b:
print('b is True')
b = False
if b:
print('b is True')
print('b is False')
b = False
if b:
print('b is True')
print('b is False')
b = False
if b:
print('b is True')
print('b is False')
b = True
if b:
print('b is True')
else:
print('b is False')
num = 7
if num > 5:
print('num is greater than 5')
t = 'this is text'
if t == 'this is text':
print('the text matches')
num1 = 3
num2 = 7
if num1 > 5:
print('num1 is greater than 5')
elif num2 > 5:
print('num2 is greater than 5')
else:
print("they're both too small!")
num1 = 3
num2 = 7
if num1 < 5 and num2 < 5:
print("they're both too small!")
if num1 < 5 or num2 < 5:
print("at least one of them is too small!")
fruits = ['apples', 'oranges', 'bananas']
for fruit in fruits:
print(fruit)
dict = {'a': 1, 'b': 2, 'c': 3}
for key in dict.keys():
print(dict[key])
print(range(5))
for i in range(5):
print('Hello')
fruits = ['apples', 'oranges', 'bananas']
for i in range(len(fruits)):
print(fruits[i])
fruits = ['apples', 'oranges', 'bananas']
for i, fruit in enumerate(fruits):
print('the', fruit, 'are in position', i)
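A close companion to enumerate() is zip(), which pairs items from two lists in parallel; this example is an illustrative addition, not original notebook code:

```python
# zip() walks two lists in lockstep (illustrative addition).
fruits = ['apples', 'oranges', 'bananas']
prices = [1.5, 2.0, 0.5]
for fruit, price in zip(fruits, prices):
    print(fruit, 'cost', price)
```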
i = 0
while i < 5:
print(i)
i += 1
i = i + 1   # explicit equivalent of i += 1 (placed after the loop so each pass still adds just 1)
# while True:
# print('infinity')
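One safe way out of a loop like the commented-out infinite one above is the break statement, which exits a loop early; this example is an illustrative addition, not part of the original notebook:

```python
# 'break' exits a loop early; 'continue' skips to the next pass.
for i in range(10):
    if i == 3:
        break       # leave the loop entirely once i reaches 3
    print(i)        # prints 0, 1, 2
```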
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here, since b is in fact True, it passes the test, causing the code that is inset after the 'if b:' line to execute.
Step2: will skip both print lines if b is False. However, by deleting the indent on the last line, you take that line out of the nested structure, and it will now execute regardless of whether b is True or False
Step3: On the other hand, if you inset the last line one level further
Step4: You will get an error saying 'IndentationError: unexpected indent'.
Step5: In this case, when b is True the first statement will execute, and when b is False the second statement will execute. Try this code both ways to see.
Step6: Or the contents of a string
Step7: In this example I use the double equals '==' operator to check if one thing equals another. This is the standard way to check equality, since the single equals '=' is reserved for assigning values to variables. The most common comparison operators are
Step8: This creates a chain of tests that happen in order. If the first test passes, that block of code is executed, and the rest of the conditional is skipped. If it fails, the second test (after the 'elif:') is evaluated, and so on; the 'else:' block runs only if every test fails.
Step9: 3. Loops
Step10: The 'for item in list
Step11: If you run this code, you will see that the entries are not necessarily returned in the same order that they are typed. This is because dictionaries, unlike lists, do not enforce a specific order. However, iterating through the keys using the .keys() function will ensure that you go through each item in the dictionary.
Step12: Using the range() function, we can set up a basic loop like
Step13: This will simply run the code inside the loop five times, since in effect we are creating a list of five sequential numbers, and then iterating over every item in that list. In addition, we are also storing each successive number in the variable 'i', which we can also use within the loop. A common example is to combine both strategies by tying the range() function to the length of a list (using the len() function), and then using the iterating number to get items from that list
Step14: Although this might seem redundant given the first example, there are times when you want to build a loop that has access to both an item within a list, as well as an iterator which specifies its index. In such cases, you can use a special function called enumerate() which takes in a list and returns both the item and its index
Step15: While the 'for' loop will serve most purposes, there is another kind of loop which will iterate over a piece of code until a certain condition is met
Step16: In this case, the loop will keep going while it's condition is satisfied, and only stop once the variable 'i' obtains a value greater or equal to 5. This type of loop can be useful if you do not know how long the loop should be run for, or if you want to make the termination criteria somehow dynamic relative to other activities within the script. It requires a bit more setup, however, as the value tested must first be initialized (i = 0), and there has to be code within the loop which changes that value in such a way that it eventually meets the exit criteria. The '+=' notation here is a shorthand in Python for adding a value to a variable. You can write the same thing explicitly like
Step17: This type of loop is inherently more dangerous than a 'for' loop, because it can easily create a situation where the loop can never exit. In theory, such a loop will run indefinitely, although in practice it will most certainly cause Python to crash. The most dangerous kind of loop is also the simplest
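One common safeguard against such a runaway loop is to pair the real exit condition with an explicit iteration cap; a minimal sketch (the cap value here is arbitrary):

```python
count = 0
max_iterations = 1000  # arbitrary safety cap

while True:
    count += 1
    if count >= 5:
        break  # the real exit condition
    if count > max_iterations:
        # safety net: fail loudly instead of looping forever
        raise RuntimeError('loop failed to terminate')

print(count)
```

The `break` ends the loop as soon as the intended condition is met, while the cap guarantees termination even if that condition is never reached.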
|
6,139
|
<ASSISTANT_TASK:>
Python Code:
# give access to importing dwarfz
import os, sys
dwarfz_package_dir = os.getcwd().split("dwarfz")[0]
if dwarfz_package_dir not in sys.path:
sys.path.insert(0, dwarfz_package_dir)
import dwarfz
# back to regular import statements
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(context="poster", style="ticks", font_scale=1.4)
import numpy as np
import pandas as pd
from scipy.special import expit
import pathlib
import matplotlib as mpl
mpl.rcParams['savefig.dpi'] = 80
mpl.rcParams['figure.dpi'] = 80
mpl.rcParams['figure.figsize'] = 2*np.array((8,6))
mpl.rcParams['figure.facecolor'] = "white"
COSMOS_filename = pathlib.Path(dwarfz.data_dir_default) / "COSMOS_reference.sqlite"
COSMOS = dwarfz.datasets.COSMOS(COSMOS_filename)
HSC_filename = pathlib.Path(dwarfz.data_dir_default) / "HSC_COSMOS_median_forced.sqlite3"
HSC = dwarfz.datasets.HSC(HSC_filename)
matches_filename = pathlib.Path(dwarfz.data_dir_default) / "matches.sqlite3"
matches_df = dwarfz.matching.Matches.load_from_filename(matches_filename)
combined = matches_df[matches_df.match].copy()
combined["ra"] = COSMOS.df.loc[combined.index].ra
combined["dec"] = COSMOS.df.loc[combined.index].dec
combined["photo_z"] = COSMOS.df.loc[combined.index].photo_z
combined["log_mass"] = COSMOS.df.loc[combined.index].mass_med
photometry_cols = [
"gcmodel_flux","gcmodel_flux_err","gcmodel_flux_flags", "gcmodel_mag",
"rcmodel_flux","rcmodel_flux_err","rcmodel_flux_flags", "rcmodel_mag",
"icmodel_flux","icmodel_flux_err","icmodel_flux_flags", "icmodel_mag",
"zcmodel_flux","zcmodel_flux_err","zcmodel_flux_flags", "zcmodel_mag",
"ycmodel_flux","ycmodel_flux_err","ycmodel_flux_flags", "ycmodel_mag",
]
for col in photometry_cols:
combined[col] = HSC.df.loc[combined.catalog_2_ids][col].values
combined["g_minus_r"] = combined.gcmodel_mag - combined.rcmodel_mag
combined["r_minus_i"] = combined.rcmodel_mag - combined.icmodel_mag
combined["i_minus_z"] = combined.icmodel_mag - combined.zcmodel_mag
combined["z_minus_y"] = combined.zcmodel_mag - combined.ycmodel_mag
mask = np.isfinite(combined["g_minus_r"]) & np.isfinite(combined["r_minus_i"]) \
& np.isfinite(combined["i_minus_z"]) & np.isfinite(combined["z_minus_y"]) \
& np.isfinite(combined["icmodel_mag"]) \
& (~combined.gcmodel_flux_flags) & (~combined.rcmodel_flux_flags) \
& (~combined.icmodel_flux_flags) & (~combined.zcmodel_flux_flags) \
& (~combined.ycmodel_flux_flags)
combined = combined[mask]
df_frankenz = pd.read_sql_table("photo_z",
"sqlite:///{}".format(
pathlib.Path(dwarfz.data_dir_default)
/ "HSC_matched_to_FRANKENZ.sqlite"),
index_col="object_id")
df_frankenz.head()
combined = combined.join(df_frankenz[["photoz_best", "photoz_risk_best"]],
on="catalog_2_ids")
low_z = (combined.photo_z < .15)
low_mass = (combined.log_mass > 8) & (combined.log_mass < 9)
combined["low_z_low_mass"] = (low_z & low_mass)
combined.low_z_low_mass.mean()
combined.low_z_low_mass.sum()
combined.shape
features = combined.loc[:,["g_minus_r", "r_minus_i", "i_minus_z", "z_minus_y",
"icmodel_mag",
"photoz_best",
"photoz_risk_best" # The risk of photoz_best being outside of the range z_true +- 0.15(1+z_true). It ranges from 0 (safe) to 1(risky)
]]
target = combined.loc[:,["low_z_low_mass"]]
target.mean()
COSMOS_field_area = 2 # sq. degree
N_COSMOS_total = HSC.df.shape[0]
N_COSMOS_good = combined.shape[0]
true_dwarf_density = target.sum().values[0] / COSMOS_field_area
print("true dwarf density: {:.2f} / sq. deg.".format(true_dwarf_density))
testing_fraction = .1
np.random.seed(0)
shuffled_indices = np.random.permutation(target.index.values)
N_testing_indices = int(testing_fraction*shuffled_indices.size)
testing_set_indices = shuffled_indices[:N_testing_indices]
training_set_indices = shuffled_indices[N_testing_indices:]
features_train = features.loc[training_set_indices]
features_test = features.loc[testing_set_indices]
target_train = target.loc[training_set_indices]
target_test = target.loc[testing_set_indices]
true_dwarf = target_test.values.flatten()
true_non_dwarf = ~target_test.values.flatten()
# def get_classification_characteristics(target_prob, threshold_prob, verbose=False):
# target_prediction = (target_prob > threshold_prob)
# prediction_dwarf = target_prediction
# prediction_non_dwarf = ~target_prediction
# completeness = (true_dwarf & prediction_dwarf).sum() / true_dwarf.sum()
# purity = (true_dwarf & prediction_dwarf).sum() / prediction_dwarf.sum()
# sample_size_reduction = prediction_dwarf.size / prediction_dwarf.sum()
# true_positives = np.sum(true_dwarf & prediction_dwarf)
# false_positives = np.sum(true_non_dwarf & prediction_dwarf)
# true_negatives = np.sum(true_non_dwarf & prediction_non_dwarf)
# false_negatives = np.sum(true_dwarf & prediction_non_dwarf)
# true_positive_rate = true_positives / true_dwarf.sum()
# false_positive_rate = false_positives / true_non_dwarf.sum()
# objects_per_sq_deg = N_COSMOS_good / COSMOS_field_area / sample_size_reduction
# if verbose:
# print("completeness: ", completeness)
# print("purity: ", purity)
# print("sample_size_reduction: ", sample_size_reduction)
# print("true positive rate: ", true_positive_rate)
# print("false positive rate: ", false_positive_rate)
# print("objects per sq deg: ", objects_per_sq_deg)
# return {
# "completeness": completeness,
# "purity": purity,
# "sample_size_reduction": sample_size_reduction,
# "threshold_prob": threshold_prob,
# "true_positive_rate": true_positive_rate,
# "false_positive_rate": false_positive_rate,
# "objects_per_sq_deg" : objects_per_sq_deg,
# }
color_RF = "g"
color_LR = "b"
color_MC = "r"
label_RF = "Random Forest"
label_LR = "Logistic Regression"
label_MC = "Magnitude Cut"
linewidth = 4
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
n_folds_default = 10
def get_cross_validation_matrix(classifier, seed=0, folds=n_folds_default, only_i_mag=False):
np.random.seed(seed)
testing_fraction = 1/folds
shuffled_indices = np.random.permutation(target.index.values)
all_indices_set = set(shuffled_indices)
results = {HSC_id: []
for HSC_id in combined.loc[shuffled_indices].catalog_2_ids.drop_duplicates().values}
for fold in range(folds):
print("\rfold: {} / {}".format(fold+1, folds), end="", flush=True)
if fold == folds-1:
testing_set_indices = shuffled_indices[fold*N_testing_indices:]
else:
testing_set_indices = shuffled_indices[fold*N_testing_indices:(fold+1)*N_testing_indices]
training_set_indices = np.array(list(all_indices_set - set(testing_set_indices)))
features_train = features.loc[training_set_indices]
features_test = features.loc[testing_set_indices]
if only_i_mag:
features_train = features_train[["icmodel_mag"]]
features_test = features_test[["icmodel_mag"]]
target_train = target.loc[training_set_indices]
target_test = target.loc[testing_set_indices]
classifier.fit(features_train, target_train.values.flatten())
target_prob = classifier.predict_proba(features_test)[:,1]
for i, COSMOS_id in enumerate(testing_set_indices):
HSC_id = combined.loc[COSMOS_id].catalog_2_ids
results[HSC_id].append(target_prob[i])
return results
classifier_i_mag = LogisticRegression(class_weight=None,
solver="lbfgs",
max_iter=300)
classifier_LR = LogisticRegression(class_weight=None,
solver="lbfgs",
max_iter=300)
classifier_RF = RandomForestClassifier(n_estimators=1000,
n_jobs=4)
hdf_file = pathlib.Path("results_cross-validated_all.hdf5")
overwrite = False
if (not hdf_file.is_file()) or overwrite:
results_RF = get_cross_validation_matrix(classifier_RF)
results_LR = get_cross_validation_matrix(classifier_LR)
results_i_mag = get_cross_validation_matrix(classifier_i_mag,
only_i_mag=True)
HSC_ids = list(sorted(results_LR.keys()))
HSC_ids = [HSC_id for HSC_id in HSC_ids
if len(results_LR[HSC_id])==1]
df_results = pd.DataFrame({
"HSC_id": HSC_ids,
"LR_prob": [results_LR[HSC_id][0] for HSC_id in HSC_ids],
"RF_prob": [results_RF[HSC_id][0] for HSC_id in HSC_ids],
"i_mag_prob": [results_i_mag[HSC_id][0] for HSC_id in HSC_ids],
"target": combined.set_index("catalog_2_ids").loc[HSC_ids].low_z_low_mass
})
df_results.to_hdf(hdf_file, key="results")
else:
df_results = pd.read_hdf(hdf_file)
df_results.head()
threshold_probs = expit(np.linspace(-9, 6))
threshold_probs = np.array([-1e-6, *threshold_probs, 1+1e-6])
def get_purities(key, df_results=df_results, threshold_probs=threshold_probs):
purities = np.empty_like(threshold_probs)
df_tmp = df_results[[key, "target"]]
for i, threshold_prob in enumerate(threshold_probs):
mask = df_tmp[key] > threshold_prob
purities[i] = df_tmp["target"][mask].mean()
return purities
def get_completenesses(key, df_results=df_results, threshold_probs=threshold_probs):
completenesses = np.empty_like(threshold_probs)
df_tmp = df_results[[key, "target"]]
df_tmp = df_tmp[df_tmp.target]
for i, threshold_prob in enumerate(threshold_probs):
mask = df_tmp[key] > threshold_prob
completenesses[i] = mask.mean()
return completenesses
def get_selected_object_density(key, df_results=df_results, threshold_probs=threshold_probs):
    """per sq deg"""
object_density = np.empty_like(threshold_probs)
df_tmp = df_results[[key, "target"]]
for i, threshold_prob in enumerate(threshold_probs):
mask = df_tmp[key] > threshold_prob
object_density[i] = mask.sum()
return object_density / COSMOS_field_area
def get_FPRs(key, df_results=df_results, threshold_probs=threshold_probs):
FPRs = np.empty_like(threshold_probs)
df_tmp = df_results[[key, "target"]]
df_tmp = df_tmp[~df_tmp.target]
for i, threshold_prob in enumerate(threshold_probs):
mask = df_tmp[key] > threshold_prob
FPRs[i] = mask.mean()
return FPRs
purities_RF = get_purities("RF_prob")
completenesses_RF = get_completenesses("RF_prob")
TPR_RF = completenesses_RF
FPR_RF = get_FPRs("RF_prob")
object_density_RF = get_selected_object_density("RF_prob")
purities_LR = get_purities("LR_prob")
completenesses_LR = get_completenesses("LR_prob")
TPR_LR = completenesses_LR
FPR_LR = get_FPRs("LR_prob")
object_density_LR = get_selected_object_density("LR_prob")
purities_i_mag = get_purities("i_mag_prob")
completenesses_i_mag = get_completenesses("i_mag_prob")
TPR_i_mag = completenesses_i_mag
FPR_i_mag = get_FPRs("i_mag_prob")
object_density_i_mag = get_selected_object_density("i_mag_prob")
import sklearn
import sklearn.metrics
AUC_RF = sklearn.metrics.average_precision_score(df_results.target, df_results.RF_prob)
plt.plot(completenesses_RF, purities_RF,
marker="o", color=color_RF, label="Random Forest (AUC={:.2f})".format(AUC_RF),
linewidth=linewidth,
)
AUC_LR = sklearn.metrics.average_precision_score(df_results.target, df_results.LR_prob)
plt.plot(completenesses_LR, purities_LR,
marker="o", color=color_LR, label="Logistic Regression (AUC={:.2f})".format(AUC_LR),
linestyle="dashed",
linewidth=linewidth,
)
AUC_i_mag = sklearn.metrics.average_precision_score(df_results.target, df_results.i_mag_prob)
plt.plot(completenesses_i_mag, purities_i_mag,
marker="o", color=color_MC, label="$i$-band cut (AUC={:.2f})".format(AUC_i_mag),
linestyle="dotted",
linewidth=linewidth,
)
plt.xlabel("Completeness")
plt.ylabel("Purity")
plt.ylim(0,1)
leg = plt.legend(loc="best")
filename = "plots_for_thesis/purity-completeness-all"
plt.tight_layout()
plt.savefig(filename + ".pdf")
plt.savefig(filename + ".png")
AUC_RF = sklearn.metrics.roc_auc_score(df_results.target, df_results.RF_prob)
plt.plot(FPR_RF, TPR_RF,
marker="o", color=color_RF, label="Random Forest (AUC={:.3f})".format(AUC_RF),
drawstyle="steps-post",
linewidth=linewidth,
)
AUC_LR = sklearn.metrics.roc_auc_score(df_results.target, df_results.LR_prob)
plt.plot(FPR_LR, TPR_LR,
marker="o", color=color_LR, label="Logistic Regression (AUC={:.3f})".format(AUC_LR),
linestyle="dashed",
drawstyle="steps-post",
linewidth=linewidth,
)
AUC_i_mag = sklearn.metrics.roc_auc_score(df_results.target, df_results.i_mag_prob)
plt.plot(FPR_i_mag, TPR_i_mag,
marker="o", color=color_MC, label="$i$-band cut (AUC={:.3f})".format(AUC_i_mag),
linestyle="dotted",
drawstyle="steps-post",
linewidth=linewidth,
)
plt.plot([0, 1], [0, 1], linestyle="dotted", color="k", label="Random guessing",
linewidth=linewidth,
)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
# plt.xlim(0,1)
# plt.ylim(0,1)
plt.legend(loc="best")
filename = "plots_for_thesis/ROC-all"
plt.tight_layout()
plt.savefig(filename + ".pdf")
plt.savefig(filename + ".png")
f, (ax1, ax2) = plt.subplots(2, sharex=True)
f.subplots_adjust(hspace=0.1)
ax1.plot(object_density_RF, purities_RF,
marker="o", color=color_RF, label=label_RF,
linewidth=linewidth,
)
ax1.axvline(1e3,
color="black", linestyle="dashed", label="DNN Training Set Density")
ax1.set_ylabel("Purity")
ax1.set_xscale("log")
ax1.set_ylim(0,1)
ax2.plot(object_density_RF, completenesses_RF,
marker="o", color=color_RF, label=label_RF,
linewidth=linewidth,
)
ax2.axvline(1e3,
color="black", linestyle="dashed", label="DNN Training Set Density",
linewidth=linewidth,
)
ax2.set_xlabel("Number of Selected Objects per sq. deg.")
ax2.set_ylabel("Completeness")
ax2.set_xscale("log")
ax2.set_ylim(0,1)
ax2.legend(loc="best")
plt.tight_layout()
filename = "plots_for_thesis/purity-completeness-RF"
plt.tight_layout()
plt.savefig(filename + ".pdf")
plt.savefig(filename + ".png")
theoretical_probs=np.linspace(0,1,num=11)
empirical_probs_RF = np.empty(theoretical_probs.size-1)
num_in_bin_RF = np.empty_like(empirical_probs_RF)
for i in range(theoretical_probs.size-1):
prob_lim_low = theoretical_probs[i]
prob_lim_high = theoretical_probs[i+1]
mask_RF = (df_results["RF_prob"] >= prob_lim_low) & (df_results["RF_prob"] < prob_lim_high)
empirical_probs_RF[i] = df_results["target"][mask_RF].mean()
num_in_bin_RF[i] = df_results["target"][mask_RF].size
f, (ax1, ax2) = plt.subplots(2, sharex=True,
gridspec_kw = {'height_ratios':[1, 3]},
)
ax1.plot(theoretical_probs, [num_in_bin_RF[0], *num_in_bin_RF],
drawstyle="steps", color=color_RF,
linewidth=linewidth,
)
ax1.set_yscale("log")
ax1.set_ylim(bottom=10**-.5, top=10**6.5)
ax1.yaxis.set_ticks([1e0, 1e3, 1e6])
ax1.set_ylabel("Number of \nGalaxies in Bin")
ax2.step(theoretical_probs, [empirical_probs_RF[0], *empirical_probs_RF],
        color=color_RF, label=label_RF,
linewidth=linewidth,
)
ax2.fill_between(theoretical_probs, theoretical_probs-theoretical_probs[1], theoretical_probs,
step="pre", color="black", label="ideal", alpha=.2,
linewidth=linewidth,
)
plt.xlabel("Reported Probability")
plt.ylabel("Actual (Binned) Probability")
plt.legend(loc="best")
plt.xlim(0,1)
plt.ylim(0,1)
plt.tight_layout()
filename = "plots_for_thesis/probability-calibration-RF"
plt.tight_layout()
plt.savefig(filename + ".pdf")
plt.savefig(filename + ".png")
sklearn.metrics.log_loss(df_results.target, df_results.RF_prob)
n_trees = classifier_RF.n_estimators
n_pseudo_obs = 2
pseudo_obs_class_balance = 0.5
df_results["RF_prob_softened"] = (df_results["RF_prob"] * n_trees + n_pseudo_obs * pseudo_obs_class_balance) \
/ (n_trees + n_pseudo_obs)
sklearn.metrics.log_loss(df_results.target, df_results.RF_prob_softened)
sklearn.metrics.log_loss(df_results.target, df_results.LR_prob)
sklearn.metrics.log_loss(df_results.target, df_results.i_mag_prob)
plt.hist(df_results["RF_prob"], bins=np.linspace(0,1), alpha=.5, color=color_RF, label=label_RF)
plt.hist(df_results["LR_prob"], bins=np.linspace(0,1), alpha=.5, color=color_LR, label=label_LR)
plt.yscale("log")
plt.xlabel("p(dwarf | model)")
plt.ylabel("Number of objects")
plt.legend(loc="best")
plt.hist(1-df_results["RF_prob"],
cumulative=True, alpha=0.9,
label="RF",
color=color_RF)
plt.hist(1-df_results["LR_prob"],
cumulative=True,
label="LR",
color=color_LR)
plt.ylim(ymin=1e-5)
plt.yscale("log")
plt.legend(loc="best")
plt.xlabel("1 - prob(dwarf)")
plt.ylabel("CDF")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Turn magnitudes into colors
Step2: Filter out bad data
Step3: Get FRANKENZ photo-z's
Step4: Create classification labels
Step5: Build Classifiers
Step6: Partition training and testing sets
Step7: Shared Plotting Parameters
Step8: Cross-validation Wrapper
Step9: i-mag cut
Step10: Logistic Regression
Step11: Random Forest
Step12: Save all model results
Step14: Get Metrics from cross-validated results
Step15: Predicted Probabilities
|
6,140
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import uncertainties as uct
from uncertainties import unumpy as unp
import pandas as pd
import pytheos as eos
v0 = uct.ufloat(74.698, 0.004)
k0 = uct.ufloat(160., 3.)
k0p = uct.ufloat(4.0, 0.3)
n_pts = 20
vv0 = np.linspace(1.,0.8, n_pts)
v = vv0 * v0
p = eos.bm3_p(v, v0, k0, k0p)
help(eos.bm3_p)
df = pd.DataFrame()
df['unit-cell volume'] = v
df['pressure'] = p
df
#print(df.to_string(index=False)) # for fancier print
f = plt.figure()
plt.errorbar(unp.nominal_values(p), unp.nominal_values(v), fmt='ko', \
xerr=unp.std_devs(p), yerr=unp.std_devs(v))
plt.xlabel('Pressure (GPa)'); plt.ylabel('Unit-cell volume ($\mathrm{\AA}^3$)');
v_cal = eos.bm3_v(p, v0, k0, k0p)
df = pd.DataFrame()
df['pressure'] = p
df['unit-cell volume'] = v_cal
df
# print(df.to_string(index=False))
k = eos.bm3_k(p, v0, k0, k0p)
df = pd.DataFrame()
df['pressure'] = p
df['bulk modulus'] = k
df
#print(df.to_string(index=False))
f = plt.figure()
plt.errorbar( unp.nominal_values(p), unp.nominal_values(k), \
xerr=unp.std_devs(p), yerr=unp.std_devs(k), fmt='o')
plt.xlabel('Pressure (GPa)'); plt.ylabel('Bulk modulus (GPa)');
gamma0 = uct.ufloat(1.45, 0.02)
q = uct.ufloat(0.8, 0.3)
theta0 = uct.ufloat(800., 0.)
help(eos.constq_pth)
p_hT = eos.bm3_p(v, v0, k0, k0p) + \
eos.constq_pth(v, np.ones_like(v)*2000., v0, gamma0, q, theta0, 2, 4)
df = pd.DataFrame()
df['unit-cell volume'] = v
df['pressure@300K'] = p
df['pressure@2000K'] = p_hT
df
# print(df.to_string(index=False))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Assign uncertainties to the EOS parameters
Step2: We make a numpy array for volume at high pressure.
Step3: Calculate pressure from pytheos.
Step4: How to get help
Step5: Now you can see that error bars for the EOS parameters are used in error propagation calculation for pressure value. Note that the uncertainties in the EOS parameters are correctly applied for propagating uncertainties to both molar volume and pressure.
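Under the hood, uncertainties applies first-order (linear) error propagation; as a standalone sketch of that rule for a simple product (reusing the k0 and k0p numbers from above):

```python
import math

def mul_with_uncertainty(a, sigma_a, b, sigma_b):
    """First-order error propagation for a product of two
    independent quantities: sigma^2 = (b*sigma_a)^2 + (a*sigma_b)^2."""
    value = a * b
    sigma = math.sqrt((b * sigma_a) ** 2 + (a * sigma_b) ** 2)
    return value, sigma

# k0 = 160 +/- 3 GPa, k0p = 4.0 +/- 0.3
value, sigma = mul_with_uncertainty(160.0, 3.0, 4.0, 0.3)
print(value, sigma)
```

The ufloat arithmetic gives the same result automatically, and additionally tracks correlations between derived quantities, which this hand-rolled version does not.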
Step6: Unfortunately to plot with matplotlib, you need to separate nominal values from standard deviation.
Step7: 3. Calculate volume from pressure using pytheos
Step8: Compare this table with the one we showed above for accuracy check.
Step9: 5. High temperature equation of state
Step10: We will use constq_pth for calculating the thermal pressure part of the EOS. Below, I demonstrate how to get help for the function.
Step11: We calculate total pressure at 2000 K below. eos.constq_pth requires input of volume and temperature with the same number of elements. For 2000-K isotherm, we generate a temperature array with 2000 for all elements.
|
6,141
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to float, before creating a new feature.
sales['floors'] = sales['floors'].astype(float)
sales['floors_square'] = sales['floors']*sales['floors']
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
# non_zero_weight = model_all.get("coefficients")["value"]
non_zero_weight = model_all["coefficients"][model_all["coefficients"]["value"] > 0]
non_zero_weight.print_rows(num_rows=20)
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
import numpy as np
import pprint
validation_rss = {}
for l1_penalty in np.logspace(1, 7, num=13):
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=l1_penalty)
predictions = model.predict(validation)
residuals = validation['price'] - predictions
rss = sum(residuals**2)
validation_rss[l1_penalty] = rss
# pprint.pprint(result_dict)
print min(validation_rss.items(), key=lambda x: x[1])
model_test = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=10.0)
predictions_test = model_test.predict(testing)
residuals_test = testing['price'] - predictions_test
rss_test = sum(residuals_test**2)
print rss_test
non_zero_weight_test = model_test["coefficients"][model_test["coefficients"]["value"] != 0]
print model_test["coefficients"]["value"].nnz()
non_zero_weight_test.print_rows(num_rows=20)
max_nonzeros = 7
l1_penalty_values = np.logspace(8, 10, num=20)
print l1_penalty_values
coef_dict = {}
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training, target ='price', features=all_features,
validation_set=None, verbose=None,
l2_penalty=0., l1_penalty=l1_penalty)
coef_dict[l1_penalty] = model['coefficients']['value'].nnz()
pprint.pprint(coef_dict)
l1_penalty_min = 2976351441.6313128
l1_penalty_max = 3792690190.7322536
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
validation_rss = {}
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=l1_penalty)
predictions = model.predict(validation)
residuals = validation['price'] - predictions
rss = sum(residuals**2)
validation_rss[l1_penalty] = rss, model['coefficients']['value'].nnz()
bestRSS = float('inf')
bestl1 = None
for k, v in validation_rss.iteritems():
    if (v[1] == max_nonzeros) and (v[0] < bestRSS):
        bestRSS = v[0]
        bestl1 = k
print bestRSS, bestl1
bestRSS = float('inf')
for k, v in validation_rss.iteritems():
    if (v[1] == max_nonzeros) and (v[0] < bestRSS):
        bestRSS = v[0]
        print k, bestRSS
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=3448968612.16)
non_zero_weight_test = model["coefficients"][model["coefficients"]["value"] != 0]
non_zero_weight_test.print_rows(num_rows=8)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Create new features
Step3: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
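A quick numeric check of that separation effect (the bedroom counts here are illustrative):

```python
bedrooms = [1, 2, 3, 4]
bedrooms_square = [b * b for b in bedrooms]

# the spread between the smallest and largest values grows after squaring
raw_gap = max(bedrooms) - min(bedrooms)
squared_gap = max(bedrooms_square) - min(bedrooms_square)
print(bedrooms_square, raw_gap, squared_gap)
```

The gap widens from 3 to 15, so the squared feature mostly differentiates houses at the high end of the bedroom count.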
Step4: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
Step5: Find what features had non-zero weight.
Step6: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
Step7: Next, we write a loop that does the following
Step8: QUIZ QUESTIONS
Step9: QUIZ QUESTION
Step10: Limit the number of nonzero weights
Step11: Exploring the larger range of values to find a narrow range with the desired sparsity
Step12: Now, implement a loop that search through this space of possible l1_penalty values
Step13: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
Step14: QUIZ QUESTIONS
Step15: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20)
Step16: QUIZ QUESTIONS
|
6,142
|
<ASSISTANT_TASK:>
Python Code:
from pywed import * # Tore Supra database library
%pylab inline
pulse_list = np.loadtxt('data/liste_choc_fci.txt', dtype=int)
pulse_list = np.arange(44092, 48311, dtype='int')
ts_max_power = []
ts_max_duration = []
for pulse in pulse_list:
#print('Retrieve date for pulse {}'.format(pulse))
# retrieve ICRH power from Tore Supra database
try:
data = tsbase(int(pulse), 'GPUIFCI')
# Check the case when power is alway nul during shot
non_zero_values = np.squeeze(np.nonzero(data[0][:,3]))
if non_zero_values.size>1:
# Max power in MW, addition of all launchers
# (4th columns of Gpuifci power signal)
ts_max_power.append(np.max(data[0][:,3], axis=0))
# Max duration : take the max power
# and calculates the time duration btw first and last non-zero values
t_start = data[1][non_zero_values[0],0]
t_end = data[1][non_zero_values[-1],0]
t_duration = t_end - t_start
ts_max_duration.append(t_duration)
except PyWEDException:
pass
#print('no data')
pulse
np.save('TS_data_44092-48310.npy', np.vstack([np.array(ts_max_power), np.array(ts_max_duration)]))
ts_data_35000_44091 = np.load('TS_data_35000-44092.npy')
ts_data_44092_48310 = np.load('TS_data_44092-48310.npy')
ts_data_35000_44091.shape
ts_max_power = np.concatenate((ts_data_35000_44091[0,:],ts_data_44092_48310[0,:]))
ts_max_duration = np.concatenate((ts_data_35000_44091[1,:],ts_data_44092_48310[1,:]))
scatter(ts_max_power, ts_max_duration, alpha=0.2)
ylim(1,1.1*60*60)
xlim(0,10)
xlabel('RF Max. Coupled Power [MW]', fontsize=14)
ylabel('RF Max. Duration [s]', fontsize=14)
yscale('log')
yticks([1, 10, 100, 1000], ['1', '10', '100', '1000'], fontsize=14)
xticks(fontsize=14)
import MDSplus as mds
conx = mds.Connection('mdsplus.jet.efda.org')
print(conx.hostspec)
jet_pulse_list = [68752, 68809, 68110, 65947, 78069, 73520,77894,78125,77404,78070,77293,76721,76722]
jet_pulse_list = range(68955, 76723) # CW
jet_pulse_list = range(80000, 87944) # ILW
jet_max_power = []
jet_max_duration = []
for pulse in jet_pulse_list:
try:
y = np.array(conx.get('_sig=jet("ppf/icrh/ptot", '+str(pulse)+')')) / 1e6 # total ICRH power in MW
t = np.array(conx.get('dim_of(_sig)')) # time vector
non_zero_values = np.squeeze(np.nonzero(y))
# continue only if the y vector is not 0
if non_zero_values.size:
jet_max_power.append(np.max(y))
t_start = t[non_zero_values[0]]
t_end = t[non_zero_values[-1]]
t_duration = t_end - t_start
jet_max_duration.append(t_duration)
    except KeyError:
        pass  # print('no data')
np.save('JET_power_ILW.npy', np.array(jet_max_power))
np.save('JET_duration_ILW.npy', np.array(jet_max_duration))
JET_max_power = np.load('JET_power_ILW.npy')
JET_max_power.size
# references
# Seki 2013
# Kasahara 2010 _Study of High power ICRF antenna design in LHD
lhd_power = [0.55, 0.52, 0.23, 0.49, 0.24, 0.7, 0.9, 0.4, 3, 3.5, 4.5, 0.96]
lhd_duration = [1*60*60, 0.5*60*60, 1*60*60, 0.5*60*60, 1*60*60, 1135, 48*60, 54*60, 2, 2, 2, 2859]
# references:
# B.Wan NF 2013
# Y.P.Zhao FED 2014
east_power = [0.6, 1.6, 2, 0.8]
east_duration = [5, 6, 4, 30]
import matplotlib as mpl
#To make sure we have always the same matplotlib settings
#(the ones in comments are the ipython notebook settings)
#mpl.rcParams['figure.figsize']=(8.0,6.0) #(6.0,4.0)
#mpl.rcParams['font.size']=12 #10
mpl.rcParams['savefig.dpi']=100 #72
#mpl.rcParams['figure.subplot.bottom']=.1 #.125
jet_max_power = np.load('JET_power_ILW.npy')
jet_max_duration = np.load('JET_duration_ILW.npy')
scatter(ts_max_power, ts_max_duration, marker='.', s=30, color=(31/255, 119/255, 180/255), alpha=0.8)
scatter(jet_max_power, jet_max_duration, marker='.', s=30, color=(214/255, 39/255, 40/255), alpha=0.3)
scatter(lhd_power, lhd_duration, s=30, marker='s', color='k')
scatter(east_power, east_duration, marker='D', s=30, color='#FFB800')
ylim(1,1.5*60*60)
xlim(0,10)
xlabel('RF Max. Coupled Power [MW]', fontsize=14)
ylabel('RF Max. Duration [s]', fontsize=14)
yscale('log')
yticks([1, 10, 100, 1000], ['1', '10', '100', '1000'], fontsize=14)
xticks(fontsize=14)
grid(True, axis='y')
# Put a legend to the right of the current axis
lgd = legend(('Tore Supra', 'JET-ILW', 'LHD', 'EAST'), loc=5, bbox_to_anchor=(1.02, 0, 0.5, 1),
ncol=1, mode="expand", borderaxespad=0., frameon=False, fontsize=14, scatterpoints=1)
# Remove the plot frame lines. They are unnecessary chartjunk.
gca().spines["top"].set_visible(False)
gca().spines["bottom"].set_visible(False)
gca().spines["right"].set_visible(False)
gca().spines["left"].set_visible(False)
# Ensure that the axis ticks only show up on the bottom and left of the plot.
# Ticks on the right and top of the plot are generally unnecessary chartjunk.
gca().get_xaxis().tick_bottom()
gca().get_yaxis().tick_left()
# Remove the tick marks; they are unnecessary with the tick lines we just plotted.
tick_params(axis="x", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
gcf().set_size_inches(5,3)
savefig('ICRF_Power-vs-duration.png', dpi=120, bbox_inches='tight', pad_inches=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: JET database
Step2: LHD
Step3: EAST
Step4: Plot
|
6,143
|
<ASSISTANT_TASK:>
Python Code:
theta = 0.6
rv = sp.stats.bernoulli(theta)
rv
xx = [0, 1]
plt.bar(xx, rv.pmf(xx), align="center")
plt.xlim(-1, 2)
plt.ylim(0, 1)
plt.xticks([0, 1], ["X=0", "X=1"])
plt.ylabel("P(x)")
plt.title("pmf of Bernoulli distribution")
plt.show()
x = rv.rvs(100, random_state=0)
x
sns.countplot(x)
plt.show()
y = np.bincount(x, minlength=2)/float(len(x))
df = pd.DataFrame({"theoretic": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["value", "type", "ratio"]
df.pivot("value", "type", "ratio")
sns.barplot(x="value", y="ratio", hue="type", data=df)
plt.show()
np.mean(x)
np.var(x, ddof=1)
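# Illustrative addition (not in the original notebook): the theoretical moments
# of Bernoulli(theta), for comparison with the sample estimates above.
theta, theta * (1 - theta)  # E[X] = theta, Var[X] = theta * (1 - theta)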
s = sp.stats.describe(x)
s[2], s[3]
pd.Series(x).describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The pmf method can be used to evaluate the probability mass function (pmf
Step2: To run a simulation, use the rvs method.
Step3: Visualize the result with seaborn's countplot command.
Step4: To display the theoretical distribution and the sample distribution together, use the following code.
Step5: Visualizing with seaborn's barplot command gives the following.
Step6: Moments of the Bernoulli distribution
Step7: SciPy's describe command can compute these as follows.
Step8: Alternatively, convert to a pandas Series and compute them with its describe method.
|
6,144
|
<ASSISTANT_TASK:>
Python Code:
import os
import collections
import json
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_hub as hub
import tensorflow_text as text
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from tqdm import tqdm
# Suppressing tf.hub warnings
tf.get_logger().setLevel("ERROR")
root_dir = "datasets"
annotations_dir = os.path.join(root_dir, "annotations")
images_dir = os.path.join(root_dir, "train2014")
tfrecords_dir = os.path.join(root_dir, "tfrecords")
annotation_file = os.path.join(annotations_dir, "captions_train2014.json")
# Download caption annotation files
if not os.path.exists(annotations_dir):
annotation_zip = tf.keras.utils.get_file(
"captions.zip",
cache_dir=os.path.abspath("."),
origin="http://images.cocodataset.org/annotations/annotations_trainval2014.zip",
extract=True,
)
os.remove(annotation_zip)
# Download image files
if not os.path.exists(images_dir):
image_zip = tf.keras.utils.get_file(
"train2014.zip",
cache_dir=os.path.abspath("."),
origin="http://images.cocodataset.org/zips/train2014.zip",
extract=True,
)
os.remove(image_zip)
print("Dataset is downloaded and extracted successfully.")
with open(annotation_file, "r") as f:
annotations = json.load(f)["annotations"]
image_path_to_caption = collections.defaultdict(list)
for element in annotations:
caption = f"{element['caption'].lower().rstrip('.')}"
image_path = images_dir + "/COCO_train2014_" + "%012d.jpg" % (element["image_id"])
image_path_to_caption[image_path].append(caption)
image_paths = list(image_path_to_caption.keys())
print(f"Number of images: {len(image_paths)}")
train_size = 30000
valid_size = 5000
captions_per_image = 2
images_per_file = 2000
train_image_paths = image_paths[:train_size]
num_train_files = int(np.ceil(train_size / images_per_file))
train_files_prefix = os.path.join(tfrecords_dir, "train")
valid_image_paths = image_paths[-valid_size:]
num_valid_files = int(np.ceil(valid_size / images_per_file))
valid_files_prefix = os.path.join(tfrecords_dir, "valid")
tf.io.gfile.makedirs(tfrecords_dir)
def bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def create_example(image_path, caption):
feature = {
"caption": bytes_feature(caption.encode()),
"raw_image": bytes_feature(tf.io.read_file(image_path).numpy()),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
def write_tfrecords(file_name, image_paths):
caption_list = []
image_path_list = []
for image_path in image_paths:
captions = image_path_to_caption[image_path][:captions_per_image]
caption_list.extend(captions)
image_path_list.extend([image_path] * len(captions))
with tf.io.TFRecordWriter(file_name) as writer:
for example_idx in range(len(image_path_list)):
example = create_example(
image_path_list[example_idx], caption_list[example_idx]
)
writer.write(example.SerializeToString())
return example_idx + 1
def write_data(image_paths, num_files, files_prefix):
example_counter = 0
for file_idx in tqdm(range(num_files)):
file_name = files_prefix + "-%02d.tfrecord" % (file_idx)
start_idx = images_per_file * file_idx
end_idx = start_idx + images_per_file
example_counter += write_tfrecords(file_name, image_paths[start_idx:end_idx])
return example_counter
train_example_count = write_data(train_image_paths, num_train_files, train_files_prefix)
print(f"{train_example_count} training examples were written to tfrecord files.")
valid_example_count = write_data(valid_image_paths, num_valid_files, valid_files_prefix)
print(f"{valid_example_count} evaluation examples were written to tfrecord files.")
feature_description = {
"caption": tf.io.FixedLenFeature([], tf.string),
"raw_image": tf.io.FixedLenFeature([], tf.string),
}
def read_example(example):
features = tf.io.parse_single_example(example, feature_description)
raw_image = features.pop("raw_image")
features["image"] = tf.image.resize(
tf.image.decode_jpeg(raw_image, channels=3), size=(299, 299)
)
return features
def get_dataset(file_pattern, batch_size):
return (
tf.data.TFRecordDataset(tf.data.Dataset.list_files(file_pattern))
.map(
read_example,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False,
)
.shuffle(batch_size * 10)
.prefetch(buffer_size=tf.data.AUTOTUNE)
.batch(batch_size)
)
def project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
):
projected_embeddings = layers.Dense(units=projection_dims)(embeddings)
for _ in range(num_projection_layers):
x = tf.nn.gelu(projected_embeddings)
x = layers.Dense(projection_dims)(x)
x = layers.Dropout(dropout_rate)(x)
x = layers.Add()([projected_embeddings, x])
projected_embeddings = layers.LayerNormalization()(x)
return projected_embeddings
def create_vision_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the pre-trained Xception model to be used as the base encoder.
xception = keras.applications.Xception(
include_top=False, weights="imagenet", pooling="avg"
)
# Set the trainability of the base encoder.
for layer in xception.layers:
layer.trainable = trainable
# Receive the images as inputs.
inputs = layers.Input(shape=(299, 299, 3), name="image_input")
# Preprocess the input image.
xception_input = tf.keras.applications.xception.preprocess_input(inputs)
# Generate the embeddings for the images using the xception model.
embeddings = xception(xception_input)
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the vision encoder model.
return keras.Model(inputs, outputs, name="vision_encoder")
def create_text_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the BERT preprocessing module.
preprocess = hub.KerasLayer(
"https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/2",
name="text_preprocessing",
)
# Load the pre-trained BERT model to be used as the base encoder.
bert = hub.KerasLayer(
"https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1",
        name="bert",
)
# Set the trainability of the base encoder.
bert.trainable = trainable
# Receive the text as inputs.
inputs = layers.Input(shape=(), dtype=tf.string, name="text_input")
# Preprocess the text.
bert_inputs = preprocess(inputs)
# Generate embeddings for the preprocessed text using the BERT model.
embeddings = bert(bert_inputs)["pooled_output"]
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the text encoder model.
return keras.Model(inputs, outputs, name="text_encoder")
class DualEncoder(keras.Model):
def __init__(self, text_encoder, image_encoder, temperature=1.0, **kwargs):
super(DualEncoder, self).__init__(**kwargs)
self.text_encoder = text_encoder
self.image_encoder = image_encoder
self.temperature = temperature
self.loss_tracker = keras.metrics.Mean(name="loss")
@property
def metrics(self):
return [self.loss_tracker]
def call(self, features, training=False):
# Place each encoder on a separate GPU (if available).
# TF will fallback on available devices if there are fewer than 2 GPUs.
with tf.device("/gpu:0"):
# Get the embeddings for the captions.
            caption_embeddings = self.text_encoder(features["caption"], training=training)
with tf.device("/gpu:1"):
# Get the embeddings for the images.
            image_embeddings = self.image_encoder(features["image"], training=training)
return caption_embeddings, image_embeddings
def compute_loss(self, caption_embeddings, image_embeddings):
# logits[i][j] is the dot_similarity(caption_i, image_j).
logits = (
tf.matmul(caption_embeddings, image_embeddings, transpose_b=True)
/ self.temperature
)
# images_similarity[i][j] is the dot_similarity(image_i, image_j).
images_similarity = tf.matmul(
image_embeddings, image_embeddings, transpose_b=True
)
# captions_similarity[i][j] is the dot_similarity(caption_i, caption_j).
captions_similarity = tf.matmul(
caption_embeddings, caption_embeddings, transpose_b=True
)
        # targets[i][j] = average of dot_similarity(caption_i, caption_j) and dot_similarity(image_i, image_j).
targets = keras.activations.softmax(
(captions_similarity + images_similarity) / (2 * self.temperature)
)
# Compute the loss for the captions using crossentropy
captions_loss = keras.losses.categorical_crossentropy(
y_true=targets, y_pred=logits, from_logits=True
)
# Compute the loss for the images using crossentropy
images_loss = keras.losses.categorical_crossentropy(
y_true=tf.transpose(targets), y_pred=tf.transpose(logits), from_logits=True
)
# Return the mean of the loss over the batch.
return (captions_loss + images_loss) / 2
def train_step(self, features):
with tf.GradientTape() as tape:
# Forward pass
caption_embeddings, image_embeddings = self(features, training=True)
loss = self.compute_loss(caption_embeddings, image_embeddings)
# Backward pass
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
# Monitor loss
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def test_step(self, features):
caption_embeddings, image_embeddings = self(features, training=False)
loss = self.compute_loss(caption_embeddings, image_embeddings)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
num_epochs = 5 # In practice, train for at least 30 epochs
batch_size = 256
vision_encoder = create_vision_encoder(
num_projection_layers=1, projection_dims=256, dropout_rate=0.1
)
text_encoder = create_text_encoder(
num_projection_layers=1, projection_dims=256, dropout_rate=0.1
)
dual_encoder = DualEncoder(text_encoder, vision_encoder, temperature=0.05)
dual_encoder.compile(
optimizer=tfa.optimizers.AdamW(learning_rate=0.001, weight_decay=0.001)
)
print(f"Number of GPUs: {len(tf.config.list_physical_devices('GPU'))}")
print(f"Number of examples (caption-image pairs): {train_example_count}")
print(f"Batch size: {batch_size}")
print(f"Steps per epoch: {int(np.ceil(train_example_count / batch_size))}")
train_dataset = get_dataset(os.path.join(tfrecords_dir, "train-*.tfrecord"), batch_size)
valid_dataset = get_dataset(os.path.join(tfrecords_dir, "valid-*.tfrecord"), batch_size)
# Create a learning rate scheduler callback.
reduce_lr = keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=3
)
# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor="val_loss", patience=5, restore_best_weights=True
)
history = dual_encoder.fit(
train_dataset,
epochs=num_epochs,
validation_data=valid_dataset,
callbacks=[reduce_lr, early_stopping],
)
print("Training completed. Saving vision and text encoders...")
vision_encoder.save("vision_encoder")
text_encoder.save("text_encoder")
print("Models are saved.")
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.ylabel("Loss")
plt.xlabel("Epoch")
plt.legend(["train", "valid"], loc="upper right")
plt.show()
print("Loading vision and text encoders...")
vision_encoder = keras.models.load_model("vision_encoder")
text_encoder = keras.models.load_model("text_encoder")
print("Models are loaded.")
def read_image(image_path):
image_array = tf.image.decode_jpeg(tf.io.read_file(image_path), channels=3)
return tf.image.resize(image_array, (299, 299))
print(f"Generating embeddings for {len(image_paths)} images...")
image_embeddings = vision_encoder.predict(
tf.data.Dataset.from_tensor_slices(image_paths).map(read_image).batch(batch_size),
verbose=1,
)
print(f"Image embeddings shape: {image_embeddings.shape}.")
def find_matches(image_embeddings, queries, k=9, normalize=True):
# Get the embedding for the query.
query_embedding = text_encoder(tf.convert_to_tensor(queries))
# Normalize the query and the image embeddings.
if normalize:
image_embeddings = tf.math.l2_normalize(image_embeddings, axis=1)
query_embedding = tf.math.l2_normalize(query_embedding, axis=1)
# Compute the dot product between the query and the image embeddings.
dot_similarity = tf.matmul(query_embedding, image_embeddings, transpose_b=True)
# Retrieve top k indices.
results = tf.math.top_k(dot_similarity, k).indices.numpy()
# Return matching image paths.
return [[image_paths[idx] for idx in indices] for indices in results]
query = "a family standing next to the ocean on a sandy beach with a surf board"
matches = find_matches(image_embeddings, [query], normalize=True)[0]
plt.figure(figsize=(20, 20))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(mpimg.imread(matches[i]))
plt.axis("off")
def compute_top_k_accuracy(image_paths, k=100):
hits = 0
num_batches = int(np.ceil(len(image_paths) / batch_size))
for idx in tqdm(range(num_batches)):
start_idx = idx * batch_size
end_idx = start_idx + batch_size
current_image_paths = image_paths[start_idx:end_idx]
queries = [
image_path_to_caption[image_path][0] for image_path in current_image_paths
]
result = find_matches(image_embeddings, queries, k)
hits += sum(
[
image_path in matches
for (image_path, matches) in list(zip(current_image_paths, result))
]
)
return hits / len(image_paths)
print("Scoring training data...")
train_accuracy = compute_top_k_accuracy(train_image_paths)
print(f"Train accuracy: {round(train_accuracy * 100, 3)}%")
print("Scoring evaluation data...")
eval_accuracy = compute_top_k_accuracy(image_paths[train_size:])
print(f"Eval accuracy: {round(eval_accuracy * 100, 3)}%")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare the data
Step2: Process and save the data to TFRecord files
Step3: Create tf.data.Dataset for training and evaluation
Step4: Implement the projection head
Step5: Implement the vision encoder
Step6: Implement the text encoder
Step7: Implement the dual encoder
Step8: Train the dual encoder model
Step9: Note that training the model with 60,000 image-caption pairs, with a batch size of 256,
Step10: Plotting the training loss
Step11: Search for images using natural language queries
Step12: Retrieve relevant images
Step13: Set the query variable to the type of images you want to search for.
Step14: Evaluate the retrieval quality
|
6,145
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import emcee
import matplotlib.pyplot as plt
def lnp(x, mu, icov):
diff = x-mu
return -np.dot(diff, np.dot(icov, diff))/2.0
ndim = 50
means = np.random.rand(ndim)
cov = 0.5 - np.random.rand(ndim**2).reshape((ndim,ndim))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov, cov)
icov = np.linalg.inv(cov)
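# Sanity check (illustrative addition, not in the original notebook): cov was
# symmetrized and then squared, so it is a symmetric matrix, and icov is its
# inverse, ready for the Gaussian log-probability defined above.
assert np.allclose(cov, cov.T)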
nwalkers = 250
p0 = np.random.rand(ndim*nwalkers).reshape((nwalkers,ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnp, args=[means,icov])
pos, prob, state = sampler.run_mcmc(p0, 100)
sampler.reset()
pos, prob, state = sampler.run_mcmc(pos, 1000)
print(np.mean(sampler.acceptance_fraction))
print(sampler.chain.shape)
print(sampler.flatchain.shape)
for i in range(ndim):
plt.figure()
plt.hist(sampler.flatchain[:, i], 100, color='k', histtype='step')
plt.title('Parameter {0:d}'.format(i))
# touch the file
fname = 'chain.dat'
f = open(fname, 'w')
f.close()
# note that they recommend opening the file in the loop, but we should probably do it outside the loop.
# we should also parallelize somehow for our numpy arrays
with open(fname, 'a') as f:
for result in sampler.sample(p0, iterations=10, storechain=False):
position = result[0]
        for k in range(position.shape[0]):
            f.write("{0:4d} {1:s}\n".format(k, " ".join(map(str, position[k]))))
import sys
nsteps = 5000
width = 30
for i, result in enumerate(sampler.sample(p0, iterations=nsteps)):
n = int((width+1) * float(i) / nsteps)
sys.stdout.write("\r[{0}{1}]".format('#' * n, ' ' * (width - n)))
sys.stdout.write("\n")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: function to evaluate $p(\vec{x}, \vec{\mu}, \Sigma^{-1})$. Note that emcee requires the log probability ($\ln p$), which simplifies this problem.
Step2: set up 50 parameters to act as the true values
Step3: initialize some walkers (250!) with random guesses for the parameters
Step4: emcee uses EnsembleSampler objects to do the sampling.
Step5: Now we'll call the sampler with 100 steps and use it as a burn-in. We'll pass in our initial guesses as an array of parameters (p0.shape
Step6: Finally pass in the burned in walkers
Step7: Look at the results
Step8: incrementally saving the chain
Step9: progress bar
|
6,146
|
<ASSISTANT_TASK:>
Python Code:
from IPython.html.widgets import interact
from math import (sin, cos, tan)
from ipytangle import tangle
@interact
def interactor(fn=dict(sin=sin, cos=cos, tan=tan), x=(0, 360)):
print(fn(x))
trig_talk = tangle(interactor)
trig_talk
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you have defined an interact function, you can pull out all of the variables and put them in a tangle.
Step2: The fn_label function
|
6,147
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import pandas
from sklearn.cross_validation import cross_val_score
from sklearn.preprocessing import LabelEncoder, label_binarize
from sklearn.cross_validation import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt
from sklearn import cross_validation
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score, make_scorer, f1_score
from sklearn.grid_search import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from math import ceil, sqrt
from sklearn import decomposition
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2, f_classif
import os.path
import xgboost
import tsne
if os.path.isfile("../data/processed/completeDataset.csv"):
data = pandas.read_csv("../data/processed/completeDataset.csv", sep=',', quotechar='"')
elif os.path.isfile("../data/processed/completeDataset.csv.gz"):
data = pandas.read_csv("../data/processed/completeDataset.csv.gz", compression='gzip', sep=',', quotechar='"')
else:
print("The dataset is not in ../data/processed !")
print data.shape
print data.head(3)
print "..."
print data.tail(3)
#Cleanup the data, like R's
# We only look for predicting 4 states of activity and 3 of social, the rest (incl.NA) we bunch in 'Other'
#fulldata$Activity.clean <- ifelse(is.na(as.character(fulldata$Activity.win)) |
# as.character(fulldata$Activity.win)=='OFF' |
# as.character(fulldata$Activity.win)=='TDT' |
# as.character(fulldata$Activity.win)=='TEC',
# 'Other',as.character(fulldata$Activity.win))
#fulldata$Social.clean <- ifelse(is.na(as.character(fulldata$Social.win)),
# 'Other',as.character(fulldata$Social.win))
#names(fulldata)[7562:7563] <- c('Activity','Social')
#fulldata <- fulldata[,-c(1,4,5,6)]
#fulldata$Activity <- factor(fulldata$Activity)
#fulldata$Social <- factor(fulldata$Social)
#test <- fulldata[fulldata$session=='case2-day3-session1-teacher2' | fulldata$session=='case1-day1-session1-teacher1',]
#train <- fulldata[fulldata$session!='case2-day3-session1-teacher2' & fulldata$session!='case1-day1-session1-teacher1',]
# Cleans up a vector of values, by assigning NaNs and others to a certain value
def clean_values(x, others=[], value='Other'):
if pandas.isnull(x) or (x in others):
return value
else:
return x
others = ['OFF','TDT','TEC']
data['Activity.clean'] = data['Activity.win'].apply(clean_values, args=(others,))
data['Social.clean'] = data['Social.win'].apply(clean_values)
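# Quick illustrative check (not in the original notebook): NaN and the listed
# codes collapse to 'Other'; any other label passes through unchanged.
print clean_values(float('nan'), others=others)  # 'Other'
print clean_values('OFF', others=others)         # 'Other'
print clean_values('EXP', others=others)         # 'EXP'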
data.shape
# Drop the first meaningless column in the csv, plus the others we don't use anymore
cleandata = data.drop(data.columns[[0]], axis=1)
cleandata = cleandata.drop(['timestamp.orig', 'Activity.win', 'Social.win'], axis=1)
cleandata.rename(index=str, columns={'Activity.clean': 'Activity', 'Social.clean': 'Social'}, inplace=True)
#print cleandata.head(3)
#print cleandata.columns[-2:].values
# Uncomment to get the actual column names
# print cleandata.columns[0]
# print cleandata.columns[1]
# print cleandata.columns[2:12].values
# print cleandata.columns[12:152].values
# print cleandata.columns[152:6557].values
# print cleandata.columns[6557:7557].values
# print cleandata.columns[7557:].values
# feature_names of X
feature_names = cleandata.columns[2:7557]
idx_eyetracking = range(0,10)
idx_acc = range(10,150)
idx_audio = range(150,6555)
idx_video = range(6555,7555)
#print feature_names[idx_audio].values
sessions = cleandata['session'].unique()
activities = cleandata['Activity'].unique()
socials = cleandata['Social'].unique()
label_encoderA = LabelEncoder()
label_encoderA = label_encoderA.fit(cleandata['Activity'].values)
label_encoderS = LabelEncoder()
label_encoderS = label_encoderS.fit(cleandata['Social'].values)
# for s in sessions: ...
s = sessions[9]
test = cleandata.loc[cleandata['session'] == s]
train = cleandata.loc[cleandata['session'] != s]
print train.shape
print test.shape
# Separate the target values (Activity and Social) from features, etc.
X_train = train[feature_names].values
Y_trainA = train['Activity'].values
Y_trainS = train['Social'].values
X_test = test[feature_names].values
Y_testA = test['Activity'].values
Y_testS = test['Social'].values
print X_train[0:5,0:15]
print Y_trainA[0:5]
# Model preparation and feature selection
# Predicting Activity
#X = X_train[:,idx_eyetracking]
X = X_train
Y = Y_trainA
label_encoded_y = label_encoderA.transform(Y)
Xval = X_test
Yval = Y_testA
print X.shape
# Model training
model = xgboost.XGBClassifier()
model.fit(X, label_encoded_y)
print model
# Model evaluation
Ypred = model.predict(Xval)
label_encoded_yval = label_encoderA.transform(Yval)
print classification_report(label_encoded_yval, Ypred)
print confusion_matrix(label_encoded_yval, Ypred)
print "Accuracy %f " % accuracy_score(label_encoded_yval, Ypred)
#print "AUC %f " % roc_auc_score(label_encoded_yval, Ypred, average='macro')
#print "Kappa %f " % cohen_kappa_score(label_encoded_yval, Ypred, average='macro')
#TODO: Install scikit learn 0.18+
# Model preparation and feature selection
# Predicting Social
#X = X_train[:,idx_eyetracking]
X = X_train
Y = Y_trainS
label_encoded_y = label_encoderS.transform(Y)
Xval = X_test
Yval = Y_testS
print X.shape
# Model training
model = xgboost.XGBClassifier()
model.fit(X, label_encoded_y)
print model
# Model evaluation
Ypred = model.predict(Xval)
label_encoded_yval = label_encoderS.transform(Yval)
print classification_report(label_encoded_yval, Ypred)
print confusion_matrix(label_encoded_yval, Ypred)
print "Accuracy %f " % accuracy_score(label_encoded_yval, Ypred)
#print "AUC %f " % roc_auc_score(label_encoded_yval, Ypred, average='macro')
#print "Kappa %f " % cohen_kappa_score(label_encoded_yval, Ypred, average='macro')
#TODO: Install scikit learn 0.18+
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load dataset
Step2: Dataset overview
Step3: Basic split
Step4: A basic benchmark
Step5: Social plane
|
6,148
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function, unicode_literals
import numpy as np
np.zeros(5)
np.zeros((3,4))
a = np.zeros((3,4))
a
a.shape
a.ndim # equal to len(a.shape)
a.size
np.zeros((2,3,4))
type(np.zeros((3,4)))
np.ones((3,4))
np.full((3,4), np.pi)
np.empty((2,3))
np.array([[1,2,3,4], [10, 20, 30, 40]])
np.arange(1, 5)
np.arange(1.0, 5.0)
np.arange(1, 5, 0.5)
print(np.arange(0, 5/3, 1/3)) # depending on floating point errors, the max value is 4/3 or 5/3.
print(np.arange(0, 5/3, 0.333333333))
print(np.arange(0, 5/3, 0.333333334))
print(np.linspace(0, 5/3, 6))
np.random.rand(3,4)
np.random.randn(3,4)
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(np.random.rand(100000), normed=True, bins=100, histtype="step", color="blue", label="rand")
plt.hist(np.random.randn(100000), normed=True, bins=100, histtype="step", color="red", label="randn")
plt.axis([-2.5, 2.5, 0, 1.1])
plt.legend(loc = "upper left")
plt.title("Random distributions")
plt.xlabel("Value")
plt.ylabel("Density")
plt.show()
def my_function(z, y, x):
return x * y + z
np.fromfunction(my_function, (3, 2, 10))
c = np.arange(1, 5)
print(c.dtype, c)
c = np.arange(1.0, 5.0)
print(c.dtype, c)
d = np.arange(1, 5, dtype=np.complex64)
print(d.dtype, d)
e = np.arange(1, 5, dtype=np.complex64)
e.itemsize
f = np.array([[1,2],[1000, 2000]], dtype=np.int32)
f.data
if (hasattr(f.data, "tobytes")):
data_bytes = f.data.tobytes() # python 3
else:
data_bytes = memoryview(f.data).tobytes() # python 2
data_bytes
g = np.arange(24)
print(g)
print("Rank:", g.ndim)
g.shape = (6, 4)
print(g)
print("Rank:", g.ndim)
g.shape = (2, 3, 4)
print(g)
print("Rank:", g.ndim)
g2 = g.reshape(4,6)
print(g2)
print("Rank:", g2.ndim)
g2[1, 2] = 999
g2
g
g.ravel()
a = np.array([14, 23, 32, 41])
b = np.array([5, 4, 3, 2])
print("a + b =", a + b)
print("a - b =", a - b)
print("a * b =", a * b)
print("a / b =", a / b)
print("a // b =", a // b)
print("a % b =", a % b)
print("a ** b =", a ** b)
h = np.arange(5).reshape(1, 1, 5)
h
h + [10, 20, 30, 40, 50] # same as: h + [[[10, 20, 30, 40, 50]]]
k = np.arange(6).reshape(2, 3)
k
k + [[100], [200]] # same as: k + [[100, 100, 100], [200, 200, 200]]
k + [100, 200, 300] # after rule 1: [[100, 200, 300]], and after rule 2: [[100, 200, 300], [100, 200, 300]]
k + 1000 # same as: k + [[1000, 1000, 1000], [1000, 1000, 1000]]
try:
k + [33, 44]
except ValueError as e:
print(e)
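# Illustrative addition (not in the original notebook): np.broadcast applies
# the same broadcasting rules and reports the resulting shape without
# allocating the full result, which is handy for checking compatibility.
np.broadcast(np.empty((2, 3)), np.empty((3,))).shape  # (2, 3)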
k1 = np.arange(0, 5, dtype=np.uint8)
print(k1.dtype, k1)
k2 = k1 + np.array([5, 6, 7, 8, 9], dtype=np.int8)
print(k2.dtype, k2)
k3 = k1 + 1.5
print(k3.dtype, k3)
m = np.array([20, -5, 30, 40])
m < [15, 16, 35, 36]
m < 25 # equivalent to m < [25, 25, 25, 25]
m[m < 25]
a = np.array([[-2.5, 3.1, 7], [10, 11, 12]])
print(a)
print("mean =", a.mean())
for func in (a.min, a.max, a.sum, a.prod, a.std, a.var):
print(func.__name__, "=", func())
c=np.arange(24).reshape(2,3,4)
c
c.sum(axis=0) # sum across matrices
c.sum(axis=1) # sum across rows
c.sum(axis=(0,2)) # sum across matrices and columns
0+1+2+3 + 12+13+14+15, 4+5+6+7 + 16+17+18+19, 8+9+10+11 + 20+21+22+23
a = np.array([[-2.5, 3.1, 7], [10, 11, 12]])
np.square(a)
print("Original ndarray")
print(a)
for func in (np.abs, np.sqrt, np.exp, np.log, np.sign, np.ceil, np.modf, np.isnan, np.cos):
print("\n", func.__name__)
print(func(a))
a = np.array([1, -2, 3, 4])
b = np.array([2, 8, -1, 7])
np.add(a, b) # equivalent to a + b
np.greater(a, b) # equivalent to a > b
np.maximum(a, b)
np.copysign(a, b)
a = np.array([1, 5, 3, 19, 13, 7, 3])
a[3]
a[2:5]
a[2:-1]
a[:2]
a[2::2]
a[::-1]
a[3]=999
a
a[2:5] = [997, 998, 999]
a
a[2:5] = -1
a
try:
a[2:5] = [1,2,3,4,5,6] # too long
except ValueError as e:
print(e)
try:
del a[2:5]
except ValueError as e:
print(e)
a_slice = a[2:6]
a_slice[1] = 1000
a # the original array was modified!
a[3] = 2000
a_slice # similarly, modifying the original array modifies the slice!
another_slice = a[2:6].copy()
another_slice[1] = 3000
a # the original array is untouched
a[3] = 4000
another_slice # similary, modifying the original array does not affect the slice copy
b = np.arange(48).reshape(4, 12)
b
b[1, 2] # row 1, col 2
b[1, :] # row 1, all columns
b[:, 1] # all rows, column 1
b[1, :]
b[1:2, :]
b[(0,2), 2:5] # rows 0 and 2, columns 2 to 4 (5-1)
b[:, (-1, 2, -1)] # all rows, columns -1 (last), 2 and -1 (again, and in this order)
b[(-1, 2, -1, 2), (5, 9, 1, 9)] # returns a 1D array with b[-1, 5], b[2, 9], b[-1, 1] and b[2, 9] (again)
c = b.reshape(4,2,6)
c
c[2, 1, 4] # matrix 2, row 1, col 4
c[2, :, 3] # matrix 2, all rows, col 3
c[2, 1] # Return matrix 2, row 1, all columns. This is equivalent to c[2, 1, :]
c[2, ...] # matrix 2, all rows, all columns. This is equivalent to c[2, :, :]
c[2, 1, ...] # matrix 2, row 1, all columns. This is equivalent to c[2, 1, :]
c[2, ..., 3] # matrix 2, all rows, column 3. This is equivalent to c[2, :, 3]
c[..., 3] # all matrices, all rows, column 3. This is equivalent to c[:, :, 3]
b = np.arange(48).reshape(4, 12)
b
rows_on = np.array([True, False, True, False])
b[rows_on, :] # Rows 0 and 2, all columns. Equivalent to b[(0, 2), :]
cols_on = np.array([False, True, False] * 4)
b[:, cols_on] # All rows, columns 1, 4, 7 and 10
b[np.ix_(rows_on, cols_on)]
np.ix_(rows_on, cols_on)
b[b % 3 == 1]
c = np.arange(24).reshape(2, 3, 4) # A 3D array (composed of two 3x4 matrices)
c
for m in c:
print("Item:")
print(m)
for i in range(len(c)): # Note that len(c) == c.shape[0]
print("Item:")
print(c[i])
for i in c.flat:
print("Item:", i)
q1 = np.full((3,4), 1.0)
q1
q2 = np.full((4,4), 2.0)
q2
q3 = np.full((3,4), 3.0)
q3
q4 = np.vstack((q1, q2, q3))
q4
q4.shape
q5 = np.hstack((q1, q3))
q5
q5.shape
try:
q5 = np.hstack((q1, q2, q3))
except ValueError as e:
print(e)
q7 = np.concatenate((q1, q2, q3), axis=0) # Equivalent to vstack
q7
q7.shape
q8 = np.stack((q1, q3))
q8
q8.shape
r = np.arange(24).reshape(6,4)
r
r1, r2, r3 = np.vsplit(r, 3)
r1
r2
r3
r4, r5 = np.hsplit(r, 2)
r4
r5
t = np.arange(24).reshape(4,2,3)
t
t1 = t.transpose((1,2,0))
t1
t1.shape
t2 = t.transpose() # equivalent to t.transpose((2, 1, 0))
t2
t2.shape
t3 = t.swapaxes(0,1) # equivalent to t.transpose((1, 0, 2))
t3
t3.shape
m1 = np.arange(10).reshape(2,5)
m1
m1.T
m2 = np.arange(5)
m2
m2.T
m2r = m2.reshape(1,5)
m2r
m2r.T
n1 = np.arange(10).reshape(2, 5)
n1
n2 = np.arange(15).reshape(5,3)
n2
n1.dot(n2)
import numpy.linalg as linalg
m3 = np.array([[1,2,3],[5,7,11],[21,29,31]])
m3
linalg.inv(m3)
linalg.pinv(m3)
m3.dot(linalg.inv(m3))
np.eye(3)
q, r = linalg.qr(m3)
q
r
q.dot(r) # q.r equals m3
linalg.det(m3) # Computes the matrix determinant
eigenvalues, eigenvectors = linalg.eig(m3)
eigenvalues # λ
eigenvectors # v
m3.dot(eigenvectors) - eigenvalues * eigenvectors # m3.v - λ*v = 0
m4 = np.array([[1,0,0,0,2], [0,0,3,0,0], [0,0,0,0,0], [0,2,0,0,0]])
m4
U, S_diag, V = linalg.svd(m4)
U
S_diag
S = np.zeros((4, 5))
S[np.diag_indices(4)] = S_diag
S # Σ
V
U.dot(S).dot(V) # U.Σ.V == m4
np.diag(m3) # the values in the diagonal of m3 (top left to bottom right)
np.trace(m3) # equivalent to np.diag(m3).sum()
coeffs = np.array([[2, 6], [5, 3]])
depvars = np.array([6, -9])
solution = linalg.solve(coeffs, depvars)
solution
coeffs.dot(solution), depvars # yep, it's the same
np.allclose(coeffs.dot(solution), depvars)
import math
data = np.empty((768, 1024))
for y in range(768):
for x in range(1024):
data[y, x] = math.sin(x*y/40.5) # BAD! Very inefficient.
x_coords = np.arange(0, 1024) # [0, 1, 2, ..., 1023]
y_coords = np.arange(0, 768) # [0, 1, 2, ..., 767]
X, Y = np.meshgrid(x_coords, y_coords)
X
Y
data = np.sin(X*Y/40.5)
import matplotlib.pyplot as plt
import matplotlib.cm as cm
fig = plt.figure(1, figsize=(7, 6))
plt.imshow(data, cmap=cm.hot, interpolation="bicubic")
plt.show()
a = np.random.rand(2,3)
a
np.save("my_array", a)
with open("my_array.npy", "rb") as f:
content = f.read()
content
a_loaded = np.load("my_array.npy")
a_loaded
np.savetxt("my_array.csv", a)
with open("my_array.csv", "rt") as f:
print(f.read())
np.savetxt("my_array.csv", a, delimiter=",")
a_loaded = np.loadtxt("my_array.csv", delimiter=",")
a_loaded
b = np.arange(24, dtype=np.uint8).reshape(2, 3, 4)
b
np.savez("my_arrays", my_a=a, my_b=b)
with open("my_arrays.npz", "rb") as f:
content = f.read()
repr(content)[:180] + "[...]"
my_arrays = np.load("my_arrays.npz")
my_arrays
my_arrays.keys()
my_arrays["my_a"]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's import numpy. Most people import it as np
Step2: np.zeros
Step3: It's just as easy to create a 2D array (i.e. a matrix) by providing a tuple with the desired number of rows and columns. For example, here's a 3x4 matrix
Step4: Some vocabulary
Step5: N-dimensional arrays
Step6: Array type
Step7: np.ones
Step8: np.full
Step9: np.empty
Step10: np.array
Step11: np.arange
Step12: It also works with floats
Step13: Of course you can provide a step parameter
Step14: However, when dealing with floats, the exact number of elements in the array is not always predictable. For example, consider this
Step15: np.linspace
Step16: np.rand and np.randn
Step17: Here's a 3x4 matrix containing random floats sampled from a univariate normal distribution (Gaussian distribution) of mean 0 and variance 1
Step18: To give you a feel of what these distributions look like, let's use matplotlib (see the matplotlib tutorial for more details)
Step19: np.fromfunction
Step20: NumPy first creates three ndarrays (one per dimension), each of shape (3, 2, 10). Each array has values equal to the coordinate along a specific axis. For example, all elements in the z array are equal to their z-coordinate
Step21: Instead of letting NumPy guess what data type to use, you can set it explicitly when creating an array by setting the dtype parameter
Step22: Available data types include int8, int16, int32, int64, uint8|16|32|64, float16|32|64 and complex64|128. Check out the documentation for the full list.
Step23: data buffer
Step24: In Python 2, f.data is a buffer. In Python 3, it is a memoryview.
Step25: Several ndarrays can share the same data buffer, meaning that modifying one will also modify the others. We will see an example in a minute.
Step26: reshape
Step27: Set item at row 1, col 2 to 999 (more about indexing below).
Step28: The corresponding element in g has been modified.
Step29: ravel
Step30: Arithmetic operations
Step31: Note that the multiplication is not a matrix multiplication. We will discuss matrix operations below.
Step32: Now let's try to add a 1D array of shape (5,) to this 3D array of shape (1,1,5). Applying the first rule of broadcasting!
Step33: Second rule
Step34: Let's try to add a 2D array of shape (2,1) to this 2D ndarray of shape (2, 3). NumPy will apply the second rule of broadcasting
Step35: Combining rules 1 & 2, we can do this
Step36: And also, very simply
Step37: Third rule
Step38: Broadcasting rules are used in many NumPy operations, not just arithmetic operations, as we will see below.
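A minimal sketch of the two broadcasting rules combined (the array names m, v, col and row are illustrative, not from the notebook):

```python
import numpy as np

# A (2, 3) matrix plus a (3,) vector: rule 1 pads the vector's shape to
# (1, 3), rule 2 then stretches it along axis 0 to (2, 3).
m = np.arange(6).reshape(2, 3)       # [[0, 1, 2], [3, 4, 5]]
v = np.array([10, 20, 30])
print(m + v)                         # [[10 21 32] [13 24 35]]

# A (2, 1) column plus a (1, 3) row broadcasts to (2, 3) in both directions.
col = np.array([[100], [200]])
row = np.array([[1, 2, 3]])
print(col + row)                     # [[101 102 103] [201 202 203]]
```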
Step39: Note that int16 is required to represent all possible int8 and uint8 values (from -128 to 255), even though in this case a uint8 would have sufficed.
Step40: Conditional operators
Step41: And using broadcasting
Step42: This is most useful in conjunction with boolean indexing (discussed below).
Step43: Mathematical and statistical functions
Step44: Note that this computes the mean of all elements in the ndarray, regardless of its shape.
Step45: These functions accept an optional argument axis which lets you ask for the operation to be performed on elements along the given axis. For example
Step46: You can also sum over multiple axes
Step47: Universal functions
Step48: Here are a few more useful unary ufuncs
Step49: Binary ufuncs
Step50: Array indexing
Step51: Of course, you can modify elements
Step52: You can also modify an ndarray slice
Step53: Differences with regular python arrays
Step54: Also, you cannot grow or shrink ndarrays this way
Step55: You cannot delete elements either
Step56: Last but not least, ndarray slices are actually views on the same data buffer. This means that if you create a slice and modify it, you are actually going to modify the original ndarray as well!
Step57: If you want a copy of the data, you need to use the copy method
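A short sketch of the view-versus-copy distinction (variable names are illustrative):

```python
import numpy as np

a = np.arange(6)
s = a[2:5]          # a view on the same data buffer
s[0] = 99           # ...so this also modifies a
print(a)            # [ 0  1 99  3  4  5]

c = a[2:5].copy()   # an independent copy
c[0] = -1
print(a[2])         # still 99: the original is untouched
```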
Step58: Multi-dimensional arrays
Step59: Caution
Step60: The first expression returns row 1 as a 1D array of shape (12,), while the second returns that same row as a 2D array of shape (1, 12).
Step61: If you provide multiple index arrays, you get a 1D ndarray containing the values of the elements at the specified coordinates.
Step62: Higher dimensions
Step63: If you omit coordinates for some axes, then all elements in these axes are returned
Step64: Ellipsis (...)
Step65: Boolean indexing
Step66: np.ix_
Step67: If you use a boolean array that has the same shape as the ndarray, then you get in return a 1D array containing all the values that have True at their coordinate. This is generally used along with conditional operators
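A minimal sketch of boolean masking with a same-shape mask (the names b, mask and evens are illustrative):

```python
import numpy as np

b = np.arange(12).reshape(3, 4)
mask = b % 2 == 0            # boolean array with the same shape as b
evens = b[mask]              # 1D array of the values where the mask is True
print(evens)                 # [ 0  2  4  6  8 10]

# The same kind of mask can also be used to assign in place:
b[b > 8] = 0
print(b.max())               # 8
```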
Step68: Iterating
Step69: If you want to iterate on all elements in the ndarray, simply iterate over the flat attribute
Step70: Stacking arrays
Step71: vstack
Step72: This was possible because q1, q2 and q3 all have the same shape (except for the vertical axis, but that's ok since we are stacking on that axis).
Step73: This is possible because q1 and q3 both have 3 rows. But since q2 has 4 rows, it cannot be stacked horizontally with q1 and q3
Step74: concatenate
Step75: As you might guess, hstack is equivalent to calling concatenate with axis=1.
Step76: Splitting arrays
Step77: Now let's split it in three equal parts, vertically
Step78: There is also a split function which splits an array along any given axis. Calling vsplit is equivalent to calling split with axis=0. There is also an hsplit function, equivalent to calling split with axis=1
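A sketch of the general split function along both axes (array sizes chosen only for illustration):

```python
import numpy as np

r = np.arange(24).reshape(6, 4)

# split along axis=0 behaves like vsplit, along axis=1 like hsplit
top, middle, bottom = np.split(r, 3, axis=0)
left, right = np.split(r, 2, axis=1)

print(top.shape, left.shape)   # (2, 4) (6, 2)
```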
Step79: Transposing arrays
Step80: Now let's create an ndarray such that the axes 0, 1, 2 (depth, height, width) are re-ordered to 1, 2, 0 (depth→width, height→depth, width→height)
Step81: By default, transpose reverses the order of the dimensions
Step82: NumPy provides a convenience function swapaxes to swap two axes. For example, let's create a new view of t with depth and height swapped
Step83: Linear algebra
Step84: The T attribute has no effect on rank 0 (empty) or rank 1 arrays
Step85: We can get the desired transposition by first reshaping the 1D array to a single-row matrix (2D)
Step86: Matrix dot product
Step87: Caution
Step88: You can also compute the pseudoinverse using pinv
Step89: Identity matrix
Step90: You can create an identity matrix of size NxN by calling eye
Step91: QR decomposition
Step92: Determinant
Step93: Eigenvalues and eigenvectors
Step94: Singular Value Decomposition
Step95: The svd function just returns the values in the diagonal of Σ, but we want the full Σ matrix, so let's create it
Step96: Diagonal and trace
Step97: Solving a system of linear scalar equations
Step98: Let's check the solution
Step99: Looks good! Another way to check the solution
Step100: Vectorization
Step101: Sure, this works, but it's terribly inefficient since the loops are taking place in pure python. Let's vectorize this algorithm. First, we will use NumPy's meshgrid function which generates coordinate matrices from coordinate vectors.
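A hedged sketch of the speedup idea on a small grid (the grid sizes here are chosen for illustration, not the 768x1024 of the notebook):

```python
import numpy as np

# Same computation once with Python loops and once vectorized via meshgrid.
h, w = 8, 10
slow = np.empty((h, w))
for y in range(h):
    for x in range(w):
        slow[y, x] = np.sin(x * y / 40.5)

X, Y = np.meshgrid(np.arange(w), np.arange(h))
fast = np.sin(X * Y / 40.5)

print(np.allclose(slow, fast))   # True: same values, no Python loops
```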
Step102: As you can see, both X and Y are 768x1024 arrays, and all values in X correspond to the horizontal coordinate, while all values in Y correspond to the vertical coordinate.
Step103: Now we can plot this data using matplotlib's imshow function (see the matplotlib tutorial).
Step104: Saving and loading
Step105: Done! Since no file extension was provided, NumPy automatically added .npy. Let's take a peek at the file content
Step106: To load this file into a NumPy array, simply call load
Step107: Text format
Step108: Now let's look at the file content
Step109: This is a plain-text file with spaces as delimiters. You can set a different delimiter
Step110: To load this file, just use loadtxt
Step111: Zipped .npz format
Step112: Again, let's take a peek at the file content. Note that the .npz file extension was automatically added.
Step113: You then load this file like so
Step114: This is a dict-like object which loads the arrays lazily
|
6,149
|
<ASSISTANT_TASK:>
Python Code:
from tf.fabric import Fabric
ETCBC = 'hebrew/etcbc4c'
PHONO = 'hebrew/phono'
TF = Fabric( modules=[ETCBC, PHONO], silent=False )
api = TF.load('''
book chapter verse
sp nu gn ps vt vs st
otype
det
g_word_utf8 trailer_utf8
lex_utf8 lex voc_utf8
g_prs_utf8 g_uvf_utf8
prs_gn prs_nu prs_ps g_cons_utf8
gloss
''')
api.makeAvailableIn(globals())
verseNode = T.nodeFromSection(('Genesis', 1, 2))
wordsNode = L.d(verseNode, otype='word')
print(wordsNode)
F.g_word_utf8.v(wordsNode[0])
# Add the verse number
verse = str(T.sectionFromNode(verseNode)[2])
for w in wordsNode:
verse += F.g_word_utf8.v(w)
if F.trailer_utf8.v(w):
verse += F.trailer_utf8.v(w)
print(verse)
chpNode = T.nodeFromSection(('Genesis', 1))
verseNode = L.d(chpNode, otype='verse')
verse = ""
for v in verseNode:
verse += str(T.sectionFromNode(v)[2])
wordsNode = L.d(v, otype='word')
for w in wordsNode:
verse += F.g_word_utf8.v(w)
if F.trailer_utf8.v(w):
verse += F.trailer_utf8.v(w)
print(verse)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Print the biblical text split into its individual word elements rather than as one large word-node unit
Step2: Print the first word node using the value of the g_word_utf8 feature rather than the Text feature
Step4: Applying the above, Genesis 1
Step5: Print all of Genesis chapter 1
|
6,150
|
<ASSISTANT_TASK:>
Python Code:
import scipy.optimize as so
import numpy
import toyplot
# if the coin is fair (p=0.5) then the probability isn't very high
p = 0.5
p * p * p * p * p
# but if the coin is really unfair then the probability if quite high
p = 0.99
p * p * p * p * p
# the probability of observing 20 heads for a coin with p=0.6
p = 0.6
n = 20
p**n
# the probability of observing 10 heads and 10 tails for p=0.6
p = 0.6
q = 1 - p
np = 10
nq = 10
p**np * q**nq
# our observed data
np = 62
nq = 40
# let's see which parameter for p best fits the data
for p in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
likelihood = p**np * (1-p)**nq
print("p={}; likelihood={}".format(p, likelihood))
# let's see which parameter for p best fits the data
for p in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
likelihood = p**np * (1-p)**nq
print("p={}; -loglik={:.2f}".format(p, -numpy.log(likelihood)))
def coin_flip_log(p, nheads, ntails):
    ## calculate the log-likelihood
logp = nheads*numpy.log(p) + ntails*numpy.log(1.-p)
## return negative log-likelihood
return -1*logp
coin_flip_log(0.5, 100, 100)
# starting value=0.5; observed flips = (50, 200)
so.fmin(coin_flip_log, x0=(0.5), args=(50, 200), disp=0)[0]
# starting value=0.5; observed flips = (133, 385)
so.fmin(coin_flip_log, x0=(0.5), args=(133, 385), disp=0)[0]
## generate data across 100 equally spaced points for lambda
data = [coin_flip_log(p, 50, 200) for p in numpy.linspace(0.01, 0.99, 100)]
## plot the likelihood surface
toyplot.plot(
b=numpy.log(data),
a=numpy.linspace(0.01, 0.99, 100),
width=500, height=300,
ylabel="-log-likelihood",
xlabel="probability of heads");
## generate data across 100 equally spaced points for lambda
data = [coin_flip_log(p, 50, 50) for p in numpy.linspace(0.01, 0.99, 100)]
## plot the likelihood surface
toyplot.plot(
b=numpy.log(data),
a=numpy.linspace(0.01, 0.99, 100),
width=500, height=300,
ylabel="-log-likelihood",
xlabel="probability of heads");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Optimization
Step2: Making things more concise
Step3: The goal of Maximum likelihood
Step4: Here we print the parameter of value of p used to calculate the likelihood, and the likelihood score next to each other on each line. You can see that the value of 0.6 has the highest likelihood... but it's kind of hard to interpret because all of the likelihood values are such small numbers.
Step5: For this reason, people usually look at the negative log of the likelihood, which is easier to interpret. Although the method is called "maximum likelihood", when working with the negative log-likelihood we are actually trying to minimize this score, which still means finding the parameter that best fits the data. Below you can see that for p=0.6 the -loglik score is lowest (68.32).
Step6: Functions
Step7: Exhaustive parameter search
Step8: Here is another trial where we enter a different set of observations. Now when the data is 133 heads and 385 tails the ML parameter estimate of p is 0.2567.
Step9: Plot the likelihood over different parameter inputs
|
6,151
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
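The template above only lists property IDs; a completed entry pairs each `DOC.set_id` call with one or more `DOC.set_value` calls. A minimal sketch of that pattern, using a stand-in `DOC` object (the real one is created by the es-doc notebook initialisation, which is outside this excerpt, and the example values are illustrative):

```python
# Stand-in for the pyesdoc notebook DOC object -- illustrative only;
# the real DOC is provided by the es-doc notebook setup cells.
class _Doc:
    def __init__(self):
        self.values = {}
        self._id = None

    def set_id(self, prop_id):
        # Select which property subsequent set_value calls fill
        self._id = prop_id

    def set_value(self, value):
        # Properties may accept multiple values, so accumulate in a list
        self.values.setdefault(self._id, []).append(value)

DOC = _Doc()
# Fill one ENUM property ("provision") and one free-text property:
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
DOC.set_value("irradiance")
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
DOC.set_value("Solar irradiance forcing follows the CMIP6 protocol.")
```

The ENUM properties only accept the listed "Valid Choices" strings; free-text properties take any string.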
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
6,152
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from earlier cells of the original notebook; the module
# paths follow HARK's layout but are an assumption, not shown in the source.
import HARK.ConsumptionSaving.ConsPortfolioModel as cpm
import HARK.ConsumptionSaving.ConsPortfolioFrameModel as cpfm
from HARK.utilities import CRRAutility
from HARK.frame import Frame, draw_frame_model  # draw_frame_model location is a guess
pct = cpm.PortfolioConsumerType(T_sim=5000, AgentCount=200)
pct.cycles = 0
# Solve the model under the given parameters
pct.solve()
pct.track_vars += [
"mNrm",
"cNrm",
"Share",
"aNrm",
"Risky",
"Adjust",
"PermShk",
"TranShk",
"bNrm",
"who_dies"
]
pct.make_shock_history()
pct.read_shocks = True
pct.initialize_sim()
pct.simulate()
pcft = cpfm.PortfolioConsumerFrameType(
T_sim=5000,
AgentCount=200,
read_shocks = True
)
pcft.cycles = 0
# Solve the model under the given parameters
pcft.solve()
pcft.track_vars += [
"mNrm",
"cNrm",
"Share",
"aNrm",
"Adjust",
"PermShk",
"TranShk",
"bNrm",
'U'
]
pcft.shock_history = pct.shock_history
pcft.newborn_init_history = pct.newborn_init_history
pcft.initialize_sim()
pcft.simulate()
import matplotlib.pyplot as plt
plt.plot(range(5000), pct.history['PermShk'].mean(axis=1), label = 'original')
plt.plot(range(5000), pcft.history['PermShk'].mean(axis=1), label = 'frames', alpha = 0.5)
plt.legend()
plt.plot(range(5000), pct.history['TranShk'].mean(axis=1), label = 'original')
plt.plot(range(5000), pcft.history['TranShk'].mean(axis=1), label = 'frames', alpha = 0.5)
plt.legend()
plt.plot(range(5000), pct.history['bNrm'].mean(axis=1), label = 'original')
plt.plot(range(5000), pcft.history['bNrm'].mean(axis=1), label = 'frames', alpha = 0.5)
plt.legend()
#plt.plot(range(5000), pct.history['Risky'].mean(axis=1), label = 'original')
#plt.plot(range(5000), pcft.history['Risky'].mean(axis=1), label = 'frames', alpha = 0.5)
#plt.legend()
plt.plot(range(5000), pct.history['aNrm'].mean(axis=1), label = 'original')
plt.plot(range(5000), pcft.history['aNrm'].mean(axis=1), label = 'frames', alpha = 0.5)
plt.legend()
plt.plot(range(5000), pct.history['mNrm'].mean(axis=1), label = 'original')
plt.plot(range(5000), pcft.history['mNrm'].mean(axis=1), label = 'frames', alpha = 0.5)
plt.legend()
plt.plot(range(5000), pct.history['cNrm'].mean(axis=1), label = 'original')
plt.plot(range(5000), pcft.history['cNrm'].mean(axis=1), label = 'frames', alpha = 0.5)
plt.legend()
#pct.history['Risky'][:3, :3]
#pcft.history['Risky'][:3, :3]
plt.plot(range(5000), pct.history['Share'].mean(axis=1), label = 'original')
plt.plot(range(5000), pcft.history['Share'].mean(axis=1), label = 'frames', alpha = 0.5)
plt.legend()
plt.plot(range(5000), pcft.history['cNrm'].mean(axis=1), label = 'frames - cNrm', alpha = 0.5)
plt.plot(range(5000), pcft.history['U'].mean(axis=1), label = 'frames - U', alpha = 0.5)
plt.legend()
pcft.history['U']
pcft.history['U'].mean(axis=1)
pcft.history['U'][0,:]
pcft.history['cNrm'][0,:]
pcft.parameters['CRRA']
CRRAutility(pcft.history['cNrm'][0,:], 5)
draw_frame_model(pcft.model, figsize=(14,12))
controls = [frame for frame in pcft.frames.values() if frame.control]
def get_expected_return_function(control: Frame):
# Input: a control frame
# Returns: function of the control variable (control frame target)
# that returns the expected return, which is
# the sum of:
# - direct rewards
# - expected value of next-frame states (not yet implemented)
#
rewards = [child for child in control.children if child.reward]
expected_values = [] # TODO
## note: function signature is what's needed for scipy.optimize
def expected_return_function(x, *args):
## returns the sum of
## the reward functions evaluated in context of
## - parameters
## - the control variable input
# x - array of inputs, here the control frame target
# args - a tuple of other parameters needed to complete the function
expected_return = 0
for reward in rewards:
## TODO: figuring out the ordering of `x` and `args` needed for multiple downstream scopes
local_context = {}
# indexing through the x and args values
i = 0
num_control_vars = None
# assumes that all frame scopes list model variables first, parameters later
# should enforce or clarify at the frame level.
for var in reward.scope:
if var in control.target:
local_context[var] = x[i]
i = i + 1
elif var in pcft.parameters:
if num_control_vars is None:
num_control_vars = i
local_context[var] = args[i - num_control_vars]
i = i + 1
# can `self` be implicit here?
expected_return += reward.transition(reward, **local_context)
return expected_return
return expected_return_function
def optimal_policy_function(control: Frame):
erf = get_expected_return_function(control)
constraints = control.constraints ## these will reference the context of the control transition, including scope
## Returns function:
## input: control frame scope
## output: result of scipy.optimize of the erf with respect to constraints
## getting the optimal input (control variable) value
return func
def approximate_optimal_policy_function(control, grid):
## returns a new function:
## that is an interpolation over optimal_policy_function
## over the grid
return func
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TODO
Step2: Visualizing the Transition Equations
Step3: Building the Solver [INCOMPLETE]
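The incomplete solver above ultimately needs to maximize an expected-return function over a grid of control values. A minimal pure-Python sketch of that grid-search idea, using a CRRA reward as a stand-in for the expected-return function (the names here are hypothetical, not part of HARK's API):

```python
def crra_utility(c, crra=5.0):
    # CRRA utility, standing in for an expected-return function of the control
    return c ** (1.0 - crra) / (1.0 - crra)

def grid_argmax(return_fn, grid):
    # Brute-force "approximate optimal policy": evaluate the return on each
    # grid point and keep the best one (a stand-in for scipy.optimize).
    best_x, best_v = None, float("-inf")
    for x in grid:
        v = return_fn(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

# Utility is increasing in consumption, so the argmax is the largest grid point.
grid = [0.5, 1.0, 1.5, 2.0]
assert grid_argmax(crra_utility, grid) == 2.0
```

An interpolation over such grid solutions is what `approximate_optimal_policy_function` above intends to return.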
|
6,153
|
<ASSISTANT_TASK:>
Python Code:
from operator import itemgetter
# `rows` is assumed from earlier context; sample records for illustration:
rows = [
    {'fname': 'Brian', 'lname': 'Jones', 'uid': 1003},
    {'fname': 'David', 'lname': 'Beazley', 'uid': 1002},
    {'fname': 'John', 'lname': 'Cleese', 'uid': 1001},
    {'fname': 'Big', 'lname': 'Jones', 'uid': 1004},
]
rows_by_fname = sorted(rows, key=itemgetter('fname'))
rows_by_uid = sorted(rows, key=itemgetter('uid'))
rows_by_fname
rows_by_uid
rows_by_lfname = sorted(rows, key=itemgetter('lname','fname'))
rows_by_lfname
rows_by_fname = sorted(rows, key=lambda r: r['fname'])
rows_by_fname
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The itemgetter() function can also accept multiple keys.
Step2: The functionality of itemgetter() is sometimes replaced by lambda expressions.
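A quick illustration of the equivalence noted above — `itemgetter` with multiple keys returns a tuple, so it sorts exactly like a tuple-building lambda (the sample records here are made up):

```python
from operator import itemgetter

rows = [
    {'fname': 'Ada', 'lname': 'Lovelace', 'uid': 2},
    {'fname': 'Alan', 'lname': 'Turing', 'uid': 1},
]
# itemgetter('lname', 'fname') returns ('Lovelace', 'Ada') etc., so it
# sorts on (lname, fname) exactly like the lambda below.
by_getter = sorted(rows, key=itemgetter('lname', 'fname'))
by_lambda = sorted(rows, key=lambda r: (r['lname'], r['fname']))
assert by_getter == by_lambda
assert by_getter[0]['fname'] == 'Ada'
```

`itemgetter` is typically a bit faster than the lambda, since the lookup happens in C.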
|
6,154
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # works across Python/DataLoader versions
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 16, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(16, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(5): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i]
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The outputs of torchvision datasets are PILImage images in the range [0, 1].
Step2: Let us show some of the training images, for fun.
Step3: Define a Convolution Neural Network
Step4: Define a Loss function and optimizer
Step5: Train the network
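The `Normalize` transform in Step 1 maps each channel from [0, 1] to [-1, 1] via (x - 0.5) / 0.5, and `imshow` undoes it with x / 2 + 0.5. A torch-free sketch of that round trip:

```python
def normalize(x, mean=0.5, std=0.5):
    # Same per-channel arithmetic that transforms.Normalize applies
    return (x - mean) / std

def unnormalize(x, mean=0.5, std=0.5):
    # Inverse used in imshow: img / 2 + 0.5
    return x * std + mean

pixels = [0.0, 0.25, 0.5, 1.0]
normed = [normalize(p) for p in pixels]
assert normed == [-1.0, -0.5, 0.0, 1.0]
assert [unnormalize(n) for n in normed] == pixels
```

Centering inputs around zero this way generally helps gradient-based training converge.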
|
6,155
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
loans = graphlab.SFrame('lending-club-data.gl/')
loans.column_names()
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
target = 'safe_loans'
features = ['grade', # grade of the loan (categorical)
'sub_grade_num', # sub-grade of the loan as a number from 0 to 1
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'payment_inc_ratio', # ratio of the monthly payment to income
'delinq_2yrs', # number of delinquincies
'delinq_2yrs_zero', # no delinquincies in last 2 years
'inq_last_6mths', # number of creditor inquiries in last 6 months
'last_delinq_none', # has borrower had a delinquincy
'last_major_derog_none', # has borrower had 90 day or worse rating
'open_acc', # number of open credit accounts
'pub_rec', # number of derogatory public records
'pub_rec_zero', # no derogatory public records
'revol_util', # percent of available credit being used
'total_rec_late_fee', # total late fees received to day
'int_rate', # interest rate of the loan
'total_rec_int', # interest received to date
'annual_inc', # annual income of borrower
'funded_amnt', # amount committed to the loan
'funded_amnt_inv', # amount committed by investors for the loan
'installment', # monthly payment owed by the borrower
]
loans, loans_with_na = loans[[target] + features].dropna_split()
# Count the number of rows with missing data
num_rows_with_na = loans_with_na.num_rows()
num_rows = loans.num_rows()
print 'Dropping %s observations; keeping %s ' % (num_rows_with_na, num_rows)
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
train_data, validation_data = loans_data.random_split(.8, seed=1)
model_5 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 5)
# Select all positive and negative examples.
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
# Select 2 examples from the validation set for positive & negative loans
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
# Append the 4 examples into a single dataset
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
model_5.predict(sample_validation_data)
model_5.predict(sample_validation_data, output_type='probability')
eval_ret = model_5.evaluate(validation_data)
eval_ret['confusion_matrix']
false_positives = 1618
false_negatives = 1463
false_positives * 20000 + false_negatives * 10000
validation_data['predictions'] = model_5.predict(validation_data, output_type='probability')
print "Your loans : %s\n" % validation_data['predictions'].head(4)
print "Expected answer : %s" % [0.4492515948736132, 0.6119100103640573,
0.3835981314851436, 0.3693306705994325]
validation_data.sort('predictions', ascending=False)
validation_data.sort('predictions', ascending=True)
model_10 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 10, verbose=False)
model_50 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 50, verbose=False)
model_100 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 100, verbose=False)
model_200 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 200, verbose=False)
model_500 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 500, verbose=False)
print model_10.evaluate(validation_data)['accuracy']
print model_50.evaluate(validation_data)['accuracy']
print model_100.evaluate(validation_data)['accuracy']
print model_200.evaluate(validation_data)['accuracy']
print model_500.evaluate(validation_data)['accuracy']
import matplotlib.pyplot as plt
%matplotlib inline
def make_figure(dim, title, xlabel, ylabel, legend):
plt.rcParams['figure.figsize'] = dim
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
if legend is not None:
plt.legend(loc=legend, prop={'size':15})
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
train_err_10 = 1. - model_10.evaluate(train_data)['accuracy']
train_err_50 = 1. - model_50.evaluate(train_data)['accuracy']
train_err_100 = 1. - model_100.evaluate(train_data)['accuracy']
train_err_200 = 1. - model_200.evaluate(train_data)['accuracy']
train_err_500 = 1. - model_500.evaluate(train_data)['accuracy']
training_errors = [train_err_10, train_err_50, train_err_100,
train_err_200, train_err_500]
validation_err_10 = 1. - model_10.evaluate(validation_data)['accuracy']
validation_err_50 = 1. - model_50.evaluate(validation_data)['accuracy']
validation_err_100 = 1. - model_100.evaluate(validation_data)['accuracy']
validation_err_200 = 1. - model_200.evaluate(validation_data)['accuracy']
validation_err_500 = 1. - model_500.evaluate(validation_data)['accuracy']
validation_errors = [validation_err_10, validation_err_50, validation_err_100,
validation_err_200, validation_err_500]
plt.plot([10, 50, 100, 200, 500], training_errors, linewidth=4.0, label='Training error')
plt.plot([10, 50, 100, 200, 500], validation_errors, linewidth=4.0, label='Validation error')
make_figure(dim=(10,5), title='Error vs number of trees',
xlabel='Number of trees',
ylabel='Classification error',
legend='best')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load LendingClub dataset
Step2: Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset. We have done this in previous assignments, so we won't belabor this here.
Step3: Modifying the target column
Step4: Selecting features
Step5: Skipping observations with missing values
Step6: Fortunately, there are not too many missing values. We are retaining most of the data.
Step7: Checkpoint
Step8: Gradient boosted tree classifier
Step9: Making predictions
Step10: Predicting on sample validation data
Step11: Quiz question
Step12: Quiz Question
Step13: Calculate the number of false positives made by the model.
Step14: Comparison with decision trees
Step15: Reminder
Step16: Checkpoint
Step17: Now, we are ready to go to Step 3. You can now use the prediction column to sort the loans in validation_data (in descending order) by prediction probability. Find the top 5 loans with the highest probability of being predicted as a safe loan.
Step18: Quiz question
Step19: Checkpoint
Step20: Now, train 4 models with max_iterations to be
Step21: Compare accuracy on entire validation set
Step22: Quiz Question
Step23: In order to plot the classification errors (on the train_data and validation_data) versus the number of trees, we will need lists of these accuracies, which we get by applying the method .evaluate.
Steps to follow
Step24: Now, let us run Step 2. Save the training errors into a list called training_errors
Step25: Now, onto Step 3. Write code to compute the classification error on the validation_data for models model_10, model_50, model_100, model_200, and model_500.
Step26: Now, let us run Step 4. Save the training errors into a list called validation_errors
Step27: Now, we will plot the training_errors and validation_errors versus the number of trees. We will compare the 10, 50, 100, 200, and 500 tree models. We provide some plotting code to visualize the plots within this notebook.
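The cost calculation in Step 13 weighs false positives (risky loans predicted safe) more heavily than false negatives. A small standalone sketch of that confusion-matrix accounting, with per-error costs mirroring the $20,000/$10,000 figures used in the code:

```python
def loan_cost(y_true, y_pred, fp_cost=20000, fn_cost=10000):
    # Labels: +1 = safe loan, -1 = risky loan
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == -1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == -1)
    return fp * fp_cost + fn * fn_cost

y_true = [1, 1, -1, -1, 1]
y_pred = [1, -1, 1, -1, 1]
# one false positive and one false negative
assert loan_cost(y_true, y_pred) == 30000
```

Applying this to the model's predictions on the validation set reproduces the quiz-question cost.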
|
6,156
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt # Plotting library
from sklearn.utils import shuffle
# Allow matplotlib to plot inside this notebook
%matplotlib inline
# Set the seed of the numpy random number generator so that the result is reproducable
np.random.seed(seed=1)
# check the data first
df = pd.read_csv('../data/ecommerce_data.csv')
df.head()
# 4 unique values for time_of_day
df.time_of_day.unique()
def get_data():
df = pd.read_csv('../data/ecommerce_data.csv')
    data = df.values  # as_matrix() was removed in newer pandas
X = data[:, :-1] # last column is label
Y = data[:, -1]
# Normalization for 2nd and 3rd columns
X[:, 1] = (X[:, 1] - X[: ,1].mean())/X[:, 1].std()
X[:, 2] = (X[:, 2] - X[: ,2].mean())/X[:, 2].std()
# handle time_of_day
R, C = X.shape
# we will have 4 more columns for each value in time_of_day (4 unique values)
X2 = np.zeros((R, C+3)) # initialized as zero
Z = np.zeros((R, 4))
Z[np.arange(R), X[:, C-1].astype(np.int32)] = 1
# copy data from X except time_of_day
X2[:, 0:(C-1)] = X[:, 0:(C-1)]
# add 4 dummy variables for time_of_day
X2[:, (C-1):(C+3)] = Z
return X2, Y
# Produce multi-class indicator for Y
def y2indicator(y, K):
N = len(y)
ind = np.zeros((N, K))
for i in range(N):
ind[i, y[i]] = 1
return ind
def softmax(a):
expA = np.exp(a)
return expA / expA.sum(axis=1, keepdims=True)
def forward(X, W1, b1, W2, b2):
Z = np.tanh(X.dot(W1) + b1)
return softmax(Z.dot(W2) + b2), Z # also return cost of hidden layer to calculate derivatives
def predict(P_Y_given_X):
return np.argmax(P_Y_given_X, axis=1)
def classification_rate(Y, P):
return np.mean(Y == P)
def cross_entropy(T, pY):
return -np.mean(T*np.log(pY))
# create train data
X, Y = get_data()
X, Y = shuffle(X, Y)
Y = Y.astype(np.int32)
M = 5 # n of hidden units
D = X.shape[1] # n of inputs
K = len(set(Y)) # n of class/ output nodes
# training data
Xtrain = X[:-100]
Ytrain = Y[:-100]
Ytrain_ind = y2indicator(Ytrain, K)
# test/validation data
Xtest = X[-100:]
Ytest = Y[-100:]
Ytest_ind = y2indicator(Ytest, K)
# initialize weight
W1 = np.random.randn(D, M)
b1 = np.zeros(M)
W2 = np.random.randn(M, K)
b2 = np.zeros(K)
# start training
train_costs = []
test_costs = []
learning_rate = 0.001
for i in range(10000):
pYtrain, Ztrain = forward(Xtrain, W1, b1, W2, b2)
pYtest, Ztest = forward(Xtest, W1, b1, W2, b2)
ctrain = cross_entropy(Ytrain_ind, pYtrain)
ctest = cross_entropy(Ytest_ind, pYtest)
train_costs.append(ctrain)
test_costs.append(ctest)
W2 -= learning_rate*Ztrain.T.dot(pYtrain - Ytrain_ind)
b2 -= learning_rate*(pYtrain - Ytrain_ind).sum(axis=0)
dZ = (pYtrain - Ytrain_ind).dot(W2.T) * (1- Ztrain*Ztrain)
W1 -= learning_rate*Xtrain.T.dot(dZ)
b1 -= learning_rate*dZ.sum(axis=0)
if i % 1000 == 0:
print(i, ctrain, ctest)
print("Final train classification_rate:", classification_rate(Ytrain, predict(pYtrain)))
print("Final test classification_rate:", classification_rate(Ytest, predict(pYtest)))
legend1, = plt.plot(train_costs, label='train cost')
legend2, = plt.plot(test_costs, label='test cost')
plt.legend([legend1, legend2])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The 2nd and 3rd columns are numeric and need to be normalized. The 1st, 4th, and 5th columns are categorical variables; the 5th column, time_of_day, will be transformed into 4 one-hot encoded variables. The last column, user_action, is the label. The code below transforms the raw data into the format used for training.
Step2: Forward Step, Cost Function
Step3: Below we train a neural network with one hidden layer (tanh activation) and a softmax output layer.
Step4: Gradient Descent with Backpropagation
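The softmax in the forward step divides exponentials by their row sum. A numerically safer variant subtracts the row maximum first — mathematically identical, since the shift cancels in the ratio. A list-based sketch:

```python
import math

def softmax(a):
    # Subtracting max(a) avoids overflow in exp() without changing the result
    m = max(a)
    exps = [math.exp(x - m) for x in a]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1000.0, 1000.0])  # naive exp(1000) would overflow
assert abs(sum(probs) - 1.0) < 1e-12
assert abs(probs[0] - 0.5) < 1e-12
```

The NumPy version in the code above can apply the same trick with `a - a.max(axis=1, keepdims=True)`.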
|
6,157
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
# Create your dictionary that maps vocab words to integers here
word_set = set(words)
# Start indices at 1: integer 0 is reserved for padding
vocab_to_int = {word: i for i, word in enumerate(word_set, 1)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for review in reviews:
review_ints = [vocab_to_int[word] for word in review.split()]
reviews_ints.append(review_ints)
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length
print("before", reviews_ints[-2:])
print(len(reviews_ints))
print(len(labels))
# Keep only non-empty reviews. Note: np.delete returns a new array (the
# original call discarded its result), and reviews_ints/labels must be
# filtered together so they stay aligned.
non_zero_idx = [i for i, r in enumerate(reviews_ints) if len(r) > 0]
reviews_ints = [reviews_ints[i] for i in non_zero_idx]
labels = np.array([labels[i] for i in non_zero_idx])
print("after", reviews_ints[-2:])
print(len(reviews_ints))
print(len(labels))
#reviews_ints = [review_ints for review_ints in reviews_ints if len(review_ints) > 0 ]
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, review_ints in enumerate(reviews_ints):
features[i, -len(review_ints):] = review_ints[:seq_len]
features[:10,:100]
print(len(features))
print(len(labels))
split_frac = 0.8
split_index = int(len(features) * split_frac)
print(split_index)
test_split_frac = 0.9
test_split_index = int(len(features) * test_split_frac)
print(test_split_index)
train_x, train_y = features[:split_index], labels[:split_index]
val_x, val_y = features[split_index:test_split_index], labels[split_index:test_split_index]
test_x, test_y = features[test_split_index:], labels[test_split_index:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
    # Build one dropout-wrapped LSTM cell per layer; reusing a single
    # cell object across layers breaks in TF >= 1.2
    def build_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(lstm_layers)])
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Step6: Exercise
Step7: If you built features correctly, it should look like the cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
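The padding/truncation described in Step 5 can be sketched with a small helper (illustrative; `pad_features` is a hypothetical name, and it assumes zero-length reviews have already been removed and each review is a list of word ids):

```python
import numpy as np

def pad_features(reviews_ints, seq_len=200):
    # Left-pad short reviews with 0s; truncate long ones to seq_len steps.
    # Assumes every review contains at least one word id.
    features = np.zeros((len(reviews_ints), seq_len), dtype=int)
    for i, row in enumerate(reviews_ints):
        features[i, -len(row):] = np.array(row)[:seq_len]
    return features

feats = pad_features([[1, 2, 3], [7] * 10], seq_len=5)
```

Left-padding (rather than right-padding) keeps the informative tokens next to the final RNN step, which is the one fed into the fully connected output layer.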
|
6,158
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
! pip uninstall -y tensorflow
! pip install -q tf-nightly
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model= setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def setup_pretrained_model():
model = setup_model()
pretrained_weights = setup_pretrained_weights()
model.load_weights(pretrained_weights)
return model
setup_model()
pretrained_weights = setup_pretrained_weights()
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
quant_aware_model.summary()
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `quantize_annotate_layer` to annotate that only the
# Dense layers should be quantized.
def apply_quantization_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.quantization.keras.quantize_annotate_layer(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
base_model,
clone_function=apply_quantization_to_dense,
)
# Now that the Dense layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
print(base_model.layers[0].name)
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
i = tf.keras.Input(shape=(20,))
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
annotated_model = tf.keras.Model(inputs=i, outputs=o)
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
# For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the
# quantized model can take in float inputs instead of only uint8.
quant_aware_model.summary()
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
annotated_model = tf.keras.Sequential([
tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
quant_aware_model.save(keras_model_file)
# `quantize_scope` is needed for deserializing HDF5 models.
with tfmot.quantization.keras.quantize_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
base_model = setup_pretrained_model()
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Typically you train the model here.
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer
class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
# Configure how to quantize weights.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]
# Configure how to quantize activations.
def get_activations_and_quantizers(self, layer):
return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]
def set_quantize_weights(self, layer, quantize_weights):
# Add this line for each item returned in `get_weights_and_quantizers`
# , in the same order
layer.kernel = quantize_weights[0]
def set_quantize_activations(self, layer, quantize_activations):
# Add this line for each item returned in `get_activations_and_quantizers`
# , in the same order.
layer.activation = quantize_activations[0]
# Configure how to quantize outputs (may be equivalent to activations).
def get_output_quantizers(self, layer):
return []
def get_config(self):
return {}
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class CustomLayer(tf.keras.layers.Dense):
pass
model = quantize_annotate_model(tf.keras.Sequential([
quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
{'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
'CustomLayer': CustomLayer}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4-bit instead of 8-bits.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
def get_activations_and_quantizers(self, layer):
# Skip quantizing activations.
return []
def set_quantize_activations(self, layer, quantize_activations):
# Empty since `get_activations_and_quantizers` returns
# an empty list.
return
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer):
    """Quantizer which forces outputs to be between -1 and 1."""
def build(self, tensor_shape, name, layer):
# Not needed. No new TensorFlow variables needed.
return {}
def __call__(self, inputs, training, weights, **kwargs):
return tf.keras.backend.clip(inputs, -1.0, 1.0)
def get_config(self):
# Not needed. No __init__ parameters to serialize.
return {}
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
    # Configure weights to quantize with the custom FixedRangeQuantizer.
def get_weights_and_quantizers(self, layer):
# Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer.
return [(layer.kernel, FixedRangeQuantizer())]
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this `Dense` layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quantization aware training comprehensive guide
Step2: Define quantization aware model
Step3: Quantize some layers
Step4: While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its name property, and look for that name in the clone_function.
Step5: More readable but potentially lower model accuracy
Step6: Sequential example
Step7: Checkpoint and deserialize
Step8: Create and deploy quantized model
Step9: Experiment with quantization
Step10: Quantize custom Keras layer
Step11: Modify quantization parameters
Step12: Applying the configuration is the same across
Step13: Modify parts of layer to quantize
Step14: Applying the configuration is the same across
Step16: Use custom quantization algorithm
Step17: Applying the configuration is the same across
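The fake quantization these quantizers apply during training can be sketched in plain NumPy (an illustrative symmetric quantize–dequantize round trip, not tfmot's exact implementation):

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    # Symmetric per-tensor quantize-dequantize, as emulated during
    # quantization aware training.
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax)  # snap to the integer grid
    return q * scale                               # map back to float

w = np.array([-1.0, -0.5, 0.0, 0.3, 1.0])
w_q = fake_quantize(w, num_bits=4)
```

The forward pass sees the rounded values, so the network learns weights that survive the precision loss; with 4 bits the grid here has only 15 levels, so the rounding error is clearly visible.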
|
6,159
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import requests as req
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind, ttest_rel
from scipy.stats import gaussian_kde
from statsmodels.formula.api import ols, mixedlm, gee
from statsmodels.stats.outliers_influence import OLSInfluence
from statsmodels.regression.linear_model import OLSResults
from patsy import dmatrix
np.set_printoptions(precision=3)
idhm_df = pd.read_csv("../data/brazil_states_idhl_2000_2010.csv", index_col=0)
idhm_df
idhm_df.describe()
f = plt.figure(14)
idhm_df[["I2000","I2010","Ratio"]].hist(bins=10)
plt.figure()
sns.kdeplot(idhm_df["I2000"], shade=True);
sns.kdeplot(idhm_df["I2010"], shade=True);
sns.kdeplot(idhm_df["Ratio"], shade=True);
ttest_rel(idhm_df['I2000'], idhm_df['I2010'])
import scipy
import scikits.bootstrap as bootstrap
# compute 95% confidence intervals around the mean
CIs00 = bootstrap.ci(data=idhm_df["I2000"])
CIs10 = bootstrap.ci(data=idhm_df["I2010"])
CIsR = bootstrap.ci(data=idhm_df["Ratio"])
print("IDHM 2000 mean 95% confidence interval. Low={0:.3f}\tHigh={1:.3f}".format(*tuple(CIs00)))
print("IDHM 2010 mean 95% confidence interval. Low={0:.3f}\tHigh={1:.3f}".format(*tuple(CIs10)))
print("IDHM ratio mean 95% confidence interval. Low={0:.3f}\tHigh={1:.3f}".format(*tuple(CIsR)))
CIs00 = bootstrap.ci(data=idhm_df["I2000"], statfunction=np.median)
CIs10 = bootstrap.ci(data=idhm_df["I2010"], statfunction=np.median)
CIsR = bootstrap.ci(data=idhm_df["Ratio"], statfunction=np.median)
print("IDHM 2000 median 95% confidence interval. Low={0:.3f}\tHigh={1:.3f}".format(*tuple(CIs00)))
print("IDHM 2010 median 95% confidence interval. Low={0:.3f}\tHigh={1:.3f}".format(*tuple(CIs10)))
print("IDHM ratio median 95% confidence interval. Low={0:.3f}\tHigh={1:.3f}".format(*tuple(CIsR)))
state_parties_df = pd.read_csv("../data/brazil_states_parties_2000-2010.csv", index_col=0)
state_parties_df
state_regions_df = pd.read_csv("../data/brazil_states_regions.csv", index_col=0)
state_regions_df
df = idhm_df.merge(state_parties_df, on="Estado")
df = df.merge(state_regions_df, on="Estado")
df
sns.factorplot("idh_level_2000","Ratio",data=df, kind="box")
sns.factorplot("Regiao","Ratio",data=df, kind="box")
sns.set()
sns.pairplot(df, hue="idh_level_2000", size=2.5)
sns.coefplot("Ratio ~ PT + PSDB + Outros + C(idh_level_2000) - 1", df, palette="Set1");
sns.coefplot("Ratio ~ Outros==0 + Outros - 1", df, palette="Set1");
sns.set(style="whitegrid")
sns.residplot(df.Outros,df.Ratio, color="navy", lowess=True, order=1)
sns.coefplot("Ratio ~ PT==0 + PT - 1", df, palette="Set1");
sns.set(style="whitegrid")
sns.residplot(df[df.PT>0].PT, df[df.PT>0].Ratio, color="navy", order=1)
sns.coefplot("Ratio ~ PSDB==0 + PSDB + np.multiply(PSDB, PSDB) - 1", df, palette="Set1");
sns.set(style="whitegrid")
sns.residplot(df[df.PSDB>0].PSDB, df[df.PSDB>0].Ratio, color="navy", lowess=True, order=2)
sns.coefplot("Ratio ~ PT + PSDB + Outros + C(idh_level_2000) - 1", df, palette="Set1");
sns.coefplot("Ratio ~ PT + PSDB + C(idh_level_2000)", df, palette="Set1");
sns.coefplot("Ratio ~ PT + Outros + C(idh_level_2000)", df, palette="Set1");
sns.coefplot("Ratio ~ PSDB + Outros + C(idh_level_2000)", df, palette="Set1");
formula = "Ratio ~ PT + PSDB + C(idh_level_2000) + C(Regiao)"
model = ols(formula, df).fit()
model.summary()
sns.lmplot("I2000", "I2010", data=df, legend=True, size=10, n_boot=10000, ci=95)
sns.jointplot("I2000", "I2010", data=df, kind='resid',color=sns.color_palette()[2], size=10)
sns.coefplot("I2010 ~ I2000", data=df, intercept=True)
sns.coefplot("I2010 ~ I2000", data=df, groupby="idh_level_2000", intercept=True)
sns.lmplot("I2000", "I2010", data=df, hue="idh_level_2000", col="idh_level_2000", legend=True, size=6, n_boot=10000, ci=99)
sns.lmplot("I2000", "I2010", data=df, hue="Regiao", col="Regiao", col_wrap=2, legend=True, size=6, n_boot=10000, ci=99)
md = ols("I2010 ~ I2000 + C(Regiao)", df).fit()
print(md.summary())
rrr = md.get_robustcov_results()
rrp = rrr.outlier_test("fdr_bh", 0.1)
idx = rrp[rrp["fdr_bh(p)"] <= 0.1].index
print("States that deviate from the mean:\n", df.loc[idx.values])
rrp[rrp["fdr_bh(p)"] <= 0.1]
import statsmodels.api as sm
md = gee("Ratio ~ PT + PSDB ", df.idh_level_2000, df, cov_struct=sm.cov_struct.Exchangeable())
mdf = md.fit()
print(mdf.summary())
print(mdf.cov_struct.summary())
plt.plot(mdf.fittedvalues, mdf.resid, 'o', alpha=0.5)
plt.xlabel("Fitted values", size=17)
plt.ylabel("Residuals", size=17)
sns.jointplot(mdf.fittedvalues, mdf.resid, size=10, kind="kde")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading IDH-M data from Wikipedia
Step2: Analysis
Step3: Testing the hypothesis
Step4: Several tests, at a 5% significance level, provide strong evidence that the answer is yes.
Step5: Impact by party or by IDH-M level in 2000
Step6: No significant difference between the parties could be observed.
Step7: GEE
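A percentile-bootstrap version of the confidence intervals computed above can be sketched in plain NumPy (illustrative; `scikits.bootstrap`'s `ci` defaults to the BCa method, which corrects for bias and skew, so its numbers will differ slightly):

```python
import numpy as np

def percentile_bootstrap_ci(data, statfunc=np.mean, n_boot=10000, alpha=0.05, seed=0):
    # Resample with replacement, recompute the statistic, and take the
    # empirical (alpha/2, 1 - alpha/2) percentiles.
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = np.array([
        statfunc(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Synthetic IDHM-like sample (27 states, values near 0.58)
sample = np.random.default_rng(1).normal(0.58, 0.05, size=27)
low, high = percentile_bootstrap_ci(sample)
```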
|
6,160
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename="YoungTwoSlitExperiment.JPG")
from IPython.display import Image
Image(filename="ExperimentoYoung.jpg")
from matplotlib.pyplot import *
from numpy import *
%matplotlib inline
style.use('fivethirtyeight')
###################################################################################
# PARAMETERS. THEIR VALUES CAN BE MODIFIED
###################################################################################
Lambda = 500e-9 # in meters, wavelength of the radiation
D = 3.5 # in meters, distance between the plane containing the sources and the observation screen
a = 0.003 # in meters, separation between the sources
###################################################################################
interfranja = Lambda*D/a # fringe spacing
k = 2.0*pi/Lambda
x = linspace(-5*interfranja,5*interfranja,500)
I1 = 1 # Irradiances normalized to a common value.
I2 = 1
X,Y = meshgrid(x,x)
delta = k*a*X/D
Itotal = I1 + I2 + 2.0*sqrt(I1*I2)*cos(delta)
figure(figsize=(14,5))
subplot(121)
pcolormesh(x*1e3,x*1e3,Itotal,cmap = 'gray',vmin=0,vmax=4)
xlabel("x (mm)"); ylabel("y (mm)")
subplot(122)
plot(x*1e3,Itotal[x.shape[0]//2,:])
xlabel("x (mm)"); ylabel("Normalized total irradiance")
interfranja = Lambda*D/a # fringe spacing
C = (Itotal.max() - Itotal.min())/(Itotal.max() + Itotal.min()) # fringe contrast (visibility)
print("a =", a*1e3, "mm ", "D =", D, "m ", "Wavelength =", Lambda*1e9, "nm") # parameter values
print("Fringe spacing =", interfranja*1e3, "mm") # fringe spacing in mm
print("Contrast =", C) # fringe contrast
from IPython.display import Image
Image(filename="FranjasYoungWhiteLight.jpg")
from IPython.display import YouTubeVideo
YouTubeVideo("B34bAGtQL9A")
from IPython.core.display import HTML
HTML('<iframe src="http://www.ub.edu/javaoptics/applets/YoungEn.html" width="800" height="600"></iframe>')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ''The experiments I am about to relate ... may be repeated with great ease,
Step2: According to the figure, the path difference Δ = r_2 - r_1 can be written as Δ = a sin(θ), where a is the separation between the slits. If this angle is small (which means the distance between the sources and the observation screen is large compared with the separation between the sources), we can simplify this expression,
Step3: As we can see, the maxima are equally spaced (and so are the minima), the distance between two consecutive maxima being
Step4: Questions
Step5: As can be seen, with white light each component wavelength forms its own fringe system, with maxima at different positions and a different fringe spacing. This makes the interference much harder to observe and will lead us to the concept of coherent and incoherent light.
Step6: Java applet from the University of Barcelona. You can modify the distance between the plane containing the slits and the screen, the separation between the slits, etc. Observe how the interference pattern changes as you modify these parameters.
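The small-angle reasoning of Steps 2–3 can be written out explicitly (a sketch of the standard derivation, using the same symbols as the code: wavelength λ, slit separation a, screen distance D):

```latex
% Path difference under the small-angle approximation
\Delta = r_2 - r_1 = a\sin\theta \approx a\,\frac{x}{D}, \qquad \theta \ll 1
% Bright fringes occur where the path difference is a whole number of wavelengths
\Delta = m\lambda \;\Rightarrow\; x_m = \frac{m\lambda D}{a}, \qquad m = 0, \pm 1, \pm 2, \dots
% Fringe spacing (interfranja), as computed in the code above
i = x_{m+1} - x_m = \frac{\lambda D}{a}
```

With the parameter values used in the code (λ = 500 nm, D = 3.5 m, a = 3 mm) this gives i ≈ 0.58 mm, matching the printed interfranja.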
|
6,161
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro arviz
!pip install arviz
!pip install seaborn
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import arviz as az
import seaborn as sns
import numpyro
from numpyro.infer import MCMC, NUTS, Predictive
import numpyro.distributions as dist
from jax import random
url = "https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/radon.csv?raw=true"
data = pd.read_csv(url)
county_names = data.county.unique()
county_idx = data["county_code"].values
data[["county", "log_radon", "floor"]].head()
def chart(county_name, ax):
c_data = data[data.county == county_name]
c_data = c_data.reset_index(drop=True)
x = c_data.index
y = c_data.log_radon
ax.set_title(county_name)
ax = sns.regplot(x=x, y=y, ax=ax, ci=None, line_kws={"color": "red"})
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart("AITKIN", axes[0])
chart("ANOKA", axes[1])
chart("WRIGHT", axes[2])
def model(floor, log_radon):
# Intercept prior
a = numpyro.sample("alpha", dist.Normal(0.0, 1))
# Slope prior
b = numpyro.sample("beta", dist.Normal(0.0, 1))
# Model error prior
eps = numpyro.sample("eps", dist.HalfCauchy(1.0))
# Linear model
mu = a + b * floor
# Data likelihood
numpyro.sample("obs", dist.Normal(mu, eps), obs=log_radon)
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
indiv_traces, num_divergences = {}, {}
for county_name in county_names:
# Select subset of data belonging to county
c_data = data.loc[data.county == county_name]
c_data = c_data.reset_index(drop=True)
c_log_radon = c_data.log_radon.values
c_floor_measure = c_data.floor.values
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=1000, progress_bar=False)
mcmc.run(rng_key_, c_floor_measure, c_log_radon)
num_divergences[county_name] = np.sum(mcmc.get_extra_fields()["diverging"])
indiv_traces[county_name] = mcmc.get_samples()
num_divergences
def hierarchical_model_centered(county, floor, log_of_radon):
# Hyperpriors
mu_a = numpyro.sample("mu_alpha", dist.Normal(0.0, 1.0))
sigma_a = numpyro.sample("sigma_alpha", dist.HalfCauchy(1.0))
mu_b = numpyro.sample("mu_beta", dist.Normal(0.0, 1.0))
sigma_b = numpyro.sample("sigma_beta", dist.HalfCauchy(1.0))
unique_counties = np.unique(county)
n_counties = len(unique_counties)
with numpyro.plate("counties", n_counties):
# Intercept for each county, distributed around group mean mu_a
alpha = numpyro.sample("alpha", dist.Normal(mu_a, sigma_a))
# Slope for each county, distributed around group mean mu_b
beta = numpyro.sample("beta", dist.Normal(mu_b, sigma_b))
# Model error
eps = numpyro.sample("eps", dist.HalfCauchy(scale=1.0))
# Expected value; note this uses the global county_idx integer codes,
# not the county argument (which carries the county names)
radon_est = alpha[county_idx] + beta[county_idx] * floor
with numpyro.plate("data", len(county)):
# Data likelihood
obs = numpyro.sample("obs", dist.Normal(radon_est, eps), obs=log_of_radon)
return obs
nuts_kernel = NUTS(hierarchical_model_centered)
mcmc = MCMC(nuts_kernel, num_samples=1000, num_warmup=1000)
mcmc.run(rng_key, data.county.values, data.floor.values, data.log_radon.values)
hierarchical_trace = mcmc.get_samples()
inference_data = az.from_numpyro(mcmc)
az.plot_trace(inference_data, compact=True);
selection = ["CASS", "CROW WING", "FREEBORN"]
fig, axis = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axis = axis.ravel()
for i, c in enumerate(selection):
c_data = data.loc[data.county == c]
c_data = c_data.reset_index(drop=True)
z = list(c_data["county_code"])[0]
xvals = np.linspace(-0.2, 1.2)
for a_val, b_val in zip(indiv_traces[c]["alpha"][::10], indiv_traces[c]["beta"][::10]):
axis[i].plot(xvals, a_val + b_val * xvals, "b", alpha=0.05)
axis[i].plot(
xvals,
indiv_traces[c]["alpha"][::10].mean() + indiv_traces[c]["beta"][::10].mean() * xvals,
"b",
alpha=1,
lw=2.0,
label="individual",
)
for a_val, b_val in zip(hierarchical_trace["alpha"][::10, z], hierarchical_trace["beta"][::10, z]):
axis[i].plot(xvals, a_val + b_val * xvals, "g", alpha=0.05)
axis[i].plot(
xvals,
hierarchical_trace["alpha"][::10, z].mean() + hierarchical_trace["beta"][::10, z].mean() * xvals,
"g",
alpha=1,
lw=2.0,
label="hierarchical",
)
axis[i].scatter(
c_data.floor + np.random.randn(len(c_data)) * 0.01,
c_data.log_radon,
alpha=1,
color="k",
marker=".",
s=80,
label="original data",
)
axis[i].set_xticks([0, 1])
axis[i].set_xticklabels(["basement", "first floor"])
axis[i].set_ylim(-1, 4)
axis[i].set_title(c)
if not i % 3:
axis[i].legend()
axis[i].set_ylabel("log radon level")
hier_a = hierarchical_trace["alpha"].mean(axis=0)
hier_b = hierarchical_trace["beta"].mean(axis=0)
indv_a = [indiv_traces[c]["alpha"].mean() for c in county_names]
indv_b = [indiv_traces[c]["beta"].mean() for c in county_names]
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(
111,
xlabel="Intercept",
ylabel="Floor Measure",
title="Hierarchical vs. Non-hierarchical Bayes",
xlim=(0.25, 2),
ylim=(-2, 1.5),
)
ax.scatter(indv_a, indv_b, s=26, alpha=0.4, label="non-hierarchical")
ax.scatter(hier_a, hier_b, c="red", s=26, alpha=0.4, label="hierarchical")
for i in range(len(indv_b)):
ax.arrow(
indv_a[i],
indv_b[i],
hier_a[i] - indv_a[i],
hier_b[i] - indv_b[i],
fc="k",
ec="k",
length_includes_head=True,
alpha=0.4,
head_width=0.02,
)
ax.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, we'll load the data
Step2: The relevant part of the data we will model looks as follows
Step3: As you can see, we have multiple radon measurements (log-converted to be on the real line) in a county and whether the measurement has been taken in the basement (floor == 0) or on the first floor (floor == 1). Here we want to test the prediction that radon concentrations are higher in the basement.
Step4: Hierarchical Regression
Step5: It takes about 5 minutes. Set progress_bar to True unless you want to suppress the output.
Step6: In order to check that mcmc worked, let's look at the number of divergences
Step7: Hierarchical Model
Step8: The marginal posteriors in the left column are highly informative. mu_alpha tells us the group mean (log) radon levels. mu_beta tells us that the slope is significantly negative (no mass above zero), meaning that radon concentrations are higher in the basement than first floor. We can also see by looking at the marginals for a that there is quite some differences in radon levels between counties; the different widths are related to how much measurements we have per county, the more, the higher our confidence in that parameter estimate.
Step9: In the above plot we have the data points in black of three selected counties. The thick lines represent the mean estimate of the regression line of the individual (blue) and hierarchical model (in green). The thinner lines are regression lines of individual samples from the posterior and give us a sense of how variable the estimates are.
|
6,162
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
sns.set(rc={'figure.figsize':(10, 10)})
def set_payment_type(prob):
# 30% of transactions are cash
if prob < 0.3:
return 'Cash'
# stretch the remaining 0.3-1.0 to 0-1
prob = (prob-0.3)/0.7
if prob < 0.1:
return 'Gift'
if prob < 0.4:
return 'Debit'
return 'Credit'
def create_dataset(N, is_old_format):
order_amount = 5 + np.abs(np.random.normal(scale=10, size=N)) # >= 5, mode at 5, mean around 13
order_distance = 1 + np.random.rand(N)*4 # 1-5
payment_type = np.random.rand(N)
df = pd.DataFrame.from_dict({
'order_amount' : np.round(order_amount, 2),
'order_distance': np.round(order_distance, 1),
'payment_type': np.vectorize(set_payment_type)(payment_type),
'tip': np.zeros_like(order_amount)
})
df.loc[df['payment_type'] == 'Cash', 'tip'] = 0.2
df.loc[df['payment_type'] == 'Gift', 'tip'] = 0.1
df.loc[df['payment_type'] == 'Debit', 'tip'] = 0.15
df.loc[df['payment_type'] == 'Credit', 'tip'] = 0.25
df['tip'] = np.round(df['tip'] * df['order_amount'] + 0.5*df['order_distance'], 1)
if is_old_format:
df.loc[df['payment_type'] != 'Cash', 'payment_type'] = 'Card'
return df
df = create_dataset(5, False)
df
df = create_dataset(5, True)
df
df_new = create_dataset(5000, False)
df_old = create_dataset(1000*1000, True)
N_train = (int) (len(df_old) * 0.5)
N_eval = len(df_old) - N_train
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder
model = LinearRegression()
enc = OneHotEncoder(handle_unknown='ignore')
df_train = enc.fit_transform(df_old.loc[:N_train-1, df_old.columns != 'tip'])
model.fit(df_train, df_old.loc[:N_train-1, 'tip'])
df_eval = enc.transform(df_old.loc[N_train:, df_old.columns != 'tip'])
model.score(df_eval, df_old.loc[N_train:, 'tip'])
sizes = []
score_mean = []
score_stddev = []
for subset_size in range(100, 5000, 100): # 5000 here because that is the number of new examples we have
sizes.append(subset_size)
# compute variability of the eval metric at this subset size
scores = []
for x in range(1, 25):
indices = np.random.choice(N_eval, size=subset_size, replace=False)
scores.append(
model.score(df_eval[indices], df_old.loc[N_train+indices, 'tip'])
)
score_mean.append(np.mean(scores))
score_stddev.append(np.std(scores))
scores = pd.DataFrame.from_dict({
'eval_size': sizes,
'score': score_mean,
'stddev': score_stddev
})
scores.plot(x='eval_size', y='stddev', linewidth=2);
N_NEW = 2000
df_train_new = df_new.loc[N_NEW:, :]
counts = df_train_new.groupby('payment_type').count()
counts
num_credit = counts.loc['Credit', 'tip']
num_debit = counts.loc['Debit', 'tip']
num_gift = counts.loc['Gift', 'tip']
num_total = num_gift + num_debit + num_credit
imputed_value = [
0,
num_gift / num_total,
num_debit / num_total,
num_credit / num_total
]
print(imputed_value)
bridged_old = df_old.copy()
bridged_old['encoded_cash'] = 1 # default to cash
bridged_old['encoded_gift'] = 0 # non-cash columns default to 0
bridged_old['encoded_debit'] = 0 # non-cash columns default to 0
bridged_old['encoded_credit'] = 0 # non-cash columns default to 0
bridged_old.loc[bridged_old['payment_type'] == 'Card', 'encoded_cash'] = imputed_value[0]
bridged_old.loc[bridged_old['payment_type'] == 'Card', 'encoded_gift'] = imputed_value[1]
bridged_old.loc[bridged_old['payment_type'] == 'Card', 'encoded_debit'] = imputed_value[2]
bridged_old.loc[bridged_old['payment_type'] == 'Card', 'encoded_credit'] = imputed_value[3]
bridged_old.drop(columns='payment_type', inplace=True)
bridged_old[:10]
encoded_new = df_new.copy()
encoded_new['encoded_cash'] = 0 # default to zero
encoded_new['encoded_gift'] = 0 # default to zero
encoded_new['encoded_debit'] = 0 # default to zero
encoded_new['encoded_credit'] = 0 # default to zero
encoded_new.loc[encoded_new['payment_type'] == 'Cash', 'encoded_cash'] = 1 # one-hot encode
encoded_new.loc[encoded_new['payment_type'] == 'Gift', 'encoded_gift'] = 1 # one-hot encode
encoded_new.loc[encoded_new['payment_type'] == 'Debit', 'encoded_debit'] = 1 # one-hot encode
encoded_new.loc[encoded_new['payment_type'] == 'Credit', 'encoded_credit'] = 1 # one-hot encode
encoded_new.drop(columns='payment_type', inplace=True)
encoded_new[:5]
encoded_new_eval = encoded_new.loc[:N_NEW, :]
evalX = encoded_new_eval.loc[:, encoded_new_eval.columns != 'tip']
evalY = encoded_new_eval.loc[:, 'tip']
encoded_new_train = encoded_new.loc[N_NEW:, :]
# add N bridged examples from old and train.
bridged_examples = []
scores = []
for num_bridged in range(0, 1000*100, 1000):
bridged_examples.append(num_bridged)
df_train = pd.concat([encoded_new_train, bridged_old[:num_bridged]], ignore_index=True)
X = df_train.loc[:, df_train.columns != 'tip']
Y = df_train.loc[:, 'tip']
model.fit(X, Y)
scores.append(model.score(evalX, evalY))
scores = pd.DataFrame.from_dict({
'number_bridged': bridged_examples,
'eval_metric': scores
})
scores.plot(x='number_bridged', y='eval_metric', linewidth=2);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bridging
Step2: How many samples do we need to evaluate properly?
Step3: Looking at this, it is clear that (on this problem) 3500 eval samples would be ideal, but 2000 is tolerable.
Step4: Bridging schema using static approach
Step5: Prepare new data
Step6: Augmented training dataset
|
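The static bridging approach described above imputes the ambiguous `Card` label by the relative frequencies of the card subtypes seen in the new-format data. A minimal, self-contained sketch of that imputation (the counts here are hypothetical, not the dataset's actual counts):

```python
import numpy as np
import pandas as pd

# Hypothetical subtype counts observed in the new-format data.
counts = {'Gift': 100, 'Debit': 300, 'Credit': 600}
total = sum(counts.values())

# Frequency-based imputed values for the ambiguous 'Card' label.
imputed = {k: v / total for k, v in counts.items()}

old = pd.DataFrame({'payment_type': ['Cash', 'Card', 'Card']})
old['encoded_cash'] = (old['payment_type'] == 'Cash').astype(float)
for cat in ('Gift', 'Debit', 'Credit'):
    # Each 'Card' row gets the subtype's observed frequency instead of 0/1.
    old[f'encoded_{cat.lower()}'] = np.where(
        old['payment_type'] == 'Card', imputed[cat], 0.0)
print(old)
```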
6,163
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, RMSprop
from keras.utils import np_utils
from keras.regularizers import l2
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 28**2).astype('float32') / 255
X_test = X_test.reshape(10000, 28**2).astype('float32') / 255
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
model = Sequential()
model.add(Dense(512, input_shape=(28 * 28,)))
model.add(Activation("sigmoid"))
model.add(Dense(10))
sgd = SGD(lr = 0.01, momentum = 0.9, nesterov = True)
model.compile(loss='mse', optimizer=sgd)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=10,
verbose=1, show_accuracy=True, validation_split=0.1)
print("Test classification rate %0.05f" % model.evaluate(X_test, Y_test, show_accuracy=True)[1])
y_hat = model.predict_classes(X_test)
pd.crosstab(y_hat, y_test)
model = Sequential()
model.add(Dense(512, input_shape=(28 * 28,), init="glorot_normal"))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(512, init="glorot_normal"))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(512, init="glorot_normal"))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(512, init="glorot_normal"))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
sgd = SGD(lr = 0.01, momentum = 0.9, nesterov = True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=10,
verbose=1, show_accuracy=True, validation_split=0.1)
print("Test classification rate %0.05f" % model.evaluate(X_test, Y_test, show_accuracy=True)[1])
y_hat = model.predict_classes(X_test)
pd.crosstab(y_hat, y_test)
test_wrong = [im for im in zip(X_test,y_hat,y_test) if im[1] != im[2]]
plt.figure(figsize=(15, 15))
for ind, val in enumerate(test_wrong[:100]):
plt.subplot(10, 10, ind + 1)
im = 1 - val[0].reshape((28,28))
axis("off")
plt.imshow(im, cmap='gray')
model = Sequential()
model.add(Dense(16, input_shape=(28 * 28,), init="glorot_normal"))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(16, init="glorot_normal"))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=10,
verbose=1, show_accuracy=True, validation_split=0.1)
print(model.layers) # list of the layers
print(model.layers[0].get_weights()[0].shape) # the weights
W1 = model.layers[0].get_weights()[0]
for ind, val in enumerate(W1.T):
plt.figure(figsize=(3, 3), frameon=False)
im = val.reshape((28,28))
plt.axis("off")
plt.imshow(im, cmap='seismic')
W2 = model.layers[3].get_weights()[0]
plt.figure(figsize=(3, 3))
im = W2.reshape((16,16))
plt.axis("off")
plt.imshow(im, cmap='seismic')
model = Sequential()
model.add(Dense(128, input_shape=(28 * 28,), init="glorot_normal"))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(512, init="glorot_normal",W_regularizer=l2(0.1)))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(512, init="glorot_normal",W_regularizer=l2(0.1)))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=5,
verbose=1, show_accuracy=True, validation_split=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the MNIST dataset, flatten the images, convert the class labels, and scale the data.
Step2: I. Basic example
Step3: Fit the model over 10 epochs.
Step4: Evaluate model on the test set
Step5: Predict classes on the test set.
Step6: II. Deeper model with dropout and cross entropy
Step7: III. Small model
Step8: The classification rate on the validation set is not nearly as predictive, but it is still not too bad overall. A model object contains a list of its layers. The weights are easy to pull out.
Step9: The first set of weights will be given as weights the same size as the input space. Notice how
Step10: The second layer of weights will be given as a single 16x16 matrix of weights.
Step11: IV. Further tweaks
|
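The deeper model above relies on dropout for regularization; the mechanism itself can be sketched in plain NumPy. This is inverted dropout (kept units are scaled so the expected activation is unchanged at inference time, the variant modern frameworks implement), not Keras internals:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, training=True):
    # Zero out units with probability `rate`; scale survivors by
    # 1/(1-rate) so E[output] == E[input], making inference a no-op.
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 4))
y = dropout(x, rate=0.5)
print(y)  # entries are either 0.0 or 2.0
```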
6,164
|
<ASSISTANT_TASK:>
Python Code:
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import numpy as np
import nsfg
import first
import thinkstats2
import thinkplot
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
hist = thinkstats2.Hist(live.birthwgt_lb, label="birthwgt_lb")
thinkplot.Hist(hist)
thinkplot.Config(xlabel="Birth weight (pounds)", ylabel="Count")
n = hist.Total()
pmf = hist.Copy()
for x, freq in hist.Items():
pmf[x] = freq / n
thinkplot.Hist(pmf)
thinkplot.Config(xlabel="Birth weight (pounds)", ylabel="PMF")
pmf = thinkstats2.Pmf([1, 2, 2, 3, 5])
pmf
pmf.Prob(2)
pmf[2]
pmf.Incr(2, 0.2)
pmf[2]
pmf.Mult(2, 0.5)
pmf[2]
pmf.Total()
pmf.Normalize()
pmf.Total()
pmf = thinkstats2.Pmf(live.prglngth, label="prglngth")
thinkplot.Hist(pmf)
thinkplot.Config(xlabel="Pregnancy length (weeks)", ylabel="Pmf")
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel="Pregnancy length (weeks)", ylabel="Pmf")
live, firsts, others = first.MakeFrames()
first_pmf = thinkstats2.Pmf(firsts.prglngth, label="firsts")
other_pmf = thinkstats2.Pmf(others.prglngth, label="others")
width = 0.45
axis = [27, 46, 0, 0.6]
thinkplot.PrePlot(2, cols=2)
thinkplot.Hist(first_pmf, align="right", width=width)
thinkplot.Hist(other_pmf, align="left", width=width)
thinkplot.Config(xlabel="Pregnancy length(weeks)", ylabel="PMF", axis=axis)
thinkplot.PrePlot(2)
thinkplot.SubPlot(2)
thinkplot.Pmfs([first_pmf, other_pmf])
thinkplot.Config(xlabel="Pregnancy length(weeks)", axis=axis)
weeks = range(35, 46)
diffs = []
for week in weeks:
p1 = first_pmf.Prob(week)
p2 = other_pmf.Prob(week)
diff = 100 * (p1 - p2)
diffs.append(diff)
thinkplot.Bar(weeks, diffs)
thinkplot.Config(xlabel='Pregnancy length(weeks)', ylabel='Difference (percentage points)')
d = {7: 8, 12: 8, 17: 14, 22: 4, 27: 6, 32: 12, 37: 8, 42: 3, 47: 2}
pmf = thinkstats2.Pmf(d, label="actual")
def BiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
biased_pmf = BiasPmf(pmf, label="observed")
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased_pmf])
thinkplot.Config(xlabel="Class size", ylabel="PMF")
print("Actual mean", pmf.Mean())
print("Observed mean", biased_pmf.Mean())
def UnbiasPmf(pmf, label=None):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf[x] *= 1 / x
new_pmf.Normalize()
return new_pmf
unbiased = UnbiasPmf(biased_pmf, label="unbiased")
print("Unbiased mean", unbiased.Mean())
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, unbiased])
thinkplot.Config(xlabel="Class size", ylabel="PMF")
import numpy as np
import pandas
array = np.random.randn(4, 2)
df = pandas.DataFrame(array)
df
columns = ["A", "B"]
df = pandas.DataFrame(array, columns=columns)
df
index = ["a", "b", "c", "d"]
df = pandas.DataFrame(array, columns=columns, index=index)
df
df["A"]
df.loc["a"]
df.iloc[0]
indices = ["a", "c"]
df.loc[indices]
df["a":"c"]
df[0:2]
def PmfMean(pmf):
"""Computes the mean of a PMF.
Returns:
float mean
"""
return sum(p * x for x, p in pmf.Items())
def PmfVar(pmf, mu=None):
"""Computes the variance of a PMF.
mu: the point around which the variance is computed;
if omitted, computes the mean
returns: float variance
"""
if mu is None:
mu = PmfMean(pmf)
return sum(p * (x - mu) ** 2 for x, p in pmf.Items())
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dct")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dat.gz")
resp = nsfg.ReadFemResp()
# Solution
pmf = thinkstats2.Pmf(resp.numkdhh, label="numkdhh")
# Solution
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel="Number of children", ylabel="PMF")
# Solution
biased = BiasPmf(pmf, label="biased")
# Solution
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased])
thinkplot.Config(xlabel="Number of children", ylabel="PMF")
# Solution
pmf.Mean()
# Solution
biased.Mean()
live, firsts, others = first.MakeFrames()
preg_map = nsfg.MakePregMap(live)
# Solution
hist = thinkstats2.Hist()
for caseid, indices in preg_map.items():
if len(indices) >= 2:
pair = preg.loc[indices[0:2]].prglngth
diff = np.diff(pair)[0]
hist[diff] += 1
# Solution
thinkplot.Hist(hist)
# Solution
pmf = thinkstats2.Pmf(hist)
pmf.Mean()
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/relay.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/Apr25_27thAn_set1.shtml")
import relay
results = relay.ReadResults()
speeds = relay.GetSpeeds(results)
speeds = relay.BinData(speeds, 3, 12, 100)
pmf = thinkstats2.Pmf(speeds, "actual speeds")
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel="Speed (mph)", ylabel="PMF")
# Solution
def ObservedPmf(pmf, speed, label=None):
"""Returns a new Pmf representing speeds observed at a given speed.
The chance of observing a runner is proportional to the difference
in speed.
Args:
pmf: distribution of actual speeds
speed: speed of the observing runner
label: string label for the new dist
Returns:
Pmf object
"""
new = pmf.Copy(label=label)
for val in new.Values():
diff = abs(val - speed)
new[val] *= diff
new.Normalize()
return new
# Solution
biased = ObservedPmf(pmf, 7, label="observed speeds")
thinkplot.Pmf(biased)
thinkplot.Config(xlabel="Speed (mph)", ylabel="PMF")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Again, I'll load the NSFG pregnancy file and select live births
Step2: Here's the histogram of birth weights
Step3: To normalize the distribution, we could divide through by the total count
Step4: The result is a Probability Mass Function (PMF).
Step5: More directly, we can create a Pmf object.
Step6: Pmf provides Prob, which looks up a value and returns its probability
Step7: The bracket operator does the same thing.
Step8: The Incr method adds to the probability associated with a given values.
Step9: The Mult method multiplies the probability associated with a value.
Step10: Total returns the total probability (which is no longer 1, because we changed one of the probabilities).
Step11: Normalize divides through by the total probability, making it 1 again.
Step12: Here's the PMF of pregnancy length for live births.
Step13: Here's what it looks like plotted with Hist, which makes a bar graph.
Step14: Here's what it looks like plotted with Pmf, which makes a step function.
Step15: We can use MakeFrames to return DataFrames for all live births, first babies, and others.
Step16: Here are the distributions of pregnancy length.
Step17: And here's the code that replicates one of the figures in the chapter.
Step18: Here's the code that generates a plot of the difference in probability (in percentage points) between first babies and others, for each week of pregnancy (showing only pregnancies considered "full term").
Step19: Biasing and unbiasing PMFs
Step20: This function computes the biased PMF we would get if we surveyed students and asked about the size of the classes they are in.
Step21: The following graph shows the difference between the actual and observed distributions.
Step22: The observed mean is substantially higher than the actual.
Step23: If we were only able to collect the biased sample, we could "unbias" it by applying the inverse operation.
Step24: We can unbias the biased PMF
Step25: And plot the two distributions to confirm they are the same.
Step26: Pandas indexing
Step27: We can specify column names when we create the DataFrame
Step28: We can also specify an index that contains labels for the rows.
Step29: Normal indexing selects columns.
Step30: We can use the loc attribute to select rows.
Step31: If you don't want to use the row labels and prefer to access the rows using integer indices, you can use the iloc attribute
Step32: loc can also take a list of labels.
Step33: If you provide a slice of labels, DataFrame uses it to select rows.
Step34: If you provide a slice of integers, DataFrame selects rows by integer index.
Step37: But notice that one method includes the last elements of the slice and one does not.
Step38: Exercise
Step39: Exercise
Step41: Exercise
|
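The biasing/unbiasing steps above can be reproduced without `thinkstats2`, using plain dictionaries for the PMF (same class-size data as in the chapter):

```python
d = {7: 8, 12: 8, 17: 14, 22: 4, 27: 6, 32: 12, 37: 8, 42: 3, 47: 2}
n = sum(d.values())
pmf = {x: c / n for x, c in d.items()}

def bias(pmf):
    # A student's chance of reporting class size x is proportional to x.
    new = {x: p * x for x, p in pmf.items()}
    total = sum(new.values())
    return {x: p / total for x, p in new.items()}

def unbias(pmf):
    # Inverse operation: divide each probability by x, then renormalize.
    new = {x: p / x for x, p in pmf.items()}
    total = sum(new.values())
    return {x: p / total for x, p in new.items()}

def mean(pmf):
    return sum(x * p for x, p in pmf.items())

biased = bias(pmf)
print(round(mean(pmf), 2), round(mean(biased), 2))
```

Unbiasing the biased PMF recovers the actual mean, confirming the two operations are inverses.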
6,165
|
<ASSISTANT_TASK:>
Python Code:
import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas as pd
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '337'
NEW_VERSION = '338'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
# Load all ROME datasets for the two versions we compare.
VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])
rome_data = [VersionedDataset(
basename=path.basename(f),
old=pd.read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=pd.read_csv(f))
for f in sorted(new_version_files)]
def find_rome_dataset_by_name(data, partial_name):
for dataset in data:
if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
for dataset in rome_data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
same_row_count_files = 0
for dataset in rome_data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d}/{:d} values added in {}'.format(
diff, len(dataset.new.index), dataset.basename))
elif diff < 0:
print('{:d}/{:d} values removed in {}'.format(
-diff, len(dataset.old.index), dataset.basename))
else:
same_row_count_files += 1
print('{:d}/{:d} files with the same number of rows'.format(
same_row_count_files, len(rome_data)))
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr)
stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION));
pd.options.display.max_colwidth = 2000
jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']]
items = find_rome_dataset_by_name(rome_data, 'item')
new_items = set(items.new.code_ogr) - set(items.old.code_ogr)
obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr)
stable_items = set(items.new.code_ogr) & set(items.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION));
items.new[items.new.code_ogr.isin(new_items)].head()
items.old[items.old.code_ogr.isin(obsolete_items)].head()
links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')
old_links_on_stable_items = links.old[links.old.code_ogr.isin(stable_items)]
new_links_on_stable_items = links.new[links.new.code_ogr.isin(stable_items)]
old = old_links_on_stable_items[['code_rome', 'code_ogr']]
new = new_links_on_stable_items[['code_rome', 'code_ogr']]
links_merged = old.merge(new, how='outer', indicator=True)
links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'})
links_merged._diff.value_counts()
job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').new.set_index('code_rome').libelle_rome
item_names = items.new.set_index('code_ogr').libelle.drop_duplicates()
links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names)
links_merged['item_name'] = links_merged.code_ogr.map(item_names)
display(links_merged[links_merged._diff == 'removed'].dropna().head(5))
links_merged[links_merged._diff == 'added'].dropna().head(5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First let's check if there are new or deleted files (only matching by file names).
Step2: So we have the same set of files in both versions
Step3: Let's make sure the structure hasn't changed
Step4: All files have the same columns as before
Step5: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
Step6: Alright, so the only change seems to be 4 new jobs added. Let's take a look (only showing interesting fields)
Step7: These seems to be refinements of existing jobs, but that's fine.
Step8: As anticipated it is a very minor change (hard to see it visually)
Step9: The new ones seem legit to me. Let's check the obsolete one
Step10: Hmm, it seems to be simple renaming, but they preferred to create a new one and retire the old one.
Step11: So in addition to the added and removed items, there are a few fixes. Let's have a look at them
|
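The diffing pattern used above — an outer merge with `indicator=True`, then mapping `left_only`/`right_only` onto removed/added — works on any pair of link tables. A minimal sketch with made-up codes:

```python
import pandas as pd

old = pd.DataFrame({'code_rome': ['A', 'A', 'B'], 'code_ogr': [1, 2, 3]})
new = pd.DataFrame({'code_rome': ['A', 'B', 'B'], 'code_ogr': [1, 3, 4]})

# Outer merge on the shared columns; `_merge` records each row's origin.
merged = old.merge(new, how='outer', indicator=True)
merged['_diff'] = merged['_merge'].map(
    {'left_only': 'removed', 'right_only': 'added'})
print(merged[['code_rome', 'code_ogr', '_diff']])
```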
6,166
|
<ASSISTANT_TASK:>
Python Code:
temperature = float(input("Please enter the temperature: "))
if temperature<15:
print("It is too cold.")
print("Turn up the heating.")
temperature = float(input("Please enter the temperature: "))
if temperature<15:
print("It is too cold.")
print("Turn up the heating.")
else:
print("You can turn the heating off now")
temperature = float(input("Please enter the temperature: "))
if temperature<15:
print("It is too cold.")
print("Turn up the heating.")
elif temperature<25:
print("Please turn the heating off")
print("Please turn the air-con off")
else:
print("It is too hot.")
print("Please turn on the air-con")
temperature = float(input("Please enter the temperature: "))
if temperature<15:
print("It is too cold.")
print("Turn up the heating.")
elif temperature<25:
print("Please turn the heating off")
print("Please turn the air-con off")
elif temperature<35:
print("It is too hot.")
print("Please turn on the air-con")
else:
print("HMGW: Holey Moley Global Warming")
print("Break out the barbecue!")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you've done it right, you should see a message to turn up heating the first time, but no message the second time
Step2: If you've done it right, you should see an appropriate message for your selected temperatures
Step3: If you've done it right you should see the following outputs, one at a time
|
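The if/elif/else chain above can also be wrapped in a function so each temperature band is easy to test (a refactor sketch, not part of the original exercise):

```python
def heating_advice(temperature):
    # Bands mirror the chain above; each elif is only reached
    # when all earlier conditions were False.
    if temperature < 15:
        return "Turn up the heating."
    elif temperature < 25:
        return "Please turn the heating off"
    elif temperature < 35:
        return "Please turn on the air-con"
    else:
        return "Break out the barbecue!"

for t in (10, 20, 30, 40):
    print(t, heating_advice(t))
```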
6,167
|
<ASSISTANT_TASK:>
Python Code:
DATE = "170530" # "170704", "170530"
PLATE = "SI0012"
CONF = "conf170511mpc" # "conf170623mpc", "conf170511mpc"
QUADRANTS = [1] # [1, 2, 3, 4]
WRITE_PKL = False
UPDATE_SIMILAR = False
UPDATE_DATASTORE = False
for quadrant in QUADRANTS:
SRC_DIR = "/home/pahl/comas/projects/painting/{}-{}-{}_{}".format(DATE, PLATE, quadrant, CONF)
REPORTNAME = "report_{}-{}".format(PLATE, quadrant)
# REPORTNAME = "report"
keep = ["Compound_Id", "Container_Id", "Producer", "Conc_uM", "Activity", "Toxic", "Pure_Flag", "Rel_Cell_Count",
'Act_Profile', "Metadata_Well", "Plate", 'Smiles']
data_keep = keep.copy()
cpt.create_dirs(op.join(REPORTNAME, "details"))
print("\nProcessing plate {}_{}-{}_{} ...".format(DATE, PLATE, quadrant, CONF))
ds_plate = cpp.load(op.join(SRC_DIR, "Results.tsv"))
ds_plate = ds_plate.group_on_well()
ds_plate = ds_plate.remove_skipped_echo_direct_transfer(op.join(SRC_DIR, "*_print.xml"))
ds_plate = ds_plate.well_type_from_position()
ds_plate = ds_plate.flag_toxic()
ds_plate = ds_plate.activity_profile()
ds_plate = ds_plate.join_layout_1536(PLATE, quadrant)
ds_plate.data["Plate"] = "{}-{}-{}".format(DATE, PLATE, quadrant)
ds_plate = ds_plate.join_smiles()
ds_profile = ds_plate[keep]
if UPDATE_SIMILAR:
ds_profile.update_similar_refs(write=False)
if WRITE_PKL:
ds_profile.write_pkl("{}-{}-{}_profile.pkl".format(DATE, PLATE, quadrant))
# ds_profile = cpp.load_pkl("{}-{}-{}_profile.pkl".format(DATE, PLATE, quadrant))
ds_report = ds_profile.sort_values(["Toxic", "Activity"], ascending=[True, False])
# ds_report = ds_profile.remove_toxic()[0].sort_values("Activity", ascending = False)
# ds_report.data = ds_report.data.head(10)
cpr.full_report(ds_report, SRC_DIR, report_name=REPORTNAME,
plate="{}-{}".format(PLATE, quadrant), highlight=True)
if UPDATE_DATASTORE:
ds_profile.update_datastore(mode="cpd", write=False)
if UPDATE_SIMILAR:
cpp.write_sim_refs()
if UPDATE_DATASTORE:
cpp.write_datastore()
cpp.write_datastore()
cpp.clear_resources()
DATE = "170530" # "170704", "170530"
PLATE = "SI0012"
CONF = "conf170511mpc" # "conf170623mpc", "conf170511mpc"
QUADRANTS = [1] # [1, 2, 3, 4]
for quadrant in QUADRANTS:
SRC_DIR = "/home/pahl/comas/projects/painting/{}-{}-{}_{}".format(DATE, PLATE, quadrant, CONF)
REPORTNAME = "report_{}-{}".format(PLATE, quadrant)
# REPORTNAME = "report"
cpt.create_dirs(op.join(REPORTNAME, "details"))
print("\nProcessing plate {}_{}-{}_{} ...".format(DATE, PLATE, quadrant, CONF))
ds_profile = cpp.load_pkl("{}-{}-{}_profile.pkl".format(DATE, PLATE, quadrant))
ds_report = ds_profile.sort_values(["Toxic", "Activity"], ascending=[True, False])
# ds_report = ds_profile.remove_toxic()[0].sort_values("Activity", ascending = False)
# ds_report.data = ds_report.data.head(10)
cpr.full_report(ds_report, SRC_DIR, report_name=REPORTNAME,
plate="{}-{}".format(PLATE, quadrant), highlight=True)
REF_DIR = "/home/pahl/comas/projects/painting/references"
PLATE_NAMES = ["S0195", "S0198", "S0203"] # "S0195", "S0198", "S0203"
DATES = {"S0195": "170523", "S0198": "170516", "S0203": "170512"}
REPORTNAME = "references"
cpt.create_dirs(op.join(REPORTNAME, "details"))
pb = nbt.ProgressbarJS()
ds_ref = cpp.load("references_act_prof.tsv")
num_steps = 4 * len(PLATE_NAMES)
step = 0
for plate in PLATE_NAMES:
for idx in range(1, 5):
step += 1
pb.update(100 * step / num_steps)
SRC_DIR = "{}/{}-{}".format(REF_DIR, plate, idx)
print("\nProcessing plate {}-{} ...".format(plate, idx))
ds_profile = ds_ref[ds_ref["Plate"] == "{}-{}-{}".format(DATES[plate], plate, idx)].copy()
ds_report = ds_profile.sort_values(["Toxic", "Activity"], ascending=[True, False])
cpr.full_report(ds_profile, SRC_DIR, report_name=REPORTNAME,
plate="{}-{}".format(plate, idx), highlight=True, mode="ref")
pb.done()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Report Current Plate with Existing Data
Step2: Reference Plates
|
6,168
|
<ASSISTANT_TASK:>
Python Code:
# Prepare my slides
%pylab inline
%cd working
!pncaqsraw4pnceval.py --help
from shapely.wkt import loads
geom = loads("POLYGON ((30 10, 40 35, 20 40, 10 20, 30 10))")
x, y = geom.exterior.xy
plt.plot(x, y, ls = '-', marker = 'o')
!pncaqsraw4pnceval.py -O --timeresolution=daily \
--start-date 2013-05-01 --end-date 2013-07-01 \
--wktpolygon "POLYGON ((-181.25 0, 178.75 0, 178.75 90, -181.25 90, -181.25 0))"
%ls -l AQS_DATA_20130501-20130701.nc
!pncdump.py --header AQS_DATA_20130501-20130701.nc
!pncgen -O -f "bpch,vertgrid='GEOS-5-NATIVE',nogroup=('IJ-AVG-$',)" \
--extract-file AQS_DATA_20130501-20130701.nc --stack=time -v O3 -s layer72,0 \
bpch/ctm.bpch.v10-01-public-Run0.2013050100 \
bpch/ctm.bpch.v10-01-public-Run0.2013050100 \
bpch_aqs_extract.nc
!pncdump.py --header bpch_aqs_extract.nc
!pnceval.py --help
%%bash
pnceval.py --funcs NO,NP,NOP,MO,MP,MB,RMSE,IOA,AC -v O3 \
--pnc " --expr O3=Ozone*1000;O3.units=\'ppb\' -r time,mean AQS_DATA_20130501-20130701.nc" \
--pnc " -r time,mean bpch_aqs_extract.nc"
from PseudoNetCDF import pnceval
help(pnceval)
from PseudoNetCDF import PNC, pnceval
aqs = PNC("--reduce=time,mean", "--expr=O3=Ozone*1000", "AQS_DATA_20130501-20130701.nc")
geos = PNC("--reduce=time,mean", "bpch_aqs_extract.nc")
aqso3 = aqs.ifiles[0].variables['O3']
geoso3 = geos.ifiles[0].variables['O3']
print(aqso3.shape)
print(geoso3.shape)
print(pnceval.RMSE(aqso3, geoso3))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Process AQS for evaluation
Step2: wktpolygon
Step3: CHECK POINT
Step4: Review Output
Step5: Extract GEOS-Chem at AQS
Step6: Reproduced in Python
|
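`pnceval` computes the listed statistics (NO, NP, MB, RMSE, IOA, ...) over paired observation/model arrays; the statistics themselves are simple. A plain-NumPy sketch of RMSE and mean bias (the function names and sample values here are illustrative, not the pnceval API):

```python
import numpy as np

def rmse(obs, mod):
    # Root-mean-square error of model values against observations.
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def mean_bias(obs, mod):
    # Average signed difference; positive means the model overpredicts.
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.mean(mod - obs))

obs = [30.0, 40.0, 50.0]  # e.g. observed ozone, ppb
mod = [32.0, 38.0, 53.0]  # e.g. modeled ozone, ppb
print(rmse(obs, mod), mean_bias(obs, mod))
```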
6,169
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import traceback
import contextlib
# Some helper code to demonstrate the kinds of errors you might encounter.
@contextlib.contextmanager
def assert_raises(error_class):
try:
yield
except error_class as e:
print('Caught expected exception \n {}:'.format(error_class))
traceback.print_exc(limit=2)
except Exception as e:
raise e
else:
raise Exception('Expected {} to be raised but no error was raised!'.format(
error_class))
@tf.function
def add(a, b):
return a + b
add(tf.ones([2, 2]), tf.ones([2, 2])) # [[2., 2.], [2., 2.]]
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
result = add(v, 1.0)
tape.gradient(result, v)
@tf.function
def dense_layer(x, w, b):
return add(tf.matmul(x, w), b)
dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))
import timeit
conv_layer = tf.keras.layers.Conv2D(100, 3)
@tf.function
def conv_fn(image):
return conv_layer(image)
image = tf.zeros([1, 200, 200, 100])
# warm up
conv_layer(image); conv_fn(image)
print("Eager conv:", timeit.timeit(lambda: conv_layer(image), number=10))
print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10))
print("Note how there's not much difference in performance for convolutions")
# Functions are polymorphic
@tf.function
def double(a):
print("Tracing with", a)
return a + a
print(double(tf.constant(1)))
print()
print(double(tf.constant(1.1)))
print()
print(double(tf.constant("a")))
print()
# This doesn't print 'Tracing with ...'
print(double(tf.constant("b")))
print(double.pretty_printed_concrete_signatures())
print("Obtaining concrete trace")
double_strings = double.get_concrete_function(tf.constant("a"))
print("Executing traced function")
print(double_strings(tf.constant("a")))
print(double_strings(a=tf.constant("b")))
# You can also call get_concrete_function on an InputSpec
double_strings_from_inputspec = double.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.string))
print(double_strings_from_inputspec(tf.constant("c")))
print(double_strings)
print(double_strings.structured_input_signature)
print(double_strings.structured_outputs)
with assert_raises(tf.errors.InvalidArgumentError):
double_strings(tf.constant(1))
@tf.function
def pow(a, b):
return a ** b
square = pow.get_concrete_function(a=tf.TensorSpec(None, tf.float32), b=2)
print(square)
assert square(tf.constant(10.0)) == 100
with assert_raises(TypeError):
square(tf.constant(10.0), b=3)
graph = double_strings.graph
for node in graph.as_graph_def().node:
print(f'{node.input} -> {node.name}')
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def next_collatz(x):
print("Tracing with", x)
return tf.where(x % 2 == 0, x // 2, 3 * x + 1)
print(next_collatz(tf.constant([1, 2])))
# We specified a 1-D tensor in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([[1, 2], [3, 4]]))
# We specified an int32 dtype in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([1.0, 2.0]))
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def g(x):
print('Tracing with', x)
return x
# No retrace!
print(g(tf.constant([1, 2, 3])))
print(g(tf.constant([1, 2, 3, 4, 5])))
def train_one_step():
pass
@tf.function
def train(num_steps):
print("Tracing with num_steps = ", num_steps)
tf.print("Executing with num_steps = ", num_steps)
for _ in tf.range(num_steps):
train_one_step()
print("Retracing occurs for different Python arguments.")
train(num_steps=10)
train(num_steps=20)
print()
print("Traces are reused for Tensor arguments.")
train(num_steps=tf.constant(10))
train(num_steps=tf.constant(20))
def f():
print('Tracing!')
tf.print('Executing')
tf.function(f)()
tf.function(f)()
@tf.function
def f(x):
print("Traced with", x)
tf.print("Executed with", x)
f(1)
f(1)
f(2)
external_var = tf.Variable(0)
@tf.function
def buggy_consume_next(iterator):
external_var.assign_add(next(iterator))
tf.print("Value of external_var:", external_var)
iterator = iter([0, 1, 2, 3])
buggy_consume_next(iterator)
# This reuses the first value from the iterator, rather than consuming the next value.
buggy_consume_next(iterator)
buggy_consume_next(iterator)
external_list = []
def side_effect(x):
print('Python side effect')
external_list.append(x)
@tf.function
def f(x):
tf.py_function(side_effect, inp=[x], Tout=[])
f(1)
f(1)
f(1)
# The list append happens all three times!
assert len(external_list) == 3
# The list contains tf.constant(1), not 1, because py_function casts everything to tensors.
assert external_list[0].numpy() == 1
@tf.function
def f(x):
v = tf.Variable(1.0)
v.assign_add(x)
return v
with assert_raises(ValueError):
f(1.0)
class Count(tf.Module):
def __init__(self):
self.count = None
@tf.function
def __call__(self):
if self.count is None:
self.count = tf.Variable(0)
return self.count.assign_add(1)
c = Count()
print(c())
print(c())
external_var = tf.Variable(3)
@tf.function
def f(x):
return x * external_var
traced_f = f.get_concrete_function(4)
print("Calling concrete function...")
print(traced_f(4))
del external_var
print()
print("Calling concrete function after garbage collecting its closed Variable...")
with assert_raises(tf.errors.FailedPreconditionError):
traced_f(4)
# Simple loop
@tf.function
def f(x):
while tf.reduce_sum(x) > 1:
tf.print(x)
x = tf.tanh(x)
return x
f(tf.random.uniform([5]))
print(tf.autograph.to_code(f.python_function))
@tf.function
def fizzbuzz(n):
for i in tf.range(1, n + 1):
print('Tracing for loop')
if i % 15 == 0:
print('Tracing fizzbuzz branch')
tf.print('fizzbuzz')
elif i % 3 == 0:
print('Tracing fizz branch')
tf.print('fizz')
elif i % 5 == 0:
print('Tracing buzz branch')
tf.print('buzz')
else:
print('Tracing default branch')
tf.print(i)
fizzbuzz(tf.constant(5))
fizzbuzz(tf.constant(20))
def measure_graph_size(f, *args):
g = f.get_concrete_function(*args).graph
print("{}({}) contains {} nodes in its graph".format(
f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))
@tf.function
def train(dataset):
loss = tf.constant(0)
for x, y in dataset:
loss += tf.abs(y - x) # Some dummy computation.
return loss
small_data = [(1, 1)] * 3
big_data = [(1, 1)] * 10
measure_graph_size(train, small_data)
measure_graph_size(train, big_data)
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: small_data, (tf.int32, tf.int32)))
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: big_data, (tf.int32, tf.int32)))
batch_size = 2
seq_len = 3
feature_size = 4
def rnn_step(inp, state):
return inp + state
@tf.function
def dynamic_rnn(rnn_step, input_data, initial_state):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
max_seq_len = input_data.shape[0]
states = tf.TensorArray(tf.float32, size=max_seq_len)
state = initial_state
for i in tf.range(max_seq_len):
state = rnn_step(input_data[i], state)
states = states.write(i, state)
return tf.transpose(states.stack(), [1, 0, 2])
dynamic_rnn(rnn_step,
tf.random.uniform([batch_size, seq_len, feature_size]),
tf.zeros([batch_size, feature_size]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Better performance with tf.function
Step2: Define a helper function to demonstrate the kinds of errors you might encounter:
Step3: Basics
Step4: Functions can be nested inside other Functions.
Step5: Functions can be faster than eager code, especially for graphs with many small ops. But for graphs with a few expensive ops (like convolutions), the speedup is less noticeable.
Step6: Tracing
Step7: Note that if you repeatedly call a Function with the same argument types, TensorFlow will reuse the previously traced graph, since the later calls would generate an identical graph.
Step8: (The following changes are available in TensorFlow Nightly and will ship in TensorFlow 2.3.)
Step9: So far, you have seen that tf.function creates a cached, dynamic dispatch layer on top of TensorFlow's graph-tracing logic. To be more specific about the terminology:
Step10: (The following changes are available in TensorFlow Nightly and will ship in TensorFlow 2.3.)
Step11: You can also retrieve a concrete function's signature directly.
Step12: Using a concrete trace with incompatible types raises an error.
Step13: You may notice that Python arguments are given special treatment in a concrete function's input signature. Prior to TensorFlow 2.3, Python arguments were simply removed from the concrete function's signature. Starting with TensorFlow 2.3, Python arguments remain in the signature, but are constrained to take the value set during tracing.
Step14: Obtaining graphs
Step15: Debugging
Step16: Specifying a [None] dimension in tf.TensorSpec allows for flexibility in trace reuse.
Step17: Cast Python arguments to tensors to reduce retracing.
Step18: If you need to force retracing, create a new Function. Separate Function objects are guaranteed not to share traces.
Step19: Python side effects
Step20: Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in eager mode, many unexpected things can happen inside a tf.function because of tracing:
Step21: Some iteration constructs are supported through AutoGraph. See the section on AutoGraph transformations for an overview.
Step22: Variables
Step23: You can also create variables inside a Function, but only if those variables are created the first time the function is executed.
Step24: Another error you may encounter is a garbage-collected variable. Unlike regular Python functions, concrete functions only keep a weak reference to the variables they close over, so you must retain a reference to any variables yourself.
Step25: AutoGraph transformations
Step26: If you're curious, you can inspect the code AutoGraph generates.
Step27: Conditionals
Step28: See the reference documentation for additional restrictions on AutoGraph-converted if statements.
Step29: When wrapping Python/NumPy data in a Dataset, be mindful of tf.data.Dataset.from_generator versus tf.data.Dataset.from_tensors. The former keeps the data in Python and fetches it via tf.py_function, which can have performance implications, whereas the latter bundles a copy of the data as one large tf.constant() node in the graph, which can have memory implications.
|
6,170
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
pylab.style.use('ggplot')
import numpy as np
import pandas as pd
import cv2
import os
image_dir = os.path.join(os.getcwd(), 'font_images')
if not os.path.isdir(image_dir) or len(os.listdir(image_dir)) == 0:
print('no images found in {}'.format(image_dir))
img_mat = cv2.imread(os.path.join(image_dir, 'arial.png'))
# Convert to grayscale
gs = cv2.cvtColor(img_mat, cv2.COLOR_BGR2GRAY)
gs.shape
pylab.imshow(gs, cmap='gray')
pylab.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom='off', top='off', left='off', right='off', # don't display ticks
labelbottom='off', labeltop='off', labelleft='off', labelright='off' # don't display ticklabels
)
# Partition the columns into 10 equal parts
split_positions = np.linspace(0, gs.shape[1], num=12).astype(np.int)
split_positions = split_positions[1:-1]
# manual tweak by inspection
split_positions[0] += 10
split_positions
parts = np.array_split(gs, split_positions, axis=1)
fig, axes = pylab.subplots(1, len(parts))
for part, ax in zip(parts, axes):
ax.imshow(part, cmap='gray')
ax.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom='off', top='off', left='off', right='off', # don't display ticks
labelbottom='off', labeltop='off', labelleft='off', labelright='off' # don't display ticklabels
)
fig, axes = pylab.subplots(1, len(parts))
binarized = []
for ax, p in zip(axes, parts):
resized = cv2.resize(p, (32, 32))
_, bin_img = cv2.threshold(resized, 127, 255, cv2.THRESH_BINARY)
binarized.append(bin_img)
ax.imshow(bin_img, cmap='gray')
ax.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom='off', top='off', left='off', right='off', # don't display ticks
labelbottom='off', labeltop='off', labelleft='off', labelright='off' # don't display ticklabels
)
def calc_on_pixel_fraction(part_img):
# Note that on pixel == 0, off pixel == 255
_, counts = np.unique(part_img, return_counts=True)
return counts[0] / counts[1]
on_pixel_fractions = [calc_on_pixel_fraction(p) for p in binarized]
on_pixel_fractions = pd.Series(on_pixel_fractions, index=list('0123456789,'))
on_pixel_fractions.plot(kind='bar', title='On pixel fractions for all chars')
# Again, note that on pixel == 0, off pixel == 255
def calc_f_on_pixel_pos(part_img, f, axis=0):
assert axis in (0, 1)
on_x, on_y = np.where(part_img==0)
on_dim = on_x if axis == 0 else on_y
return f(on_dim)
m_x = [calc_f_on_pixel_pos(p, np.mean, axis=0) for p in binarized]
m_y = [calc_f_on_pixel_pos(p, np.mean, axis=1) for p in binarized]
mean_on_pixel_xy = pd.DataFrame(np.column_stack([m_x, m_y]),
index=list('0123456789,'),
columns=['mean_x', 'mean_y'])
mean_on_pixel_xy.plot(kind='bar', subplots=True)
v_x = [calc_f_on_pixel_pos(p, np.var, axis=0) for p in binarized]
v_y = [calc_f_on_pixel_pos(p, np.var, axis=1) for p in binarized]
var_on_pixel_xy = pd.DataFrame(np.column_stack([v_x, v_y]),
index=list('0123456789,'),
columns=['var_x', 'var_y'])
var_on_pixel_xy.plot(kind='bar', subplots=True)
def calc_on_pixel_x_y_corr(part_img):
coef = np.corrcoef(np.where(part_img == 0))
return coef[1, 0]
x_y_corrs = [calc_on_pixel_x_y_corr(p) for p in binarized]
x_y_corrs = pd.Series(x_y_corrs, index=list('0123456789,'))
x_y_corrs.plot(kind='bar')
def calc_moments(part_img):
moments = cv2.moments(part_img, binaryImage=True)
return moments
m_list = [calc_moments(p) for p in binarized]
m_df = pd.DataFrame.from_records(m_list)
chars = ('zero', 'one', 'two', 'three', 'four',
'five', 'six', 'seven', 'eight', 'nine', 'comma')
m_df.index = chars
m_df.head()
figure, axes = pylab.subplots(8, 3, figsize=(20, 24))
moment_cols = m_df.columns.values.reshape(8, 3)
for i, row in enumerate(moment_cols):
for j, col in enumerate(row):
m_df.loc[:, col].plot(kind='bar', title=col, ax=axes[i][j])
pylab.tight_layout()
from scipy.fftpack import dct
def calc_dct2d_zigzagged_coeffs(part_img, n_diags=3):
dct_result = dct(dct(part_img, norm='ortho').T, norm='ortho')
# To make a feature vector out of the DCT results by taking the elements
# of dct_result in a zigzagged fashion.
# We can access these efficiently
# by taking the mirror image and accessing the diagonals.
mirrored = np.fliplr(dct_result)
idx_first = mirrored.shape[0] - 1
idx_last = idx_first - n_diags
zigzagged_coeffs = np.concatenate([np.diag(mirrored, k)
for k in range(idx_first, idx_last, -1)])
return zigzagged_coeffs
diag_var_dct = [calc_dct2d_zigzagged_coeffs(p, n_diags=3) for p in binarized]
dct_df = pd.DataFrame.from_records(diag_var_dct, index=chars)
dct_df.plot(kind='bar', subplots=True, figsize=(10, 20))
def partition_image(img_file, n_chars, size=32, threshold=127):
    """
    * Read the RGB image `img_file`
    * Convert to grayscale
    * Split into one subarray per character
    * Resize to `size * size`
    * Binarize with threshold `threshold`
    Return a list of subarrays for each character.
    """
assert os.path.isfile(img_file)
img_mat = cv2.imread(img_file)
gs = cv2.cvtColor(img_mat, cv2.COLOR_BGR2GRAY)
split_positions = np.linspace(0, gs.shape[1], num=n_chars+1).astype(np.int)
split_positions = split_positions[1:-1]
# manual tweak by inspection
split_positions[0] += 10
parts = np.array_split(gs, split_positions, axis=1)
resized_images = []
for p in parts:
p_new = cv2.resize(p, (size, size))
_, bin_img = cv2.threshold(p_new, threshold, 255, cv2.THRESH_BINARY)
resized_images.append(bin_img)
return resized_images
from functools import partial
def calc_on_pixel_fraction(part_img):
_, counts = np.unique(part_img, return_counts=True)
    return counts[0] / counts[1]
def calc_f_on_pixel_pos(part_img, f, axis=0):
assert axis in (0, 1)
on_x, on_y = np.where(part_img==0)
on_dim = on_x if axis == 0 else on_y
return f(on_dim)
def calc_on_pixel_x_y_corr(part_img):
coef = np.corrcoef(np.where(part_img == 0))
return coef[0, 1]
def calc_moments(part_img, moments_to_keep={'nu03', 'nu11', 'nu12'}):
moments = cv2.moments(part_img, binaryImage=True)
return {k: v for k, v in moments.items() if k in moments_to_keep}
from scipy.fftpack import dct
def calc_dct2d_zigzagged_coeffs(part_img, n_diags=3):
    """Return a 1D numpy array with the zigzagged 2D DCT coefficients."""
dct_result = dct(dct(part_img, norm='ortho').T, norm='ortho')
mirrored = np.fliplr(dct_result)
idx_first = mirrored.shape[0] - 1
idx_last = idx_first - n_diags
zigzagged_coeffs = np.concatenate([np.diag(mirrored, k)
for k in range(idx_first, idx_last, -1)])
return zigzagged_coeffs
# dictionary of functions
feature_calc = {
'on_pixel_frac': calc_on_pixel_fraction,
# 'on_pixel_x_mean': partial(calc_f_on_pixel_pos, f=np.mean, axis=0),
# 'on_pixel_y_mean': partial(calc_f_on_pixel_pos, f=np.mean, axis=1),
'on_pixel_x_var': partial(calc_f_on_pixel_pos, f=np.var, axis=0),
'on_pixel_y_var': partial(calc_f_on_pixel_pos, f=np.var, axis=1),
# 'on_pixel_x_y_corr': calc_on_pixel_x_y_corr,
}
def extract_features(img_file, chars):
    """Extract features for a combined image. Returns a DataFrame with 1 row per character."""
char_images = partition_image(img_file, len(chars))
font_name = os.path.basename(img_file).split('.')[0]
features = []
for char_img in char_images:
feature_vals = {fname: fgen(char_img) for fname, fgen in feature_calc.items()}
# Calculate the moment feature values separately and update feature_vals.
moment_features = calc_moments(char_img)
feature_vals.update(moment_features)
features.append(feature_vals)
features = pd.DataFrame.from_records(features, index=chars)
features.index.name = 'char_name'
features['font_name'] = font_name
# Include the DCT features
dct_features = [calc_dct2d_zigzagged_coeffs(p) for p in char_images]
    dct_features = pd.DataFrame.from_records(dct_features, index=chars)
dct_features.columns = ['dct_{}'.format(c) for c in dct_features.columns]
# Combine DCT and other features
all_features = pd.concat([features, dct_features], axis=1)
return all_features
from IPython.display import display
from ipywidgets import FloatProgress
font_files = [os.path.join(image_dir, f) for f in os.listdir(image_dir)]
prog = FloatProgress(min=1, max=len(font_files), description='Extracting features...')
display(prog)
all_features = []
chars = ('zero', 'one', 'two', 'three', 'four',
'five', 'six', 'seven', 'eight', 'nine', 'comma')
for font_file in font_files:
feature_df = extract_features(font_file, chars)
all_features.append(feature_df)
prog.value += 1
prog.bar_style = 'success'
all_features = pd.concat(all_features, axis=0)
all_features.info()
num_values = all_features.drop('font_name', axis=1)
np.isfinite(num_values).sum(axis=0)
# This is only necessary if on_pixel_x_y_corr is included.
if 'on_pixel_x_y_corr' in all_features.keys():
invalid_corr = ~np.isfinite(all_features['on_pixel_x_y_corr'])
all_features.loc[invalid_corr, 'on_pixel_x_y_corr']
comma_mean = all_features.loc['comma', 'on_pixel_x_y_corr'].mean()
four_mean = all_features.loc['four', 'on_pixel_x_y_corr'].mean()
invalid_comma_idx = (all_features.index == 'comma') & invalid_corr
all_features.loc[invalid_comma_idx, 'on_pixel_x_y_corr'] = comma_mean
invalid_four_idx = (all_features.index == 'four') & invalid_corr
all_features.loc[invalid_four_idx, 'on_pixel_x_y_corr'] = four_mean
all_features.to_csv('char_features.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we outline the processing for a single image.
Step2: Now we're ready to build image features. Let's take one of the images and work out the feature extraction process.
Step3: Mean x, y Positions of All On Pixels
Step4: Variance of x-y Positions of All on Pixels
Step5: Correlation of x-y positions of All Pixels
Step6: Note
Step7: So, among all the moments, we choose the normalized moments
Step11: Putting it Together
|
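As a toy illustration of the on-pixel bookkeeping used throughout these steps, here is a minimal sketch with a made-up 4x4 image, following the notebook's convention that 0 = on pixel and 255 = off pixel after thresholding:

```python
import numpy as np

# Toy 4x4 "binary image": 0 = on pixel, 255 = off pixel,
# matching the convention produced by cv2.threshold in the notebook.
img = np.full((4, 4), 255)
img[1:3, 1:3] = 0  # a 2x2 block of on pixels

on = np.sum(img == 0)
off = np.sum(img == 255)
print(on, off, on / img.size)  # 4 12 0.25
```

The same `img == 0` mask also drives the mean/variance/correlation features via `np.where`.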
6,171
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# We are going to implement five strategies.
# Each strategy takes as input the history of the turns played so far
# and returns 1 for cooperation and 0 for defection.
# 1) Always defect
def always_defect(previous_steps):
return 0
# 2) Always cooperate
def always_cooperate(previous_steps):
return 1
# 3) Purely random, with probability of defecting 0.5
def random(previous_steps):
if np.random.random(1) > 0.5:
return 1
return 0
# 4) Tit for tat
def tit_for_tat(previous_steps):
if len(previous_steps) == 0:
return 1
return previous_steps[-1]
# 5) Tit for two tat
def tit_for_two_tat(previous_steps):
if len(previous_steps) < 2:
return 1
# if the other player defected twice
if sum(previous_steps[-2:]) == 0:
# retaliate
return 0
return 1
def play_strategies(strategy_1, strategy_2, nsteps = 200):
pl1 = globals()[strategy_1]
pl2 = globals()[strategy_2]
# We create two vectors to store the moves of the players
steps_pl1 = []
steps_pl2 = []
# and two variables for keeping the scores.
# (because we said these are numbers of years in prison, we
# use negative payoffs, with less negative being better)
points_pl1 = 0
points_pl2 = 0
# Iterate over the number of steps
for i in range(nsteps):
# decide strategy:
# player 1 chooses using the history of the moves by player 2
last_pl1 = pl1(steps_pl2)
# and vice versa
last_pl2 = pl2(steps_pl1)
# calculate payoff
if last_pl1 == 1 and last_pl2 == 1:
# both cooperate -> -1 point each
points_pl1 = points_pl1 - 1
points_pl2 = points_pl2 - 1
elif last_pl1 == 0 and last_pl2 == 1:
# pl2 lose
points_pl1 = points_pl1 - 0
points_pl2 = points_pl2 - 3
elif last_pl1 == 1 and last_pl2 == 0:
# pl1 lose
points_pl1 = points_pl1 - 3
points_pl2 = points_pl2 - 0
else:
# both defect
points_pl1 = points_pl1 - 2
points_pl2 = points_pl2 - 2
# add the moves to the history
steps_pl1.append(last_pl1)
steps_pl2.append(last_pl2)
# return the final scores
return((points_pl1, points_pl2))
# Your numbers will differ given the involved randomness
play_strategies("random", "always_defect")
def round_robin(strategies, nround, nstep):
nstrategies = len(strategies)
# initialize list for results
strategies_points = [0] * nstrategies
# for each pair
for i in range(nstrategies):
for j in range(i, nstrategies):
print("Playing", strategies[i], "vs.", strategies[j])
for k in range(nround):
res = play_strategies(strategies[i],
strategies[j],
nstep)
# print(res)
strategies_points[i] = strategies_points[i] + res[0]
strategies_points[j] = strategies_points[j] + res[1]
print("\nThe final results are:")
for i in range(nstrategies):
print(strategies[i] + ":", strategies_points[i])
print("\nand the winner is....")
print(strategies[strategies_points.index(max(strategies_points))])
my_strategies = ["always_defect",
"always_cooperate",
"random",
"tit_for_tat",
"tit_for_two_tat"]
# Your numbers will differ slightly given the involved randomness
round_robin(my_strategies, 10, 1000)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implement the five strategies
Step2: Write a function that accepts the name of two strategies and competes them in a game of iterated prisoner's dilemma for a given number of turns.
Step3: Implement a round-robin tournament, in which each strategy is played against every other (including against itself) for 10 rounds of 1000 turns each.
|
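The if/elif payoff bookkeeping in `play_strategies` can equivalently be expressed as a lookup table. This is just a sketch of an alternative formulation — the `PAYOFF` dict is not part of the notebook, but it encodes the same negative payoffs (years in prison):

```python
# Payoff lookup keyed by (move_pl1, move_pl2); 1 = cooperate, 0 = defect.
# Values are (points_pl1, points_pl2), matching the notebook's payoffs.
PAYOFF = {
    (1, 1): (-1, -1),  # both cooperate
    (0, 1): (0, -3),   # player 1 defects, player 2 loses
    (1, 0): (-3, 0),   # player 2 defects, player 1 loses
    (0, 0): (-2, -2),  # both defect
}

p1, p2 = PAYOFF[(1, 0)]
print(p1, p2)  # -3 0
```

A dict lookup like this keeps the game matrix in one place, which makes it easy to swap in a different payoff structure without touching the game loop.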
6,172
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
def A(P, r, n):
return P / r * ((1 + r)**n - 1)
n = numpy.linspace(0, 20, 100)
target = 5000
plt.hold(True)
for r in [0.02, 0.05, 0.08, 0.1, 0.12]:
plt.plot(n, A(100, r, n))
plt.plot(n, numpy.ones(n.shape) * target, 'k--')
plt.legend(["r = 0.02", "r = 0.05", "r = 0.08", "r = 0.1", "r = 0.12", "Target"], loc=2)
plt.xlabel("Years")
plt.ylabel("Annuity Value (Dollars)")
plt.show()
x = numpy.linspace(0, 4 * numpy.pi)
plt.plot(x, 2.0 * numpy.sin(x))
plt.title("River Sine")
plt.xlabel("x")
plt.ylabel("y")
plt.axis([0, 4*numpy.pi, -2, 2])
plt.show()
data = numpy.loadtxt("./data/sunspot.dat")
data.shape
plt.plot(data[:, 0], data[:, 1])
plt.xlabel("Year")
plt.ylabel("Number")
plt.title("Number of Sunspots")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Boat race
Step2: We need to calculate the function $f(x)$'s arc-length from $[0, 4 \pi]$
|
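A minimal numerical sketch of that arc-length computation — the `arc_length` helper is hypothetical, not part of the notebook; it approximates the curve by summing many small chords:

```python
import numpy as np

def arc_length(f, a, b, n=100_000):
    # Approximate the arc length of y = f(x) on [a, b] by summing
    # the lengths of n - 1 small chords along the curve.
    x = np.linspace(a, b, n)
    y = f(x)
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

L = arc_length(lambda x: 2.0 * np.sin(x), 0.0, 4.0 * np.pi)
print(L)  # close to 21.08
```

This is equivalent to numerically integrating sqrt(1 + f'(x)^2) over [0, 4*pi].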
6,173
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
import scipy.stats
import pandas as pd
import matplotlib.pyplot as plt

sp.random.seed(0)
x = sp.random.normal(size=1000)
x
ns, bins, ps = plt.hist(x, bins=10)
ns
bins
ps
pd.DataFrame([bins, ns/1000])
ns, bins, ps = plt.hist(x, bins=100)
pd.DataFrame([bins, ns/1000])
x = np.linspace(-3, 3, 100)
y = sp.stats.norm.pdf(x)
plt.plot(x, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This histogram shows that values between -0.143394 and 0.437156 account for about 24% of the total. But what if you want information about the interval from -0.01 to 0.01? To get information about a finer interval, the histogram must be divided into smaller bins.
Step2: Increasing the number of bins for a more accurate description introduces several problems.
|
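One common fix for the bin-count problem described above is to normalize the histogram to a density, so its bar heights stay directly comparable to the theoretical pdf regardless of the number of bins. A sketch using `np.histogram` and the closed-form standard-normal pdf (assuming standard-normal data, as in this section):

```python
import numpy as np

np.random.seed(0)
x = np.random.normal(size=1000)

# density=True rescales counts so the bars integrate to 1,
# making them comparable to the pdf exp(-t^2/2)/sqrt(2*pi).
heights, edges = np.histogram(x, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)

# The normalized histogram integrates to 1 no matter how many bins we use.
print(np.sum(heights * np.diff(edges)))
```

`plt.hist(x, bins=100, density=True)` gives the same normalization for the plotted version.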
6,174
|
<ASSISTANT_TASK:>
Python Code:
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('methamorphosis.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print 'data has %d characters, %d unique.' % (data_size, vocab_size)
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
# hyperparameters
hidden_size = 100 # size of hidden layer of neurons
seq_length = 25 # number of steps to unroll the RNN for
learning_rate = 1e-1
# model parameters
Wxh = np.random.randn(hidden_size, vocab_size)*0.01 # input to hidden
Whh = np.random.randn(hidden_size, hidden_size)*0.01 # hidden to hidden
Why = np.random.randn(vocab_size, hidden_size)*0.01 # hidden to output
bh = np.zeros((hidden_size, 1)) # hidden bias
by = np.zeros((vocab_size, 1)) # output bias
def lossFun(inputs, targets, hprev):
    """
    inputs,targets are both list of integers.
    hprev is Hx1 array of initial hidden state
    returns the loss, gradients on model parameters, and last hidden state
    """
xs, hs, ys, ps = {}, {}, {}, {}
hs[-1] = np.copy(hprev)
loss = 0
# forward pass
for t in xrange(len(inputs)):
xs[t] = np.zeros((vocab_size,1)) # encode in 1-of-k representation
xs[t][inputs[t]] = 1
hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh) # hidden state
ys[t] = np.dot(Why, hs[t]) + by # unnormalized log probabilities for next chars
ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars
loss += -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss)
# backward pass: compute gradients going backwards
dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
dbh, dby = np.zeros_like(bh), np.zeros_like(by)
dhnext = np.zeros_like(hs[0])
for t in reversed(xrange(len(inputs))):
dy = np.copy(ps[t])
dy[targets[t]] -= 1 # backprop into y
dWhy += np.dot(dy, hs[t].T)
dby += dy
dh = np.dot(Why.T, dy) + dhnext # backprop into h
dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity
dbh += dhraw
dWxh += np.dot(dhraw, xs[t].T)
dWhh += np.dot(dhraw, hs[t-1].T)
dhnext = np.dot(Whh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
return loss, dWxh, dWhh, dWhy, dbh, dby, hs[len(inputs)-1]
def sample(h, seed_ix, n):
    """
    sample a sequence of integers from the model
    h is memory state, seed_ix is seed letter for first time step
    """
x = np.zeros((vocab_size, 1))
x[seed_ix] = 1
ixes = []
for t in xrange(n):
h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h) + bh)
y = np.dot(Why, h) + by
p = np.exp(y) / np.sum(np.exp(y))
ix = np.random.choice(range(vocab_size), p=p.ravel())
x = np.zeros((vocab_size, 1))
x[ix] = 1
ixes.append(ix)
return ixes
n, p = 0, 0
mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
mbh, mby = np.zeros_like(bh), np.zeros_like(by) # memory variables for Adagrad
smooth_loss = -np.log(1.0/vocab_size)*seq_length # loss at iteration 0
while n<=1000: # was while True: in original code
# prepare inputs (we're sweeping from left to right in steps seq_length long)
if p+seq_length+1 >= len(data) or n == 0:
hprev = np.zeros((hidden_size,1)) # reset RNN memory
p = 0 # go from start of data
inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]
# sample from the model now and then
if n % 100 == 0:
sample_ix = sample(hprev, inputs[0], 200)
txt = ''.join(ix_to_char[ix] for ix in sample_ix)
print '----\n %s \n----' % (txt, )
# forward seq_length characters through the net and fetch gradient
loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)
smooth_loss = smooth_loss * 0.999 + loss * 0.001
if n % 100 == 0: print 'iter %d, loss: %f' % (n, smooth_loss) # print progress
# perform parameter update with Adagrad
for param, dparam, mem in zip([Wxh, Whh, Why, bh, by],
[dWxh, dWhh, dWhy, dbh, dby],
[mWxh, mWhh, mWhy, mbh, mby]):
mem += dparam * dparam
param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update
p += seq_length # move data pointer
n += 1 # iteration counter
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('methamorphosis.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print 'data has %d characters, %d unique.' % (data_size, vocab_size)
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
print char_to_ix
print ix_to_char
%matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
vector_for_char_a = np.zeros((vocab_size, 1))
vector_for_char_a[char_to_ix['a']] = 1
#print vector_for_char_a
print vector_for_char_a.ravel()
x = range(0,len(chars))
plt.figure(figsize=(10,2))
plt.bar(x, vector_for_char_a.ravel(), 0.3)
plt.xticks(x, chars)
plt.show()
# hyperparameters
hidden_size = 100 # size of hidden layer of neurons
seq_length = 25 # number of steps to unroll the RNN for
learning_rate = 1e-1
# model parameters
Wxh = np.random.randn(hidden_size, vocab_size)*0.01 # input to hidden
print 'Wxh contain', Wxh.size, 'parameters'
Whh = np.random.randn(hidden_size, hidden_size)*0.01 # hidden to hidden
print 'Whh contain', Whh.size, 'parameters'
Why = np.random.randn(vocab_size, hidden_size)*0.01 # hidden to output
print 'Why contain', Why.size, 'parameters'
bh = np.zeros((hidden_size, 1)) # hidden bias
print 'bh contain', bh.size, 'parameters'
by = np.zeros((vocab_size, 1)) # output bias
print 'by contain', by.size, 'parameters'
def sample(h, seed_ix, n):
    """
    sample a sequence of integers from the model
    h is memory state, seed_ix is seed letter for first time step
    """
x = np.zeros((vocab_size, 1))
x[seed_ix] = 1
ixes = []
for t in xrange(n):
h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h) + bh)
y = np.dot(Why, h) + by
p = np.exp(y) / np.sum(np.exp(y))
ix = np.random.choice(range(vocab_size), p=p.ravel())
x = np.zeros((vocab_size, 1))
x[ix] = 1
ixes.append(ix)
txt = ''.join(ix_to_char[ix] for ix in ixes)
print '----\n %s \n----' % (txt, )
hprev = np.zeros((hidden_size,1)) # reset RNN memory
sample(hprev,char_to_ix['a'],200)
def lossFun(inputs, targets, hprev):
    """
    inputs,targets are both list of integers.
    hprev is Hx1 array of initial hidden state
    returns the loss, gradients on model parameters, and last hidden state
    """
xs, hs, ys, ps = {}, {}, {}, {}
hs[-1] = np.copy(hprev)
loss = 0
# forward pass
for t in xrange(len(inputs)):
xs[t] = np.zeros((vocab_size,1)) # encode in 1-of-k representation
xs[t][inputs[t]] = 1
hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh) # hidden state
ys[t] = np.dot(Why, hs[t]) + by # unnormalized log probabilities for next chars
ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars
loss += -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss)
# backward pass: compute gradients going backwards
dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
dbh, dby = np.zeros_like(bh), np.zeros_like(by)
dhnext = np.zeros_like(hs[0])
for t in reversed(xrange(len(inputs))):
dy = np.copy(ps[t])
dy[targets[t]] -= 1 # backprop into y
dWhy += np.dot(dy, hs[t].T)
dby += dy
dh = np.dot(Why.T, dy) + dhnext # backprop into h
dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity
dbh += dhraw
dWxh += np.dot(dhraw, xs[t].T)
dWhh += np.dot(dhraw, hs[t-1].T)
dhnext = np.dot(Whh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
return loss, dWxh, dWhh, dWhy, dbh, dby, hs[len(inputs)-1]
# uncomment the print to get some details
xs, hs, ys, ps = {}, {}, {}, {}
hs[-1] = np.copy(hprev)
# forward pass
t=0 # for t in xrange(len(inputs)):
xs[t] = np.zeros((vocab_size,1)) # encode in 1-of-k representation
xs[t][inputs[t]] = 1
# print xs[t]
hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh) # hidden state
ys[t] = np.dot(Why, hs[t]) + by # unnormalized log probabilities for next chars
# print ys[t]
ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars
# print ps[t].ravel()
# Let's build a dict to see witch probablity is associated with witch char
probability_per_char = { ch:ps[t].ravel()[i] for i,ch in enumerate(chars) }
# uncoment the next line to see the raw result
# print probability_per_char
# To print the probability in a way that is more easy to read.
for x in range(vocab_size):
print 'p('+ ix_to_char[x] + ")=", "%.4f" % ps[t].ravel()[x],
if (x%7==0):
print ""
else:
print "",
x = range(0,len(chars))
plt.figure(figsize=(10,5))
plt.bar(x, ps[t], 0.3)
plt.xticks(x, chars)
plt.show()
# We can create the next char from the above distribution
ix = np.random.choice(range(vocab_size), p=ps[t].ravel())
print
print "Next char code is:", ix
print "Next char is:", ix_to_char[ix]
print 'Next char from training (target) was number', targets[t], 'witch is "' + ix_to_char[targets[t]] + '"'
print 'Probability for this letter was', ps[t][targets[t],0]
loss = -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss)
print 'loss for this input&target pair is', loss
x = 10
a = 3
b = 7
loss = a+x + b
print 'initial loss =', loss
# dx stand for d(loss)/dx
dx = 1 # Calculate dx = d(loss)/dx analytically; d(a+x+b)/dx = 1
step_size = 0.1
# use dx and step size to calculate new x
new_x = x - dx * step_size
new_loss = a+new_x + b
print 'new loss =',new_loss
if (new_loss<loss): print 'New loss is smaller, Yeah!'
# backward pass: compute gradients going backwards
dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
dbh, dby = np.zeros_like(bh), np.zeros_like(by)
dhnext = np.zeros_like(hs[0])
t=0 #for t in reversed(xrange(len(inputs))):
dy = np.copy(ps[t])
dy[targets[t]] -= 1 # backprop into y
#print dy.ravel()
dWhy += np.dot(dy, hs[t].T)
#print dWhy.ravel()
dby += dy
#print dby.ravel()
dh = np.dot(Why.T, dy) + dhnext # backprop into h
dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity
dbh += dhraw
dWxh += np.dot(dhraw, xs[t].T)
dWhh += np.dot(dhraw, hs[t-1].T)
dhnext = np.dot(Whh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
#print dparam
p=0
inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
print "inputs", inputs
targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]
print "targets", targets
n, p = 0, 0
mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
mbh, mby = np.zeros_like(bh), np.zeros_like(by) # memory variables for Adagrad
smooth_loss = -np.log(1.0/vocab_size)*seq_length # loss at iteration 0
while n<=1000*100:
# prepare inputs (we're sweeping from left to right in steps seq_length long)
# check "How to feed the loss function to see how this part works
if p+seq_length+1 >= len(data) or n == 0:
hprev = np.zeros((hidden_size,1)) # reset RNN memory
p = 0 # go from start of data
inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]
# forward seq_length characters through the net and fetch gradient
loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)
smooth_loss = smooth_loss * 0.999 + loss * 0.001
# sample from the model now and then
if n % 1000 == 0:
print 'iter %d, loss: %f' % (n, smooth_loss) # print progress
sample(hprev, inputs[0], 200)
# perform parameter update with Adagrad
for param, dparam, mem in zip([Wxh, Whh, Why, bh, by],
[dWxh, dWhh, dWhy, dbh, dby],
[mWxh, mWhh, mWhy, mbh, mby]):
mem += dparam * dparam
param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update
p += seq_length # move data pointer
n += 1 # iteration counter
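A standalone sketch of the Adagrad rule used in the loop above: `mem` accumulates squared gradients, so with a constant gradient g the effective step becomes -learning_rate * g / (|g| * sqrt(t)) after t steps -- it shrinks like 1/sqrt(t) regardless of the gradient's scale.

```python
import numpy as np

# Two coordinates with very differently scaled (constant) gradients end
# up moving by the same amount under Adagrad.
learning_rate = 0.1
param = np.array([1.0, 1.0])
mem = np.zeros_like(param)
for step in range(2):
    grad = np.array([1.0, 0.01])
    mem += grad * grad
    param += -learning_rate * grad / np.sqrt(mem + 1e-8)
print(param)  # both coordinates moved by ~0.1 + 0.1/sqrt(2)
```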
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Minimal character-level Vanilla RNN model.
Step5: If you are not an NN expert, the code is not easy to understand.
Step6: Encode/Decode char/vector
Step7: Then we create 2 dictionaries to encode and decode a char to/from an int
Step8: Finally we create a vector from a char like this
Step9: Definition of the network
Step11: The model parameters are adjusted during the training.
Step13: Define the loss function
Step14: Forward pass
Step15: You can run the previous code several times. A char is generated according to the probability distribution.
Step16: Backward pass
Step17: The goal is to calculate gradients for the forward formula
Step18: Training
Step19: Adagrad to update the parameters
|
6,175
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
from statsmodels import regression
from scipy import poly1d
x = np.arange(10)
y = 2*np.random.randn(10) + x**2
xs = np.linspace(-0.25, 9.25, 200)
lin = np.polyfit(x, y, 1)
quad = np.polyfit(x, y, 2)
many = np.polyfit(x, y, 9)
plt.scatter(x, y)
plt.plot(xs, poly1d(lin)(xs))
plt.plot(xs, poly1d(quad)(xs))
plt.plot(xs, poly1d(many)(xs))
plt.ylabel('Y')
plt.xlabel('X')
plt.legend(['Underfit', 'Good fit', 'Overfit']);
# Load one year's worth of pricing data for five different assets
start = '2013-01-01'
end = '2014-01-01'
x1 = get_pricing('PEP', fields='price', start_date=start, end_date=end)
x2 = get_pricing('MCD', fields='price', start_date=start, end_date=end)
x3 = get_pricing('ATHN', fields='price', start_date=start, end_date=end)
x4 = get_pricing('DOW', fields='price', start_date=start, end_date=end)
y = get_pricing('PG', fields='price', start_date=start, end_date=end)
# Build a linear model using only x1 to explain y
slr = regression.linear_model.OLS(y, sm.add_constant(x1)).fit()
slr_prediction = slr.params[0] + slr.params[1]*x1
# Run multiple linear regression using x1, x2, x3, x4 to explain y
mlr = regression.linear_model.OLS(y, sm.add_constant(np.column_stack((x1,x2,x3,x4)))).fit()
mlr_prediction = mlr.params[0] + mlr.params[1]*x1 + mlr.params[2]*x2 + mlr.params[3]*x3 + mlr.params[4]*x4
# Compute adjusted R-squared for the two different models
print 'SLR R-squared:', slr.rsquared_adj
print 'SLR p-value:', slr.f_pvalue
print 'MLR R-squared:', mlr.rsquared_adj
print 'MLR p-value:', mlr.f_pvalue
# Plot y along with the two different predictions
y.plot()
slr_prediction.plot()
mlr_prediction.plot()
plt.ylabel('Price')
plt.xlabel('Date')
plt.legend(['PG', 'SLR', 'MLR']);
# Load the next year of pricing data
start = '2014-01-01'
end = '2015-01-01'
x1 = get_pricing('PEP', fields='price', start_date=start, end_date=end)
x2 = get_pricing('MCD', fields='price', start_date=start, end_date=end)
x3 = get_pricing('ATHN', fields='price', start_date=start, end_date=end)
x4 = get_pricing('DOW', fields='price', start_date=start, end_date=end)
y = get_pricing('PG', fields='price', start_date=start, end_date=end)
# Extend our model from before to the new time period
slr_prediction2 = slr.params[0] + slr.params[1]*x1
mlr_prediction2 = mlr.params[0] + mlr.params[1]*x1 + mlr.params[2]*x2 + mlr.params[3]*x3 + mlr.params[4]*x4
# Manually compute adjusted R-squared over the new time period
# Adjustment 1 is for the SLR model
p = 1
N = len(y)
adj1 = float(N - 1)/(N - p - 1)
# Now for MLR
p = 4
N = len(y)
adj2 = float(N - 1)/(N - p - 1)
SST = sum((y - np.mean(y))**2)
SSRs = sum((slr_prediction2 - y)**2)
print 'SLR R-squared:', 1 - adj1*SSRs/SST
SSRm = sum((mlr_prediction2 - y)**2)
print 'MLR R-squared:', 1 - adj2*SSRm/SST
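A standalone sketch, with made-up numbers, of the adjusted R-squared computed manually above: R2_adj = 1 - (SSR / SST) * (N - 1) / (N - p - 1), which always penalizes models with more regressors (larger p).

```python
import numpy as np

# Tiny synthetic example: 5 observations, 1 regressor (p = 1).
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
N, p = len(y), 1
SST = np.sum((y - y.mean()) ** 2)   # total sum of squares
SSR = np.sum((pred - y) ** 2)       # residual sum of squares
r2 = 1 - SSR / SST
r2_adj = 1 - (SSR / SST) * float(N - 1) / (N - p - 1)
print(r2, r2_adj)  # the adjusted value is the smaller of the two
```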
# Plot y along with the two different predictions
y.plot()
slr_prediction2.plot()
mlr_prediction2.plot()
plt.ylabel('Price')
plt.xlabel('Date')
plt.legend(['PG', 'SLR', 'MLR']);
# Load the pricing data for a stock
start = '2011-01-01'
end = '2013-01-01'
pricing = get_pricing('MCD', fields='price', start_date=start, end_date=end)
# Compute rolling averages for various window lengths
mu_30d = pd.rolling_mean(pricing, window=30)
mu_60d = pd.rolling_mean(pricing, window=60)
mu_100d = pd.rolling_mean(pricing, window=100)
# Plot asset pricing data with rolling means from the 100th day, when all the means become available
plt.plot(pricing[100:], label='Asset')
plt.plot(mu_30d[100:], label='30d MA')
plt.plot(mu_60d[100:], label='60d MA')
plt.plot(mu_100d[100:], label='100d MA')
plt.xlabel('Day')
plt.ylabel('Price')
plt.legend();
# Trade using a simple mean-reversion strategy
def trade(stock, length):
# If window length is 0, algorithm doesn't make sense, so exit
if length == 0:
return 0
# Compute rolling mean and rolling standard deviation
mu = pd.rolling_mean(stock, window=length)
std = pd.rolling_std(stock, window=length)
# Compute the z-scores for each day using the historical data up to that day
zscores = (stock - mu)/std
# Simulate trading
# Start with no money and no positions
money = 0
count = 0
for i in range(len(stock)):
# Sell short if the z-score is > 1
if zscores[i] > 1:
money += stock[i]
count -= 1
        # Buy long if the z-score is < -1
elif zscores[i] < -1:
money -= stock[i]
count += 1
        # Clear positions if the z-score is between -.5 and .5
elif abs(zscores[i]) < 0.5:
money += count*stock[i]
count = 0
return money
# Find the window length 0-254 that gives the highest returns using this strategy
length_scores = [trade(pricing, l) for l in range(255)]
best_length = np.argmax(length_scores)
print 'Best window length:', best_length
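A standalone sketch, with synthetic prices, of the rolling z-score that the `trade()` strategy thresholds on: z = (price - rolling_mean) / rolling_std.

```python
import numpy as np

# Compute a 3-day rolling z-score by hand over a small synthetic series.
prices = np.array([10.0, 11.0, 12.0, 11.0, 10.0, 9.0, 10.0, 11.0])
window = 3
zscores = []
for i in range(window - 1, len(prices)):
    w = prices[i - window + 1:i + 1]
    zscores.append((prices[i] - w.mean()) / w.std(ddof=1))
print(np.round(zscores, 2))  # z > 1 triggers a short, z < -1 a long
```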
# Get pricing data for a different timeframe
start2 = '2013-01-01'
end2 = '2015-01-01'
pricing2 = get_pricing('MCD', fields='price', start_date=start2, end_date=end2)
# Find the returns during this period using what we think is the best window length
length_scores2 = [trade(pricing2, l) for l in range(255)]
print best_length, 'day window:', length_scores2[best_length]
# Find the best window length based on this dataset, and the returns using this window length
best_length2 = np.argmax(length_scores2)
print best_length2, 'day window:', length_scores2[best_length2]
plt.plot(length_scores)
plt.plot(length_scores2)
plt.xlabel('Window length')
plt.ylabel('Score')
plt.legend(['2011-2013', '2013-2015']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When working with real data, there is unlikely to ever be a situation where a ninth-degree polynomial is appropriate
Step2: However, when we use the same estimated parameters to model a different time period, we find that the single-variable model fits worse, while the multiple-variable model is entirely useless. It seems that the relationships we found are not consistent and are particular to the original sample period.
Step3: If we wanted, we could scan our universe for variables that were correlated with the dependent variable, and construct an extremely overfitted model. However, in most cases the correlation will be spurious, and the relationship will not continue into the future.
Step4: If we pick the length based on which seems best - say, on how well our model or algorithm performs - we are overfitting. Below we have a simple trading algorithm which bets on the stock price reverting to the rolling mean (for more details, check out the mean reversion notebook). We use the performance of this algorithm to score window lengths and find the best one. However, when we consider a different timeframe, this window length is far from optimal. This is because our original choice was overfitted to the sample data.
Step5: Clearly fitting to our sample data doesn't always give good results in the future. Just for fun, let's plot the length scores computed from the two different timeframes
|
6,176
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 15
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
from sklearn.preprocessing import minmax_scale
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
shape = x.shape
return minmax_scale(x.flatten()).reshape(shape)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
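A standalone sketch of min-max scaling, which is what `normalize()` above delegates to: values map linearly onto [0, 1], with the minimum going to 0 and the maximum to 1.

```python
import numpy as np

# Plain-numpy equivalent of minmax_scale on a 1-D array.
x = np.array([0.0, 64.0, 128.0, 255.0])
scaled = (x - x.min()) / (x.max() - x.min())
print(scaled)
```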
from sklearn.preprocessing import label_binarize
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return label_binarize(x,classes=[0,1,2,3,4,5,6,7,8,9])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
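A standalone numpy sketch of what the one-hot encoding above produces: indexing an identity matrix with the labels yields one row per label, with a single 1 in that label's class position.

```python
import numpy as np

# Each label selects one row of the 10x10 identity matrix.
labels = [0, 2, 9]
one_hot = np.eye(10)[labels]
print(one_hot.shape)  # (3, 10)
```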
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32,shape=(None,)+image_shape,name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32,shape=(None,n_classes),name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32,name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
#create filter(weights)
weights_shape = list(conv_ksize) + [x_tensor.get_shape().as_list()[-1], conv_num_outputs]
wc = tf.Variable(tf.truncated_normal(weights_shape,stddev=0.1),name="wc")
#create filter(biases)
bc = tf.Variable(tf.zeros(conv_num_outputs),name="bc")
#stride shape is [1,x,y,1]
stride_shape = [1] + list(conv_strides)+[1]
#do convoultion padding ="SAME"
conv_layer = tf.nn.conv2d(x_tensor,wc, strides= stride_shape,padding='SAME')
#do biases addtion
conv_layer = tf.nn.bias_add(conv_layer,bc)
#do relu activation (nonlinear activation)
conv_layer = tf.nn.relu(conv_layer)
#do max pooling
pksize=[1]+list(pool_ksize)+[1]
pstrides = [1]+list(pool_strides)+[1]
return tf.nn.max_pool(conv_layer,pksize,pstrides,padding='SAME')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
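A short note on the `'SAME'` padding used in `conv2d_maxpool` above: with stride s, TensorFlow's SAME padding produces ceil(n / s) outputs along each spatial dimension, independent of the kernel size.

```python
import math

# Output size of a SAME-padded conv/pool along one spatial dimension.
def same_out(n, stride):
    return math.ceil(n / stride)

print(same_out(32, 1), same_out(32, 2), same_out(7, 2))  # 32 16 4
```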
import numpy as np
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
dim = x_tensor.get_shape().as_list()
flattened_size = np.prod(dim[1:])
return tf.reshape(x_tensor,[-1,flattened_size])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights= tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1],num_outputs],stddev=0.1),name="wf")
biases = tf.Variable(tf.zeros(num_outputs),name="bf")
return tf.nn.relu(tf.add(tf.matmul(x_tensor,weights),biases))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights= tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1],num_outputs],stddev=0.1),name="wo")
biases = tf.Variable(tf.zeros(num_outputs),name="bo")
return tf.add(tf.matmul(x_tensor,weights),biases)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# layer 1
x_tensor = conv2d_maxpool(x, 64, (3,3), (1,1), (2,2), (1,1))
# layer 2 ,
x_tensor = conv2d_maxpool(x_tensor, 64, (5,5), (1,1), (3,3), (1,1))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
x_tensor = fully_conn(x_tensor,192)
x_tensor = fully_conn(x_tensor,64)
x_tensor = tf.nn.dropout(x_tensor,keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
# TODO: return output
return output(x_tensor,10)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer,feed_dict={x:feature_batch,y:label_batch,keep_prob: keep_probability})
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
%matplotlib inline
import matplotlib.pyplot as plt
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
valid_loss = session.run(cost,feed_dict={x:valid_features[0:2048],y:valid_labels[0:2048],keep_prob: 1})
valid_accu = session.run(accuracy,feed_dict={x:valid_features[0:2048],y:valid_labels[0:2048],keep_prob: 1})
print('Validation Loss: {:>10.4f} Accuracy: {:.4f}'.format(valid_loss,valid_accu))
# TODO: Tune Parameters
epochs = 10
batch_size = 128
keep_probability = 0.5
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
6,177
|
<ASSISTANT_TASK:>
Python Code:
%reload_ext XTIPython
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import FloatProgress
from IPython.display import display
import subprocess,sys,os,json
FFPROBE_BIN = "ffprobe.exe"
FFMPEG_BIN = "ffmpeg.exe"
def get_json_tags(fn):
command = [ FFPROBE_BIN,'-v', 'error', '-count_frames', '-select_streams', 'v:0', \
'-print_format', 'json',
'-show_format', '-show_streams',
fn]
pipe = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
s = pipe.stdout.read().strip()
dic = json.loads(s)
return dic
def get_frames(fn,n_images=0,mod=1,grey=True,shape=(1080,1920)):
size = shape[0]*shape[1]
if grey:
pix_fmt = 'gray'
else:
pix_fmt = 'rgb24'
size *= 3
shape = (shape[0],shape[1],3)
command = [ FFMPEG_BIN,
'-i', fn,
'-f', 'image2pipe',
'-pix_fmt', pix_fmt,
'-vcodec', 'rawvideo', '-']
if n_images == 0:
n_images = get_nframes(fn)
print(n_images)
f = FloatProgress(min=0, max=n_images)
display(f)
pipe = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=10**8)
images = []
for i in range(n_images):
raw_image = pipe.stdout.read(size)
f.value += 1
if raw_image == '':
break
if i%mod != 0:
continue
image = np.fromstring(raw_image,dtype='uint8').reshape(*shape)
images.append(image)
return images
fn = r"D:\Data\Madison\maggots\CHEMO 1 VID BOTTOM RIGHT.mpg"
dic = get_json_tags(fn)
dic_stream = dic['streams'][0]
shape = (int(dic_stream['height']),int(dic_stream['width']))
n_frames = int(dic_stream['nb_frames'])
duration = float(dic_stream['duration'])
#You can access the following tags in streams[0]:
#print ", ".join(dic['streams'][0].keys())
#exact frame rate not important, use duration and number of frames:
framerate = n_frames / duration
images = get_frames(fn,n_frames,shape=shape)
background = np.mean(images,axis=0)
foreground = np.abs(images[0]-background, dtype=np.float32)
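A toy standalone example of the background-subtraction idea above: the background estimate is the per-pixel mean over frames, so static pixels subtract away to ~0 while a moving object stands out.

```python
import numpy as np

# Two tiny 2x2 "frames" with a bright object that moves between them.
toy_frames = np.array([
    [[10.0, 10.0], [10.0, 50.0]],  # object at (1, 1)
    [[10.0, 10.0], [50.0, 10.0]],  # object moved to (1, 0)
])
toy_background = toy_frames.mean(axis=0)
toy_foreground = np.abs(toy_frames[0] - toy_background)
print(toy_foreground)  # static pixels -> 0, object pixels -> 20
```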
import libatrous
threshold = 10
gain = 4
kernel = libatrous.get_kernel(libatrous.LIN3)
filtered = libatrous.get_bandpass(foreground,2,4,kernel)
#threshold!
filtered[filtered < threshold] = threshold
#boost!
filtered *= gain
filtered[filtered > 255] = 255
mpl.rcParams['figure.figsize'] = (18.0, 12.0)
fig, axis = plt.subplots(2,2)
plt.subplots_adjust(bottom=0.1, right=0.8, top=0.9, wspace=0.05, hspace=0.05)
ax = axis[0,0]
ax.imshow(images[0][:300,:400],cmap='gray')
ax.set_axis_off()
t = ax.set_title("Original frame 0")
ax = axis[0,1]
ax.imshow(background[:300,:400],cmap='gray')
ax.set_axis_off()
t = ax.set_title("Background estimate (Average of all movie frames)")
ax = axis[1,0]
ax.imshow(foreground[:300,:400],cmap='gray')
ax.set_axis_off()
t = ax.set_title("Foreground from background subtraction")
ax = axis[1,1]
ax.imshow(filtered[:300,:400],cmap='gray')
ax.set_axis_off()
t = ax.set_title("Filtered output we will send to Imaris")
skip_factor = 5
threshold = 10
gain = 4
kernel = libatrous.get_kernel(libatrous.LIN3)
output = []
n_images = len(images)
# Progress bar
f = FloatProgress(min=0, max=n_images/skip_factor)
display(f)
for i in range(0, n_images // skip_factor):
foreground = np.abs(images[i*skip_factor]-background, dtype=np.float32)
filtered = libatrous.get_bandpass(foreground,2,4,kernel)
#threshold!
filtered[filtered < threshold] = 0
#boost!
filtered *= gain
filtered[filtered > 255] = 255
output.append(filtered)
f.value += 1
tracks = np.max(output,axis=0)
fig, ax = plt.subplots(1,1)
ax.imshow(tracks,cmap='gray')
ax.set_axis_off()
t = ax.set_title("Max projection image")
import time
%load_ext autoreload
%autoreload 2
vSurpassScene = vImaris.GetSurpassScene()
vFactory = vImaris.GetFactory() #Actually, we already have access to vFactory, but here's how you could re-create it.
if vSurpassScene is None:
print("No Scene, let's create one!")
vSurpassScene = vFactory.CreateDataContainer()
vSurpassScene.SetName('Scene')
vLightSource = vFactory.CreateLightSource()
vLightSource.SetName('Light Source 1');
vSurpassScene.AddChild(vLightSource,-1)
vFrame = vFactory.CreateFrame()
vFrame.SetName('Frame 1')
vSurpassScene.AddChild(vFrame,-1)
vImaris.SetSurpassScene(vSurpassScene);
#Now create a new dataset
print("Let's upload some frames!")
n_output = len(output)
h,w = output[0].shape
# Progress bar for iPython
f = FloatProgress(min=0, max=n_output)
display(f)
vDataSet = vFactory.CreateDataSet()
vDataSet.Create(vDataSet.GetType(),w,h,1,1,n_output)
#Now set the XYZ extent (assume 100um / pixel (3mm maggot = 30 pixels length))
BridgeLib.SetExtent(vDataSet,[0,w*100,0,h*100,0,1])
t0 = time.time()
tdelta = skip_factor / framerate
vDataSet.SetTimePointsDelta(tdelta)
for i in range(n_output):
#These are 2-D images, so use SetDataSlice. Channel 0. All that's varying is the timepoint
BridgeLib.SetDataSlice(vDataSet,output[i],0,0,i)
t = t0+i*tdelta
s = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(t))+("%.3f" % (t-int(t)))[1:]
vDataSet.SetTimePoint(i,s)
f.value += 1
vImaris.SetDataSet(vDataSet)
vSurpassScene = vImaris.GetSurpassScene()
vFactory = vImaris.GetFactory() #Actually, we already have access to vFactory, but here's how you could re-create it.
vIP = vImaris.GetImageProcessing() #We also need vImageProcessing for this step (vIP)
#The surfaces. These parameters will work, might need adjusting though...
surf = vIP.DetectSurfaces(vDataSet,None,0,0,0,False,threshold,'"Volume" above automatic threshold')
#The tracks (again, these parameters work but may not be optimal)
surf = vIP.TrackSurfacesAutoregressiveMotion(surf,5000,3, '"Track Duration" above 2.50 s')
#Give the tracks a name and add them to the surpass scene
surf.SetName('Maggot Tracks')
vSurpassScene.AddChild(surf,-1)
%imaris_screenshot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading a video and extracting the frames
Step2: Estimate the background
Step3: Maggots extraction
Step4: An additional step (not absolutely required) is to filter the images. We shall boost the intensity range and threshold the output while we're at it. Here we use the libatrous library with the Linear 3x3 filter.
Step5: Let's see what we have
Step6: Let's do this for all the frames in the movie. Maggots tend not to move too fast, so we don't actually need all the frames. 1 in 5 is going to be more than enough in this case.
Step7: Reality check... what does the filtered stack look like?
Step8: Sending the processed movie to Imaris
Step9: Let's track the maggots!
Step10: Final result (for now)
|
6,178
|
<ASSISTANT_TASK:>
Python Code:
%run "recurrences.py"
%run "sums.py"
%run "start_session.py"
from itertools import accumulate
def accumulating(acc, current): return Eq(acc.lhs + current.lhs, acc.rhs + current.rhs)
mapped = list(accumulate(mapped, accumulating))
mapped
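A standalone sketch of `itertools.accumulate` with a binary function, the same mechanism `accumulating` uses on equations above: each output element is the running combination of all items so far.

```python
from itertools import accumulate

# Running component-wise sums of pairs, analogous to summing both sides
# of successive equations.
pairs = [(1, 10), (2, 20), (3, 30)]
running = list(accumulate(pairs, lambda acc, cur: (acc[0] + cur[0], acc[1] + cur[1])))
print(running)  # [(1, 10), (3, 30), (6, 60)]
```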
clear_cache()
m,v,r = to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])
m,v,r
m_sym = m.subs(inverted_fibs, simultaneous=True)
m_sym[:,0] = m_sym[:,0].subs(f[2],f[1])
m_sym[1,2] = m_sym[1,2].subs(f[2],f[1])
m_sym
# the following cell produces an error due to ordering, while `m * v` doesn't.
#clear_cache()
#m_sym * v
to_matrix_notation(mapped, f, [n+k for k in range(-18, 3)])
i = symbols('i')
d = IndexedBase('d')
k_fn_gen = Eq((k+1)*f[n], Sum(d[k,2*k-i]*f[n-i], (i, 0, 2*k)))
d_triangle= {d[0,0]:1, d[n,2*n]:1, d[n,k]:d[n-1, k-1]+d[n-1,k]}
k_fn_gen, d_triangle
mapped = list(accumulate(mapped, accumulating))
mapped
# skip this cell to maintain math coerent version
def adjust(term):
a_wild, b_wild = Wild('a', exclude=[f]), Wild('b')
matched = term.match(a_wild*f[n+2] + b_wild)
return -(matched[a_wild]-1)*f[n+2]
m = fix_combination(mapped,adjust, lambda v, side: Add(v, side))
mapped = list(m)
mapped
to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])
mapped = list(accumulate(mapped, accumulating))
mapped
to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])
mapped = list(accumulate(mapped, accumulating))
mapped
to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])
mapped = list(accumulate(mapped, accumulating))
mapped
to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])
s = IndexedBase('s')
a = IndexedBase('a')
swaps_recurrence = Eq(n*s[n],(n+1)*s[n-1]+a[n])
swaps_recurrence
boundary_conditions = {s[0]:Integer(0)}
swaps_recurrence_spec=dict(recurrence_eq=swaps_recurrence, indexed=s,
index=n, terms_cache=boundary_conditions)
unfolded = do_unfolding_steps(swaps_recurrence_spec, 4)
recurrence_eq = project_recurrence_spec(unfolded, recurrence_eq=True)
recurrence_eq
factored_recurrence_eq = project_recurrence_spec(factor_rhs_unfolded_rec(unfolded), recurrence_eq=True)
factored_recurrence_eq
factored_recurrence_eq.rhs.collect(s[n-5]).collect(a[n-4])
factored_recurrence_eq.subs(n,5)
recurrence_eq.subs(n, 5)
def additional_term(n): return (2*Integer(n)-3)/6
as_dict = {a[n]:additional_term(n) for n in range(1,6)}
recurrence_eq.subs(n, 5).subs(as_dict)
d = 10
m = Matrix(d,d, lambda i,j: binomial(n-i,j)*binomial(n-j,i))
m
f = IndexedBase('f')
fibs = [fibonacci(i) for i in range(50)]
mp = (ones(1,d)*m*ones(d,1))[0,0]
odd_fibs_eq = Eq(f[2*n+1], mp, evaluate=True)
odd_fibs_eq
(m*ones(d,1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A generalization using accumulation
Step2: According to A162741, we can generalize the pattern above
Step3: Unfolding a recurrence with generic coefficients
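The recurrence unfolded below, n·s[n] = (n+1)·s[n-1] + a[n] with s[0] = 0, can also be evaluated numerically; a minimal plain-Python sketch (not sympy), using the concrete a[n] = (2n−3)/6 that appears later in the notebook:

```python
from fractions import Fraction

def a(n):
    # the concrete additional term used later in the notebook: (2n - 3) / 6
    return Fraction(2 * n - 3, 6)

def s(n):
    # iterate the recurrence n*s[n] = (n+1)*s[n-1] + a[n], starting from s[0] = 0
    val = Fraction(0)
    for k in range(1, n + 1):
        val = ((k + 1) * val + a(k)) / k
    return val

print(s(5))  # exact rational value of s[5]
```

This gives an exact reference value to compare against the symbolically unfolded recurrence.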
Step4: A curious relation about Fibonacci numbers, in matrix notation
|
6,179
|
<ASSISTANT_TASK:>
Python Code:
2 * (1 + 2 + 3 + 4 + 5 + 6)
3.2 * 18 - 2.1
1.5e-10 * 1000
import math
math.sqrt(2)
width = 20
length = 30
area = length*width
area
'I love Structural Geology!'
"I love Structural Geology!"
'''I love
Structural
Geology'''
"He's a geologist"
'She asked, "Are you crazy?"'
greeting = "I love Structural Geology!"
print(greeting)
print("The area is", area)
"I " + "love " + "Structural " + "Geology!"
# Zeros left
print('Now is %02d:%02d.' % (16, 30))
# Real (The number after the decimal point specifies how many decimal digits )
print('Percent: %.1f%%, Exponencial:%.2e' % (5.333, 0.00314))
# Octal and hexadecimal
print('Decimal: %d, Octal: %o, Hexadecimal: %x' % (10, 10, 10))
# Parameters are identified by order
print('The area of square with side {0} is {1}'.format(5, 5*5))
# Parameters are identified by name
print('{greeting}, it is {hour:02d}:{minute:02d}AM'.format(greeting='Hi', hour=7, minute=30))
# Builtin function format()
print('Pi =', format(math.pi, '.15f'))
greeting
greeting[7]
greeting[2:6]
greeting[18:]
greeting[7:17:2]
greeting[::-1]
type(area)
type(math.sqrt(2))
type(greeting)
planets = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune']
planets[2]
planets[-1]
planets[:4]
for i in range(3):
print("Stress and strain")
for i in range(3):
for j in range(3):
print('i:{} j:{}'.format(i, j))
print("This statement is within the outer i-loop, but not the inner j-loop")
%pylab inline
linspace(-10, 10, 11)
vals = array([1, 2, 3, 4, 5])
vals
vals[0], vals[4]
vals[5]
vals[0:3]
a = linspace(1,5,5)
a
b = a
b
a[2] = 17
a
b
c = a.copy()
a[2] = 3
a
b
c
x = linspace(-pi, pi, 150)
plot(x, sin(x))
from IPython.core.display import HTML
def css_styling():
styles = open("./css/sg2.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scientific notation
Step2: Python has a number of defined operators for handling numbers through arithmetic calculations, logic operations (that test whether a condition is true or false) or bitwise processing (where the numbers are processed in binary form).
Step3: Variables
Step4: You can name a variable almost anything you want. It needs to start with an alphabetical character or "_", can contain alphanumeric characters plus underscores ("_"). Certain words, however, are reserved for the language
Step5: But not both at the same time, unless you want one of the symbols to be part of the string.
Step6: Just like the numbers we're familiar with, you can assign a string to a variable
Step7: The print function is often used for printing character strings
Step8: But it can also print data types other than strings
Step9: You can use the + operator to concatenate strings together
Step10: The operator % is used for string interpolation. The interpolation is more efficient in use of memory than the conventional concatenation.
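A tiny side-by-side sketch of the two approaches (plain Python; the strings are illustrative):

```python
name = "Structural Geology"

# interpolation: the template is filled in a single pass
msg_interp = "I love %s!" % name

# concatenation: each + builds an intermediate string object
msg_concat = "I love " + name + "!"

print(msg_interp)
print(msg_concat)
```

Both produce the same string; interpolation avoids the intermediate objects created by repeated `+`.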
Step11: In addition to the interpolation operator %, there are the string method .format() and the built-in function format().
Step12: Slicing strings
Step13: It is possible to invert strings by using a negative step
Step14: Types
Step15: Furthermore, there are types that function as collections. The main ones are
Step16: You can access members of the list using the index of that item
Step17: The -1 element of a list is the last element
Step18: Lists can be sliced in the same way as strings.
Step19: Lists can collect objects of any type, including other lists.
Step20: If you have nested for-loops, there is a further indent for the inner loop.
Step21: Scientific Python Environment
Step22: So what just happened? We just imported most of numpy and matplotlib into the current workspace, so their functions are now available to use. So if we want to use the numpy function linspace, for instance, we can call it by writing
Step23: To learn new functions available to you, visit the NumPy Reference page. If you are a proficient MATLAB user, there is a wiki page that should prove helpful to you
Step24: Python uses a zero-based index, so let's look at the first and last element in the array vals
Step25: There are 5 elements in the array vals, but if we try to look at vals[5], Python will be unhappy, as vals[5] is actually calling the non-existent 6th element of that array.
Step26: Arrays can also be 'sliced', grabbing a range of values. Let's look at the first three elements
Step27: Note here, the slice is inclusive on the front end and exclusive on the back, so the above command gives us the values of vals[0], vals[1] and vals[2], but not vals[3].
Step28: OK, so we have an array a, with the values 1 through 5. I want to make a copy of that array, called b, so I'll try the following
Step29: Great. So a has the values 1 through 5 and now so does b. Now that I have a backup of a, I can change its values without worrying about losing data (or so I may think!).
Step30: Here, the 3rd element of a has been changed to 17. Now let's check on b.
Step31: And that's how things go wrong! When you use a statement like b = a, rather than copying all the values of a into a new array called b, Python just creates an alias (or a pointer) called b and tells it to route us to a. So if we change a value in a then b will reflect that change (technically, this is called assignment by reference). If you want to make a true copy of the array, you have to tell Python to copy every element of a into a new array. Let's call it c.
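The same aliasing behaviour can be seen with plain Python lists; a minimal sketch:

```python
a = [1, 2, 3, 4, 5]
b = a          # alias: b is just another name for the same list object
c = a.copy()   # true copy: c has its own, independent storage

a[2] = 17
print(b[2])    # the alias sees the change: 17
print(c[2])    # the copy keeps the original value: 3
```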
Step32: Now, we can try again to change a value in a and see if the changes are also seen in c.
Step33: Plotting
Step34: Learn More
|
6,180
|
<ASSISTANT_TASK:>
Python Code:
#sign:max: MAXBOX8: 03/02/2021 18:34:41
# optimal moving average OMA for market index signals ARIMA study- Max Kleiner
# v2 shell argument forecast days - 4 lines compare - ^GDAXI for DAX
# pip install pandas-datareader
# C:\maXbox\mX46210\DataScience\princeton\AB_NYC_2019.csv AB_NYC_2019.csv
#https://medium.com/abzuai/the-qlattice-a-new-machine-learning-model-you-didnt-know-you-needed-c2e037878cd
#https://www.kaggle.com/dgomonov/data-exploration-on-nyc-airbnb 41
#https://www.kaggle.com/duygut/airbnb-nyc-price-prediction
#https://www.machinelearningplus.com/time-series/arima-model-time-series-forecasting-python/
import numpy as np
import matplotlib.pyplot as plt
import sys
import numpy as np, pandas as pd
from statsmodels.tsa.arima_model import ARIMA
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.stattools import adfuller, acf
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.figsize':(9,7), 'figure.dpi':120})
# Import data
wwwus = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/wwwusage.csv', names=['value'], header=0)
import pandas as pd
# Accuracy metrics
def forecast_accuracy(forecast, actual):
mape = np.mean(np.abs(forecast - actual)/np.abs(actual)) # MAPE
me = np.mean(forecast - actual) # ME
mae = np.mean(np.abs(forecast - actual)) # MAE
mpe = np.mean((forecast - actual)/actual) # MPE
rmse = np.mean((forecast - actual)**2)**.5 # RMSE
corr = np.corrcoef(forecast, actual)[0,1] # corr
mins = np.amin(np.hstack([forecast[:,None],
actual[:,None]]), axis=1)
maxs = np.amax(np.hstack([forecast[:,None],
actual[:,None]]), axis=1)
minmax = 1 - np.mean(mins/maxs) # minmax
    acf1 = acf(forecast - actual)[1]            # ACF1
return({'mape':mape, 'me':me, 'mae': mae,
'mpe': mpe, 'rmse':rmse, 'acf1':acf1,
'corr':corr, 'minmax':minmax})
#wwwus = pd.read_csv(r'C:\maXbox\mX46210\DataScience\princeton\1022dataset.txt', \
# names=['value'], header=0)
print(wwwus.head(10).T) #Transposed for column overview
#1. How to find the order of differencing (d) in ARIMA model
result = adfuller(wwwus.value.dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
#
# Original Series
fig, axes = plt.subplots(3, 2, sharex=True)
axes[0, 0].plot(wwwus.value); axes[0, 0].set_title('Orig Series')
plot_acf(wwwus.value, ax=axes[0, 1], lags=60)
# 1st Differencing
axes[1, 0].plot(wwwus.value.diff()); axes[1, 0].set_title('1st Order Differencing')
plot_acf(wwwus.value.diff().dropna(), ax=axes[1, 1], lags=60)
# 2nd Differencing
axes[2, 0].plot(wwwus.value.diff().diff()); axes[2, 0].set_title('2nd Order Differencing')
plot_acf(wwwus.value.diff().diff().dropna(), ax=axes[2, 1], lags=60)
plt.show()
#2. How to find the order of the AR term (p)
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing')
axes[1].set(ylim=(0,5))
plot_pacf(wwwus.value.diff().dropna(), ax=axes[1], lags=100)
plt.show()
#3. How to find the order of the MA term (q)
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing')
axes[1].set(ylim=(0,1.2))
plot_acf(wwwus.value.diff().dropna(), ax=axes[1] , lags=60)
plt.show()
#
#4. How to build the ARIMA Model
model = ARIMA(wwwus.value, order=(1,1,2))
model_fit = model.fit(disp=0)
print('first fit ',model_fit.summary())
# Plot residual errors
residuals = pd.DataFrame(model_fit.resid)
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
fig, ax = plt.subplots(1,2)
residuals.plot(title="Residuals", ax=ax[0])
residuals.plot(kind='kde', title='Density', ax=ax[1])
plt.show()
#5. Plot Predict Actual vs Fitted
# When you set dynamic=False in-sample lagged values are used for prediction.
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
model_fit.plot_predict(dynamic=False)
plt.show()
#That is, the model gets trained up until the previous value to make next prediction. This can make a fitted forecast and actuals look artificially good.
# Now Create Training and Test
train = wwwus.value[:80]
test = wwwus.value[80:]
#model = ARIMA(train, order=(3, 2, 1))
model = ARIMA(train, order=(2, 2, 3))
fitted = model.fit(disp=-1)
print('second fit ',fitted.summary())
# Forecast
fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf
# Make as pandas series
fc_series = pd.Series(fc, index=test.index)
lower_series = pd.Series(conf[:,0], index=test.index)
upper_series = pd.Series(conf[:,1], index=test.index)
# Plot
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train, label='training')
plt.plot(test, label='actual')
plt.plot(fc_series, label='forecast')
plt.fill_between(lower_series.index, lower_series, upper_series,
color='k', alpha=.15)
plt.title('maXbox4 Forecast vs Actuals ARIMA')
plt.legend(loc='upper left', fontsize=8)
plt.show()
print(forecast_accuracy(fc, test.values))
print('Around 5% MAPE implies a model is about 95% accurate in predicting next 20 observations.')
#1. How to find the order of differencing (d) in ARIMA model
result = adfuller(wwwus.value.dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
# Original Series
fig, axes = plt.subplots(3, 2, sharex=True)
axes[0, 0].plot(wwwus.value); axes[0, 0].set_title('Orig Series')
plot_acf(wwwus.value, ax=axes[0, 1], lags=60)
# 1st Differencing
axes[1, 0].plot(wwwus.value.diff()); axes[1, 0].set_title('1st Order Differencing')
plot_acf(wwwus.value.diff().dropna(), ax=axes[1, 1], lags=60)
# 2nd Differencing
axes[2, 0].plot(wwwus.value.diff().diff()); axes[2, 0].set_title('2nd Order Differencing')
plot_acf(wwwus.value.diff().diff().dropna(), ax=axes[2, 1], lags=60)
plt.show()
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing')
axes[1].set(ylim=(0,5))
plot_pacf(wwwus.value.diff().dropna(), ax=axes[1], lags=100)
plt.show()
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing')
axes[1].set(ylim=(0,1.2))
plot_acf(wwwus.value.diff().dropna(), ax=axes[1] , lags=90)
plt.show()
model = ARIMA(wwwus.value, order=(1,1,2))
model_fit = model.fit(disp=0)
print('first fit ',model_fit.summary())
residuals = pd.DataFrame(model_fit.resid)
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
fig, ax = plt.subplots(1,2)
residuals.plot(title="Residuals", ax=ax[0])
residuals.plot(kind='kde', title='Density', ax=ax[1])
plt.show()
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
model_fit.plot_predict(dynamic=False)
plt.show()
train = wwwus.value[:80]
test = wwwus.value[80:]
#model = ARIMA(train, order=(3, 2, 1))
model = ARIMA(train, order=(2, 2, 3))
fitted = model.fit(disp=-1)
print('second fit ',fitted.summary())
# Forecast
fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf
# Make as pandas series
fc_series = pd.Series(fc, index=test.index)
lower_series = pd.Series(conf[:,0], index=test.index)
upper_series = pd.Series(conf[:,1], index=test.index)
# Plot
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train, label='training')
plt.plot(test, label='actual')
plt.plot(fc_series, label='forecast')
plt.fill_between(lower_series.index, lower_series, upper_series,
color='k', alpha=.15)
plt.title('maXbox4 Forecast vs Actuals ARIMA')
plt.legend(loc='upper left', fontsize=8)
plt.show()
print(forecast_accuracy(fc, test.values))
print('Around 5% MAPE implies a model is about 95% accurate in predicting next 20 observations.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <a href="https
Step3: Step by Step Code Order
Step4: A p-value less than 0.05 (typically ≤ 0.05) is statistically significant. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability that the null hypothesis is correct (i.e., that the results arose by chance). Therefore, we reject the null hypothesis and accept the alternative hypothesis (here, that the series is stationary).
Step5: For the above series, the time series reaches stationarity with two orders of differencing, but to begin we use first-order differencing as a conservative choice. Let me explain why
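Differencing itself is a very simple transformation; a plain-Python sketch of what `Series.diff()` computes (illustrative numbers):

```python
def diff(series):
    # first-order differencing: y[t] - y[t-1]
    return [b - a for a, b in zip(series, series[1:])]

y = [1, 4, 9, 16, 25]   # a trending, non-stationary toy series
d1 = diff(y)            # first differences
d2 = diff(d1)           # second differences
print(d1)  # [3, 5, 7, 9]
print(d2)  # [2, 2, 2] -- constant, i.e. stationary after two differences
```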
Step6: #3. How to find the order of the MA term (q)
Step7: 4. How to build the ARIMA Model
Step8: Notice here the coefficient of the MA2 term is close to zero (-0.0010) and the P-Value in ‘P>|z|’ column is highly insignificant (0.9). It should ideally be less than 0.05 for the respective X to be significant.
Step9: 6. Plot Predict Actual vs Fitted
Step10: 7. Now create training and test sets for validation
Step11: 8. Some scores and performance
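The headline MAPE/RMSE numbers can be reproduced by hand; a minimal plain-Python sketch with made-up values:

```python
import math

def mape(forecast, actual):
    # mean absolute percentage error
    return sum(abs(f - a) / abs(a) for f, a in zip(forecast, actual)) / len(actual)

def rmse(forecast, actual):
    # root mean squared error
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual))

fc = [102.0, 98.0, 101.0]     # hypothetical forecasts
obs = [100.0, 100.0, 100.0]   # hypothetical actuals

print(mape(fc, obs))  # ~0.0167, i.e. roughly "98.3% accurate"
print(rmse(fc, obs))
```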
|
6,181
|
<ASSISTANT_TASK:>
Python Code:
theta0 = 0.6
x = sp.stats.bernoulli(theta0).rvs(1000)
N0, N1 = np.bincount(x, minlength=2)
N = N0 + N1
theta = N1/N
theta
theta0 = np.array([0.1, 0.3, 0.6])
x = np.random.choice(np.arange(3), 1000, p=theta0)
N0, N1, N2 = np.bincount(x, minlength=3)
N = N0 + N1 + N2
theta = np.array([N0, N1, N2]) / N
theta
mu0 = 1
sigma0 = 2
x = sp.stats.norm(mu0, sigma0).rvs(1000)
xbar = x.mean()
s2 = x.std(ddof=1)
xbar, s2
mu0 = np.array([0, 1])
sigma0 = np.array([[1, 0.2], [0.2, 4]])
x = sp.stats.multivariate_normal(mu0, sigma0).rvs(1000)
xbar = x.mean(axis=0)
S2 = np.cov(x, rowvar=0)
print(xbar)
print(S2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parameter estimation for the categorical distribution
Step2: Parameter estimation for the normal distribution
Step3: Parameter estimation for the multivariate normal distribution
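For the Bernoulli/categorical case the maximum-likelihood estimate has a closed form: each class probability is simply its relative frequency. A plain-Python sketch with made-up draws:

```python
samples = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # hypothetical Bernoulli draws

# MLE of theta is the fraction of ones: N1 / N
theta_hat = sum(samples) / len(samples)
print(theta_hat)  # 0.7
```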
|
6,182
|
<ASSISTANT_TASK:>
Python Code:
import ipyrad as ip
## create an Assembly object named data1.
data1 = ip.Assembly("data1")
## create an Assembly object linked to 8 engines using MPI
data1 = ip.Assembly("data1", N=4, controller="MPI")
## setting/modifying parameters for this Assembly object
data1.set_params('project_dir', "./test_rad")
data1.set_params('raw_fastq_path', "./data/sim_rad_test_R1_.fastq.gz")
data1.set_params('barcodes_path', "./data/sim_rad_test_barcodes.txt")
data1.set_params('filter_adapters', 0)
data1.set_params('datatype', 'rad')
## print the parameters for `data`
data1.get_params()
ip.get_params_info(10)
## This would link fastq files from the 'sorted_fastq_path' if present
## Here it does nothing b/c there are no files in the sorted_fastq_path
data1.link_fastqs()
## run step 1 to demultiplex the data
data1.step1()
## print the results for each Sample in data1
print data1.stats
## remove the lane control sequence
#data1.samples.pop("FGXCONTROL")
## example of ways to run step 2 to filter and trim reads
#data1.step2(["1A_0"]) ## run on a single sample
#data1.step2(["1B_0", "1C_0"]) ## run on one or more samples
data1.step2(force=True) ## run on all samples, overwrite finished
## print the results
print data1.stats
#data1.samples["veitchii"].files
## create a copy of our Assembly object
data2 = data1.branch(newname="data2")
## set clustering threshold to 0.90
data2.set_params(11, 0.90)
## look at inherited parameters
data2.get_params()
import ipyrad as ip
data1 = ip.load_assembly("test_rad/data1")
## run step 3 to cluster reads within samples using vsearch
data1.step3(force=True)
## print the results
print data1.stats
## run step 3 to cluster reads in data2 at 0.90 sequence similarity
data2.step3(force=True)
## print the results
print data2.stats
print "data1 directories:"
for (i,j) in data1.dirs.items():
print "{}\t{}".format(i, j)
print "\ndata2 directories:"
for (i,j) in data2.dirs.items():
print "{}\t{}".format(i, j)
## TODO, just make a [name]_stats directory in [work] for each data obj
data1.statsfiles
data1.stats.to_csv("data1_results.csv", sep="\t")
data1.stats.to_latex("data1_results.tex")
import ipyrad.plotting as iplot
## plot for one or more selected samples
#iplot.depthplot(data1, ["1A_0", "1B_0"])
## plot for all samples in data1
iplot.depthplot(data1)
## save plot as pdf and html
#iplot.depthplot(data1, outprefix="testfig")
## run step 4
data1.step4()
## print the results
print data1.stats
#import ipyrad as ip
## reload autosaved data. In case you quit and came back
#data1 = ip.load_dataobj("test_rad/data1.assembly")
## run step 5
#data1.step5()
## print the results
#print data1.stats
ip.get_params_info(10)
for i in data1.log:
print i
print "\ndata 2 log includes its pre-branching history with data1"
for i in data2.log:
print i
## save assembly object
#ip.save_assembly("data1.p")
## load assembly object
#data = ip.load_assembly("data1.p")
#print data.name
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assembly objects
Step2: The printout tells us that we created the object data1, and also that it found 4 engines on our system that can be used for computation. An engine is simply a CPU. When working on a single machine it will usually be easiest to simply let the Assembly object connect to all available local engines. However, on HPC clusters you may need to modify the controller or the number of engines, as shown below
Step3: For more information about connecting CPUs for parallelization see ipyparallel setup.
Step4: To get more detailed information about each parameter use ip.get_params_info(), or look up their function in the documentation (Parameters). To quickly look up the proper formatting for a parameter, you can use ip.get_params_info(N), where N is the number of a parameter. Example
Step5: Sample Objects
Step6: Step 1
Step7: Step 2
Step8: Branching Assembly objects
Step9: Step 3
Step10: Branched Assembly objects
Step11: Saving stats outputs
Step12: Example of plotting with ipyrad
Step13: Step 4
Step14: Step 5
Step15: Quick parameter explanations are always on-hand
Step16: Log history
Step17: Saving Assembly objects
|
6,183
|
<ASSISTANT_TASK:>
Python Code:
# enable plotting in notebook
%matplotlib notebook
from simulation_results import example_simulations
import physical_validation
simulation_vrescale = example_simulations.get(
"900 water molecules, NVT at 298K with v-rescale thermostat"
)
simulation_berendsen = example_simulations.get(
"900 water molecules, NVT at 298K with Berendsen thermostat"
)
num_molecules = 900
# Each water molecule has three atoms
num_atoms_per_molecule = 3
# Each fully constrained water molecule has three constraints
num_constraints_per_molecule = 3
# In this simulation, translational center of mass motion was removed
num_constrained_translational_dof = 3
# Rotational center of mass motion was not removed
num_constrained_rotational_dof = 0
system_data = physical_validation.data.SystemData(
natoms=num_molecules * num_atoms_per_molecule,
nconstraints=num_molecules * num_constraints_per_molecule,
ndof_reduction_tra=num_constrained_translational_dof,
ndof_reduction_rot=num_constrained_rotational_dof,
)
ensemble_data = physical_validation.data.EnsembleData(
ensemble="NVT",
natoms=num_molecules * 3,
volume=3.01125 ** 3,
temperature=298.15,
)
simulation_data_vrescale = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
system=system_data,
ensemble=ensemble_data,
observables=physical_validation.data.ObservableData(
# This test requires only the kinetic energy
kinetic_energy=simulation_vrescale["kinetic energy"]
),
)
simulation_data_berendsen = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
system=system_data,
ensemble=ensemble_data,
observables=physical_validation.data.ObservableData(
# This test requires only the kinetic energy
kinetic_energy=simulation_berendsen["kinetic energy"]
),
)
physical_validation.kinetic_energy.distribution(
data=simulation_data_vrescale, strict=True, screen=True
)
# We turn plotting off here (`screen=False`), because the plot is
# identical to the one in the strict test
physical_validation.kinetic_energy.distribution(
data=simulation_data_vrescale, strict=False, screen=False
)
physical_validation.kinetic_energy.distribution(
data=simulation_data_berendsen, strict=True, screen=True
)
# We turn plotting off here (`screen=False`), because the plot is
# identical to the one in the strict test
physical_validation.kinetic_energy.distribution(
data=simulation_data_berendsen, strict=False, screen=False
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The results imported here are the time series of kinetic and potential energy from example simulations, which are
Step2: In this example, we will check the distributions of two simulations of 900 water molecules simulated under NVT conditions at 298.15K.
Step3: First, we will create the object containing system information.
Step4: Next, we will create the ensemble information
Step5: Now we will create the simulation data objects which we will feed to the physical validation tests.
Step6: We can now check the velocity-rescale temperature coupling, using first
Step7: The strict test prints and returns a p-value which indicates that the null hypothesis stands with high confidence. The printed figure confirms that the sampled and the analytical distribution are very similar.
Step8: The non-strict test calculates the temperature of the mean and the width of the distribution. Analytically, we are expecting both to be close to 298.15K. The result confirms that the simulation behaves as expected, with both the calculated mean and variance being within one standard deviation of the analytical value. The returned tuple is the number of standard deviations the calculated mean and variance differ from the expected value - less than one standard deviation for the mean, and only about 0.05 standard deviations for the variance. Note that any deviation within about 2-3 standard deviations is usually considered statistically insignificant.
Step9: The calculated p-value means that the trajectory is very unlikely to come from a Maxwell-Boltzmann distribution at 298.15K. Commonly, one would consider a p-value of less than 0.05 (5% confidence interval) a reasonable cut-off, which means that the test results make it easy to reject the null hypothesis that the kinetic energy samples the correct distribution. This does not come as a surprise: the Berendsen thermostat is known to keep the correct mean temperature, but not to sample the right distribution.
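The "number of standard deviations" the tests report is simply the distance of an estimate from its expected value, measured in standard errors; a sketch with hypothetical numbers:

```python
def deviation_in_sigmas(estimate, expected, stderr):
    # distance from the expected value, in units of the standard error
    return abs(estimate - expected) / stderr

# hypothetical: estimated mean temperature 298.30 K vs the 298.15 K target
dev = deviation_in_sigmas(298.30, 298.15, 0.20)
print(dev)  # 0.75 -> well within the usual 2-3 sigma "insignificant" band
```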
|
6,184
|
<ASSISTANT_TASK:>
Python Code:
def func(x):
return x[0]**2 + 2*x[1]**2 + 3*x[2]**2
def con(x):
return x[0] + x[1] + x[2] - 3.5 # rewritten in form c <= 0
x = [1.0, 1.0, 1.0]
sigma = [0.00, 0.06, 0.2]
import numpy as np
def stats(n):
f = np.zeros(n)
c = np.zeros(n)
for i in range(n):
x1 = x[0]
x2 = x[1] + np.random.randn(1)*sigma[1]
x3 = x[2] + np.random.randn(1)*sigma[2]
f[i] = func([x1, x2, x3])
c[i] = con([x1, x2, x3])
# mean
mu = np.average(f)
# standard deviation
std = np.std(f, ddof=1) #ddof=1 gives an unbiased estimate (np.sqrt(1.0/(n-1)*(np.sum(f**2) - n*mu**2)))
return mu, std, f, c
%matplotlib inline
import matplotlib.pyplot as plt
nvec = np.logspace(1, 6, 20)
muvec = np.zeros(20)
stdvec = np.zeros(20)
for i, n in enumerate(nvec):
muvec[i], stdvec[i], _, _ = stats(int(n))
print i
plt.figure()
plt.semilogx(nvec, muvec, '-o')
plt.figure()
plt.semilogx(nvec, stdvec, '-o')
plt.show()
n = 1e5
mu, std, f, c = stats(int(n))
print 'mu =', mu
print 'sigma =', std
plt.figure()
plt.hist(f, bins=20);
reliability = np.count_nonzero(c <= 0.0)/float(n)
print 'reliability = ', reliability*100, '%'
from pyDOE import lhs
from scipy.stats.distributions import norm
def statsLHS(n):
f = np.zeros(n)
c = np.zeros(n)
# generate latin hypercube sample points beforehand from normal dist
lhd = lhs(2, samples=n)
rpt = norm(loc=0, scale=1).ppf(lhd)
for i in range(n):
x1 = x[0]
x2 = x[1] + rpt[i, 0]*sigma[1]
x3 = x[2] + rpt[i, 1]*sigma[2]
f[i] = func([x1, x2, x3])
c[i] = con([x1, x2, x3])
# mean
mu = np.average(f)
# standard deviation
std = np.std(f, ddof=1) #ddof=1 gives an unbiased estimate (np.sqrt(1.0/(n-1)*(np.sum(f**2) - n*mu**2)))
return mu, std, f, c
muLHS = np.zeros(20)
stdLHS = np.zeros(20)
for i, n in enumerate(nvec):
muLHS[i], stdLHS[i], _, _ = statsLHS(int(n))
print i
plt.figure()
plt.semilogx(nvec, muvec, '-o')
plt.semilogx(nvec, muLHS, '-o')
plt.figure()
plt.semilogx(nvec, stdvec, '-o')
plt.semilogx(nvec, stdLHS, '-o')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use randn, which gives us a random number k sampled from a normal distribution. It is sampled from a unit normal with zero mean and a standard deviation of 1, so to translate to an arbitrary mean and standard deviation the random value will be x = mu + sigma*k.
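In other words x = mu + sigma*k; a quick numpy sketch (assuming numpy is available) checking that the translated samples have the right statistics:

```python
import numpy as np

np.random.seed(0)
k = np.random.randn(100000)   # unit normal: mean 0, std 1
mu, sigma = 1.0, 2.0
x = mu + sigma * k            # approximately N(mu, sigma^2)
print(x.mean(), x.std())      # close to 1.0 and 2.0
```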
Step2: Let's evaluate this function for different values of n (number of samples) to see how long it takes to converge.
Step3: Note that it takes about 100,000 simulations for the statistics to converge. Let's rerun that case and check out the histogram and statistics.
Step4: Notice that it skews to the right. Because of the square terms in the function, any deviation causes the function to increase.
Step5: Monte Carlo with LHS
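The core idea behind `lhs` from pyDOE is stratification: split [0,1) into n equal bins, draw one point in each, then shuffle. A minimal 1-D plain-Python sketch (pyDOE generalizes this to several dimensions):

```python
import random

def lhs_1d(n, seed=0):
    rng = random.Random(seed)
    # one uniform draw inside each of the n equal-width strata
    points = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(points)
    return points

pts = lhs_1d(10)
# unlike plain Monte Carlo, every stratum [i/n, (i+1)/n) holds exactly one sample
print(sorted(pts))
```

This stratification is why LHS estimates of the mean converge with fewer samples than plain random sampling.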
|
6,185
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.datasource import download_data
download_data("ensae_competition_2016.zip",
url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/competitions/2016_ENSAE_2A/")
%matplotlib inline
import pandas as p
import numpy as np
df = p.read_csv('./ensae_competition_train.txt', header=[0,1], sep="\t", index_col=0)
#### Gender dummies
df['X2'] = df['X2'].applymap(str)
gender_dummies = p.get_dummies(df['X2'] )
### education dummies
df['X3'] = df['X3'].applymap(str)
educ_dummies = p.get_dummies(df['X3'] )
#### marriage dummies
df['X4'] = df['X4'].applymap(str)
mariage_dummies = p.get_dummies(df['X4'] )
### On va aussi supprimer les multi index de la table
df.columns = df.columns.droplevel(0)
#### on aggrège ensuite les 3 tables ensemble
data = df.join(gender_dummies).join(educ_dummies).join(mariage_dummies)
data.rename(columns = {'default payment next month' : "Y"}, inplace = True)
data = data.drop(['SEX','EDUCATION','MARRIAGE'],1)
data_resample = p.concat([data[data['Y']==1], data[data['Y']==0].sample(len(data[data['Y']==1]))])
data.head(n=2)
Y = data['Y']
Y = data_resample['Y']
X = data.drop('Y', 1)
#X = data[["SEX_1", "AGE", "MARRIAGE_0", 'PAY_0']]
X = data_resample.drop('Y',1)
X.columns
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33)
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import SGDClassifier, Perceptron
#type_classifier = LogisticRegression
#type_classifier = svm.SVC
type_classifier = GradientBoostingClassifier
#type_classifier = RandomForestClassifier
#type_classifier = Perceptron
clf = type_classifier()
#clf = SGDClassifier(loss="hinge", penalty="l2")
clf = clf.fit(X_train, Y_train.ravel())
# Confusion matrix
%matplotlib inline
from sklearn.metrics import confusion_matrix
for x,y in [ (X_train, Y_train), (X_test, Y_test) ]:
yp = clf.predict(x)
cm = confusion_matrix(y.ravel(), yp.ravel())
print(cm.transpose())
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
plt.matshow(cm.transpose())
plt.title('Confusion matrix on test set')
plt.colorbar()
plt.ylabel('Predicted label')
plt.xlabel('True label')
(cm.transpose()[0,0]+cm.transpose()[1,1])/ (cm[0].sum()+cm[1].sum())
from sklearn.metrics import roc_curve, auc
probas = clf.predict_proba(X_test)
probas
rep = [ ]
yt = Y_test.ravel()
for i in range(probas.shape[0]):
p0,p1 = probas[i,:]
exp = yt[i]
if p0 > p1 :
if exp == 0 :
            # correct answer, true positive (tp)
rep.append ( (1, p0) )
else :
            # wrong answer, false positive (fp)
rep.append( (0, p0) )
else :
if exp == 0 :
            # wrong answer, false negative (fn)
rep.append ( (0, p1) )
else :
            # correct answer, true negative (tn)
rep.append( (1, p1) )
mat_rep = np.array(rep)
print("AUC : proportion of correct answers", sum(mat_rep[:,0]) / len(mat_rep[:,0]))
fpr, tpr, thresholds = roc_curve(mat_rep[:,0], mat_rep[:, 1])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate (or precision)')
plt.title('ROC')
plt.legend(loc="lower right")
tp=0
fp=0
fn=0
tn=0
for i in range(len(probas[:,0])):
if (probas[i,0] >= 0.5 and yt[i] == 0):
tp+=1
elif (probas[i,0] >= 0.5 and yt[i] == 1):
fp+=1
elif (probas[i,0] <= 0.5 and yt[i] == 0):
fn+=1
else:
tn+=1
print("We recover the confusion matrix:\n", "TP : ", tp, "FP : ", fp, "\n",
      " FN : ", fn, "TN : ", tn)
print("Precision : TP / (TP + FP) = ", tp/(tp+fp))
print("Recall : TP / (TP + FN) = ", tp/(tp+fn))
precision = tp/(tp+fp)
recall = tp/(tp+fn)
print("F1 Score : 2 * P * R / (P + R) = ", 2 * precision * recall / (precision + recall))
print("False Positive rate : FP / (FP + TN) = ", fp/(fp+tn))
from sklearn.metrics import precision_recall_curve
precision, recall, _ = precision_recall_curve(Y_test.ravel(), yp.ravel())
lw = 2
plt.plot(recall, precision, lw=lw, color='navy', label='Precision-Recall curve')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('Precision-Recall')
plt.legend(loc="lower left")
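As a cross-check on the hand-built criteria above, scikit-learn can compute the same quantities directly from class probabilities and hard predictions. A small sketch on made-up labels and scores (the toy arrays are stand-ins for yt and probas[:, 1], not the credit data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score

# Hypothetical toy data: true labels and P(class 1) scores
y_true = np.array([0, 0, 1, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])

auc_direct = roc_auc_score(y_true, scores)      # AUC straight from the scores
y_pred = (scores >= 0.5).astype(int)            # hard predictions at threshold 0.5
print(auc_direct)                               # 8/9, i.e. ~0.889
print(precision_score(y_true, y_pred))          # 1.0
print(recall_score(y_true, y_pred))             # 2/3
print(f1_score(y_true, y_pred))                 # 0.8
```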
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Choosing the classifier
Step3: Computing the AUC criterion
Step4: All the criteria are detailed there. Beware of the orientation of the confusion matrix: it changes from one article to another.
Step5: At the top right, TPR and FPR are both 1 (it suffices to always predict positive = no default = Y_hat=0); at the bottom left, TPR and FPR are both 0, because it suffices to always predict the negative situation (or the default, Y_hat = 1).
|
6,186
|
<ASSISTANT_TASK:>
Python Code:
import keras
from keras.models import Sequential
from PIL import Image
import numpy as np
import tarfile
# Download the dataset
url = "https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
import os
import urllib
from urllib.request import urlretrieve
def reporthook(a,b,c):
print("\rdownloading: %5.1f%%"%(a*b*100.0/c), end="")
tar_gz = "cifar-10-python.tar.gz"
if not os.path.isfile(tar_gz):
print('Downloading data from %s' % url)
urlretrieve(url, tar_gz, reporthook=reporthook)
# Read the dataset
# only train and test splits, no validation set
import pickle
train_X=[]
train_y=[]
tar_gz = "cifar-10-python.tar.gz"
with tarfile.open(tar_gz) as tarf:
for i in range(1, 6):
dataset = "cifar-10-batches-py/data_batch_%d"%i
print("load",dataset)
with tarf.extractfile(dataset) as f:
result = pickle.load(f, encoding='latin1')
train_X.extend(result['data']/255)
train_y.extend(result['labels'])
train_X=np.float32(train_X)
train_y=np.int32(train_y)
dataset = "cifar-10-batches-py/test_batch"
print("load",dataset)
with tarf.extractfile(dataset) as f:
result = pickle.load(f, encoding='latin1')
test_X=np.float32(result['data']/255)
test_y=np.int32(result['labels'])
train_Y = np.eye(10)[train_y]
test_Y = np.eye(10)[test_y]
# or
# from keras.datasets import cifar10
# from keras.utils import np_utils
# (train_X, train_y), (test_X, test_y) = cifar10.load_data()
# train_Y = np_utils.to_categorical(train_y, 10)
# test_Y = np_utils.to_categorical(test_y, 10)
train_X.shape
# channels x height x width (color)
3*32*32
from IPython.display import display
def showX(X):
int_X = (X*255).clip(0,255).astype('uint8')
# N*3072 -> N*3*32*32 -> 32 * 32N * 3
int_X_reshape = np.moveaxis(int_X.reshape(-1,3,32,32), 1, 3)
int_X_reshape = int_X_reshape.swapaxes(0,1).reshape(32,-1, 3)
display(Image.fromarray(int_X_reshape))
# training data: the first 20 samples of X
showX(train_X[:20])
print(train_y[:20])
name_array = np.array("飛機、汽車、鳥、貓、鹿、狗、青蛙、馬、船、卡車".split('、'))
print(name_array[train_y[:20]])
# reference answer
# %load q_cifar10_logistic.py
# reference answer
# %load q_cifar10_cnn.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Take a look at the data
Step2: Q
|
6,187
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
import numpy as np
b = phoebe.default_binary()
b.flip_constraint('mass@secondary', solve_for='q')
b.set_value(qualifier='mass', component='secondary', value=0.2)
b.set_value(qualifier='requiv', component='secondary', value=0.2)
b.set_value(qualifier='teff', component='secondary', value=300)
b.add_spot(component='primary',
relteff=0.90,
long=0,
colat=90,
radius=20,
feature='spot01')
b.add_dataset('lc', compute_times=phoebe.linspace(-0.1, 0.1, 201))
b.set_value(qualifier='atm', component='secondary', value='blackbody')
b.set_value(qualifier='ld_mode', component='secondary', value='manual')
anim_times = phoebe.linspace(-0.1, 0.1, 101)
b.add_dataset('mesh', compute_times=anim_times, coordinates='uvw', columns='teffs')
b.run_compute(distortion_method='sphere', irrad_method='none')
print(np.min(b.get_value('teffs', time=0.0, component='primary')), np.max(b.get_value('teffs', time=0.0, component='primary')))
afig, mplfig = b.plot(time=0.0,
fc='teffs', fcmap='plasma', fclim=(5000, 6000),
ec='face',
tight_layout=True,
show=True)
afig, mplfig = b.plot(times=anim_times,
fc='teffs', fcmap='plasma', fclim=(5000, 6000),
ec='face',
consider_for_limits={'primary': True, 'secondary': False},
tight_layout=True, pad_aspect=False,
animate=True,
save='spot_transit.gif',
save_kwargs={'writer': 'imagemagick'})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Let's set reasonable (although not necessarily physical) values for the secondary component.
Step3: We'll add a spot to the primary component.
Step4: Adding Datasets
Step5: Because we have such a cool transiting object, we'll have to use blackbody atmospheres and manually provide limb-darkening.
Step6: Running Compute
Step7: Plotting
Step8: Let's go through these options (see also the plot API docs)
Step9: Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions
|
6,188
|
<ASSISTANT_TASK:>
Python Code:
# Installation
#!pip install boruta
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy
def load_data():
# URLS for dataset via UCI
train_data_url='https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/MADELON/madelon_train.data'
train_label_url='https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/MADELON/madelon_train.labels'
X_data = pd.read_csv(train_data_url, sep=" ", header=None)
y_data = pd.read_csv(train_label_url, sep=" ", header=None)
data = X_data.loc[:, :499]
data['target'] = y_data[0]
return data
data = load_data()
data.head()
y = data.pop('target')
X = data.copy().values
rf = RandomForestClassifier(n_jobs=-1, class_weight=None, max_depth=7, random_state=0)
# Define Boruta feature selection method
feat_selector = BorutaPy(rf, n_estimators='auto', verbose=2, random_state=0)
feat_selector.fit(X, y)
# Check selected features
print(feat_selector.support_)
# Select the chosen features from our dataframe.
selected = X[:, feat_selector.support_]
print ("")
print ("Selected Feature Matrix Shape")
print (selected.shape)
feat_selector.ranking_
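support_ is a boolean mask over the feature columns and ranking_ assigns 1 to confirmed features; a dependency-light sketch of how such a mask selects columns (the mask and ranking values here are invented, not real Boruta output):

```python
import numpy as np

support = np.array([True, False, True, False])   # hypothetical support_ mask
ranking = np.array([1, 3, 1, 2])                 # hypothetical ranking_ (1 = confirmed)
X_toy = np.arange(12).reshape(3, 4)              # 3 samples x 4 features

print(np.flatnonzero(support))                   # indices of kept features: [0 2]
print(X_toy[:, support].shape)                   # reduced matrix: (3, 2)
print(int((ranking == 1).sum()))                 # number of confirmed features: 2
```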
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Boruta conforms to the sklearn api and can be used in a Pipeline as well as on it's own. Here we will demonstrate stand alone operation.
Step2: Once built, we can use this object to identify the relevant features in our dataset.
Step3: Boruta has confirmed only a few features as useful. When our run ended, Boruta was undecided on 2 features.
Step4: We can also interrogate the ranking of the unselected features with .ranking_
|
6,189
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
import IPython
!pip install -U keras-tuner
import kerastuner as kt
(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()
# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
def model_builder(hp):
model = keras.Sequential()
model.add(keras.layers.Flatten(input_shape=(28, 28)))
# Tune the number of units in the first Dense layer
# Choose an optimal value between 32-512
hp_units = hp.Int('units', min_value = 32, max_value = 512, step = 32)
model.add(keras.layers.Dense(units = hp_units, activation = 'relu'))
model.add(keras.layers.Dense(10))
# Tune the learning rate for the optimizer
# Choose an optimal value from 0.01, 0.001, or 0.0001
hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4])
model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate),
loss = keras.losses.SparseCategoricalCrossentropy(from_logits = True),
metrics = ['accuracy'])
return model
tuner = kt.Hyperband(model_builder,
objective = 'val_accuracy',
max_epochs = 10,
factor = 3,
directory = 'my_dir',
project_name = 'intro_to_kt')
class ClearTrainingOutput(tf.keras.callbacks.Callback):
def on_train_end(*args, **kwargs):
IPython.display.clear_output(wait = True)
tuner.search(img_train, label_train, epochs = 10, validation_data = (img_test, label_test), callbacks = [ClearTrainingOutput()])
# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0]
print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
# Build the model with the optimal hyperparameters and train it on the data
model = tuner.hypermodel.build(best_hps)
model.fit(img_train, label_train, epochs = 10, validation_data = (img_test, label_test))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction to the Keras Tuner
Step2: Install and import the Keras Tuner.
Step3: Download and prepare the dataset
Step4: Define the model
Step5: Instantiate the tuner and perform hypertuning
Step6: The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. It does this with a sports-championship-style bracket: the algorithm trains a large number of models for a few epochs and carries only the best-performing half forward to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log<sub><code>factor</code></sub>(max_epochs) and rounding it to the nearest integer.
Step8: Run the hyperparameter search. Besides the callback above, the arguments of the search method are the same as those used for tf.keras.model.fit.
Step9: To finish this tutorial, retrain the model with the optimal hyperparameters from the search.
|
6,190
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn import preprocessing
import numpy as np
# Create feature
x = np.array([[-500.5],
[-100.1],
[0],
[100.1],
[900.9]])
# Create scaler
minmax_scale = preprocessing.MinMaxScaler(feature_range=(0, 1))
# Scale feature
x_scale = minmax_scale.fit_transform(x)
# Show feature
x_scale
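Min-max scaling is just x' = (x − min) / (max − min) applied per feature; a quick by-hand check of the scaler on the same array:

```python
import numpy as np

x = np.array([[-500.5], [-100.1], [0], [100.1], [900.9]])
manual = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
print(manual.ravel())   # the minimum maps to 0.0 and the maximum to 1.0
```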
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Feature
Step2: Rescale Feature Using Min-Max
|
6,191
|
<ASSISTANT_TASK:>
Python Code:
# Initialize
import pandas as pd
import numpy as np
import sys # used in the bare except clause below
import pip #needed to use the pip functions
# Show versions of all installed software to help debug incompatibilities.
for i in pip.get_installed_distributions(local_only=True):
print(i)
try:
df_label_vendors = pd.io.parsers.read_csv(
"/home/jovyan/work/shared/data/csv/label_vendors.csv",
error_bad_lines=False,
warn_bad_lines=True,
quotechar='"',
encoding='utf-8')
except IOError as e:
print('\n\n***I/O error({0}): {1}\n\n'.format(
e.errno, e.strerror))
# except ValueError:
# self.logger.critical('Could not convert data to an integer.')
except:
print(
'\n\n***Unexpected error: {0}\n\n'.format(
sys.exc_info()[0]))
raise
# Number of records / columns
df_label_vendors.shape
# Print out some sample values
df_label_vendors.sample(5)
# Check that all rows are labelled
# (Should return "False")
df_label_vendors['match'].isnull().any()
# Format training data as "X" == "features, "y" == target.
# The target value is the 1st column.
df_match_train1 = df_label_vendors[['match','fz_ptl_ratio', 'fz_ptl_tok_sort_ratio', 'fz_ratio', 'fz_tok_set_ratio', 'fz_uwratio','ven_len', 'pu0_len']]
# Convert into 2 numpy arrays for the scikit-learn ML classification algorithms.
np_match_train1 = np.asarray(df_match_train1)
X, y = np_match_train1[:, 1:], np_match_train1[:, 0]
print(X.shape, y.shape)
# Now find optimum parameters for model using Grid Search
from time import time
from scipy.stats import randint as sp_randint
from sklearn.model_selection import RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
# build a classifier
clf = RandomForestClassifier()
# Utility function to report best scores
def report(results, n_top=3):
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results['rank_test_score'] == i)
for candidate in candidates:
print("Model with rank: {0}".format(i))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
results['mean_test_score'][candidate],
results['std_test_score'][candidate]))
print("Parameters: {0}".format(results['params'][candidate]))
print("")
# specify parameters and distributions to sample from
param_dist = {"n_estimators": sp_randint(20, 100),
"max_depth": [3, None],
"max_features": sp_randint(1,7),
"min_samples_split": sp_randint(2,7),
"min_samples_leaf": sp_randint(1, 7),
"bootstrap": [True, False],
"class_weight": ['auto', None],
"criterion": ["gini", "entropy"]}
# run randomized search
n_iter_search = 40
random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
n_iter=n_iter_search)
start = time()
random_search.fit(X, y)
print("RandomizedSearchCV took %.2f seconds for %d candidates"
" parameter settings." % ((time() - start), n_iter_search))
report(random_search.cv_results_)
clf = RandomForestClassifier(
bootstrap=True,
min_samples_leaf=2,
n_estimators=40,
min_samples_split=4,
criterion='entropy',
max_features=3,
max_depth=3,
class_weight=None
)
# Train model on original training data
clf.fit(X, y)
# save model for future use
from sklearn.externals import joblib
joblib.dump(clf, '/home/jovyan/work/shared/data/models/vendor_classif_trained_Rdm_Forest.pkl.z')
# Test loading
clf = joblib.load('/home/jovyan/work/shared/data/models/vendor_classif_trained_Rdm_Forest.pkl.z' )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the vendor training data
Step2: Use a grid search to tune the ML algorithm
Step3: Run the ML classifier with optimum parameters on the test data
|
6,192
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
6,193
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_excel("https://github.com/chris1610/pbpython/blob/master/data/sample-sales-reps.xlsx?raw=true")
df.head()
df["commission"] = .02
df.head()
df.loc[df["category"] == "Shirt", ["commission"]] = .025
df.head()
df.loc[(df["category"] == "Belt") & (df["quantity"] >= 10), ["commission"]] = .04
df.head()
df["bonus"] = 0
df.loc[(df["category"] == "Shoes") & (df["ext price"] >= 1000 ), ["bonus", "commission"]] = 250, 0.045
df.loc[3:7]  # .ix is removed in modern pandas; .loc gives the same label-based slice here
df["comp"] = df["commission"] * df["ext price"] + df["bonus"]
df.head()
df.groupby(["sales rep"])["comp"].sum().round(2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Add a default commision rate of 2% for all sales
Step2: Since shirts are high margin, adjust all products in the shirt category to a commission rate of 2.5%
Step3: Since there is a special program for selling 10 or more belts in a transaction, you get 4% commission!
Step4: Finally, some transactions can get a bonus and a commission increase.
Step5: Calculate the compensation at the line item level
Step6: Calculate the commissions by sales rep
|
6,194
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
people = graphlab.SFrame('people_wiki.gl/')
people.head()
len(people)
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
clooney = people[people['name'] == 'George Clooney']
clooney['text']
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print(obama['word_count'])
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf
people['tfidf'] = tfidf['docs']
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
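graphlab.distances.cosine returns 1 − cosine similarity between the sparse tf-idf dictionaries. A dependency-free sketch of the same computation (the toy vectors here are invented, not real tf-idf rows):

```python
import math

def cosine_distance(a, b):
    # a, b: dicts mapping word -> tf-idf weight (sparse vectors)
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return 1.0 - dot / (norm_a * norm_b)

print(cosine_distance({'obama': 1.0, 'president': 2.0},
                      {'obama': 1.0, 'president': 2.0}))  # identical vectors -> ~0.0
print(cosine_distance({'a': 1.0}, {'b': 1.0}))            # no shared words -> 1.0
```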
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
knn_model.query(obama)
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load some text data - from wikipedia, pages on people
Step2: Data contains
Step3: Explore the dataset and checkout the text it contains
Step4: Exploring the entry for actor George Clooney
Step5: Get the word counts for Obama article
Step6: Sort the word counts for the Obama article
Step7: Sorting the word counts to show most common words at the top
Step8: Most common words include uninformative words like "the", "in", "and",...
Step9: Examine the TF-IDF for the Obama article
Step10: Words with highest TF-IDF are much more informative.
Step11: Is Obama closer to Clinton than to Beckham?
Step12: Build a nearest neighbor model for document retrieval
Step13: Applying the nearest-neighbors model for retrieval
Step14: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
|
6,195
|
<ASSISTANT_TASK:>
Python Code:
# ### uncomment below if you want...
# ## ... copious amounts of logging info
# import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
# rootLogger = logging.getLogger()
# rootLogger.setLevel(logging.INFO)
# ## ... or auto-reload of gensim during development
# %load_ext autoreload
# %autoreload 2
import re
contractions = re.compile(r"'|-|\"")
# all non alphanumeric
symbols = re.compile(r'(\W+)', re.U)
# single character removal
singles = re.compile(r'(\s\S\s)', re.I|re.U)
# separators (any whitespace)
seps = re.compile(r'\s+')
# cleaner (order matters)
def clean(text):
text = text.lower()
text = contractions.sub('', text)
text = symbols.sub(r' \1 ', text)
text = singles.sub(' ', text)
text = seps.sub(' ', text)
return text
# sentence splitter
alteos = re.compile(r'([!\?])')
def sentences(l):
l = alteos.sub(r' \1 .', l).rstrip("(\.)*\n")
return l.split(".")
from zipfile import ZipFile
import json
def YelpReviews(label):
with ZipFile("yelp_%s_set.zip"%label, 'r') as zf:
with zf.open("yelp_%s_set/yelp_%s_set_review.json"%(label,label)) as f:
for line in f:
if type(line) is bytes:
line = line.decode('utf-8')
rev = json.loads(line)
yield {'y':rev['stars'],\
'x':[clean(s).split() for s in sentences(rev['text'])]}
next(YelpReviews("test"))
revtrain = list(YelpReviews("training"))
print(len(revtrain), "training reviews")
## and shuffle just in case they are ordered
import numpy as np
np.random.shuffle(revtrain)
def StarSentences(reviews, stars=[1,2,3,4,5]):
for r in reviews:
if r['y'] in stars:
for s in r['x']:
yield s
from gensim.models import Word2Vec
import multiprocessing
## create a w2v learner
basemodel = Word2Vec(
workers=multiprocessing.cpu_count(), # use your cores
iter=3, # iter = sweeps of SGD through the data; more is better
hs=1, negative=0 # we only have scoring for the hierarchical softmax setup
)
print(basemodel)
basemodel.build_vocab(StarSentences(revtrain))
from copy import deepcopy
starmodels = [deepcopy(basemodel) for i in range(5)]
for i in range(5):
slist = list(StarSentences(revtrain, [i+1]))
print(i+1, "stars (", len(slist), ")")
starmodels[i].train( slist, total_examples=len(slist) )
"""
docprob takes two lists
* docs: a list of documents, each of which is a list of sentences
* mods: the candidate word2vec models (each potential class)
it returns the array of class probabilities. Everything is done in-memory.
"""
import pandas as pd # for quick summing within doc
def docprob(docs, mods):
# score() takes a list [s] of sentences here; could also be a sentence generator
sentlist = [s for d in docs for s in d]
# the log likelihood of each sentence in this review under each w2v representation
llhd = np.array( [ m.score(sentlist, len(sentlist)) for m in mods ] )
# now exponentiate to get likelihoods,
lhd = np.exp(llhd - llhd.max(axis=0)) # subtract row max to avoid numeric overload
# normalize across models (stars) to get sentence-star probabilities
prob = pd.DataFrame( (lhd/lhd.sum(axis=0)).transpose() )
# and finally average the sentence probabilities to get the review probability
prob["doc"] = [i for i,d in enumerate(docs) for s in d]
prob = prob.groupby("doc").mean()
return prob
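The llhd.max(axis=0) subtraction inside docprob is the standard exp-normalize trick: log-likelihoods of whole sentences are so negative that exponentiating them directly underflows to zero. A small numeric demonstration with made-up values:

```python
import numpy as np

llhd = np.array([[-1000.0, -1001.0],      # rows: candidate models
                 [-1002.0, -1000.5]])     # columns: sentences

naive = np.exp(llhd)                      # every entry underflows to 0.0
stable = np.exp(llhd - llhd.max(axis=0))  # shift each column by its max first
prob = stable / stable.sum(axis=0)        # per-sentence model probabilities

print(naive.sum())       # 0.0 -> naive normalization would divide by zero
print(prob.sum(axis=0))  # [1. 1.]
```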
# read in the test set
revtest = list(YelpReviews("test"))
# get the probs (note we give docprob a list of lists of words, plus the models)
probs = docprob( [r['x'] for r in revtest], starmodels )
import matplotlib
%matplotlib inline
probpos = pd.DataFrame({"out-of-sample prob positive":probs[[3,4]].sum(axis=1),
"true stars":[r['y'] for r in revtest]})
probpos.boxplot("out-of-sample prob positive",by="true stars", figsize=(12,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we define a super simple parser
Step2: And put everything together in a review generator that provides tokenized sentences and the number of stars for every review.
Step3: For example
Step4: Now, since the files are small we'll just read everything into in-memory lists. It takes a minute ...
Step5: Finally, write a function to generate sentences -- ordered lists of words -- from reviews that have certain star ratings
Step6: Word2Vec modeling
Step7: Build vocab from all sentences (you could also pre-train the base model from a neutral or un-labeled vocabulary)
Step8: Now, we will deep copy each base model and do star-specific training. This is where the big computations happen...
Step10: Inversion of the distributed representations
Step11: Test set example
|
6,196
|
<ASSISTANT_TASK:>
Python Code:
import xml.etree.ElementTree as ET
parameter_values = (('num_sweeps', '30'),
('num_simulations', '1'),
('num_banks', '1'),
('num_firms', '1'),
('num_households', '1'),
('bank_directory', 'agents/banks/'),
('firm_directory', 'agents/firms/'),
('household_directory', 'agents/households'),
('measurement_config', 'measurements/test_output.xml')
)
environment = ET.Element('environment')
parameters = []
for x in range(len(parameter_values)):
parameters.append(ET.SubElement(environment, 'parameter'))
for idx, p in enumerate(parameters):
p.set('type', 'static')
p.set('name', parameter_values[idx][0])
p.set('value', parameter_values[idx][1])
xml_params = ET.tostring(environment, encoding="unicode")
myfile = open("environments/solow_parameters.xml", "w")
myfile.write(xml_params)
myfile.close()
import logging
import os
from src.environment import Environment
from src.runner import Runner
log_directory = "log/"
identifier = "test_all_methods"
environment_directory = "environments/"
if not os.path.exists('log'):
os.makedirs('log')
logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %H:%M:%S',
filename=log_directory + identifier + ".log", level=logging.INFO)
logging.info('START logging for run: %s', environment_directory + identifier + ".xml")
environment = Environment(environment_directory, identifier)
runner = Runner(environment)
for i in range(int(environment.num_simulations)):
logging.info(' STARTED with run %s', str(i))
environment.initialize(environment_directory, identifier)
runner.initialize(environment)
runner.do_run(environment)
logging.info(' DONE')
%matplotlib inline
import pandas as pd
#import numpy as np
import matplotlib.pyplot as plt
solow_data = pd.read_csv('measurements/TestMeasurement.csv', index_col=0)
solow_data.head(3)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10,6))
ax.plot(solow_data.index, solow_data)
ax.set_xlabel('Time', fontsize='14')
ax.set_ylabel('Capital', fontsize='14')
fig.savefig('solow_capital.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below you will find the parameter inputs for this model.
Step2: To translate this to an xml document, which the framework can read, we first need to create an ET element.
Step3: And create parameter elements and give them the values seen below to add to the tree object.
Step4: This object can now be transformed to a string and written to an xml file using the code below.
Step5: Now, the parameter xml file has been generated and the model can be run. Before running the model, you need to import a couple of extra modules. These are the Python logging module, the Black Rhino environment class, and the Black Rhino runner class.
Step6: Then, before running the model, the logger needs to be initialized.
Step7: First, the environment will be initialized. It contains the model parameters and variables. The environment also contains a list of agents. In the Solow model, the environment contains one bank, one firm, and one household. Finally, it also contains initialized transactions for these agents.
Step8: Next up, the runner object is created. As the name suggests, the runner object handles the running of the model. But that is not all, it also initializes the updater.
Step9: Now you can actually run the model. The agent-based model is simulated using Monte Carlo methods, which means the model will be run multiple times (num_simulations). For every simulation run, the environment and runner will be (re-)initialized. Then, the runner will run the Solow model. During the run, the runner will write data about capital to a csv file specified within the Measurement class and print out all the transactions happening and the books of firm and household at the end of each sweep.
Step10: Now that the model has been run, you will be able to analyse its outputs. If you are running the model from an IPython notebook, the following command will set the notebook to depict the output inside the notebook.
Step11: Furthermore, you should import the modules to create graphs (matplotlib) and dataframes (pandas).
Step12: Running the model has generated a csv file that contains its output. We can use pandas to read the csv.
Step13: The dataframe looks as follows
Step14: The Solow growth model is expected to produce an increase in total wealth that slows over time. In this version of the model, this means that there is deposit growth until the equilibrium is reached. This is exactly what we observe.
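The slowing-growth claim in the last step can be illustrated with the textbook Solow accumulation equation. The parameters below (savings rate `s`, depreciation `delta`, capital share `alpha`) are illustrative assumptions, not values exposed by the Black Rhino configuration above:

```python
# Textbook Solow model: k_{t+1} = k_t + s*k_t**alpha - delta*k_t
s, delta, alpha = 0.3, 0.1, 0.33   # assumed, for illustration only

k = 1.0
path = []
for _ in range(200):
    k = k + s * k**alpha - delta * k   # capital accumulation step
    path.append(k)

k_star = (s / delta) ** (1 / (1 - alpha))  # analytic steady state
# capital converges to k_star and the per-step increments shrink over time
print(abs(path[-1] - k_star) < 1e-3)
```

The simulated deposits in the framework show the same qualitative shape: fast growth early on that levels off at the equilibrium.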
|
6,197
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
import numpy as np
import matplotlib.pyplot as plt
def f(x):
r = np.exp(-1 / x ** 2)
r[x == 0] = 0
return r
rg = np.linspace(-10, 10, 401)
plt.plot(rg, f(rg))
nexpansion = 6  # half the maximum polynomial degree; module scope so the plotting loops below can reference it
def get_expansion(sigma):
N = 1000
X = sigma * np.random.randn(N)
y = f(X)
Xs = []
coefficients = []
# Collect even powers of $x$ to form a polynomial regression.
Rs = []
for i in range(0, 2 * (nexpansion + 1), 2):
Rs.append(rg ** i)
Xs.append(X ** i)
# Tiny amount of trickery: orthogonalize the columns of our design matrix.
# This way, lower power coefficients won't change when we add higher power coefficients.
X = np.array(Xs).T
r = np.sqrt((X ** 2).sum(0, keepdims=True))
X = X / r
R = np.array(Rs).T
R = R / r
for i in range(1, X.shape[1]):
w = X[:, :i].T.dot(X[:, i])
X[:, i] -= X[:, :i].dot(w)
a = np.sqrt((X[:, i] ** 2).sum(0))
X[:, i] /= a
R[:, i] -= R[:, :i].dot(w)
R[:, i] /= a
# Check that the design matrix is indeed orthonormal.
np.testing.assert_allclose(X.T.dot(X), np.eye(X.shape[1]), atol=1E-6)
# Perform the polynomial regression. Note that we don't have to invert X.T.dot(X),
# since that is the identity matrix.
w_hat = (X.T @ y)
return w_hat, R
w_hat, R = get_expansion(1.0)
leg = ["True function"]
plt.plot(rg, f(rg))
for i in range(1, nexpansion+1):
plt.plot(rg, R[:, :i].dot(w_hat[:i]))
leg.append("degree %d" % ((i - 1) * 2))
plt.legend(leg)
plt.ylim((-.2, 1.2))
plt.xlim((-5, 5))
plt.title('Approximating exp(-1/x^2) with polynomials')
w_hat, R = get_expansion(2.0)
leg = ["True function"]
plt.plot(rg, f(rg))
for i in range(1, nexpansion+1):
plt.plot(rg, R[:, :i].dot(w_hat[:i]))
leg.append("degree %d" % ((i - 1) * 2))
plt.legend(leg)
plt.ylim((-.2, 1.2))
plt.xlim((-5, 5))
plt.title('Approximating exp(-1/x^2) with polynomials, $\sigma$ = 2')
w_hat, R = get_expansion(0.01)
leg = ["True function"]
plt.plot(rg, f(rg))
for i in range(1, nexpansion+1):
plt.plot(rg, R[:, :i].dot(w_hat[:i]))
leg.append("degree %d" % ((i - 1) * 2))
plt.legend(leg)
plt.ylim((-.2, 1.2))
plt.xlim((-5, 5))
plt.title('Approximating exp(-1/x^2) with polynomials, $\sigma$ = .01')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Doesn't look too promising! It's very flat at 0. All of its derivatives are exactly 0. However, even though its Taylor expansion is null, we can still form a perfectly good local polynomial approximation, that is, approximating the function as the sum of a constant, a linear trend, a quadratic, etc. The idea here is to probe the function at normally distributed locations around 0. We then perform a polynomial regression to approximate the function at these randomly chosen locations. This is equivalent to minimizing the expected sum-of-squares error
Step2: Even though the function has no derivatives at 0, it can still be approximated by a polynomial! In the Taylor expansion, we only probe the function at 0. However, in the polynomial regression, we probe it at multiple points. While the Taylor expansion is unique once the degree is known, there are an infinite number of polynomial expansions that minimize different empirical risks. We can choose to minimize the empirical risk over a larger range of values of x by probing with normally distributed x values with a larger $\sigma$
Step3: Now the expansion cares relatively less about the center. What if we set $\sigma = .01$?
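The same probing idea fits in a few lines with `np.polyfit`, which solves the identical least-squares problem directly (a sketch, not the orthogonalized design-matrix construction used above):

```python
import numpy as np

def f(x):
    r = np.exp(-1.0 / x**2)
    r[x == 0] = 0.0
    return r

rng = np.random.RandomState(0)
x = rng.randn(1000)              # probe at N(0, 1) locations
y = f(x)

coeffs = np.polyfit(x, y, deg=6)           # least-squares polynomial fit
resid = np.mean((np.polyval(coeffs, x) - y) ** 2)
# a richer basis can only lower the residual relative to the best constant
print(resid < np.var(y))
```

Changing the standard deviation of `x` reweights which region of the function the fit cares about, exactly as with the $\sigma$ parameter above.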
|
6,198
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
from functools import partial
import numpy as np
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.diagnostics import summary
from numpyro.infer import MCMC, NUTS
assert numpyro.__version__.startswith("0.9.2")
# NB: replace cpu by gpu to run this notebook on gpu
numpyro.set_platform("cpu")
def run_inference(
model, num_warmup=1000, num_samples=1000, max_tree_depth=10, dense_mass=False
):
kernel = NUTS(model, max_tree_depth=max_tree_depth, dense_mass=dense_mass)
mcmc = MCMC(
kernel,
num_warmup=num_warmup,
num_samples=num_samples,
num_chains=1,
progress_bar=False,
)
mcmc.run(random.PRNGKey(0))
summary_dict = summary(mcmc.get_samples(), group_by_chain=False)
# print the largest r_hat for each variable
for k, v in summary_dict.items():
spaces = " " * max(12 - len(k), 0)
print("[{}] {} \t max r_hat: {:.4f}".format(k, spaces, np.max(v["r_hat"])))
# In this unreparameterized model some of the parameters of the distributions
# explicitly depend on other parameters (in particular beta depends on lambdas and tau).
# This kind of coordinate system can be a challenge for HMC.
def _unrep_hs_model(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
betas = numpyro.sample("betas", dist.Normal(scale=tau * lambdas))
mean_function = jnp.dot(X, betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
# In this reparameterized model none of the parameters of the distributions
# explicitly depend on other parameters. This model is exactly equivalent
# to _unrep_hs_model but is expressed in a different coordinate system.
def _rep_hs_model1(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
unscaled_betas = numpyro.sample(
"unscaled_betas", dist.Normal(scale=jnp.ones(X.shape[1]))
)
scaled_betas = numpyro.deterministic("betas", tau * lambdas * unscaled_betas)
mean_function = jnp.dot(X, scaled_betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
from numpyro.infer.reparam import LocScaleReparam
# LocScaleReparam with centered=0 fully "decenters" the prior over betas.
config = {"betas": LocScaleReparam(centered=0)}
# The coordinate system of this model is equivalent to that in _rep_hs_model1 above.
_rep_hs_model2 = numpyro.handlers.reparam(_unrep_hs_model, config=config)
from numpyro.distributions.transforms import AffineTransform
from numpyro.infer.reparam import TransformReparam
# In this reparameterized model none of the parameters of the distributions
# explicitly depend on other parameters. This model is exactly equivalent
# to _unrep_hs_model but is expressed in a different coordinate system.
def _rep_hs_model3(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
# instruct NumPyro to do the reparameterization automatically.
reparam_config = {"betas": TransformReparam()}
with numpyro.handlers.reparam(config=reparam_config):
betas_root_variance = tau * lambdas
# in order to use TransformReparam we have to express the prior
# over betas as a TransformedDistribution
betas = numpyro.sample(
"betas",
dist.TransformedDistribution(
dist.Normal(0.0, jnp.ones(X.shape[1])),
AffineTransform(0.0, betas_root_variance),
),
)
mean_function = jnp.dot(X, betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
# create fake dataset
X = np.random.RandomState(0).randn(100, 500)
Y = X[:, 0]
print("unreparameterized model (very bad r_hats)")
run_inference(partial(_unrep_hs_model, X, Y))
print("\nreparameterized model with manual reparameterization (good r_hats)")
run_inference(partial(_rep_hs_model1, X, Y))
print("\nreparameterized model with LocScaleReparam (good r_hats)")
run_inference(partial(_rep_hs_model2, X, Y))
print("\nreparameterized model with TransformReparam (good r_hats)")
run_inference(partial(_rep_hs_model3, X, Y))
# Because rho is very close to 1.0 the posterior geometry
# is extremely skewed and using the "diagonal" coordinate system
# implied by dense_mass=False leads to bad results
rho = 0.9999
cov = jnp.array([[10.0, rho], [rho, 0.1]])
def mvn_model():
numpyro.sample("x", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov))
print("dense_mass = False (bad r_hat)")
run_inference(mvn_model, dense_mass=False, max_tree_depth=3)
print("dense_mass = True (good r_hat)")
run_inference(mvn_model, dense_mass=True, max_tree_depth=3)
rho = 0.9
cov = jnp.array([[10.0, rho], [rho, 0.1]])
# In this model x1 and x2 are highly correlated with one another
# but not correlated with y at all.
def partially_correlated_model():
x1 = numpyro.sample(
"x1", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov)
)
x2 = numpyro.sample(
"x2", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov)
)
y = numpyro.sample("y", dist.Normal(jnp.zeros(100), 1.0))
numpyro.sample("obs", dist.Normal(x1 - x2, 0.1), jnp.ones(2))
print("dense_mass = False (very bad r_hats)")
run_inference(partially_correlated_model, dense_mass=False, max_tree_depth=3)
print("\ndense_mass = True (bad r_hats)")
run_inference(partially_correlated_model, dense_mass=True, max_tree_depth=3)
# We use dense_mass=[("x1", "x2")] to specify
# a structured mass matrix in which the y-part of the mass matrix is diagonal
# and the (x1, x2) block of the mass matrix is full-rank.
# Graphically:
#
# x1 x2 y
# x1 | * * 0 |
# x2 | * * 0 |
# y | 0 0 * |
print("\nstructured mass matrix (good r_hats)")
run_inference(partially_correlated_model, dense_mass=[("x1", "x2")], max_tree_depth=3)
# Because rho is very close to 1.0 the posterior geometry is extremely
# skewed and using small max_tree_depth leads to bad results.
rho = 0.999
dim = 200
cov = rho * jnp.ones((dim, dim)) + (1 - rho) * jnp.eye(dim)
def mvn_model():
x = numpyro.sample(
"x", dist.MultivariateNormal(jnp.zeros(dim), covariance_matrix=cov)
)
print("max_tree_depth = 5 (bad r_hat)")
run_inference(mvn_model, max_tree_depth=5)
print("max_tree_depth = 10 (good r_hat)")
run_inference(mvn_model, max_tree_depth=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We begin by writing a helper function to do NUTS inference.
Step2: Evaluating HMC/NUTS
Step3: To deal with the bad geometry that results from this coordinate system we change coordinates using the following re-write logic.
Step4: Next we do the reparameterization using numpyro.infer.reparam.
Step5: To show the versatility of the numpyro.infer.reparam library let's do the reparameterization using TransformReparam instead.
Step6: Finally we verify that _rep_hs_model1, _rep_hs_model2, and _rep_hs_model3 do indeed achieve better r_hats than _unrep_hs_model.
Step7: Aside
Step8: Example #3
Step9: Now let's compare two choices of dense_mass.
Step10: max_tree_depth
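All three reparameterized models rely on the same location–scale identity: $\beta = \tau\lambda \cdot z$ with $z \sim N(0, 1)$ has exactly the distribution $N(0, \tau\lambda)$. A plain-NumPy sketch of that identity (NumPyro not required; the names and sizes are illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
n, scale = 200_000, 2.5   # stand-in for a fixed tau * lambda

centered = rng.normal(loc=0.0, scale=scale, size=n)  # sample beta directly
z = rng.normal(size=n)                               # sample the unscaled variable
noncentered = scale * z                              # rescale deterministically

# Same distribution, different coordinate system for the sampler.
print(abs(centered.std() - noncentered.std()) < 0.05)
```

In the hierarchical model $\tau$ and $\lambda$ are themselves random, which is exactly why the non-centered coordinates help: the sampler explores `z` and the scales independently instead of navigating a funnel.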
|
6,199
|
<ASSISTANT_TASK:>
Python Code:
# this line is required to see visualizations inline for Jupyter notebook
%matplotlib inline
# importing modules that we need for analysis
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# read the data from file and print out first few rows and columns
thanksgiving = pd.read_csv("thanksgiving.csv", encoding="Latin-1")
thanksgiving.iloc[0:10,0:3]
thanksgiving.columns[:10]
thanksgiving["Do you celebrate Thanksgiving?"].value_counts()
thanksgiving = thanksgiving[thanksgiving["Do you celebrate Thanksgiving?"] == "Yes"]
thanksgiving["Do you celebrate Thanksgiving?"].value_counts()
thanksgiving["What is typically the main dish at your Thanksgiving dinner?"].unique()
thanksgiving["What is typically the main dish at your Thanksgiving dinner?"].value_counts()
thanksgiving["Age"][:10]
thanksgiving["Age"].unique()
def age_to_num(string):
# if nan, return None
if pd.isnull(string):
return None
first_item = string.split(" ")[0]
# if the answer is "60+" return 70
if "+" in first_item:
return 70.0
last_item = string.split(" ")[2]
#return average of the interval
return (int(first_item)+int(last_item))/2
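A few spot checks of the parser may help; this is a standalone copy of `age_to_num` run on answer strings in the survey's "lo - hi" format:

```python
import pandas as pd

def age_to_num(string):
    if pd.isnull(string):
        return None
    first_item = string.split(" ")[0]
    if "+" in first_item:           # open-ended "60+" bucket
        return 70.0
    last_item = string.split(" ")[2]
    return (int(first_item) + int(last_item)) / 2

print(age_to_num("18 - 29"))  # -> 23.5 (midpoint of the interval)
print(age_to_num("60+"))      # -> 70.0
```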
# apply age_to_num function to "Age" column and assign it to new column
thanksgiving["num_age"] = thanksgiving["Age"].apply(age_to_num)
thanksgiving["num_age"].unique()
thanksgiving = thanksgiving[thanksgiving["num_age"].isnull() == False]
thanksgiving["num_age"].describe()
thanksgiving["How far will you travel for Thanksgiving?"].unique()
# for each unique answer, select the rows and calculate the mean value of ages.
local_string = 'Thanksgiving is local--it will take place in the town I live in'
local_rows = thanksgiving[thanksgiving["How far will you travel for Thanksgiving?"] == local_string]
local_age_mean = local_rows["num_age"].mean()
fewhours_string = "Thanksgiving is out of town but not too far--it's a drive of a few hours or less"
fewhours_rows = thanksgiving[thanksgiving["How far will you travel for Thanksgiving?"] == fewhours_string]
fewhours_age_mean = fewhours_rows["num_age"].mean()
home_string = "Thanksgiving is happening at my home--I won't travel at all"
home_rows = thanksgiving[thanksgiving["How far will you travel for Thanksgiving?"] == home_string]
home_age_mean = home_rows["num_age"].mean()
faraway_string = 'Thanksgiving is out of town and far away--I have to drive several hours or fly'
faraway_rows = thanksgiving[thanksgiving["How far will you travel for Thanksgiving?"] == faraway_string]
faraway_age_mean = faraway_rows["num_age"].mean()
print("Local: " + str(local_age_mean))
print("Drive of few hours or less: " + str(fewhours_age_mean))
print("Home: " + str(home_age_mean))
print("Drive of several hours or have to fly: " + str(faraway_age_mean))
x = np.arange(4)+0.75
plt.bar(x,[ fewhours_age_mean, local_age_mean, faraway_age_mean, home_age_mean], width=0.5)
plt.xticks([1,2,3,4], ["Few hours", "Local", "Far away", "Home"])
plt.title("Average Age of People for Different Amounts of Travel on Thanksgiving")
plt.ylabel("Average Age")
plt.xlabel("Travel amount")
plt.show()
thanksgiving2 = pd.read_csv("thanksgiving.csv", encoding="Latin-1")
thanksgiving2 = thanksgiving2[thanksgiving2["Do you celebrate Thanksgiving?"] == "Yes"]
thanksgiving2["How much total combined money did all members of your HOUSEHOLD earn last year?"].unique()
def income_to_num(string):
if pd.isnull(string):
return None
first_item = string.split(" ")[0]
# if the answer is "Prefer not to answer" return none
if first_item == "Prefer":
return None
last_item = string.split(" ")[2]
#if the answer is "$200,000 and up" return 250000
if last_item == "up":
return 250000.0
#remove dollar signs and commas
first_item = first_item.replace("$","")
first_item = first_item.replace(",","")
last_item = last_item.replace("$","")
last_item = last_item.replace(",","")
#return the average of the interval
return (int(first_item)+int(last_item))/2
thanksgiving2["num_income"] = thanksgiving2["How much total combined money did all members of your HOUSEHOLD earn last year?"].apply(income_to_num)
thanksgiving2 = thanksgiving2[thanksgiving2["num_income"].isnull() == False]
thanksgiving2["num_income"].describe()
# for each unique answer, select the rows and calculate the mean value of income.
local_string = 'Thanksgiving is local--it will take place in the town I live in'
local_rows = thanksgiving2[thanksgiving2["How far will you travel for Thanksgiving?"] == local_string]
local_income_mean = local_rows["num_income"].mean()
fewhours_string = "Thanksgiving is out of town but not too far--it's a drive of a few hours or less"
fewhours_rows = thanksgiving2[thanksgiving2["How far will you travel for Thanksgiving?"] == fewhours_string]
fewhours_income_mean = fewhours_rows["num_income"].mean()
home_string = "Thanksgiving is happening at my home--I won't travel at all"
home_rows = thanksgiving2[thanksgiving2["How far will you travel for Thanksgiving?"] == home_string]
home_income_mean = home_rows["num_income"].mean()
faraway_string = 'Thanksgiving is out of town and far away--I have to drive several hours or fly'
faraway_rows = thanksgiving2[thanksgiving2["How far will you travel for Thanksgiving?"] == faraway_string]
faraway_income_mean = faraway_rows["num_income"].mean()
print("Local: " + str(local_income_mean))
print("Drive of few hours or less: " + str(fewhours_income_mean))
print("Home: " + str(home_income_mean))
print("Drive of several hours or have to fly: " + str(faraway_income_mean))
x = np.arange(4)+0.75
plt.bar(x,[ fewhours_income_mean, local_income_mean, faraway_income_mean, home_income_mean], width=0.5)
plt.xticks([1,2,3,4], ["Few hours", "Local", "Far away", "Home"])
plt.title("Average Income of People for Different Amounts of Travel on Thanksgiving")
plt.ylabel("Average Income")
plt.xlabel("Travel amount")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hypothesis 1 - "The most preferred food for Thanksgiving is turkey."
Step2: Let's look at all unique answers given for main dish at Thanksgiving
Step3: A short code can show us which food is the most preferred one
Step4: So as hypothesized, turkey is the most preferred main dish at Thanksgiving dinner.
Step5: As can be seen above, the age column has intervals instead of actual numbers. The unique answers are
Step6: Let's define a function and apply it to "Age" column to cast each answer to a number. (We'll take the average of intervals, and 70 for "60+".)
Step7: We need to get rid of missing values
Step8: Now we have ages, let's look at another survey question about traveling for Thanksgiving.
Step9: Since there are only 4 unique answers, we can calculate a mean age value for each of them.
Step10: Now, let's plot the results to get a better understanding.
Step11: As we can see average age of people who stay home is larger than the ones who travel. However, the mean values of ages are pretty close to each other, so we can't say there is a strong correlation between age and travel distance.
Step12: Let's look at how income data is stored in the dataset.
Step13: Again, we have intervals of values instead of precise values. Let's define a function to get an average, (we will have 250000 for "$200,000 and up").
Step14: We need to get rid of the rows with missing values.
Step15: We will follow the same process that we did for hypothesis 2.
Step16: Let's plot the results
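The four near-identical selection blocks above can be collapsed with `groupby`; a sketch on a tiny hand-made frame (in the real notebook one would group `thanksgiving` by the travel column):

```python
import pandas as pd

# Hypothetical miniature of the survey frame.
df = pd.DataFrame({
    "travel": ["Home", "Local", "Home", "Far away"],
    "num_age": [70.0, 23.5, 50.0, 23.5],
})

# one mean per unique answer, in a single pass
means = df.groupby("travel")["num_age"].mean()
print(means["Home"])  # -> 60.0
```

The resulting Series can be passed straight to `means.plot.bar()` for the same chart with far less code.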
|