mne-tools/mne-tools.github.io | 0.18/_downloads/d71abe904faddac1a89e44f2986e07fa/plot_mne_inverse_label_connectivity.ipynb | bsd-3-clause

# Authors: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Nicolas P. Rougier (graph code borrowed from his matplotlib gallery)
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.connectivity import spectral_connectivity
from mne.viz import circular_layout, plot_connectivity_circle
print(__doc__)
"""
Explanation: Compute source space connectivity and visualize it using a circular graph
This example computes the all-to-all connectivity between 68 regions in
source space based on dSPM inverse solutions and a FreeSurfer cortical
parcellation. The connectivity is visualized using a circular graph which
is ordered based on the locations of the regions in the axial plane.
End of explanation
"""
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Load data
inverse_operator = read_inverse_operator(fname_inv)
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
# Define epochs for left-auditory condition
event_id, tmin, tmax = 1, -0.2, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,
eog=150e-6))
"""
Explanation: Load our data
First we'll load the data we'll use in connectivity estimation. We'll use
the sample MEG data provided with MNE.
End of explanation
"""
# Compute the inverse solution for each epoch. By using "return_generator=True"
# stcs will be a generator object instead of a list.
snr = 1.0 # use lower SNR for single epochs
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method,
pick_ori="normal", return_generator=True)
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels = mne.read_labels_from_annot('sample', parc='aparc',
subjects_dir=subjects_dir)
label_colors = [label.color for label in labels]
# Average the source estimates within each label using sign-flips to reduce
# signal cancellations; here, too, we return a generator
src = inverse_operator['src']
label_ts = mne.extract_label_time_course(stcs, labels, src, mode='mean_flip',
return_generator=True)
fmin = 8.
fmax = 13.
sfreq = raw.info['sfreq'] # the sampling frequency
con_methods = ['pli', 'wpli2_debiased']
con, freqs, times, n_epochs, n_tapers = spectral_connectivity(
label_ts, method=con_methods, mode='multitaper', sfreq=sfreq, fmin=fmin,
fmax=fmax, faverage=True, mt_adaptive=True, n_jobs=1)
# con is a 3D array, get the connectivity for the first (and only) freq. band
# for each method
con_res = dict()
for method, c in zip(con_methods, con):
con_res[method] = c[:, :, 0]
"""
Explanation: Compute inverse solutions and their connectivity
Next, we need to compute the inverse solution for this data. This will return
the sources / source activity that we'll use in computing connectivity. We'll
compute the connectivity in the alpha band of these sources. We can specify
particular frequencies to include in the connectivity with the fmin and
fmax flags. Notice from the status messages how mne-python:
reads an epoch from the raw file
applies SSP and baseline correction
computes the inverse to obtain a source estimate
averages the source estimate to obtain a time series for each label
includes the label time series in the connectivity computation
moves to the next epoch.
This behaviour is because we are using generators. Since we only need to
operate on the data one epoch at a time, using a generator allows us to
compute connectivity in a computationally efficient manner where the amount
of memory (RAM) needed is independent from the number of epochs.
End of explanation
"""
# First, we reorder the labels based on their location in the left hemi
label_names = [label.name for label in labels]
lh_labels = [name for name in label_names if name.endswith('lh')]
# Get the y-location of the label
label_ypos = list()
for name in lh_labels:
idx = label_names.index(name)
ypos = np.mean(labels[idx].pos[:, 1])
label_ypos.append(ypos)
# Reorder the labels based on their location
lh_labels = [label for (yp, label) in sorted(zip(label_ypos, lh_labels))]
# For the right hemi
rh_labels = [label[:-2] + 'rh' for label in lh_labels]
# Save the plot order and create a circular layout
node_order = list()
node_order.extend(lh_labels[::-1]) # reverse the order
node_order.extend(rh_labels)
node_angles = circular_layout(label_names, node_order, start_pos=90,
group_boundaries=[0, len(label_names) / 2])
# Plot the graph using node colors from the FreeSurfer parcellation. We only
# show the 300 strongest connections.
plot_connectivity_circle(con_res['pli'], label_names, n_lines=300,
node_angles=node_angles, node_colors=label_colors,
title='All-to-All Connectivity Left-Auditory '
'Condition (PLI)')
"""
Explanation: Make a connectivity plot
Now, we visualize this connectivity using a circular graph layout.
End of explanation
"""
fig = plt.figure(num=None, figsize=(8, 4), facecolor='black')
no_names = [''] * len(label_names)
for ii, method in enumerate(con_methods):
plot_connectivity_circle(con_res[method], no_names, n_lines=300,
node_angles=node_angles, node_colors=label_colors,
title=method, padding=0, fontsize_colorbar=6,
fig=fig, subplot=(1, 2, ii + 1))
plt.show()
"""
Explanation: Make two connectivity plots in the same figure
We can also assign these connectivity plots to axes in a figure. Below we'll
show the connectivity plot using two different connectivity methods.
End of explanation
"""
# fname_fig = data_path + '/MEG/sample/plot_inverse_connect.png'
# fig.savefig(fname_fig, facecolor='black')
"""
Explanation: Save the figure (optional)
By default matplotlib does not save using the facecolor, even though this was
set when the figure was generated. If the facecolor is not also passed to
savefig, the labels, title, and legend will not be visible in the output PNG file.
End of explanation
"""
maestrotf/pymepps | docs/examples/example_plot_stationnc.ipynb | gpl-3.0

import pymepps
import matplotlib.pyplot as plt
"""
Explanation: Load station data based on NetCDF files
In this example we show how to load station data from NetCDF files.
The data is loaded with the pymepps package. Thanks to Ingo Lange we
can use original data from the Wettermast for this example. In the
following, the data is loaded, plotted, and saved as a JSON file.
End of explanation
"""
wm_ds = pymepps.open_station_dataset('../data/station/wettermast.nc', 'nc')
print(wm_ds)
"""
Explanation: We can use pymepps' top-level open_station_dataset function to open
the Wettermast data. We have to specify the data path and the data type.
End of explanation
"""
t2m = wm_ds.select('TT002_M10')
print(type(t2m))
print(t2m.describe())
"""
Explanation: Now we can extract the temperature at 2 m height using the
select method of the resulting dataset.
End of explanation
"""
t2m.plot()
plt.xlabel('Date')
plt.ylabel('Temperature in °C')
plt.title('Temperature at the Wettermast Hamburg')
plt.show()
"""
Explanation: We can see that the resulting temperature is an ordinary
pandas.Series, so all pandas methods are available, e.g. plotting the
Series.
End of explanation
"""
print(t2m.pp.lonlat)
"""
Explanation: Pymepps uses an accessor to extend the pandas functionality. The
accessor is available as Series.pp. At the moment only a lonlat attribute
and update, save, and load methods are defined, but the set of methods is
planned to grow.
End of explanation
"""
mne-tools/mne-tools.github.io | 0.22/_downloads/2567f25ca4c6b483c12d38184d7fe9d7/plot_decoding_xdawn_eeg.ipynb | bsd-3-clause

# Authors: Alexandre Barachant <alexandre.barachant@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import MinMaxScaler
from mne import io, pick_types, read_events, Epochs, EvokedArray
from mne.datasets import sample
from mne.preprocessing import Xdawn
from mne.decoding import Vectorizer
print(__doc__)
data_path = sample.data_path()
"""
Explanation: XDAWN Decoding From EEG data
ERP decoding with Xdawn :footcite:RivetEtAl2009,RivetEtAl2011. For each event
type, a set of spatial Xdawn filters is trained and applied to the signal.
Channels are concatenated and rescaled to create features vectors that will be
fed into a logistic regression.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
n_filter = 3
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = read_events(event_fname)
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
# Create classification pipeline
clf = make_pipeline(Xdawn(n_components=n_filter),
Vectorizer(),
MinMaxScaler(),
LogisticRegression(penalty='l1', solver='liblinear',
multi_class='auto'))
# Get the labels
labels = epochs.events[:, -1]
# Cross validator
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
# Do cross-validation
preds = np.empty(len(labels))
for train, test in cv.split(epochs, labels):
clf.fit(epochs[train], labels[train])
preds[test] = clf.predict(epochs[test])
# Classification report
target_names = ['aud_l', 'aud_r', 'vis_l', 'vis_r']
report = classification_report(labels, preds, target_names=target_names)
print(report)
# Normalized confusion matrix
cm = confusion_matrix(labels, preds)
cm_normalized = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
# Plot confusion matrix
fig, ax = plt.subplots(1)
im = ax.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues)
ax.set(title='Normalized Confusion matrix')
fig.colorbar(im)
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
fig.tight_layout()
ax.set(ylabel='True label', xlabel='Predicted label')
"""
Explanation: Set parameters and read data
End of explanation
"""
fig, axes = plt.subplots(nrows=len(event_id), ncols=n_filter,
figsize=(n_filter, len(event_id) * 2))
fitted_xdawn = clf.steps[0][1]
tmp_info = epochs.info.copy()
tmp_info['sfreq'] = 1.
for ii, cur_class in enumerate(sorted(event_id)):
cur_patterns = fitted_xdawn.patterns_[cur_class]
pattern_evoked = EvokedArray(cur_patterns[:n_filter].T, tmp_info, tmin=0)
pattern_evoked.plot_topomap(
times=np.arange(n_filter),
time_format='Component %d' if ii == 0 else '', colorbar=False,
show_names=False, axes=axes[ii], show=False)
axes[ii, 0].set(ylabel=cur_class)
fig.tight_layout(h_pad=1.0, w_pad=1.0, pad=0.1)
"""
Explanation: The patterns_ attribute of a fitted Xdawn instance (here from the last
cross-validation fold) can be used for visualization.
End of explanation
"""
frib-high-level-controls/FLAME | examples/flame_demo.ipynb | mit

### import flame module and plotting utilities
import numpy as np
import matplotlib.pyplot as plt
from flame import Machine
# color cycle for the per-charge-state plots below (the notebook's
# original `colors` list is not shown, so the default cycle is a stand-in)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
### specify lattice file location
lat_file = "LS1FS1_lattice.lat"
### read lattice file in
with open(lat_file, 'rb') as inf:
# create lattice data object M
M = Machine(inf)
### Initialize simulation parameters
# states
S = M.allocState({})
### run flame;
# 0, len(M): from the beginning to the end of the lattice
# observe=range(len(M)); return states data at each element
# propagation results assigned to 'results'
results = M.propagate(S, 0, len(M), observe=range(0,len(M)))
"""
Explanation: Simulation with 2 to 5 charge states, extracting data for all charge states
End of explanation
"""
### plot energy
# extract from 'results'
pos = [results[i][1].pos for i in range(len(M))]
ek = [results[i][1].ref_IonEk for i in range(len(M))]
# plot reference energy
plt.plot(pos,ek)
plt.title('reference energy history\n')
plt.xlabel('$z$ [m]')
plt.ylabel('energy [eV/u]')
plt.show()
"""
Explanation: - Plot energy history
End of explanation
"""
### plot x, y centroid and rms of overall beam
# extract from 'results'
pos = [results[i][1].pos for i in range(len(M))]
x,y = np.array([[results[i][1].moment0_env[j] for i in range(len(M))] for j in [0,2]])
xrms,yrms = np.array([[results[i][1].moment0_rms[j] for i in range(len(M))] for j in [0,2]])
# plot x,y centroid
plt.plot(pos,x,label='$x$')
plt.plot(pos,y,label='$y$')
plt.title('centroid orbit history of overall beam')
plt.xlabel('$z$ [m]')
plt.ylabel('centroid [mm]')
plt.legend(loc='best')
plt.show()
# plot x, y rms
plt.plot(pos,xrms,label='$x$')
plt.plot(pos,yrms,label='$y$')
plt.title('rms size history of overall beam')
plt.xlabel('$z$ [m]')
plt.ylabel('rms [mm]')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: - plot x, y centroid and rms of overall beam
End of explanation
"""
### python object data are easy to manage
# plot x centroid and rms
plt.plot(pos,x,label='$x$ centroid')
plt.fill_between(pos, x+xrms, x-xrms, alpha=0.2, label='$x$ envelope')
plt.title('horizontal beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$x$ [mm]')
plt.legend(loc='upper left')
plt.show()
# plot y centroid and rms
plt.plot(pos,y,'g',label='$y$ centroid')
plt.title('vertical beam envelope history')
plt.fill_between(pos, y+yrms, y-yrms,facecolor='g', alpha=0.2, label='$y$ envelope')
plt.xlabel('$z$ [m]')
plt.ylabel('$y$ [mm]')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: - python object data are easy to manage
End of explanation
"""
### plot RMS x, y of each charge state
# extract from results
n_i = len(M.conf()['IonChargeStates'])
n_s = len(M.conf()['Stripper_IonChargeStates'])
pos, x, y, xrms, yrms = [[[] for _ in range(n_i+n_s)] for _ in range(5)]
for i in range(len(M)):
j,o = [n_i,0] if n_i == len(results[i][1].IonZ) else [n_s,n_i]
for k in range(j):
pos[k+o].append(results[i][1].pos)
x[k+o].append(results[i][1].moment0[0,k])
y[k+o].append(results[i][1].moment0[2,k])
xrms[k+o].append(np.sqrt(results[i][1].moment1[0,0,k]))
yrms[k+o].append(np.sqrt(results[i][1].moment1[2,2,k]))
# plot x centroid and rms
cs = ['33','34','76','77','78','79','80']
for i in range(n_i + n_s):
plt.plot(pos[i],x[i],color=colors[i],label='U$^{'+cs[i]+'}$')
plt.fill_between(pos[i],np.array(x[i])+np.array(xrms[i]),
np.array(x[i])-np.array(xrms[i]),
alpha=0.2,facecolor=colors[i])
plt.title('horizontal beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$x$ [mm]')
plt.legend(loc = 'upper left', ncol = 4, fontsize=17)
plt.show()
# plot y centroid and rms
for i in range(n_i + n_s):
plt.plot(pos[i],y[i],color=colors[i],label='U$^{'+cs[i]+'}$')
plt.fill_between(pos[i],np.array(y[i])+np.array(yrms[i]),
np.array(y[i])-np.array(yrms[i]),
alpha=0.2,facecolor=colors[i])
plt.title('vertical beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$y$ [mm]')
plt.legend(loc = 'upper left', ncol = 4, fontsize=17)
plt.show()
"""
Explanation: - plot beam envelope of each charge state
End of explanation
"""
# index of point A (solenoid: ls1_cb06_sol3_d1594_48)
point_A = [i for i in range(len(M)) if M.conf(i)['name']=='ls1_cb06_sol3_d1594_48'][0]
# Simulate from initial point(LS1 entrance) to point A
S2 = M.allocState({})
resultsA = M.propagate(S2, 0, point_A-1, observe = range(point_A))
### Simulate from A to B(FS1 exit) with changing solenoid strength
# Try 3 Bz [T] cases for the solenoid
bzcase = [5.0, 7.5, 10.0]
# make array for results
resultsB = [[] for _ in range(len(bzcase))]
for i,bz in enumerate(bzcase):
# Set new solenoid parameter
M.reconfigure(point_A,{'B': bz})
# clone beam parameter of point S
S3 = S2.clone()
# Simulate from A to B and store results
end_B = len(M)
resultsB[i] = M.propagate(S3, point_A, end_B, observe = range(end_B))
"""
Explanation: Advanced usage
Simulate up to point A and repeat from A to B simulation with changing lattice parameter
<img src="demo2.png">
End of explanation
"""
# extract from results
pos_A = [resultsA[i][1].pos for i in range(point_A-1)]
x_A,y_A = np.array([[resultsA[i][1].moment0_env[j] for i in range(point_A-1)] for j in [0,2]])
xrms_A,yrms_A = np.array([[resultsA[i][1].moment0_rms[j] for i in range(point_A-1)] for j in [0,2]])
dlen = len(resultsB)
pos_B, x_B, y_B, xrms_B, yrms_B = [[[] for _ in range(dlen)] for _ in range(5)]
for k in range(3):
clen = len(resultsB[k])
pos_B[k] = [resultsB[k][i][1].pos for i in range(clen)]
x_B[k], y_B[k] = np.array([[resultsB[k][i][1].moment0_env[j] for i in range(clen)] for j in [0,2]])
xrms_B[k], yrms_B[k] = np.array([[resultsB[k][i][1].moment0_rms[j] for i in range(clen)] for j in [0,2]])
# plot x centroid and rms
plt.plot(pos_A,x_A,'k')
plt.fill_between(pos_A, x_A+xrms_A, x_A-xrms_A, facecolor='k',alpha=0.2)
for k in range(dlen):
plt.plot(pos_B[k], x_B[k], color=colors[k], label='$B_z$='+str(bzcase[k]))
plt.fill_between(pos_B[k], x_B[k]+xrms_B[k], x_B[k]-xrms_B[k], facecolor=colors[k], alpha=0.2)
plt.title('Horizontal beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$x$ [mm]')
plt.legend(loc='lower left', ncol = 3, fontsize=18)
plt.show()
# plot y centroid and rms
plt.plot(pos_A,y_A,'k')
plt.fill_between(pos_A, y_A+yrms_A, y_A-yrms_A, facecolor='k',alpha=0.2)
for k in range(dlen):
plt.plot(pos_B[k],y_B[k], color=colors[k], label='$B_z$='+str(bzcase[k]))
plt.fill_between(pos_B[k], y_B[k]+yrms_B[k], y_B[k]-yrms_B[k], facecolor=colors[k], alpha=0.2)
plt.title('Vertical beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$y$ [mm]')
plt.legend(loc='lower left', ncol = 3, fontsize=18)
plt.show()
"""
Explanation: - plot beam envelope for 3 Bz cases
End of explanation
"""
DominikDitoIvosevic/Uni | STRUCE/SU-2019-LAB00-Python.ipynb | mit

xs = [5, 6, 2, 3]
xs
xs[0]
xs[-1]
xs[2] = 10
xs
xs[0] = "a book"
xs
xs[1] = [3, 4]
xs
xs += [99, 100]
xs
xs.extend([22, 33])
xs
xs[-1]
xs.pop()
xs
len(xs)
xs[0:2]
xs[2:]
xs[:3]
xs[:-2]
for el in xs:
print(el)
for idx, el in enumerate(xs):
print(idx, el)
for idx in range(len(xs)):
print(idx)
for idx in range(2, len(xs)):
print(idx)
for idx in range(0, len(xs), 2):
print(idx)
xs = []
for x in range(10):
xs.append(x ** 2)
xs
[x ** 2 for x in range(10)]
[x ** 2 for x in range(10) if x % 2 == 0]
[x ** 2 if x % 2 == 0 else 512 for x in range(10)]
for a, b in zip([1, 2, 3], [4, 5, 6]):
print(a, b)
for a, b in zip([1, 2, 3], [4, 5, 6, 7]):
print(a, b)
a_list, b_list = zip(*[(1, 4), (2, 3), (5, 6)])
print(a_list)
print(b_list)
"""
Explanation: University of Zagreb
Faculty of Electrical Engineering and Computing
Machine Learning 2019/2020
http://www.fer.unizg.hr/predmet/su
Lab Exercise 0: Introduction to Python
Version: 1.1
Last updated: 27 Sep 2019
(c) 2015-2019 Jan Šnajder, Domagoj Alagić
Published: 30 Sep 2019
Submission deadline: N/A
1. Python
1.1. Lists
End of explanation
"""
cool_name = "Miyazaki"
cool_name
cool_name + " is " + "great"
num_people = 200000
cool_name + " has " + num_people + " citizens"
cool_name + " has " + str(num_people) + " citizens"
len(cool_name)
print("{0} has {1} citizens".format(cool_name, num_people))
"""
Explanation: 1.2. Strings
End of explanation
"""
class Product:
def __init__(self, product_name=None, tags=None, price=0.0):
self.product_name = product_name
self.tags = [] if tags is None else tags
self.price = price
vat = 0.25
def product_price(self, with_pdv=False):
if with_pdv:
return self.price * (1 + self.vat)
else:
return self.price
def contains_tag(self, tag):
return (tag in self.tags)
prod = Product(product_name="toilet paper", tags=["health", "toilet", "fresh"], price=10)
print(prod.product_price(with_pdv=True))
print(prod.product_price(with_pdv=False))
prod.contains_tag("fresh")
prod.contains_tag("money")
prod1 = Product(product_name="toilet paper", price=10)
prod2 = Product(product_name="toothbrush", price=10)
Product.vat = 0.5
prod1.vat = 0.3
prod1.product_price(with_pdv=True)
prod2.product_price(with_pdv=True)
"""
Explanation: 1.3. Classes
End of explanation
"""
import numpy as np
"""
Explanation: 2. NumPy
End of explanation
"""
a = np.array([1, 2, 3])
a
c = np.array([1, 2, 3], dtype=np.float64)
c
a.shape
b = np.array([[1, 3, 4], [2, 3, 5]])
b
b.shape
b[1, 2]
b[0:1, 2] # Notice the difference
b[:, 2]
b[0:2, 2]
d = np.array([[1, 2, 3], [4, 5, 6]])
d
np.append(d, 1)
np.append(d, [1, 2])
d
to_add = np.array([[1], [2]])
to_add
d + to_add
np.hstack([d, to_add])
np.vstack([d, to_add])
to_add2 = np.array([2, 3, 4])
to_add2
result = np.vstack([d, to_add2])
result
np.sqrt(result)
result * 2
result + result
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([1,2])
w = np.array([5,3])
print(x)
print(y)
print(v)
print(w)
v.dot(w)
np.dot(v, w)
x.dot(v)
np.linalg.inv(x)
np.linalg.det(x)
np.linalg.norm(x)
x
np.max(x)
np.max(x, axis=0)
np.max(x, axis=1)
p = np.array([2, 3, 1, 6, 4, 5])
b = ["a", "b", "c", "d", "e", "f"]
np.max(p)
np.argmax(p)
b[np.argmax(p)]
np.mean(p)
np.var(p)
np.linspace(1, 10, 100)
w = np.array([[1, 2, 3], [4, 5, 6]])
w > 2
w[w>2]
w = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([0, 0, 0, 1, 0, 0, 1, 0])
w[y==1]
list(p)
p.tolist()
"""
Explanation: 2.1. Arrays
End of explanation
"""
import matplotlib.pyplot as plt
%pylab inline
plt.plot([1,2,3,4,5], [4,5,5,7,3])
plt.plot(np.array([1,2,3,4,5]), np.array([4,5,5,7,3]))
plt.plot([4,5,5,7,3])
plt.plot([4,5,5,7,3], 'x');
def f(x): return x**2
xs = linspace(0,100); xs
f(xs)
plt.plot(xs, f(xs))
plt.plot(xs, f(xs), 'r+');
plt.plot(xs, 1 - f(xs), 'b', xs, f(xs)/2 - 1000, 'r--');
plt.plot(xs, f(xs), label='f(x)')
plt.plot(xs, 1 - f(xs), label='1-f(x)')
plt.legend(loc="center right")
plt.show()
plt.plot(xs, f(xs), label='f(x)')
plt.plot(xs, 1 - f(xs), label='1-f(x)')
plt.legend(loc="best")
plt.show()
plt.plot(xs, f(xs), label='f(x)')
plt.plot(xs, 1 - f(xs), label='1-f(x)')
plt.legend(loc="best")
plt.xlim([20, 80])
plt.show()
plt.plot(xs, f(xs), label=r'$\lambda^{3}$')
plt.plot(xs, 1 - f(xs), label=r'$\mathcal{N}$')
plt.legend(loc="best")
plt.xlim([20, 80])
plt.xlabel("hello")
plt.title("woohoo")
plt.show()
def f_1(x):
return np.sqrt((1 - (np.abs(x) - 1)**2))
def f_2(x):
return -3 * np.sqrt(1 - np.sqrt((np.abs(x)/2)))
x = np.linspace(-2, 2, 1000)
plt.plot(x, f_1(x), 'b-', label="bolja polovica")
plt.plot(x, f_2(x), 'r-', label="dobra polovica")
plt.xlim([-3, 3])
plt.ylim([-3, 1.5])
plt.xlabel("x")
plt.ylabel("y")
plt.legend(loc="lower right")
plt.show()
plt.scatter([0, 1, 2, 0], [4, 5, 2, 1])
plt.show()
plt.scatter([0,1,2,0], [4, 5, 2, 1], s=200, marker='s');
for c in 'rgb':
plt.scatter(np.random.random(100), np.random.random(100), s=200, alpha=0.5, marker='o', c=c)
plt.subplot(2,1,1)
plt.plot(xs, f(xs), label='f(x)', c="g")
plt.legend(loc="center right")
plt.subplot(2,1,2)
plt.plot(xs, f(xs), label='f(x)', c="r")
plt.legend(loc="center right")
plt.show()
plt.subplot(2,3,1)
plt.plot(xs, f(xs), label='f(x)', c="g")
plt.legend(loc="center right")
plt.subplot(2,3,2)
plt.plot(xs, f(xs), label='f(x)', c="r")
plt.legend(loc="center right")
plt.subplot(2,3,3)
plt.plot(xs, f(xs), label='f(x)', c="m")
plt.legend(loc="center right")
plt.subplot(2,3,4)
plt.plot(xs, f(xs), label='f(x)', c="c")
plt.legend(loc="center right")
plt.subplot(2,3,5)
plt.plot(xs, f(xs), label='f(x)', c="y")
plt.legend(loc="center right")
plt.subplot(2,3,6)
plt.plot(xs, f(xs), label='f(x)', c="w")
plt.legend(loc="center right")
plt.show()
"""
Explanation: 3. Matplotlib
End of explanation
"""
Dharamsitejas/E4571-Personalisation-Theory-Project | Part1/analysis/CF-Data.ipynb | mit

# imports required by the cells below (not shown in the original excerpt)
import time
import numpy as np
import pandas as pd
from scipy.spatial.distance import cosine
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.model_selection import train_test_split, KFold

ratings = pd.read_csv('../raw-data/BX-Book-Ratings.csv', encoding='iso-8859-1', sep = ';')
ratings.columns = ['user_id', 'isbn', 'book_rating']
print(ratings.dtypes)
print()
print(ratings.head())
print()
print("Data Points :", ratings.shape[0])
"""
Explanation: Loading the Book Ratings Dataset
End of explanation
"""
books = pd.read_csv('../raw-data/BX-Books.csv', sep=';', encoding = 'iso-8859-1', dtype =str)
del books['Image-URL-L']
del books['Image-URL-M']
del books['Image-URL-S']
del books['Book-Author']
del books['Publisher']
"""
Explanation: Loading the Books Dataset
End of explanation
"""
print('Number of Books == Number of ISBN ? ', books["Book-Title"].nunique() == books["ISBN"].nunique())
book_dict = books[["Book-Title","ISBN"]].set_index("Book-Title").to_dict()["ISBN"]
books['new_isbn'] = books["Book-Title"].apply(lambda x: book_dict[x])
print('Number of Books == Number of ISBN ? ', books["Book-Title"].nunique() == books["new_isbn"].nunique())
books['isbn'] = books['new_isbn']
del books['ISBN']
del books['new_isbn']
"""
Explanation: Some books do not have a unique ISBN, so we create a 1:1 mapping between book titles and ISBNs
End of explanation
"""
newdf = ratings[ratings.book_rating>0]
joined = books.merge(newdf, on ='isbn')
print(newdf.shape)
"""
Explanation: Data Preparation / Cleaning <br>
We remove ratings equal to zero, since the Book-Crossing dataset uses an explicit rating scale of 1-10. Taking an inner join with the books dataframe keeps only the books whose details exist.
End of explanation
"""
datasets = []
for j in [100, 150, 200, 300, 500]:
df = joined.groupby('isbn').count().sort_values('user_id', ascending =False)[0:j].index.values
test = joined.groupby('user_id').count().sort_values('isbn', ascending = False)[:20000].index.values
newdf = joined[joined.user_id.isin(test) & joined.isbn.isin(df)]
data = newdf[newdf['user_id'].isin(newdf['user_id'].value_counts()[newdf['user_id'].value_counts()>1].index)]
print("users books")
print(data.user_id.nunique(), data.isbn.nunique())
print()
print('Sparsity :', data.shape[0]/(data.user_id.nunique() * data.isbn.nunique()))
print()
print(data.shape)
print()
print(data.groupby('user_id').count().sort_values('isbn', ascending = False).mean())
print()
datasets.append(data)
"""
Explanation: Sampling <br>
The Book-Crossing dataset is very sparse, with sparsity above 99.99%. To obtain a small, denser subset, we selected the 100 most-rated items and intersected them with the top 20,000 users ranked by number of books rated. We then removed all users who rated only one book. This yields a dataset of 2,517 users and 100 items with 8,242 data points. To verify the scalability of our model, we created additional datasets with increasing item counts (150, 200, 300, 500).
End of explanation
"""
data = datasets[0]
rows = data.user_id.unique()
cols = data['Book-Title'].unique()
print(data.user_id.nunique(), data.isbn.nunique())
data = data[['user_id', 'Book-Title', 'book_rating']]
print("Sparsity :", 100 - (data.shape[0]/(len(cols)*len(rows)) * 100))
idict = dict(zip(cols, range(len(cols))))
udict = dict(zip(rows, range(len(rows))))
data.user_id = [
udict[i] for i in data.user_id
]
data['Book-Title'] = [
idict[i] for i in data['Book-Title']
]
nmat = data.to_numpy()  # .as_matrix() was deprecated and removed in pandas 1.0
nmat
"""
Explanation: Taking Dataset with 100 items
Algo 1: Memory Based Algorithm : Item-Item CF Algorithm
Since the average number of books rated by a user was around 3.3, we decided to use item-item CF as our memory-based algorithm. Our implementation of the item-item algorithm is below:
End of explanation
"""
def rmse(ypred, ytrue):
ypred = ypred[ytrue.nonzero()].flatten()
ytrue = ytrue[ytrue.nonzero()].flatten()
return np.sqrt(mean_squared_error(ypred, ytrue))
def mae(ypred, ytrue):
ypred = ypred[ytrue.nonzero()].flatten()
ytrue = ytrue[ytrue.nonzero()].flatten()
return mean_absolute_error(ypred, ytrue)
"""
Explanation: Function for Evaluation Metrics: MAE and RMSE
End of explanation
"""
def predict_naive(user, item):
prediction = imean1[item] + umean1[user] - amean1
return prediction
x1, x2 = train_test_split(nmat, test_size = 0.2, random_state =42)
naive = np.zeros((len(rows),len(cols)))
for row in x1:
naive[row[0], row[1]] = row[2]
predictions = []
targets = []
amean1 = np.mean(naive[naive!=0])
umean1 = sum(naive.T) / sum((naive!=0).T)
imean1 = sum(naive) / sum((naive!=0))
umean1 = np.where(np.isnan(umean1), amean1, umean1)
imean1 = np.where(np.isnan(imean1), amean1, imean1)
print('Naive---')
for row in x2:
user, item, actual = row[0], row[1], row[2]
predictions.append(predict_naive(user, item))
targets.append(actual)
print('rmse %.4f' % rmse(np.array(predictions), np.array(targets)))
print('mae %.4f' % mae(np.array(predictions), np.array(targets)))
print()
"""
Explanation: Our naive baseline predicts, for user i and item j, the sum of the mean rating given by user i (umean[i]) and the mean rating received by item j (imean[j]), minus the average rating over the entire dataset (amean). <br><br>
-------------- Naive Baseline ---------------
End of explanation
"""
def cos(mat, a, b):
if a == b:
return 1
aval = mat.T[a].nonzero()
bval = mat.T[b].nonzero()
corated = np.intersect1d(aval, bval)
if len(corated) == 0:
return 0
avec = np.take(mat.T[a], corated)
bvec = np.take(mat.T[b], corated)
val = 1 - cosine(avec, bvec)
if np.isnan(val):
return 0
return val
def adjcos(mat, a, b, umean):
if a == b:
return 1
aval = mat.T[a].nonzero()
bval = mat.T[b].nonzero()
corated = np.intersect1d(aval, bval)
if len(corated) == 0:
return 0
avec = np.take(mat.T[a], corated)
bvec = np.take(mat.T[b], corated)
avec1 = avec - umean[corated]
bvec1 = bvec - umean[corated]
val = 1 - cosine(avec1, bvec1)
if np.isnan(val):
return 0
return val
def pr(mat, a, b, imean):
if a == b:
return 1
aval = mat.T[a].nonzero()
bval = mat.T[b].nonzero()
corated = np.intersect1d(aval, bval)
if len(corated) < 2:
return 0
avec = np.take(mat.T[a], corated)
bvec = np.take(mat.T[b], corated)
avec1 = avec - imean[a]
bvec1 = bvec - imean[b]
val = 1 - cosine(avec1, bvec1)
if np.isnan(val):
return 0
return val
def euc(mat, a, b):
if a == b:
return 1
aval = mat.T[a].nonzero()
bval = mat.T[b].nonzero()
corated = np.intersect1d(aval, bval)
if len(corated) == 0:
return 0
avec = np.take(mat.T[a], corated)
bvec = np.take(mat.T[b], corated)
dist = np.sqrt(np.sum((avec - bvec) ** 2))  # distance between co-rated rating vectors (a and b are column indices)
val = 1/(1+dist)
if np.isnan(val):
return 0
return val
"""
Explanation: Following are the functions that compute pairwise similarity between two items: Cosine, Adjusted Cosine, Euclidean, and Pearson Correlation.
End of explanation
"""
def itemsimilar(mat, option):
amean = np.mean(mat[mat!=0])
umean = sum(mat.T) / sum((mat!=0).T)
imean = sum(mat) / sum((mat!=0))
umean = np.where(np.isnan(umean), amean, umean)
imean = np.where(np.isnan(imean), amean, imean)
n = mat.shape[1]
sim_mat = np.zeros((n, n))
if option == 'pr':
#print("PR")
for i in range(n):
for j in range(n):
sim_mat[i][j] = pr(mat, i, j, imean)
sim_mat = (sim_mat + 1)/2
elif option == 'cos':
#print("COS")
for i in range(n):
for j in range(n):
sim_mat[i][j] = cos(mat, i, j)
elif option == 'adjcos':
#print("ADJCOS")
for i in range(n):
for j in range(n):
sim_mat[i][j] = adjcos(mat, i, j, umean)
sim_mat = (sim_mat + 1)/2
elif option == 'euc':
#print("EUCLIDEAN")
for i in range(n):
for j in range(n):
sim_mat[i][j] = euc(mat, i, j)
else:
#print("Hello")
sim_mat = cosine_similarity(mat.T)
return sim_mat, amean, umean, imean
"""
Explanation: The itemsimilar function returns a matrix of pairwise similarities between all items, based on the option provided. It also returns amean (global mean rating), umean (average rating of each user), and imean (average rating of each item).
End of explanation
"""
def predict(user, item, mat, item_similarity, amean, umean, imean, k=20):
nzero = mat[user].nonzero()[0]  # items this user has rated
if len(nzero) == 0:
return amean  # cold-start user: fall back to the global mean
baseline = imean + umean[user] - amean  # bias-corrected baseline per item
choice = nzero[item_similarity[item, nzero].argsort()[::-1][:k]]  # k most similar rated items
prediction = ((mat[user, choice] - baseline[choice]).dot(item_similarity[item, choice])/ sum(item_similarity[item, choice])) + baseline[item]
if np.isnan(prediction):
prediction = amean
if prediction > 10:
prediction = 10
if prediction < 1:
prediction = 1
return prediction
"""
Explanation: The predict function returns the predicted rating of item j by user i, using the k most similar items the user has already rated.
End of explanation
"""
def get_results(X, option, rows, cols, folds, k, timing = False):
kf = KFold(n_splits=folds, shuffle = True, random_state=42)
count = 1
rmse_list = []
mae_list = []
trmse_list = []
tmae_list = []
for train_index, test_index in kf.split(X):
print("---------- Fold ", count, "---------------")
train_data, test_data = X[train_index], X[test_index]
full_mat = np.zeros((rows, cols))
for row in train_data:
full_mat[row[0], row[1]] = row[2]
if timing:
start = time.time()
item_similarity, amean, umean, imean = itemsimilar(full_mat, option)
if timing:
end = time.time()
train_time = end - start
print("Training Time : ", train_time)
preds = []
real = []
for row in train_data:
user_id, isbn, rating = row[0], row[1], row[2]
preds.append(predict(user_id, isbn, full_mat, item_similarity, amean, umean, imean, k))
real.append(rating)
err1 = rmse(np.array(preds), np.array(real))
err2 = mae(np.array(preds), np.array(real))
trmse_list.append(err1)
tmae_list.append(err2)
print('Train Errors')
print('RMSE : %.4f' % err1)
print('MAE : %.4f' % err2)
preds = []
real = []
if timing:
start = time.time()
for row in test_data:
user_id, isbn, rating = row[0], row[1], row[2]
preds.append(predict(user_id, isbn, full_mat, item_similarity, amean, umean, imean, k))
real.append(rating)
if timing:
end = time.time()
test_time = end - start
print("Prediction Time : ", test_time)
err1 = rmse(np.array(preds), np.array(real))
err2 = mae(np.array(preds), np.array(real))
rmse_list.append(err1)
mae_list.append(err2)
print('Test Errors')
print('RMSE : %.4f' % err1)
print('MAE : %.4f' % err2)
count+=1
if timing:
return train_time, test_time
print("-------------------------------------")
print("Training Avg Error:")
print("AVG RMSE :", str(np.mean(trmse_list)))
print("AVG MAE :", str(np.mean(tmae_list)))
print()
print("Testing Avg Error:")
print("AVG RMSE :", str(np.mean(rmse_list)))
print("AVG MAE :", str(np.mean(mae_list)))
print(" ")
return np.mean(mae_list), np.mean(rmse_list)
"""
Explanation: The get_results function runs our cross-validation setup; varying its parameters lets us tune the hyperparameter k (the number of nearest neighbours).
End of explanation
"""
sims = []
sims_rmse = []
for arg in ['euc','cos','','pr','adjcos']:
each_sims = []
each_sims_rmse = []
for k in [2, 3, 4, 5, 10, 15, 20, 25]:
print(arg, k)
ans1, ans2 = get_results(nmat, arg, len(rows), len(cols), 5, k)
each_sims.append(ans1)
each_sims_rmse.append(ans2)
print()
print("Best K Value for ", arg)
print()
print("Min MAE")
print(np.min(each_sims), np.argmin(each_sims))
print("Min RMSE")
print(np.min(each_sims_rmse), np.argmin(each_sims_rmse))
print()
sims.append(each_sims)
sims_rmse.append(each_sims_rmse)
cos_res = sims[1]
euc_res = sims[0]
pr_res = sims[3]
adjcos_res = sims[4]
k = [2, 3, 4, 5, 10, 15, 20, 25]
"""
Explanation: Grid Search for best K for item-item CF using all the similarity metric implemented.
End of explanation
"""
results_df1 = pd.DataFrame({'K': k, 'COS': cos_res, 'EUC': euc_res, 'Pearson': pr_res, 'Adjusted Cosine': adjcos_res})
plot1 = results_df1.plot(x='K', y=['COS', 'EUC', 'Pearson', 'Adjusted Cosine'], ylim=(0.95, 1.1), title = 'Item-Item CF: MAE for different similarity metrics at different Ks')
fig = plot1.get_figure()
fig.savefig('../figures/Kmae_item.png')
"""
Explanation: Plot of MAE
End of explanation
"""
cos_res = sims_rmse[1]
euc_res = sims_rmse[0]
pr_res = sims_rmse[3]
adjcos_res = sims_rmse[4]
k = [2, 3, 4, 5, 10, 15, 20, 25]
results_df1 = pd.DataFrame({'K': k, 'COS': cos_res, 'EUC': euc_res, 'Pearson': pr_res, 'Adjusted Cosine': adjcos_res})
plot1 = results_df1.plot(x='K', y=['COS', 'EUC', 'Pearson', 'Adjusted Cosine'], ylim=(1.5, 1.6), title = 'Item-Item CF: RMSE for different similarity metrics at different Ks')
fig = plot1.get_figure()
fig.savefig('../figures/Krmse_item.png')
"""
Explanation: Plot of RMSE
End of explanation
"""
import time
trtimer = []
tetimer = []
for data1 in datasets:
rows1 = data1.user_id.unique()
cols1 = data1['Book-Title'].unique()
print(data1.user_id.nunique(), data1.isbn.nunique())
data1 = data1[['user_id', 'Book-Title', 'book_rating']]
idict = dict(zip(cols1, range(len(cols1))))
udict = dict(zip(rows1, range(len(rows1))))
data1.user_id = [
udict[i] for i in data1.user_id
]
data1['Book-Title'] = [
idict[i] for i in data1['Book-Title']
]
nmat1 = data1.as_matrix()
trt, tet = get_results(nmat1, 'euc', len(rows1), len(cols1), 5, 5, True)
trtimer.append(trt)
tetimer.append(tet)
print()
results_df1 = pd.DataFrame({'Items': [100, 150, 200, 300, 500], 'Time': trtimer})
plot1 = results_df1.plot(x='Items', y='Time', ylim=(0, 80), title = 'Item-Item CF: Time to train over dataset with increase in items')
fig = plot1.get_figure()
fig.savefig('../figures/traintime.png')
results_df1 = pd.DataFrame({'Items': [100, 150, 200, 300, 500], 'Time': tetimer})
plot1 = results_df1.plot(x='Items', y='Time', ylim=(0, 1), title = 'Item-Item CF: Time to Predict over Test Set with increase in items')
fig = plot1.get_figure()
fig.savefig('../figures/testtime.png')
"""
Explanation: We observe no significant change in the RMSE and MAE values beyond k = 5; a simple explanation is that the average number of books rated per user is around 3.3.
End of explanation
"""
full_mat = np.zeros((len(rows),len(cols)))
for row in nmat:
full_mat[row[0], row[1]] = row[2]
item_similarity, amean, umean, imean = itemsimilar(full_mat, 'euc')
def getmrec(full_mat, user_id, item_similarity, k, m, idict, cov = False):
n = item_similarity.shape[0]
nzero = full_mat[user_id].nonzero()[0]
preds = {}
for row in range(n):
preds[row] = predict(user_id, row, full_mat, item_similarity, amean, umean, imean, k)
flipped_dict = dict(zip(idict.values(), idict.keys()))
if not cov:
print("Books Read -----")
for i in nzero:
print(flipped_dict[i])
del preds[i]
res = sorted(preds.items(), key=lambda x: x[1], reverse = True)
ans = [flipped_dict[i[0]] for i in res[:m]]
return ans
for m in [5, 8, 10, 15]:
cov = []
for i in range(len(rows)):
cov.extend(getmrec(full_mat, i, item_similarity, 5, m, idict, True))
print("Coverage with", m, "recs:", len(set(cov)), "%")
getmrec(full_mat, 313, item_similarity, 5, 10, idict)
"""
Explanation: The getmrec function returns the top m recommendations for a user_id, based on the chosen similarity matrix (option) and k neighbours.
End of explanation
"""
from surprise import evaluate, Reader, Dataset, SVD, NMF, GridSearch, KNNWithMeans
reader = Reader(rating_scale=(1, 10))
data2 = Dataset.load_from_df(data[['user_id', 'Book-Title', 'book_rating']], reader)
data2.split(5)
param_grid = {'n_factors': [30, 40, 50, 60, 70], 'n_epochs': [40, 50, 60], 'reg_pu': [0.001, 0.1, 1],
'reg_qi': [ 0.1, 1, 3, 5]}
grid_search = GridSearch(NMF, param_grid, measures=['RMSE', 'MAE'])
grid_search.evaluate(data2)
results_df = pd.DataFrame.from_dict(grid_search.cv_results)
print(results_df)
print(grid_search.best_score['RMSE'])
print(grid_search.best_params['RMSE'])
print(grid_search.best_score['MAE'])
print(grid_search.best_params['MAE'])
"""
Explanation: Algo 2: Model-Based Algorithm: NMF
We used scikit-surprise to implement NMF and to tune its hyperparameters and regularisation. We also attempted to implement NMF manually, but the evaluation metrics were not comparable to those of the regularised NMF from scikit-surprise. The code for the manual NMF can be found in analyze/NMF.ipynb
End of explanation
"""
maelist = []
rmselist = []
factors = [20, 30, 40 ,50 ,60, 70, 80]
for i in factors:
algo = NMF(n_factors = i, reg_pu = 0.001, reg_qi = 3)
perf = evaluate(algo, data2)
maelist.append(np.mean(perf['mae']))
rmselist.append(np.mean(perf['rmse']))
"""
Explanation: 60 latent factors appear to be optimal.
End of explanation
"""
results_df = pd.DataFrame({'Factors': factors, 'MAE': maelist, 'RMSE': rmselist})
plot1 = results_df.plot(x='Factors', y=['MAE', 'RMSE'], ylim=(0.9, 1.7), title = 'NMF: evaluation metrics vs number of latent factors')
fig = plot1.get_figure()
fig.savefig('../figures/NMFfactor.png')
from collections import defaultdict
def get_top_n(predictions, n=10):
top_n = defaultdict(list)
for uid, iid, true_r, est, _ in predictions:
top_n[uid].append((iid, est))
# Then sort the predictions for each user and retrieve the k highest ones.
for uid, user_ratings in top_n.items():
user_ratings.sort(key=lambda x: x[1], reverse=True)
top_n[uid] = user_ratings[:n]
return top_n
trainset = data2.build_full_trainset()
algo = NMF(n_epochs = 60, n_factors = 50, reg_pu = 0.001, reg_qi = 3)
algo.train(trainset)
# Than predict ratings for all pairs (u, i) that are NOT in the training set.
testset = trainset.build_anti_testset()
predictions = algo.test(testset)
top_n = get_top_n(predictions, n=10)
# Print the recommended items for each user
"""
Explanation: Plot of varying evaluation metrics vs number of latent factors for NMF
End of explanation
"""
def recbooks(mat, user_id, idict, cov = False):
full_mat = np.zeros((len(rows),len(cols)))
for row in mat:
full_mat[row[0], row[1]] = row[2]
nzero = full_mat[user_id].nonzero()[0]
flipped_dict = dict(zip(idict.values(), idict.keys()))
ans = [flipped_dict[i[0]] for i in top_n[user_id]]
if not cov:
print("Books Read -----")
for i in nzero:
print(flipped_dict[i])
print()
print("Recs -----")
for i in ans:
print(i)
return ans
recbooks(nmat, 1,idict)
"""
Explanation: The recbooks function is used to recommend books to a user.
End of explanation
"""
for m in [5, 8, 10, 15]:
cov = []
top_n = get_top_n(predictions, m)
for i in range(len(rows)):
cov.extend(recbooks(nmat, i,idict, True))
print("Coverage with", m, "recs:", len(set(cov)), "%")
"""
Explanation: Coverage: the percentage of all books that appear in at least one user's top-m recommendations.
End of explanation
"""
trtimer = []
tetimer = []
for data4 in datasets:
rows4 = data4.user_id.unique()
cols4 = data4['Book-Title'].unique()
print(data4.user_id.nunique(), data4.isbn.nunique())
data4 = data4[['user_id', 'Book-Title', 'book_rating']]
idict = dict(zip(cols4, range(len(cols4))))
udict = dict(zip(rows4, range(len(rows4))))
data4.user_id = [
udict[i] for i in data4.user_id
]
data4['Book-Title'] = [
idict[i] for i in data4['Book-Title']
]
start = time.time()
reader = Reader(rating_scale=(1, 10))
data4 = Dataset.load_from_df(data4[['user_id', 'Book-Title', 'book_rating']], reader)
data4.split(5)
trainset = data4.build_full_trainset()
algo = NMF(n_epochs = 60, n_factors = 70, reg_pu = 0.001, reg_qi = 5)
algo.train(trainset)
end = time.time()
trt = end - start
print(trt)
testset = trainset.build_testset()
start = time.time()
predictions = algo.test(testset)
end = time.time()
tet = end - start
print(tet)
trtimer.append(trt)
tetimer.append(tet)
print()
results_df1 = pd.DataFrame({'Items': [100, 150, 200, 300, 500], 'Time': trtimer})
plot1 = results_df1.plot(x='Items', y='Time', ylim=(0, 25), title = 'NMF Scaling: Time to train the dataset with increase in items')
fig = plot1.get_figure()
fig.savefig('../figures/traintimeNMF.png')
results_df1 = pd.DataFrame({'Items': [100, 150, 200, 300, 500], 'Time': tetimer})
plot1 = results_df1.plot(x='Items', y='Time', ylim=(0, 1), title = 'NMF Scaling: Time to Predict over Test Set with increase in items')
fig = plot1.get_figure()
fig.savefig('../figures/testtimeNMF.png')
"""
Explanation: NMF Scaling with items
End of explanation
"""
sim_options = {
'name': 'MSD',
'user_based' : False
}
algo = KNNWithMeans(sim_options = sim_options, k = 5, min_k =2)
perf = evaluate(algo, data2)
"""
Explanation: Comparing our implementation with Surprise
End of explanation
"""
ray-project/ray | doc/source/_templates/template.ipynb | apache-2.0

import ray
import ray.rllib.agents.ppo as ppo
from ray import serve
def train_ppo_model():
trainer = ppo.PPOTrainer(
config={"framework": "torch", "num_workers": 0},
env="CartPole-v0",
)
# Train for one iteration
trainer.train()
trainer.save("/tmp/rllib_checkpoint")
return "/tmp/rllib_checkpoint/checkpoint_000001/checkpoint-1"
checkpoint_path = train_ppo_model()
"""
Explanation: (document-tag-to-refer-to)=
Creating an Example
This is an example template file for writing Jupyter Notebooks in markdown, using MyST.
For more information on MyST notebooks, see the
MyST-NB documentation.
If you want to learn more about the MyST parser, see the
MyST documentation.
MyST is CommonMark compliant, so you can use plain markdown here.
In case you need to use reStructuredText (rSt) directives, you can use {eval-rst} to execute them.
For instance, here's a note written in rSt:
```{eval-rst}
.. note::
A note written in reStructuredText.
```
{margin}
You can create margins with this syntax for smaller notes that don't make it into the main
text.
You can also easily define footnotes.[^example]
[^example]: This is a footnote.
Adding code cells
End of explanation
"""
# This can be useful if you don't want to clutter the page with details.
import ray
import ray.rllib.agents.ppo as ppo
from ray import serve
"""
Explanation: Hiding and removing cells
You can hide cells, so that they will toggle when you click on the cell header.
You can use different :tags: like hide-cell, hide-input, or hide-output to hide cell content,
and you can use remove-cell, remove-input, or remove-output to remove the cell completely when rendered.
Those cells will still show up in the notebook itself, e.g. when you launch it in binder.
End of explanation
"""
ray.shutdown()
"""
Explanation: :::{tip}
Here's a quick tip.
:::
:::{note}
And this is a note.
:::
The following cell will be removed and not render:
End of explanation
"""
legacysurvey/pipeline | doc/nb/overview-paper-gallery.ipynb | gpl-2.0

import os, sys
import shutil, time, warnings
from contextlib import redirect_stdout
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table, vstack
from PIL import Image, ImageDraw, ImageFont
import multiprocessing
nproc = multiprocessing.cpu_count() // 2
%matplotlib inline
"""
Explanation: Gallery for the Overview Paper
The purpose of this notebook is to build a nice gallery of object images for the overview paper.
For future reference: The notebook must be run from https://jupyter-dev.nersc.gov with the following (approximate) activation script:
```bash
#!/bin/bash
version=$1
connection_file=$2
desiconda_version=20170818-1.1.12-img
module use /global/common/${NERSC_HOST}/contrib/desi/desiconda/$desiconda_version/modulefiles
module load desiconda
export LEGACYPIPE_DIR=$SCRATCH/repos/legacypipe
export PATH=$LEGACYPIPE_DIR/bin:${PATH}
export PATH=$SCRATCH/repos/build/bin:$PATH
export PYTHONPATH=$LEGACYPIPE_DIR/py:${PYTHONPATH}
export PYTHONPATH=$SCRATCH/repos/build/lib/python3.5/site-packages:$PYTHONPATH
module use $LEGACYPIPE_DIR/bin/modulefiles/cori
module load dust
exec python -m ipykernel -f $connection_file
```
Some neat objects:
* Bow shock
* Abell 383
* SDSS/C4-2010 Galaxy Cluster
* NGC2874 Group
* UGC10321 Group
* NGC6742 (PN)
Imports and paths
End of explanation
"""
PIXSCALE = 0.262
figdir = '/global/project/projectdirs/desi/users/ioannis/legacysurveys/overview-paper'
figfile = os.path.join(figdir, 'gallery.fits')
jpgdir = os.path.join(figdir, 'jpg')
if not os.path.isdir(jpgdir):
os.mkdir(jpgdir)
pngdir = os.path.join(figdir, 'png')
if not os.path.isdir(pngdir):
os.mkdir(pngdir)
"""
Explanation: Preliminaries
Define the data release and the various output directories.
End of explanation
"""
cat = Table()
cat['name'] = (
'NGC6742',
'M92',
'Bow-Shock',
'NGC2782',
'UGC10321',
'C4-2010'
)
cat['nicename'] = (
'NGC 6742 Planetary Nebula',
'Messier 92 Globular Cluster',
'Interstellar Bow Shock',
'NGC 2782',
'UGC 10321 Galaxy Group',
'SDSS/C4 Galaxy Cluster 2010'
)
cat['viewer'] = (
'http://legacysurvey.org/viewer/?layer=decals-dr6&ra=284.83291667&dec=48.46527778',
'http://legacysurvey.org/viewer/?layer=decals-dr6&ra=259.28029167&dec=43.13652778&zoom=12',
'http://legacysurvey.org/viewer?ra=325.6872&dec=1.0032&zoom=14&layer=decals-dr5',
'http://legacysurvey.org/viewer/?layer=decals-dr6&ra=138.52129167&dec=40.11369444&zoom=12',
'http://legacysurvey.org/viewer?ra=244.5280&dec=21.5591&zoom=14&layer=decals-dr5',
'http://legacysurvey.org/viewer?ra=29.0707&dec=1.0510&zoom=13&layer=decals-dr5'
)
cat['dr'] = (
'dr6',
'dr6',
'dr7',
'dr6',
'dr7',
'dr7'
)
cat['ra'] = (
284.83291667,
259.28029167,
325.6872,
138.52129167,
244.5280,
29.070641492
)
cat['dec'] = (
48.46527778,
43.13652778,
1.0032,
40.11369444,
21.5591,
1.050816667
)
cat['diam'] = np.array([
1.5,
20,
4,
7,
3,
5
]).astype('f4') # [arcmin]
cat
"""
Explanation: Build a sample with the objects of interest.
End of explanation
"""
toss = Table()
toss['name'] = (
'Abell383',
'NGC2874'
)
toss['nicename'] = (
'Abell 383',
'NGC2874 Galaxy Group'
)
toss['viewer'] = (
'http://legacysurvey.org/viewer?ra=42.0141&dec=-3.5291&zoom=15&layer=decals-dr5',
'http://legacysurvey.org/viewer?ra=141.4373&dec=11.4284&zoom=13&layer=decals-dr5'
)
toss['dr'] = (
'dr5', # Abell 383
'dr5' # C4 cluster
)
toss['ra'] = (
42.0141,
141.44215000
)
toss['dec'] = (
-3.5291,
11.43696000
)
toss['diam'] = np.array([
6,
6
]).astype('f4') # [arcmin]
toss
"""
Explanation: Some rejected objects.
End of explanation
"""
def init_survey(dr='dr7'):
from legacypipe.survey import LegacySurveyData
if dr == 'dr7':
survey = LegacySurveyData(
survey_dir='/global/project/projectdirs/cosmo/work/legacysurvey/dr7',
output_dir=figdir)
else:
survey = LegacySurveyData(
survey_dir='/global/project/projectdirs/cosmo/work/legacysurvey/dr6',
output_dir=figdir)
return survey
def simple_wcs(obj):
"""Build a simple WCS object for a single object."""
from astrometry.util.util import Tan
size = np.rint(obj['diam'] * 60 / PIXSCALE).astype('int') # [pixels]
wcs = Tan(obj['ra'], obj['dec'], size/2+0.5, size/2+0.5,
-PIXSCALE/3600.0, 0.0, 0.0, PIXSCALE/3600.0,
float(size), float(size))
return wcs
def _build_sample_one(args):
"""Wrapper function for the multiprocessing."""
return build_sample_one(*args)
def build_sample_one(obj, verbose=False):
"""Wrapper function to find overlapping grz CCDs for a given object.
"""
survey = init_survey(dr=obj['dr'])
print('Working on {}...'.format(obj['name']))
wcs = simple_wcs(obj)
try:
ccds = survey.ccds_touching_wcs(wcs) # , ccdrad=2*diam/3600)
except:
return None
if ccds:
# Is there 3-band coverage?
if 'g' in ccds.filter and 'r' in ccds.filter and 'z' in ccds.filter:
if verbose:
print('For {} found {} CCDs, RA = {:.5f}, Dec = {:.5f}, Diameter={:.4f} arcmin'.format(
obj['name'], len(ccds), obj['ra'], obj['dec'], obj['diam']))
return obj
return None
def build_sample(cat, use_nproc=nproc):
"""Build the full sample with grz coverage in DR6."""
sampleargs = list()
for cc in cat:
sampleargs.append( (cc, True) ) # the False refers to verbose=False
if use_nproc > 1:
p = multiprocessing.Pool(nproc)
result = p.map(_build_sample_one, sampleargs)
p.close()
else:
result = list()
for args in sampleargs:
result.append(_build_sample_one(args))
# Remove non-matching objects and write out the sample
outcat = vstack(list(filter(None, result)))
print('Found {}/{} objects in the DR6+DR7 footprint.'.format(len(outcat), len(cat)))
return outcat
sample = build_sample(cat, use_nproc=1)
print('Writing {}'.format(figfile))
sample.write(figfile, overwrite=True)
sample
"""
Explanation: Ensure all objects are in the DR6+DR7 footprint before building coadds.
End of explanation
"""
def custom_brickname(obj, prefix='custom-'):
brickname = 'custom-{:06d}{}{:05d}'.format(
int(1000*obj['ra']), 'm' if obj['dec'] < 0 else 'p',
int(1000*np.abs(obj['dec'])))
return brickname
def custom_coadds_one(obj, scale=PIXSCALE, clobber=False):
from legacypipe.runbrick import run_brick
#from astrometry.util.multiproc import multiproc
#from legacypipe.runbrick import stage_tims, run_brick
#from legacypipe.coadds import make_coadds
name = obj['name']
jpgfile = os.path.join(jpgdir, '{}.jpg'.format(name))
if os.path.isfile(jpgfile) and not clobber:
print('File {} exists...skipping.'.format(jpgfile))
else:
size = np.rint(obj['diam'] * 60 / scale).astype('int') # [pixels]
print('Generating mosaic for {} with width={} pixels.'.format(name, size))
bands = ('g', 'r', 'z')
if 'Bow' in name:
rgb_kwargs = dict({'Q': 200, 'm': 0.01})
else:
rgb_kwargs = dict({'Q': 20, 'm': 0.03})
survey = init_survey(dr=obj['dr'])
brickname = custom_brickname(obj, prefix='custom-')
with warnings.catch_warnings():
warnings.simplefilter("ignore")
run_brick(None, survey, radec=(obj['ra'], obj['dec']), pixscale=scale,
width=size, height=size, rgb_kwargs=rgb_kwargs, threads=nproc,
stages=['image_coadds'], splinesky=True, early_coadds=True, pixPsf=True,
hybridPsf=True, normalizePsf=True, write_pickles=False, depth_cut=False,
apodize=True, do_calibs=False, ceres=False)
sys.stdout.flush()
_jpgfile = os.path.join(survey.output_dir, 'coadd', 'cus', brickname,
'legacysurvey-{}-image.jpg'.format(brickname))
shutil.copy(_jpgfile, jpgfile)
shutil.rmtree(os.path.join(survey.output_dir, 'coadd'))
#custom_coadds_one(sample[2], clobber=True)
def custom_coadds(sample, clobber=False):
for obj in sample:
custom_coadds_one(obj, clobber=clobber)
coaddslogfile = os.path.join(figdir, 'make-coadds.log')
print('Generating the coadds.')
print('Logging to {}'.format(coaddslogfile))
t0 = time.time()
with open(coaddslogfile, 'w') as log:
with redirect_stdout(log):
custom_coadds(sample, clobber=True)
print('Total time = {:.3f} minutes.'.format((time.time() - t0) / 60))
"""
Explanation: Generate the color mosaics for each object.
End of explanation
"""
barlen = np.round(60.0 / PIXSCALE).astype('int')
fonttype = os.path.join(figdir, 'Georgia.ttf')
def _add_labels_one(args):
"""Wrapper function for the multiprocessing."""
return add_labels_one(*args)
def add_labels_one(obj, verbose=False):
name = obj['name']
nicename = obj['nicename']
jpgfile = os.path.join(jpgdir, '{}.jpg'.format(name))
pngfile = os.path.join(pngdir, '{}.png'.format(name))
thumbfile = os.path.join(pngdir, 'thumb-{}.png'.format(name))
im = Image.open(jpgfile)
sz = im.size
fntsize = np.round(sz[0]/28).astype('int')
width = np.round(sz[0]/175).astype('int')
font = ImageFont.truetype(fonttype, size=fntsize)
draw = ImageDraw.Draw(im)
# Label the object name--
draw.text((0+fntsize*2, 0+fntsize*2), nicename, font=font)
# Add a scale bar--
x0, x1, yy = sz[1]-fntsize*2-barlen, sz[1]-fntsize*2, sz[0]-fntsize*2
draw.line((x0, yy, x1, yy), fill='white', width=width)
im.save(pngfile)
# Generate a thumbnail
if False:
cmd = '/usr/bin/convert -thumbnail 300x300 {} {}'.format(pngfile, thumbfile)
os.system(cmd)
def add_labels(sample):
labelargs = list()
for obj in sample:
labelargs.append((obj, False))
if nproc > 1:
p = multiprocessing.Pool(nproc)
res = p.map(_add_labels_one, labelargs)
p.close()
else:
for args in labelargs:
res = _add_labels_one(args)
%time add_labels(sample)
"""
Explanation: Add labels and a scale bar.
End of explanation
"""
def make_montage(cat, clobber=False):
montagefile = os.path.join(figdir, 'overview-gallery.png')
ncol = 3
nrow = np.ceil(len(sample) / ncol).astype('int')
if not os.path.isfile(montagefile) or clobber:
cmd = 'montage -bordercolor white -borderwidth 1 -tile {}x{} -geometry 512x512 '.format(ncol, nrow)
cmd = cmd+' '.join([os.path.join(pngdir, '{}.png'.format(name)) for name in cat['name']])
cmd = cmd+' {}'.format(montagefile)
print(cmd)
os.system(cmd)
print('Writing {}'.format(montagefile))
%time make_montage(cat, clobber=True)
"""
Explanation: Finally make a nice montage figure for the paper.
End of explanation
"""
cfcdavidchan/Deep-Learning-Foundation-Nanodegree | dcgan-svhn/DCGAN_Exercises.ipynb | mit

%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
"""
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
"""
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
"""
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
"""
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
"""
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
"""
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
lo, hi = feature_range
x = x * (hi - lo) + lo
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
"""
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
"""
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x
# Output layer, 32x32x3
logits =
out = tf.tanh(logits)
return out
"""
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
End of explanation
"""
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x =
logits =
out =
return out, logits
"""
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
End of explanation
"""
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param output_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
"""
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
"""
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code, since the nodes and operations in the graph are packaged in one object.
End of explanation
"""
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
"""
Explanation: Here is a function for displaying generated images.
End of explanation
"""
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# Every print_every steps, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
"""
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
"""
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
"""
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3; this means it is correctly classifying images as fake or real about 50% of the time.
End of explanation
"""
raschuetz/foundations-homework | 12/311 time series homework.ipynb | mit

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('311-2015.csv', dtype = str)
df.head()
import datetime
def created_date_to_datetime(date_str):
return datetime.datetime.strptime(date_str, '%m/%d/%Y %I:%M:%S %p')
df['created_datetime'] = df['Created Date'].apply(created_date_to_datetime)
df = df.set_index('created_datetime')
"""
Explanation: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
End of explanation
"""
freq_complaints = df[['Unique Key', 'Complaint Type']].groupby('Complaint Type').count().sort_values('Unique Key', ascending=False).head()
freq_complaints
"""
Explanation: What was the most popular type of complaint, and how many times was it filed?
End of explanation
"""
ax = freq_complaints.plot(kind = 'barh', legend = False)
ax.set_title('5 Most Frequent 311 Complaints')
ax.set_xlabel('Number of Complaints in 2015')
"""
Explanation: Make a horizontal bar graph of the top 5 most frequent complaint types.
End of explanation
"""
df[['Unique Key', 'Borough']].groupby('Borough').count().sort_values('Unique Key', ascending = False)
"""
Explanation: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
End of explanation
"""
cases_in_mar = df[df.index.month == 3]['Unique Key'].count()
print('There were', cases_in_mar, 'cases filed in March.')
cases_in_may = df[df.index.month == 5]['Unique Key'].count()
print('There were', cases_in_may, 'cases filed in May.')
"""
Explanation: According to your selection of data, how many cases were filed in March? How about May?
End of explanation
"""
april_1_complaints = df[(df.index.month == 4) & (df.index.day == 1)][['Unique Key', 'Created Date', 'Complaint Type', 'Descriptor']]
april_1_complaints
"""
Explanation: I'd like to see all of the 311 complaints called in on April 1st.
Surprise! We couldn't do this in class, but it was just a limitation of our data set
End of explanation
"""
april_1_complaints[['Unique Key', 'Complaint Type']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head(1)
"""
Explanation: What was the most popular type of complaint on April 1st?
End of explanation
"""
april_1_complaints[['Unique Key', 'Complaint Type']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head(3)
"""
Explanation: What were the most popular three types of complaint on April 1st
End of explanation
"""
complaints_by_month = df['Unique Key'].groupby(df.index.month).count()
complaints_by_month
ax = complaints_by_month.plot()
ax.set_xticks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
ax.set_ylabel('Number of Complaints')
ax.set_title('311 Complaints by Month in 2015')
x_values = complaints_by_month.index
min_values = 0
max_values = complaints_by_month
ax.fill_between(x_values, min_values, max_values, alpha = 0.4)
"""
Explanation: What month has the most reports filed? How many? Graph it.
End of explanation
"""
complaints_by_week = df['Unique Key'].groupby(df.index.week).count()
complaints_by_week
ax = complaints_by_week.plot()
ax.set_xticks(range(1,53))
ax.set_xticklabels(['', '', '', '', '5',
'', '', '', '', '10',
'', '', '', '', '15',
'', '', '', '', '20',
'', '', '', '', '25',
'', '', '', '', '30',
'', '', '', '', '35',
'', '', '', '', '40',
'', '', '', '', '45',
'', '', '', '', '50',
'', '',])
ax.set_ylabel('Number of Complaints')
ax.set_xlabel('Week of the Year')
ax.set_title('311 Complaints by Week in 2015')
"""
Explanation: What week of the year has the most reports filed? How many? Graph the weekly complaints.
End of explanation
"""
noise_complaints = df[df['Complaint Type'].str.contains('Noise') == True]
noise_complaints_hour = noise_complaints['Unique Key'].groupby(noise_complaints.index.hour).count()
ax = noise_complaints_hour.plot()
ax.set_xticks(range(0,24))
ax.set_xticklabels(['Midnight', '', '', '', '', '',
'6 am', '', '', '', '', '',
'Noon', '', '', '', '', '',
'6 pm', '', '', '', '', ''])
ax.set_ylabel('Number of Complaints')
ax.set_xlabel('Time of Day')
ax.set_title('Noise Complaints by Time of Day in 2015')
"""
Explanation: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
End of explanation
"""
top_complaining_days = df['Unique Key'].resample('D').count().sort_values(ascending = False).head()
top_complaining_days
ax = top_complaining_days.plot(kind = 'barh')
ax.set_ylabel('Date')
ax.set_xlabel('Number of Complaints')
ax.set_title('The Top 5 Days for 311 Complaints in 2015')
complaining_days = df['Unique Key'].resample('D').count()
ax = complaining_days.plot()
ax.set_ylabel('Number of Complaints')
ax.set_xlabel('Day of Year')
ax.set_title('311 Complaints by Day in 2015')
"""
Explanation: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
End of explanation
"""
df.index[1]
def get_day_of_wk(timestamp):
return datetime.datetime.strftime(timestamp, '%a')
df['datetime'] = df.index
df['day_of_wk'] = df['datetime'].apply(get_day_of_wk)
complaining_day_of_wk = df[['Unique Key', 'day_of_wk']].groupby('day_of_wk').count()
complaining_day_of_wk['number_of_day'] = [6, 2, 7, 1, 5, 3, 4]
complaining_day_of_wk_sorted = complaining_day_of_wk.sort_values('number_of_day')
complaining_day_of_wk_sorted
ax = complaining_day_of_wk_sorted.plot(y = 'Unique Key', legend = False)
"""
Explanation: Interesting—it looks cyclical. Let's see what day of the week is most popular:
End of explanation
"""
hourly_complaints = df['Unique Key'].groupby(df.index.hour).count()
ax = hourly_complaints.plot()
ax.set_xticks(range(0,24))
ax.set_xticklabels(['Midnight', '', '', '', '', '',
'6 am', '', '', '', '', '',
'Noon', '', '', '', '', '',
'6 pm', '', '', '', '', ''])
ax.set_ylabel('Number of Complaints')
ax.set_xlabel('Time of Day')
ax.set_title('311 Complaints by Time of Day in 2015')
"""
Explanation: What hour of the day are the most complaints? Graph a day of complaints.
End of explanation
"""
# 11 pm
df[df.index.hour == 23][['Unique Key', 'Complaint Type', 'Descriptor']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
# 12 am
df[df.index.hour == 0][['Unique Key', 'Complaint Type', 'Descriptor']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
# 1 am
df[df.index.hour == 1][['Unique Key', 'Complaint Type', 'Descriptor']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
"""
Explanation: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
End of explanation
"""
midnight_complaints = df[df.index.hour == 0][['Unique Key', 'Complaint Type']]
for minute in range(0, 60):
top_complaint = midnight_complaints[midnight_complaints.index.minute == minute].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head(1)
if minute < 10:
minute = '0' + str(minute)
else:
minute = str(minute)
print('12:' + minute + '\'s top complaint was:', top_complaint)
print('')
"""
Explanation: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
End of explanation
"""
df[['Unique Key', 'Agency']].groupby('Agency').count().sort_values('Unique Key', ascending = False).head()
def agency_hourly_complaints(agency_name_str):
agency_complaints = df[df['Agency'] == agency_name_str]
return agency_complaints['Unique Key'].groupby(agency_complaints.index.hour).count()
ax = agency_hourly_complaints('HPD').plot(label = 'HPD', legend = True)
for x in ['NYPD', 'DOT', 'DEP', 'DSNY']:
agency_hourly_complaints(x).plot(ax = ax, label = x, legend = True)
"""
Explanation: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
5 Top Agencies:
End of explanation
"""
def agency_weekly_complaints(agency_name_str):
agency_complaints = df[df['Agency'] == agency_name_str]
return agency_complaints['Unique Key'].groupby(agency_complaints.index.week).count()
ax = agency_weekly_complaints('NYPD').plot(label = 'NYPD', legend = True)
for x in ['DOT', 'HPD', 'DPR', 'DSNY']:
agency_weekly_complaints(x).plot(ax = ax, label = x, legend = True)
NYPD_complaints = df[df['Agency'] == 'NYPD']
NYPD_weekly_complaints = NYPD_complaints['Unique Key'].groupby(NYPD_complaints.index.week).count()
NYPD_weekly_complaints[NYPD_weekly_complaints > 13000]
# # a way to use the function agency_weekly_complaints that's actually longer than not using it.
# week_number = 0
# for week in agency_weekly_complaints('NYPD'):
# week_number += 1
# if week > 1500:
# print('In week', week_number)
# print('there were', week, 'complaints.')
# print('')
"""
Explanation: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
End of explanation
"""
NYPD_weekly_complaints[NYPD_weekly_complaints < 6000]
"""
Explanation: It looks like complaints are most popular in May, June, September—generally in the summer.
End of explanation
"""
NYPD_complaints = df[df['Agency'] == 'NYPD']
NYPD_jul_aug_complaints = NYPD_complaints[(NYPD_complaints.index.month == 7) | (NYPD_complaints.index.month == 8)][['Unique Key', 'Complaint Type']]
NYPD_jul_aug_complaints.groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
"""
Explanation: It looks like complaints are least popular in around Christmas and New Year's.
Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
Most popular NYPD complaints in July and August:
End of explanation
"""
NYPD_complaints = df[df['Agency'] == 'NYPD']
NYPD_may_complaints = NYPD_complaints[(NYPD_complaints.index.month == 5)][['Unique Key', 'Complaint Type']]
NYPD_may_complaints.groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
"""
Explanation: Most popular NYPD complaints in May:
End of explanation
"""
HPD_complaints = df[df['Agency'] == 'HPD']
HPD_jun_jul_aug_complaints = HPD_complaints[(HPD_complaints.index.month == 6) |
(HPD_complaints.index.month == 7) |
(HPD_complaints.index.month == 8)][['Unique Key', 'Complaint Type']]
HPD_jun_jul_aug_complaints.groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
"""
Explanation: Most popular HPD complaints in June, July, and August:
End of explanation
"""
HPD_complaints = df[df['Agency'] == 'HPD']
HPD_dec_jan_feb_complaints = HPD_complaints[(HPD_complaints.index.month == 12) |
(HPD_complaints.index.month == 1) |
(HPD_complaints.index.month == 2)][['Unique Key', 'Complaint Type']]
HPD_dec_jan_feb_complaints.groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
"""
Explanation: Most popular HPD complaints in December, January, and February:
End of explanation
"""
root-mirror/training | NCPSchool2021/Examples/GraphDrawPython.ipynb | gpl-2.0

import ROOT
c = ROOT.TCanvas()
"""
Explanation: Interactively Draw a Graph
End of explanation
"""
g = ROOT.TGraph()
for i in range(5): g.SetPoint(i,i,i*i)
g.Draw("APL")
c.Draw()
"""
Explanation: The simple graph
End of explanation
"""
%jsroot on
g.SetMarkerStyle(ROOT.kFullTriangleUp)
g.SetMarkerSize(3)
g.SetMarkerColor(ROOT.kAzure)
g.SetLineColor(ROOT.kRed - 2)
g.SetLineWidth(2)
g.SetLineStyle(3)
c.Draw()
"""
Explanation: Change marker style, colour as well as line colour and thickness. Make the plot interactive. Re-draw the plot and interact with it!
End of explanation
"""
g.SetTitle("My Graph;The X;My Y")
c.SetGrid()
c.Draw()
"""
Explanation: Now we set the title and the grid on the canvas.
End of explanation
"""
txt = "#color[804]{My text #mu {}^{40}_{20}Ca}"
l = ROOT.TLatex(.2, 10, txt)
l.Draw()
c.Draw()
"""
Explanation: We will now add the symbol of the Calcium isotope
End of explanation
"""
c.SetLogy()
c.Draw()
"""
Explanation: Redraw using a Y axis in log scale.
End of explanation
"""
luofan18/deep-learning | tensorboard/Anna_KaRNNa_Name_Scoped.ipynb | mit

import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
"""
Explanation: Write out the graph for TensorBoard
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
astrojhgu/mcupy | example/estimate_eff/README.ipynb | bsd-3-clause

import sys
from mcupy.graph import *
from mcupy.nodes import *
from mcupy.utils import *
from mcupy.core import ensemble_type
try:
import pydot
except(ImportError):
import pydot_ng as pydot
"""
Explanation: Example
This is the example given in Section 8.2 of the book.<br/>
First let's import necessary packages
End of explanation
"""
g=Graph()
"""
Explanation: Create a graph object, which is used to hold nodes.
End of explanation
"""
A=FixedUniformNode(0.001,1-1e-5).withTag("A")
B=FixedUniformNode(0.001,1-1e-5).withTag("B")
mu=FixedUniformNode(.001,100-1e-5).withTag("mu")
sigma=FixedUniformNode(.001,100-1e-5).withTag("sigma")
"""
Explanation: Create some nodes
End of explanation
"""
for l in open('eff.txt'):
e1,nrec1,ninj1=l.split()
e1=float(e1)
nrec1=float(nrec1)
ninj1=float(ninj1)
E=C_(e1).inGroup("E")
ninj=C_(ninj1).inGroup("ninj")
eff=((B-A)*PhiNode((E-mu)/sigma)+A).inGroup("eff")
nrec=BinNode(eff,ninj).withObservedValue(nrec1).inGroup("nrec")
g.addNode(nrec)
"""
Explanation: And some more nodes
End of explanation
"""
display_graph(g)
"""
Explanation: Then let us check the topology of the graph.
End of explanation
"""
mA=g.getMonitor(A)
mB=g.getMonitor(B)
mSigma=g.getMonitor(sigma)
mMu=g.getMonitor(mu)
"""
Explanation: It's correct.<br/>
Then we'd like to perform several sampling and record the values.<br/>
Before sampling, we need to decide which variables we need to monitor.
End of explanation
"""
result=[]
"""
Explanation: We need a variable to hold the results
End of explanation
"""
for i in log_progress(range(1000)):
g.sample()
"""
Explanation: Then we run the sampler 1000 times as burn-in
End of explanation
"""
for i in log_progress(range(30000)):
g.sample()
result.append([mA.get(),mB.get(),mMu.get(),mSigma.get()])
"""
Explanation: Then we draw 30000 samples and record the results
End of explanation
"""
%matplotlib inline
import seaborn
import scipy
result=scipy.array(result)
seaborn.jointplot(result[:,0],result[:,1],kind='hex')
seaborn.jointplot(result[:,0],result[:,2],kind='hex')
seaborn.jointplot(result[:,0],result[:,3],kind='hex')
seaborn.jointplot(result[:,1],result[:,2],kind='hex')
seaborn.jointplot(result[:,1],result[:,3],kind='hex')
seaborn.jointplot(result[:,2],result[:,3],kind='hex')
"""
Explanation: Then we plot the results.
End of explanation
"""
em=ensemble_type()
"""
Explanation: Now, mcupy also implements an ensemble-based sampler for graphs, which is much faster. To use it, first declare a data structure to store the ensemble:
End of explanation
"""
result=[]
for i in log_progress(range(100000)):
g.ensemble_sample(em)
result.append([em[0][0],em[0][1],em[0][2],em[0][3]])
result=scipy.array(result)
seaborn.jointplot(result[:,1],result[:,0],kind='hex')
seaborn.jointplot(result[:,1],result[:,2],kind='hex')
"""
Explanation: Then iteratively call graph.ensemble_sample(em)
End of explanation
"""
# Assumed imports, carried over from earlier cells of this notebook
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy.ndimage as ndi
from pyotf.otf import HanserPSF
from dphtools.utils import bin_ndarray

# We'll use a 1.27 NA water dipping objective imaging in water
psf_params = dict(
na=1.27,
ni=1.33,
wl=0.585,
size=64,
vec_corr="none",
zrange=[0]
)
# Set the Nyquist sampling rate
nyquist_sampling = psf_params["wl"] / psf_params["na"] / 4
# our oversampling factor
oversample_factor = 8
# we need to be just slightly less than nyquist for this to work
psf_params["res"] = nyquist_sampling * 0.99 / oversample_factor
psf_params["size"] *= oversample_factor
# calculate infocus part only
psf = HanserPSF(**psf_params)
# for each camera pixel size we want to show 10 camera pixels worth of the intensity
num_pixels = 10
# gamma for display
gam = 0.3
# set up the figure
fig, axs_total = plt.subplots(3, 3, dpi=150, figsize=(9,9), gridspec_kw=dict(hspace=0.1, wspace=0.1))
# rows will be for different camera pixel sizes, the camera pixel size = subsample / 8 * Nyquist
for axs, subsample in zip(axs_total, (4, 8, 16)):
# for display zoom in
offset = (len(psf.PSFi.squeeze()) - num_pixels * subsample) // 2
# show the original data, shifted such that the max is at the center of the
# camera ROI
axs[0].matshow(psf.PSFi.squeeze()[offset-subsample//2:-offset-subsample//2, offset-subsample//2:-offset-subsample//2],
norm=mpl.colors.PowerNorm(gam))
# Use the convolution to shift the data so that the max is centered on camera ROI
origin_shift = subsample // 2 - 1
exact = ndi.uniform_filter(psf.PSFi[0], subsample, origin=origin_shift)
# Show convolved data
axs[1].matshow(exact[offset:-offset, offset:-offset], norm=mpl.colors.PowerNorm(gam))
for ax in axs[:2]:
ax.xaxis.set_major_locator(plt.FixedLocator(np.arange(0, offset, subsample) - 0.5))
ax.yaxis.set_major_locator(plt.FixedLocator(np.arange(0, offset, subsample) - 0.5))
# integrate across pixel
exact_subsample = bin_ndarray(exact, bin_size=subsample, operation="sum")
# Display final camera pixels
offset_sub = offset//subsample
ax = axs[-1]
ax.matshow(exact_subsample[offset_sub:-offset_sub, offset_sub:-offset_sub], norm=mpl.colors.PowerNorm(gam))
ax.xaxis.set_major_locator(plt.FixedLocator(np.arange(0, offset_sub) - 0.5))
ax.yaxis.set_major_locator(plt.FixedLocator(np.arange(0, offset_sub) - 0.5))
# clean up plot
for ax in axs:
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.tick_params(length=0)
ax.grid(True)
# label
axs_total[0, 0].set_title("Intensity Incident on Camera\n($\\frac{1}{8}$ Nyquist Simulation)")
axs_total[0, 1].set_title("Convolution with\nCamera Pixel Function")
axs_total[0, 2].set_title("Integration to Final\nCamera Pixel Intensity")
axs_total[0, 0].set_ylabel(r"$\frac{1}{2}\times$ Nyquist Camera Pixel Size")
axs_total[1, 0].set_ylabel(r"$1\times$ Nyquist Camera Pixel Size")
axs_total[2, 0].set_ylabel(r"$2\times$ Nyquist Camera Pixel Size");
"""
Explanation: Is the PSF generated by pyotf what the camera sees?
Short answer
Not quite.
Long answer
What pyotf is modeling is the wavefront at the camera due to a point source at the focus of the objective in a widefield epifluorescence (AKA, widefield or epi) microscope. But what the camera records is more complex. First, each pixel acts as a square aperture (similar to the circular aperture in confocal microscopy) and then the intensity across the pixel is integrated and eventually converted into a single number. To model this, we'll take the following approach:
1. Use pyotf to model the intensity point spread function (PSF) at the camera at a pixel size of $1/8^{\text{th}}$ Nyquist, i.e. $\lambda/4 \text{NA}/8$
2. Convolve this image with a square equal to the size of the camera pixel
3. Integrate over the camera pixels
End of explanation
"""
# keep our original parameters safe
psf_params_wf = psf_params.copy()
# for each camera pixel size we want to show 10 camera pixels worth of the intensity
num_pixels = 64
# set up the figure
fig, axs_total = plt.subplots(3, 4, dpi=150, figsize=(9.25, 9),
gridspec_kw=dict(hspace=0.1, wspace=0.1, width_ratios=(1, 1, 1, 1 / 12)))
# rows will be for different camera pixel sizes, the camera pixel size = subsample / 8 * Nyquist
for axs, subsample in zip(axs_total, (2, 4, 8)):
# for display zoom in
offset = (len(psf.PSFi.squeeze()) - num_pixels) // 2
# show the original data, shifted such that the max is at the center of the
# camera ROI
# axs[0].matshow(psf.PSFi.squeeze()[offset-subsample//2:-offset-subsample//2, offset-subsample//2:-offset-subsample//2],
# norm=mpl.colors.PowerNorm(gam))
# Use the convolution to shift the data so that the max is centered on camera ROI
origin_shift = subsample // 2 - 1
exact = ndi.uniform_filter(psf.PSFi[0], subsample, origin=origin_shift)
# Show convolved data
# axs[1].matshow(exact[offset:-offset, offset:-offset], norm=mpl.colors.PowerNorm(gam))
# integrate across pixel
exact_subsample = bin_ndarray(exact, bin_size=subsample, operation="sum")
exact_subsample /= exact_subsample.max()
# Display final camera pixels
offset_sub = offset//subsample
axs[0].matshow(exact_subsample[offset_sub:-offset_sub, offset_sub:-offset_sub], norm=mpl.colors.PowerNorm(gam))
# Directly simulate at Nyquist
psf_params_wf['res'] = psf_params['res'] * subsample
psf_params_wf['size'] = psf_params['size'] // subsample
low_res = HanserPSF(**psf_params_wf).PSFi.squeeze()
low_res /= low_res.max()
# display direct simulation
axs[1].matshow(low_res[offset_sub:-offset_sub, offset_sub:-offset_sub], norm=mpl.colors.PowerNorm(gam))
# Calculate percent of max difference and display
difference = (exact_subsample - low_res)
im = axs[2].matshow(difference[offset_sub:-offset_sub, offset_sub:-offset_sub] * 100, cmap="viridis")
plt.colorbar(im, ax=axs[2], cax=axs[3])
# clean up plot
for ax in axs[:3]:
ax.xaxis.set_major_locator(plt.NullLocator())
ax.yaxis.set_major_locator(plt.NullLocator())
# label
axs_total[0, 0].set_title("Integration to Final\nCamera Pixel Intensity")
axs_total[0, 1].set_title("Intensity Incident on Camera\n(Nyquist Simulation)")
axs_total[0, 2].set_title("Difference (%)")
axs_total[0, 0].set_ylabel(r"$\frac{1}{4}\times$ Nyquist Camera Pixel Size")
axs_total[1, 0].set_ylabel(r"$\frac{1}{2}\times$ Nyquist Camera Pixel Size")
axs_total[2, 0].set_ylabel(r"$1\times$ Nyquist Camera Pixel Size");
"""
Explanation: The above figure shows each of the three steps (columns) for three different camera pixel sizes (rows). Gray lines indicate the final camera pixel grid. It's clear that the convolution has an effect on camera pixel sizes larger than Nyquist. Considering that we usually ask microscopists to image at Nyquist, and therefore usually model PSFs at Nyquist, a natural question is: how different are the higher resolution calculations (such as in the figure above) from simulating directly with Nyquist sized camera pixels? Furthermore, when simulating PSFs for camera pixels that are larger than Nyquist, how important is the convolution operation (step 2)?
It's safe to assume that the area with the highest resolution will be most affected, and thus we can limit our investigation to the 2D in-focus PSF.
End of explanation
"""
from pyotf.utils import easy_fft
from dphtools.utils import radial_profile
fig, ax = plt.subplots(figsize=(4,4), dpi=150)
k_pixel_size = 2 / psf_params_wf["res"] / len(exact_subsample)
abbe_limit = 1 / nyquist_sampling / k_pixel_size
for l, d in zip(("Exact", "Direct"), (exact_subsample, low_res)):
o = abs(easy_fft(d))
ro = radial_profile(o)[0]
ax.plot(np.arange(len(ro)) / abbe_limit * 2, ro, label=l)
ax.legend()
ax.set_xlabel("Spatial Frequency")
ax.set_ylabel("Intensity")
ax.set_xlim(0, 2.6)
ax.set_ylim(0)
ax.yaxis.set_major_locator(plt.NullLocator())
ax.xaxis.set_major_locator(plt.MultipleLocator(1 / 2))
def formatter(x, pos):
if x == 0:
return 0
if x / 0.5 % 2:
x = int(x) * 2 + 1
if x == 1:
x = ""
return r"$\frac{{{}NA}}{{2\lambda}}$".format(x)
elif int(x):
x = int(x)
if x == 1:
x = ""
return r"$\frac{{{}NA}}{{\lambda}}$".format(x)
return r"$\frac{NA}{\lambda}$"
ax.xaxis.set_major_formatter(plt.FuncFormatter(formatter))
"""
Explanation: Presented in the figure above is a comparison of the "exact" simulation (first column) to the "direct" simulation (second column); the difference is shown in the third column. As expected, smaller camera pixels result in smaller differences between the "exact" and "direct" calculations. But even at its worst (i.e. Nyquist sampling on the camera) the maximum deviation is about 7% of the peak PSF intensity.
Of course, we know that single numbers are no way to evaluate resolution, or the loss thereof. Therefore we'll take a look in frequency space.
End of explanation
"""
# for each camera pixel size we want to show 10 camera pixels worth of the intensity
num_pixels = len(psf.PSFi.squeeze())
# Directly simulate at Nyquist
psf_params_wf['res'] = psf_params['res'] * oversample_factor
psf_params_wf['size'] = psf_params['size'] // oversample_factor
low_res = HanserPSF(**psf_params_wf).PSFi.squeeze()
# set up the figure
fig, axs_total = plt.subplots(3, 4, dpi=150, figsize=(9.25,9),
gridspec_kw=dict(hspace=0.1, wspace=0.1, width_ratios=(1, 1, 1, 1 / 12)))
# rows will be for different camera pixel sizes, the camera pixel size = subsample / 8 * Nyquist
for axs, subsample in zip(axs_total[::-1], (8, 4, 2)):
subsample2 = oversample_factor * subsample
# for display zoom in
offset = (len(psf.PSFi.squeeze()) - num_pixels) // 2
# show the original data, shifted such that the max is at the center of the
# camera ROI
# axs[0].matshow(psf.PSFi.squeeze(), norm=mpl.colors.PowerNorm(gam))
# Use the convolution to shift the data so that the max is centered on camera ROI
origin_shift2 = subsample2 // 2 - 1
exact = ndi.uniform_filter(psf.PSFi[0], subsample2, origin=origin_shift2)
# Show convolved data
# axs[1].matshow(exact, norm=mpl.colors.PowerNorm(gam))
# integrate across pixel
exact_subsample = bin_ndarray(exact, bin_size=subsample2, operation="sum")
exact_subsample /= exact_subsample.max()
# Display final camera pixels
offset_sub = offset//subsample2
axs[0].matshow(exact_subsample, norm=mpl.colors.PowerNorm(gam))
origin_shift = subsample // 2 - 1
exact_low_res = ndi.uniform_filter(low_res, subsample, origin=origin_shift)
exact_low_res_subsample = bin_ndarray(exact_low_res, bin_size=subsample, operation="sum")
exact_low_res_subsample /= exact_low_res_subsample.max()
low_res_subsample = bin_ndarray(low_res, bin_size=subsample, operation="sum")
low_res_subsample /= low_res_subsample.max()
# display direct simulation
axs[1].matshow(exact_low_res_subsample, norm=mpl.colors.PowerNorm(gam))
# Calculate percent of max difference and display
difference = (exact_subsample - exact_low_res_subsample)
im = axs[2].matshow(difference * 100, cmap="viridis")
plt.colorbar(im, ax=axs[2], cax=axs[3])
# clean up plot
for ax in axs[:3]:
ax.xaxis.set_major_locator(plt.NullLocator())
ax.yaxis.set_major_locator(plt.NullLocator())
# label
axs_total[0, 0].set_title(r"$\frac{1}{8}\times$" + "Nyquist Simulation\nwith Convolution")
axs_total[0, 1].set_title(r"$1\times$ " + "Nyquist Simulation\nwith Convolution")
axs_total[0, 2].set_title("Difference (%)")
axs_total[0, 0].set_ylabel(r"$2\times$ Nyquist Camera Pixel Size")
axs_total[1, 0].set_ylabel(r"$4\times$ Nyquist Camera Pixel Size")
axs_total[2, 0].set_ylabel(r"$8\times$ Nyquist Camera Pixel Size");
"""
Explanation: We see (figure above) that the exact simulation, which includes convolution and then integration, redistributes the OTF support slightly towards the DC component, which makes sense as both convolution and integration will blur high frequency information. Note that the OTF cutoff remains nearly the same in both cases: $2 NA / \lambda$.
What about PSFs for large camera pixels? We follow the exact same procedure as above.
End of explanation
"""
# set up the figure
fig, axs_total = plt.subplots(3, 4, dpi=150, figsize=(9.25,9),
gridspec_kw=dict(hspace=0.1, wspace=0.1, width_ratios=(1, 1, 1, 1 / 12)))
# rows will be for different camera pixel sizes, the camera pixel size = subsample / 8 * Nyquist
for axs, subsample in zip(axs_total[::-1], (9, 5, 3)):
# Directly simulate at Nyquist
psf_params_wf['res'] = psf_params['res'] * oversample_factor
c = np.log2(subsample) % 2
if c < 1:
c = 1
else:
c = -1
psf_params_wf['size'] = psf_params['size'] // oversample_factor + c
low_res = HanserPSF(**psf_params_wf).PSFi.squeeze()
subsample2 = oversample_factor * subsample
# Use the convolution to shift the data so that the max is centered on camera ROI
shift = len(psf.PSFi[0])%subsample + 1
shifted = psf.PSFi[0, shift:, shift:]
exact = ndi.uniform_filter(shifted, subsample2)
# integrate across pixel
exact_subsample = bin_ndarray(shifted, bin_size=subsample2, operation="sum")
exact_subsample /= exact_subsample.max()
# Display final camera pixels
offset_sub = offset//subsample2
axs[0].matshow(exact_subsample, norm=mpl.colors.PowerNorm(gam))
exact_low_res = ndi.uniform_filter(low_res, subsample)
exact_low_res_subsample = bin_ndarray(exact_low_res, bin_size=subsample, operation="sum")
exact_low_res_subsample /= exact_low_res_subsample.max()
low_res_subsample = bin_ndarray(low_res, bin_size=subsample)
low_res_subsample /= low_res_subsample.max()
# display direct simulation
axs[1].matshow(low_res_subsample, norm=mpl.colors.PowerNorm(gam))
# Calculate percent of max difference and display
lexact = len(exact_subsample)
llow = len(low_res_subsample)
    # Crop both arrays to the common size so the shapes match
    n = min(lexact, llow)
    difference = exact_subsample[:n, :n] - low_res_subsample[:n, :n]
im = axs[2].matshow(difference * 100, cmap="viridis")
plt.colorbar(im, ax=axs[2], cax=axs[3])
# clean up plot
for ax in axs[:3]:
ax.xaxis.set_major_locator(plt.NullLocator())
ax.yaxis.set_major_locator(plt.NullLocator())
# label
axs_total[0, 0].set_title(r"$\frac{1}{8}\times$" + "Nyquist Simulation\nwithout Convolution")
axs_total[0, 1].set_title(r"$1\times$ " + "Nyquist Simulation\nwithout Convolution")
axs_total[0, 2].set_title("Difference (%)")
axs_total[0, 0].set_ylabel(r"$3\times$ Nyquist Camera Pixel Size")
axs_total[1, 0].set_ylabel(r"$5\times$ Nyquist Camera Pixel Size")
axs_total[2, 0].set_ylabel(r"$9\times$ Nyquist Camera Pixel Size");
"""
Explanation: As expected, the larger the final camera pixel size the smaller the relative difference in simulation pixel size and thus the smaller the difference in the simulations. Now for the question of whether the convolution step is even necessary when looking at camera pixels larger than Nyquist.
First note that without convolution to redistribute the intensity before integration (a kind of interpolation), we won't have a symmetric PSF with an even-sized camera pixel (relative to the simulation pixels). So instead of looking at the 2x, 4x, and 8x camera pixel sizes we've been using above, we'll use odd sizes of 3x, 5x, and 9x. As a sanity check, let's look at the difference between the two methods with no convolution step for either. The result is a measure of the integration error between a finer and a coarser integration grid.
End of explanation
"""
# set up the figure
fig, axs_total = plt.subplots(3, 4, dpi=150, figsize=(9.25,9),
gridspec_kw=dict(hspace=0.1, wspace=0.1, width_ratios=(1, 1, 1, 1 / 12)))
# rows will be for different camera pixel sizes, the camera pixel size = subsample / 8 * Nyquist
for axs, subsample in zip(axs_total[::-1], (9, 5, 3)):
# Directly simulate at Nyquist
psf_params_wf['res'] = psf_params['res'] * oversample_factor
c = np.log2(subsample) % 2
if c < 1:
c = 1
else:
c = -1
psf_params_wf['size'] = psf_params['size'] // oversample_factor + c
low_res = HanserPSF(**psf_params_wf).PSFi.squeeze()
subsample2 = oversample_factor * subsample
# Use the convolution to shift the data so that the max is centered on camera ROI
shift = len(psf.PSFi[0])%subsample + 1
shifted = psf.PSFi[0, shift:, shift:]
exact = ndi.uniform_filter(shifted, subsample2)
# integrate across pixel
exact_subsample = bin_ndarray(exact, bin_size=subsample2, operation="sum")
exact_subsample /= exact_subsample.max()
# Display final camera pixels
offset_sub = offset//subsample2
axs[0].matshow(exact_subsample, norm=mpl.colors.PowerNorm(gam))
exact_low_res = ndi.uniform_filter(low_res, subsample)
exact_low_res_subsample = bin_ndarray(exact_low_res, bin_size=subsample, operation="sum")
exact_low_res_subsample /= exact_low_res_subsample.max()
low_res_subsample = bin_ndarray(low_res, bin_size=subsample)
low_res_subsample /= low_res_subsample.max()
# display direct simulation
axs[1].matshow(low_res_subsample, norm=mpl.colors.PowerNorm(gam))
# Calculate percent of max difference and display
lexact = len(exact_subsample)
llow = len(low_res_subsample)
    # Crop both arrays to the common size so the shapes match
    n = min(lexact, llow)
    difference = exact_subsample[:n, :n] - low_res_subsample[:n, :n]
im = axs[2].matshow(difference * 100, cmap="viridis")
plt.colorbar(im, ax=axs[2], cax=axs[3])
# clean up plot
for ax in axs[:3]:
ax.xaxis.set_major_locator(plt.NullLocator())
ax.yaxis.set_major_locator(plt.NullLocator())
# label
axs_total[0, 0].set_title(r"$\frac{1}{8}\times$" + "Nyquist Simulation\nwith Convolution")
axs_total[0, 1].set_title(r"$1\times$ " + "Nyquist Simulation\nwithout Convolution")
axs_total[0, 2].set_title("Difference (%)")
axs_total[0, 0].set_ylabel(r"$3\times$ Nyquist Camera Pixel Size")
axs_total[1, 0].set_ylabel(r"$5\times$ Nyquist Camera Pixel Size")
axs_total[2, 0].set_ylabel(r"$9\times$ Nyquist Camera Pixel Size");
"""
Explanation: As expected the integration error decreases with increasing camera pixel size. Now to test the effect of convolution on the process.
End of explanation
"""
fig, ax = plt.subplots(figsize=(4,4), dpi=150)
k_pixel_size = 2 / psf_params_wf["res"] / len(exact_subsample) / subsample
abbe_limit = 1 / nyquist_sampling / k_pixel_size
for l, d in zip(("Exact", "Direct with Convolution", "Direct"), (exact_subsample, exact_low_res_subsample, low_res_subsample)):
o = abs(easy_fft(d))
ro = radial_profile(o)[0]
ax.plot(np.arange(len(ro)) / abbe_limit * 2, ro, label=l)
ax.legend()
ax.set_xlabel("Spatial Frequency")
ax.set_ylabel("Intensity")
ax.set_xlim(0, 2.6)
ax.set_ylim(0)
ax.yaxis.set_major_locator(plt.NullLocator())
ax.xaxis.set_major_locator(plt.MultipleLocator(1 / 2))
ax.xaxis.set_major_formatter(plt.FuncFormatter(formatter))
"""
Explanation: Clearly there's quite a bit of error, up to ~20% of the max value in the worst case. Again we see a decrease in error with increasing camera pixel size. Now turning to the more informative frequency space representation for the 3X Nyquist camera pixels.
End of explanation
"""
# Import Pipeline class and datasets
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import EquityPricing
from quantopian.pipeline.domain import US_EQUITIES
from quantopian.pipeline.data.sentdex import sentiment
# Import built-in moving average calculation
from quantopian.pipeline.factors import SimpleMovingAverage
# Import built-in trading universe
from quantopian.pipeline.filters import QTradableStocksUS
# Pipeline definition
def make_pipeline():
# Create a reference to our trading universe
base_universe = QTradableStocksUS()
# Calculate 3 day average of sentiment scores
sentiment_score = SimpleMovingAverage(
inputs=[sentiment.sentiment_signal],
window_length=3,
)
    # Get the latest close price for all equities
    close_price = EquityPricing.close.latest

    # Return Pipeline containing close_price and sentiment_score,
    # with our trading universe as a screen
return Pipeline(
columns={
'close_price': close_price,
'sentiment_score': sentiment_score,
},
screen=base_universe,
domain=US_EQUITIES,
)
"""
Explanation: Strategy Definition
Now that we have learned how to access and manipulate data in Quantopian, let's construct a data pipeline for our long-short equity strategy. In general, long-short equity strategies consist of modeling the relative value of assets with respect to each other, and placing bets on the sets of assets that we are confident will increase (long) and decrease (short) the most in value.
Long-short equity strategies profit as the spread in returns between the sets of high and low value assets increases. The quality of a long-short equity strategy relies entirely on the quality of its underlying ranking model. In this tutorial we will use a simple ranking schema for our strategy:
Strategy: We will consider assets with a high 3 day average sentiment score as high value, and assets with a low 3 day average sentiment score as low value.
Strategy Analysis
We can define the strategy above using SimpleMovingAverage and sentdex sentiment dataset, similar to the pipeline we created in the previous lesson:
End of explanation
"""
# Pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.sentdex import sentiment
from quantopian.pipeline.factors import SimpleMovingAverage
from quantopian.pipeline.filters import QTradableStocksUS
# Pipeline definition
def make_pipeline():
# Create a reference to our trading universe
base_universe = QTradableStocksUS()
# Calculate 3 day average of sentiment scores
sentiment_score = SimpleMovingAverage(
inputs=[sentiment.sentiment_signal],
window_length=3,
)
# Create filter for top 350 and bottom 350
# assets based on their sentiment scores
top_bottom_scores = (
sentiment_score.top(350) | sentiment_score.bottom(350)
)
return Pipeline(
columns={
'sentiment_score': sentiment_score,
},
# Set screen as the intersection between our trading universe and our filter
screen=(
base_universe
& top_bottom_scores
)
)
"""
Explanation: For simplicity, we will only analyze the top 350 and bottom 350 stocks ranked by sentiment_score. We can create pipeline filters for these sets using the top and bottom methods of our sentiment_score output, and combine them using the | operator to get their union. Then, we will remove anything outside of our tradable universe by using the & operator to get the intersection between our filter and our universe:
End of explanation
"""
# Import run_pipeline method
from quantopian.research import run_pipeline
# Specify a time range to evaluate
period_start = '2014-01-01'
period_end = '2017-01-01'
# Execute pipeline over evaluation period
pipeline_output = run_pipeline(
make_pipeline(),
start_date=period_start,
end_date=period_end
)
"""
Explanation: Next, let's run our pipeline over a 3 year period to get an output we can use for our analysis. This will take ~3 minutes.
End of explanation
"""
# Import prices function
from quantopian.research import prices
# Get list of unique assets from the pipeline output
asset_list = pipeline_output.index.get_level_values(level=1).unique()
# Query pricing data for all assets present during
# evaluation period
asset_prices = prices(
asset_list,
start=period_start,
end=period_end
)
"""
Explanation: In addition to sentiment data, we will need pricing data for all assets present in this period. We can easily get a list of these assets from our pipeline output's index, and pass that list to prices to get the pricing data we need:
End of explanation
"""
# Import Alphalens
import alphalens as al
# Get asset forward returns and quantile classification
# based on sentiment scores
factor_data = al.utils.get_clean_factor_and_forward_returns(
factor=pipeline_output['sentiment_score'],
prices=asset_prices,
quantiles=2,
periods=(1,5,10),
)
# Display first 5 rows
factor_data.head(5)
"""
Explanation: Now we can use Quantopian's open source factor analysis tool, Alphalens, to test the quality of our selection strategy. First, let's combine our factor and pricing data using get_clean_factor_and_forward_returns. This function classifies our factor data into quantiles and computes forward returns for each security for multiple holding periods. We will separate our factor data into 2 quantiles (the top and bottom half), and use 1, 5 and 10 day holding periods:
End of explanation
"""
# Calculate mean return by factor quantile
mean_return_by_q, std_err_by_q = al.performance.mean_return_by_quantile(factor_data)
# Plot mean returns by quantile and holding period
# over evaluation time range
al.plotting.plot_quantile_returns_bar(
mean_return_by_q.apply(
al.utils.rate_of_return,
axis=0,
args=('1D',)
)
);
"""
Explanation: Having our data in this format allows us to use several of Alphalens's analysis and plotting tools. Let's start by looking at the mean returns by quantile over the entire period. Because our goal is to build a long-short strategy, we want to see the lower quantile (1) have negative returns and the upper quantile (2) have positive returns:
End of explanation
"""
import pandas as pd
# Calculate factor-weighted long-short portfolio returns
ls_factor_returns = al.performance.factor_returns(factor_data)
# Plot cumulative returns for 5 day holding period
al.plotting.plot_cumulative_returns(ls_factor_returns['5D'], '5D', freq=pd.tseries.offsets.BDay());
"""
Explanation: We can also plot the cumulative returns of a factor-weighted long-short portfolio with a 5 day holding period using the following code:
End of explanation
"""
# Import libraries
import numpy as np
from statsmodels import regression
import statsmodels.api as sm
import matplotlib.pyplot as plt
import math
# Get data for the specified period and stocks
start = '2014-01-01'
end = '2015-01-01'
asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)
# We have to take the percent changes to get to returns
# Get rid of the first (0th) element because it is NAN
r_a = asset.pct_change()[1:]
r_b = benchmark.pct_change()[1:]
# Let's plot them just for fun
r_a.plot()
r_b.plot()
plt.ylabel("Daily Return")
plt.legend();
"""
Explanation: Beta Hedging
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
Factor Models
Factor models are a way of explaining the returns of one asset via a linear combination of the returns of other assets. The general form of a factor model is
$$Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n$$
This looks familiar, as it is exactly the model type that a linear regression fits. The $X$'s can also be indicators rather than assets. An example might be an analyst estimate.
What is Beta?
An asset's beta to another asset is just the $\beta$ from the above model. For instance, if we regressed TSLA against the S&P 500 using the model $Y_{TSLA} = \alpha + \beta X$, then TSLA's beta exposure to the S&P 500 would be that beta. If we used the model $Y_{TSLA} = \alpha + \beta_1 X_{SPY} + \beta_2 X_{AAPL}$, then we now have two betas: one is TSLA's exposure to the S&P 500 and one is TSLA's exposure to AAPL.
Often "beta" will refer to a stock's beta exposure to the S&P 500. We will use it to mean that unless otherwise specified.
End of explanation
"""
# Let's define everything in familiar regression terms
X = r_b.values # Get just the values, ignore the timestamps
Y = r_a.values
def linreg(x,y):
# We add a constant so that we can also fit an intercept (alpha) to the model
# This just adds a column of 1s to our data
x = sm.add_constant(x)
model = regression.linear_model.OLS(y,x).fit()
# Remove the constant now that we're done
x = x[:, 1]
return model.params[0], model.params[1]
alpha, beta = linreg(X,Y)
print 'alpha: ' + str(alpha)
print 'beta: ' + str(beta)
"""
Explanation: Now we can perform the regression to find $\alpha$ and $\beta$:
End of explanation
"""
X2 = np.linspace(X.min(), X.max(), 100)
Y_hat = X2 * beta + alpha
plt.scatter(X, Y, alpha=0.3) # Plot the raw data
plt.xlabel("SPY Daily Return")
plt.ylabel("TSLA Daily Return")
plt.plot(X2, Y_hat, 'r', alpha=0.9); # Add the regression line, colored in red
"""
Explanation: If we plot the line $\alpha + \beta r_a$, we can see that it does indeed look like the line of best fit:
End of explanation
"""
# Construct a portfolio with beta hedging
portfolio = -1*beta*r_b + r_a
portfolio.name = "TSLA + Hedge"
# Plot the returns of the portfolio as well as the asset by itself
portfolio.plot(alpha=0.9)
r_b.plot(alpha=0.5);
r_a.plot(alpha=0.5);
plt.ylabel("Daily Return")
plt.legend()
"""
Explanation: Risk Exposure
More generally, this beta gets at the concept of how much risk exposure you take on by holding an asset. If an asset has a high beta exposure to the S&P 500, it will do very well while the market is rising but very poorly when the market falls. A high beta corresponds to high speculative risk. You are taking out a more volatile bet.
At Quantopian, we value strategies that have negligible beta exposure to as many factors as possible. What this means is that all of the returns in a strategy lie in the $\alpha$ portion of the model, and are independent of other factors. This is highly desirable, as it means that the strategy is agnostic to market conditions. It will make money equally well in a crash as it will during a bull market. These strategies are the most attractive to individuals with huge cash pools such as endowments and sovereign wealth funds.
Risk Management
The process of reducing exposure to other factors is known as risk management. Hedging is one of the best ways to perform risk management in practice.
Hedging
If we determine that our portfolio's returns are dependent on the market via this relation
$$Y_{portfolio} = \alpha + \beta X_{SPY}$$
then we can take out a short position in SPY to try to cancel out this risk. The amount we take out is $-\beta DV$ where $DV$ is the total dollar volume of our portfolio. This works because if our returns are approximated by $\alpha + \beta X_{SPY}$, then adding a short in SPY will make our new returns be $\alpha + \beta X_{SPY} - \beta X_{SPY} = \alpha$. Our returns are now purely alpha, which is independent of SPY and will suffer no risk exposure to the market.
Market Neutral
When a strategy exhibits a consistent beta of 0, we say that this strategy is market neutral.
Problems with Estimation
The problem here is that the beta we estimated is not necessarily going to stay the same as we walk forward in time. As such the amount of short we took out in the SPY may not perfectly hedge our portfolio, and in practice it is quite difficult to reduce beta by a significant amount.
We will talk more about problems with estimating parameters in future lectures. In short, each estimate has a standard error that corresponds with how stable the estimate is within the observed data.
Implementing hedging
Now that we know how much to hedge, let's see how it affects our returns. We will build our portfolio using the asset and the benchmark, weighing the benchmark by $-\beta$ (negative since we are short in it).
End of explanation
"""
print "means: ", portfolio.mean(), r_a.mean()
print "volatilities: ", portfolio.std(), r_a.std()
"""
Explanation: It looks like the portfolio return follows the asset alone fairly closely. We can quantify the difference in their performances by computing the mean returns and the volatilities (standard deviations of returns) for both:
End of explanation
"""
P = portfolio.values
alpha, beta = linreg(X,P)
print('alpha: ' + str(alpha))
print('beta: ' + str(beta))
"""
Explanation: We've decreased volatility at the expense of some returns. Let's check that the alpha is the same as before, while the beta has been eliminated:
End of explanation
"""
# Get the alpha and beta estimates over the last year
start = '2014-01-01'
end = '2015-01-01'
asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)
r_a = asset.pct_change()[1:]
r_b = benchmark.pct_change()[1:]
X = r_b.values
Y = r_a.values
historical_alpha, historical_beta = linreg(X,Y)
print('Asset Historical Estimate:')
print('alpha: ' + str(historical_alpha))
print('beta: ' + str(historical_beta))
# Get data for a different time frame:
start = '2015-01-01'
end = '2015-06-01'
asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)
# Repeat the process from before to compute alpha and beta for the asset
r_a = asset.pct_change()[1:]
r_b = benchmark.pct_change()[1:]
X = r_b.values
Y = r_a.values
alpha, beta = linreg(X,Y)
print('Asset Out of Sample Estimate:')
print('alpha: ' + str(alpha))
print('beta: ' + str(beta))
# Create hedged portfolio and compute alpha and beta
portfolio = -1*historical_beta*r_b + r_a
P = portfolio.values
alpha, beta = linreg(X,P)
print('Portfolio Out of Sample:')
print('alpha: ' + str(alpha))
print('beta: ' + str(beta))
# Plot the returns of the portfolio as well as the asset by itself
portfolio.name = "TSLA + Hedge"
portfolio.plot(alpha=0.9)
r_a.plot(alpha=0.5);
r_b.plot(alpha=0.5)
plt.ylabel("Daily Return")
plt.legend()
"""
Explanation: Note that we developed our hedging strategy using historical data. We can check that it is still valid out of sample by checking the alpha and beta values of the asset and the hedged portfolio in a different time frame:
End of explanation
"""
|
captain-proton/aise | documentation/source/nia/jupyter_nb/exercise_1.ipynb | gpl-3.0 | import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
import subprocess
hosts = ('uni-due.de', 'whitehouse.gov', 'oceania.pool.ntp.org')
log = []
for host in hosts:
process = subprocess.Popen(['ping', '-c', "50", host], stdout=subprocess.PIPE)
for line in process.stdout:
# each line is a raw byte string and has to be decoded
line = line.decode('utf-8')
# the line already contains a newline character
print(line, end='')
log.append((host, line))
"""
Explanation: Exercise 1
Problem 1.1 Application Layer Rerouting
1.1.1 Ping
Running ping against different hosts
Each host is pinged n times and every output line is stored, together with the corresponding host name, as a tuple.
End of explanation
"""
import re
data = []
for host, line in log:
# the regular expression matches time values ("time="), optionally with a decimal part
m = re.search(r'time=(\d+(\.\d+)?)', line)
if m:
groups = m.groups()
time = float(groups[0])
data.append((host, time))
print(data)
"""
Explanation: Parsing the raw data
The round trip time is reported as text such as time=166 ms. It can be extracted from a line with a regular expression.
End of explanation
"""
import itertools
# the data to be grouped must first be sorted by the same key,
# see http://stackoverflow.com/questions/773/how-do-i-use-pythons-itertools-groupby
data = sorted(data, key=lambda t: t[0])
for host, g in itertools.groupby(data, key=lambda t: t[0]):
rtt = [e[1] for e in g]
max_rtt = max(rtt)
min_rtt = min(rtt)
# alternative mean: sum(rtt) / len(rtt)
# but beware of rounding errors!
mean_rtt = np.mean(rtt)
# on computing the variance, see
# http://www.frustfrei-lernen.de/mathematik/varianz-berechnen.html
variance = (1 / len(rtt)) * sum([np.power(x - mean_rtt, 2) for x in rtt])
# the standard deviation is the square root of the variance
std_deviation = np.sqrt(variance)
print('         Host: %s' % host)
print('      Max RTT: %f' % max_rtt)
print('      Min RTT: %f' % min_rtt)
print('     Mean RTT: %f' % mean_rtt)
print('     Variance: %f' % variance)
print('Std deviation: %f' % std_deviation)
print()
"""
Explanation: Ping statistics
The exercise asked, among other things, for the mean and the variance. The variance in particular caused some difficulty.
Wikipedia:
It describes the expected squared deviation of a random variable from its expected value.
A reference on the expected value is linked there. The following (German) video is also recommended:
Zufallsgröße, Erwartungswert, Faires Spiel, ...
Since in that example all outcomes and their probabilities are known, the expected value can easily be computed: $\sum_{i \in I} x_i P(X = x_i)$
Here, however, the possible values are not known in advance, so the uncorrected sample variance is the tool of choice.
$s^2 = \frac{1}{n} \sum_{i = 1}^n (x_i - \overline{x})^2$
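As a small sketch of this formula (the RTT values below are made up, not measured), the uncorrected $1/n$ sample variance matches NumPy's default estimator (`ddof=0`):

```python
import numpy as np

rtt = [166.0, 170.5, 168.2, 171.9, 169.3]  # hypothetical RTT samples in ms
mean_rtt = sum(rtt) / len(rtt)
# uncorrected sample variance: divide by n, not n - 1
variance = sum((x - mean_rtt) ** 2 for x in rtt) / len(rtt)
std_deviation = variance ** 0.5
# np.var and np.std use the same 1/n estimator by default (ddof=0)
```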
End of explanation
"""
import matplotlib.pyplot as plt
times = [e[1] for e in data]
plt.hist(times, 10, density=True, facecolor='orange', alpha=0.75)
plt.xlabel('RTT')
plt.ylabel('Probability')
plt.title('Histogram of round trip times')
plt.grid(True)
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - 10, x_max + 10, y_min - .005, y_max + .005))
plt.show()
"""
Explanation: Displaying a histogram
End of explanation
"""
def ecdf(values):
# 1. sort the values
values = sorted(values)
# 2. reduce the values to unique entries
unique_values = sorted(list(set(values)))
# 3. for each unique time, count how many values are <= x
cumsum_values = []
for u in unique_values:
cumsum_values.append((u, len([1 for _ in values if _ <= u])))
# 4. compute the fraction of values that are <= x
y = np.round([c / len(values) for t, c in cumsum_values], decimals=2)
# a point is drawn for each unique x
plt.plot(unique_values, y, color='#e53935', linestyle=' ', marker='.')
# a single segment is plotted from x to x + 1
for i in range(len(unique_values)):
x_0 = unique_values[i]
x_1 = unique_values[i + 1] if i < len(unique_values) - 1 else unique_values[i] + 1
plt.plot([x_0, x_1], [y[i], y[i]], color='#1e88e5', linestyle='-')
ecdf(times)
plt.title('ECDF of all ping values')
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - 10, x_max + 10, y_min - .02, y_max + .02))
plt.show()
for host in hosts:
times = [t for h, t in data if h == host]
ecdf(times)
plt.title('ECDF for host %s' % host)
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - 10, x_max + 10, y_min - .02, y_max + .02))
plt.show()
"""
Explanation: ECDF
For help with empirical cumulative distribution functions, the economists again provide some nice material:
https://www.youtube.com/watch?v=EtyAsjzifZU
https://onlinecourses.science.psu.edu/stat464/node/84
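A minimal sketch of the idea, using the worked example data from the end of this notebook: sort the values and step up by 1/n at each observation. This compact form is equivalent to the step-plotting function below:

```python
import numpy as np

def ecdf_points(values):
    # empirical CDF: F(x) = fraction of observations <= x
    xs = np.sort(values)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

xs, ys = ecdf_points([6, 2, 7, 12, 1, 11, 1, 1, 2, 3])
# after the sixth of ten sorted values (x = 3) the ECDF reaches 0.6
```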
End of explanation
"""
import random
iterations = (100, 1000, 10000, 1000000)
loss_ab = 0.01
loss_bc = 0.02
for i in iterations:
# 0 <= random.random() < 1
# 1 for packets that get through, 0 for losses
# filter hop a -> b
pakets = [1 if random.random() >= loss_ab else 0 for x in range(0, i)]
# filter hop b -> c
pakets = [1 if random.random() >= loss_bc and p else 0 for p in pakets]
print('Loss: {:3.4f}% over {:8d} packets'.format((1 - float(sum(pakets)) / i) * 100, i))
print('\nCompared to a calculated loss of: {:3.4f}%'.format((0.01 + 0.02 * (1 - 0.01)) * 100))
"""
Explanation: 1.1.2 Overall loss probability
The task was to simulate the overall loss probability on a two-hop route with per-hop loss probabilities of $p_{ab} = 1\%$ and $p_{bc} = 2\%$. An event tree is a good way to picture this.
In total there are $100^2$ possibilities. In one out of 100 cases the packet is lost on the first hop. After each of these 100 cases there are another 100 cases in which the packet can be lost. The first branch of the tree contains one loss, which contributes 100 failing outcomes to the overall loss probability. For each of the other 99 branches there are 2 outcomes in which the packet can still be lost.
Consequently, the overall probability is the probability of the first hop, $\dfrac{1}{100}$, plus $\dfrac{99}{100} * \dfrac{2}{100}$, the probability of the second hop (0.02) (sum and product rules).
$p_{ges} = \dfrac{1}{100} + \dfrac{99}{100} * \dfrac{2}{100} = 0.0298 = 2.98\%$
See: http://www.mathematik-wissen.de/mehrstufige_zufallsexperimente.htm
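The closed-form result can be checked in one line; survival on the whole route requires survival on both hops, which gives the same number as the sum/product-rule derivation above:

```python
p_ab, p_bc = 0.01, 0.02
# a packet survives only if it survives both hops
p_total = 1 - (1 - p_ab) * (1 - p_bc)
# equivalent form via the sum/product rule: p_ab + (1 - p_ab) * p_bc
# both evaluate to 0.0298
```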
End of explanation
"""
def loss(p_i):
p_i = list(p_i)
if len(p_i) < 2:
return p_i[0]
_p = []
# add P(0)
_p.append(p_i[0])
for i in range(1, len(p_i)):
# all further probabilities also depend on what came before
_p.append(p_i[i] * (1 - sum(_p)))
return sum(_p)
print(loss((0.01, 0.02)))
print(loss((0.5, 0.4, 0.3, 0.2)))
"""
Explanation: If there are n arbitrary hops with given loss probabilities, the following function can be used:
End of explanation
"""
import math
import random
mu_ab = 42
mu_bc = 42
counts = (100, 1000, 10000, 100000)
delay = {
'ab': [],
'bc': []
}
data = []
# generate random delay values for each hop
# example: delay['ab'] = [[1.0, 42.3, 60.1, 34.3], [...], [...], [..]] (numpy arrays!)
for count in counts:
delay['ab'].append(np.array([-mu_ab * math.log(random.uniform(0, 1)) for i in range(0, count)]))
delay['bc'].append(np.array([-mu_bc * math.log(random.uniform(0, 1)) for i in range(0, count)]))
# in Python, plain lists are concatenated by addition:
# [1, 2] + [3, 4] = [1, 2, 3, 4]
# but we want to add the delays a -> b -> c elementwise, so
# numpy arrays are used instead:
# np.array([1, 2]) + np.array([3, 4]) = [4, 6]
# example: data = [([1.0, 42.3, 60.1, 34.3], 100), [...], [...], [..]]
# ..a list of 2-tuples of a numpy array and its corresponding label
data = [(delay['ab'][i] + delay['bc'][i], counts[i]) for i in range(0, len(counts))]
for values, label in data:
# sort the values for the CDF plot
xs = sorted(values)
# evenly spaced values for the y axis
ys = np.arange(1, len(xs) + 1) / float(len(xs))
plt.plot(xs, ys, label=label)
# add a legend; labels must have been set via plt.plot beforehand
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# add padding
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - 10, x_max + 10, y_min - .02, y_max + .02))
plt.show()
"""
Explanation: 1.1.3 Delays and their comparison
Exponentially distributed values with a given mean are to be generated and displayed as a CDF. Such values can be generated with the function $R_e = -\mu * log(R_u)$.
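A quick sanity check of this inverse-transform formula (a sketch; seed and sample count are chosen arbitrarily): by the law of large numbers, the sample mean of the generated values should approach $\mu$:

```python
import math
import random

random.seed(0)
mu = 42
samples = [-mu * math.log(random.uniform(0, 1)) for _ in range(200000)]
sample_mean = sum(samples) / len(samples)
# sample_mean is close to mu, the mean of the exponential distribution
```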
End of explanation
"""
import math
import random
plt.hist([d[0] for d in data], bins=20, density=True, label=[d[1] for d in data])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
for i in range(0, len(counts)):
xs, label = delay['ab'][i] + delay['bc'][i], str(counts[i])
plt.hist(xs, bins=80, density=True, label=label)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
"""
Explanation: For illustration, the data is shown once more as a histogram.
End of explanation
"""
# plot the values as before, for comparison with the Erlang distributions
for values, label in data:
xs = sorted(values)
ys = np.arange(1, len(xs) + 1) / float(len(xs))
plt.plot(xs, ys, label=label)
# create evenly spaced x values for the Erlang distribution, i.e. min <= x <= max
# of the last element
e_x = np.linspace(min(data[-1][0]), max(data[-1][0]), len(data[-1][0]))
e_y = {
'erlang.1': [1 - math.exp(-1 / mu_ab * i) * (1 + 1 / mu_ab * i) for i in e_x],
'erlang.2': [1 - math.exp(-1 / (2 * mu_ab) * i) * (1 + 1 / (2 * mu_ab) * i) for i in e_x]
}
for label in e_y:
plt.plot(e_x, e_y[label], label=label)
# add a legend; labels must have been set via plt.plot beforehand
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# add padding
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - 10, x_max + 10, y_min - .02, y_max + .02))
plt.show()
"""
Explanation: Comparison with an Erlang distribution
The distribution function of the Erlang distribution is: $P(D_{ABC} \leq x) = 1 - e^{-gx}(1 + gx)$ with $g = \frac{1}{\mu_{AB}}$
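The distribution function can be written out directly (a sketch, with $\mu_{AB} = 42$ as in the simulation above) and satisfies the usual CDF properties:

```python
import math

mu_ab = 42
g = 1.0 / mu_ab

def erlang2_cdf(x):
    # P(D_ABC <= x) = 1 - exp(-g*x) * (1 + g*x)
    return 1 - math.exp(-g * x) * (1 + g * x)

# erlang2_cdf(0) == 0.0, the function is increasing, and it approaches 1 for large x
```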
End of explanation
"""
# plr = packet loss rate
p_loss = np.linspace(0., 1., 100)
# replication degrees
degrees = [d for d in range(1, 11)]
for degree in degrees:
p_voice = p_loss ** degree
plt.plot(p_loss, p_voice, label=str(degree))
# add a legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title='Degree')
# add padding
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - .02, x_max + .02, y_min - .02, y_max + .02))
plt.ylabel('Sample loss rate')
plt.xlabel('Packet loss rate')
plt.show()
"""
Explanation: 1.1.4
Question 1
Send sequence numbers with the packets and compare them once they are returned
Question 2
Available bandwidth
Number of connections to other nodes
Resource capacity
Uptime of the machine
Mobile nodes (battery powered)
Question 3
Regulating the bandwidth
Determining new supernodes
Forwarding packets past the user
Question 4
Supernodes form an overlay, since they regulate the traffic towards the clients and are known to other supernodes. The actual network is only realized through the supernodes.
ECDF example
See the Wikipedia article Empirische Verteilungsfunktion
Input data:
6, 2, 7, 12, 1, 11, 1, 1, 2, 3
sort:
1, 1, 1, 2, 2, 3, 6, 7, 11, 12
table:
<pre>
        1   2   3   6   7   11  12
        3   2   1   1   1   1   1
</pre>
<pre>
cumsum  3   5   6   7   8   9   10
/10     .3  .5  .6  .7  .8  .9  1.0
</pre>
Problem 1.2 Replication of voice and data packets and QoE
1.2.1 Voice sample loss rate vs. packet loss rate
The voice sample loss rate $p_{voice}$ is to be plotted as a function of the packet loss rate $p_{loss}$ for the replication degrees $R = 1, 2, 3, 4$. This is shown on slides 16 - 18 of the lecture notes; the formula $p_{voice} = p_{loss}^R$ can be used here. For the packet loss rate $p_{loss}$, values from 0% to 100% are used.
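The relation is a one-liner: a sample is lost only if the original packet and all its replicas are lost. As a sketch with an assumed packet loss rate of 10%:

```python
p_loss = 0.1
# a sample survives if at least one of the R copies gets through
sample_loss = {R: p_loss ** R for R in (1, 2, 3, 4)}
# each additional replica multiplies the sample loss rate by p_loss:
# R=1 -> 0.1, R=2 -> 0.01, R=3 -> 0.001, R=4 -> 0.0001
```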
End of explanation
"""
import math
for degree in degrees:
p_voice = p_loss ** degree
mos = [4 * math.exp(-4.2 * x) + 1 for x in p_voice]
plt.plot(p_loss, mos, label=str(degree))
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title='Degree')
x_min, x_max, y_min, y_max = plt.axis()
plt.axis((x_min - .02, x_max + .02, y_min - .1, y_max + .1))
plt.ylabel('MOS')
plt.xlabel('Packet loss rate')
plt.show()
"""
Explanation: 1.2.2 MOS vs. packet loss rate
With the previously computed voice sample loss rate, $MOS(p_{voice})$ can be determined ($MOS(x) = 4 e^{-4.2 * x} + 1$ - see the exercise sheet). Even at a replication degree of 2, a considerable improvement can be observed.
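The endpoints of this formula are easy to verify (a small sketch):

```python
import math

def mos(p_voice):
    # MOS(x) = 4 * exp(-4.2 * x) + 1
    return 4 * math.exp(-4.2 * p_voice) + 1

# a loss-free stream scores the maximum MOS of 5;
# total loss drops the score to just above 1
```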
End of explanation
"""
import random
packet_count = 100000
degree = 4
loss = 0.09
packet_time_delta = 20
delay_sender_receiver = 13
# decide for each packet whether it arrives
packets = []
sample_delay = [0 for x in range(packet_count)]
for i in range(packet_count):
packets.append(random.uniform(0, 1) > loss)
loss_without_rep = sum([1 for x in packets if not x]) / float(packet_count) * 100
print('Packet loss without replication: {:.3f}%'.format(loss_without_rep))
"""
Explanation: 1.2.3
First, the delay of the received voice samples is to be determined. Every 20 ms a packet is sent with a constant delay of 13 ms. Slide 16 in particular is helpful for visualizing this.
End of explanation
"""
|
dolittle007/dolittle007.github.io | notebooks/Euler-Maruyama and SDEs.ipynb | gpl-3.0 | %pylab inline
import pymc3 as pm
import theano.tensor as tt
import scipy
from pymc3.distributions.timeseries import EulerMaruyama
"""
Explanation: Inferring parameters of SDEs using a Euler-Maruyama scheme
This notebook is derived from a presentation prepared for the Theoretical Neuroscience Group, Institute of Systems Neuroscience at Aix-Marseile University.
End of explanation
"""
# parameters
λ = -0.78
σ2 = 5e-3
N = 200
dt = 1e-1
# time series
x = 0.1
x_t = []
# simulate
for i in range(N):
x += dt * λ * x + sqrt(dt) * σ2 * randn()
x_t.append(x)
x_t = array(x_t)
# z_t noisy observation
z_t = x_t + randn(x_t.size) * 5e-3
figure(figsize=(10, 3))
subplot(121)
plot(x_t[:30], 'k', label='$x(t)$', alpha=0.5), plot(z_t[:30], 'r', label='$z(t)$', alpha=0.5)
title('Transient'), legend()
subplot(122)
plot(x_t[30:], 'k', label='$x(t)$', alpha=0.5), plot(z_t[30:], 'r', label='$z(t)$', alpha=0.5)
title('All time');
tight_layout()
"""
Explanation: Toy model 1
Here's a scalar linear SDE in symbolic form
$ dX_t = \lambda X_t dt + \sigma^2 dW_t $
discretized with the Euler-Maruyama scheme
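One Euler-Maruyama step reads $x_{n+1} = x_n + \Delta t\, \lambda x_n + \sqrt{\Delta t}\, \sigma \xi_n$ with $\xi_n \sim N(0, 1)$, which is exactly the update in the simulation loop below. As a sketch, with the noise switched off the scheme reduces to explicit Euler for $\dot{x} = \lambda x$:

```python
import numpy as np

def em_step(x, lam, sigma, dt, xi):
    # one Euler-Maruyama step for dX = lam * X dt + sigma dW
    return x + dt * lam * x + np.sqrt(dt) * sigma * xi

x, lam, dt = 0.1, -0.78, 1e-1
for _ in range(200):
    x = em_step(x, lam, sigma=0.0, dt=dt, xi=0.0)
# with sigma = 0 the iteration is exactly x0 * (1 + dt * lam)**200
```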
End of explanation
"""
def lin_sde(x, lam):
return lam * x, σ2
"""
Explanation: What is the inference we want to make? Since we've made a noisy observation of the generated time series, we need to estimate both $x(t)$ and $\lambda$.
First, we rewrite our SDE as a function returning a tuple of the drift and diffusion coefficients
End of explanation
"""
with pm.Model() as model:
# uniform prior, but we know it must be negative
lam = pm.Flat('lam')
# "hidden states" following a linear SDE distribution
# parametrized by time step (det. variable) and lam (random variable)
xh = EulerMaruyama('xh', dt, lin_sde, (lam, ), shape=N, testval=x_t)
# predicted observation
zh = pm.Normal('zh', mu=xh, sd=5e-3, observed=z_t)
"""
Explanation: Next, we describe the probability model as a set of three stochastic variables, lam, xh, and zh:
End of explanation
"""
with model:
# optimize to find the mode of the posterior as starting point for prob. mass
start = pm.find_MAP(vars=[xh], fmin=scipy.optimize.fmin_l_bfgs_b)
# "warm up" to transition from mode to prob. mass
step = pm.NUTS(scaling=start)
trace = pm.sample(1000, step, progressbar=True)
# sample from the prob. mass
step = pm.NUTS(scaling=trace[-1], gamma=.25)
trace = pm.sample(2000, step, start=trace[-1], progressbar=True)
"""
Explanation: Once the model is constructed, we perform inference, i.e. sample from the posterior distribution, in the following steps:
End of explanation
"""
figure(figsize=(10, 3))
subplot(121)
plot(percentile(trace[xh], [2.5, 97.5], axis=0).T, 'k', label='$\hat{x}_{95\%}(t)$')
plot(x_t, 'r', label='$x(t)$')
legend()
subplot(122)
hist(trace[lam], 30, label='$\hat{\lambda}$', alpha=0.5)
axvline(λ, color='r', label='$\lambda$', alpha=0.5)
legend();
"""
Explanation: Next, we plot some basic statistics on the samples from the posterior,
End of explanation
"""
# generate trace from posterior
ppc_trace = pm.sample_ppc(trace, model=model)
# plot with data
figure(figsize=(10, 3))
plot(percentile(ppc_trace['zh'], [2.5, 97.5], axis=0).T, 'k', label=r'$z_{95\% PP}(t)$')
plot(z_t, 'r', label='$z(t)$')
legend()
"""
Explanation: A model can fit the data precisely and still be wrong; we need to use posterior predictive checks to assess whether, under our fit model, the data are likely.
In other words, we
- assume the model is correct
- simulate new observations
- check that the new observations fit with the original data
End of explanation
"""
N, τ, a, m, σ2 = 200, 3.0, 1.05, 0.2, 1e-1
xs, ys = [0.0], [1.0]
for i in range(N):
x, y = xs[-1], ys[-1]
dx = τ * (x - x**3.0/3.0 + y)
dy = (1.0 / τ) * (a - x)
xs.append(x + dt * dx + sqrt(dt) * σ2 * randn())
ys.append(y + dt * dy + sqrt(dt) * σ2 * randn())
xs, ys = array(xs), array(ys)
zs = m * xs + (1 - m) * ys + randn(xs.size) * 0.1
figure(figsize=(10, 2))
plot(xs, label='$x(t)$')
plot(ys, label='$y(t)$')
plot(zs, label='$z(t)$')
legend()
"""
Explanation: Note that
inference also estimates the initial conditions
the observed data $z(t)$ lies fully within the 95% interval of the PPC.
there are many other ways of evaluating fit
Toy model 2
As the next model, let's use a 2D deterministic oscillator,
\begin{align}
\dot{x} &= \tau (x - x^3/3 + y) \
\dot{y} &= \frac{1}{\tau} (a - x)
\end{align}
with noisy observation $z(t) = m x + (1 - m) y + N(0, 0.05)$.
End of explanation
"""
def osc_sde(xy, τ, a):
x, y = xy[:, 0], xy[:, 1]
dx = τ * (x - x**3.0/3.0 + y)
dy = (1.0 / τ) * (a - x)
dxy = tt.stack([dx, dy], axis=0).T
return dxy, σ2
"""
Explanation: Now, estimate the hidden states $x(t)$ and $y(t)$, as well as parameters $\tau$, $a$ and $m$.
As before, we rewrite our SDE as a function returned drift & diffusion coefficients:
End of explanation
"""
xys = c_[xs, ys]
with pm.Model() as model:
τh = pm.Uniform('τh', lower=0.1, upper=5.0)
ah = pm.Uniform('ah', lower=0.5, upper=1.5)
mh = pm.Uniform('mh', lower=0.0, upper=1.0)
xyh = EulerMaruyama('xyh', dt, osc_sde, (τh, ah), shape=xys.shape, testval=xys)
zh = pm.Normal('zh', mu=mh * xyh[:, 0] + (1 - mh) * xyh[:, 1], sd=0.1, observed=zs)
"""
Explanation: As before, the Euler-Maruyama discretization of the SDE is written as a prediction of the state at step $i+1$ based on the state at step $i$.
We can now write our statistical model as before, with uninformative priors on $\tau$, $a$ and $m$:
End of explanation
"""
with model:
# optimize to find the mode of the posterior as starting point for prob. mass
start = pm.find_MAP(vars=[xyh], fmin=scipy.optimize.fmin_l_bfgs_b)
# "warm up" to transition from mode to prob. mass
step = pm.NUTS(scaling=start)
trace = pm.sample(100, step, progressbar=True)
# sample from the prob. mass
step = pm.NUTS(scaling=trace[-1], gamma=.25)
trace = pm.sample(2000, step, start=trace[-1], progressbar=True)
"""
Explanation: As with the linear SDE, we 1) find a MAP estimate, 2) warm up and 3) sample from the probability mass:
End of explanation
"""
figure(figsize=(10, 6))
subplot(211)
plot(percentile(trace[xyh][..., 0], [2.5, 97.5], axis=0).T, 'k', label='$\hat{x}_{95\%}(t)$')
plot(xs, 'r', label='$x(t)$')
legend(loc=0)
subplot(234), hist(trace['τh']), axvline(τ), xlim([1.0, 4.0]), title('τ')
subplot(235), hist(trace['ah']), axvline(a), xlim([0, 2.0]), title('a')
subplot(236), hist(trace['mh']), axvline(m), xlim([0, 1]), title('m')
tight_layout()
"""
Explanation: Again, the result is a set of samples from the posterior, including our parameters of interest but also the hidden states
End of explanation
"""
# generate trace from posterior
ppc_trace = pm.sample_ppc(trace, model=model)
# plot with data
figure(figsize=(10, 3))
plot(percentile(ppc_trace['zh'], [2.5, 97.5], axis=0).T, 'k', label=r'$z_{95\% PP}(t)$')
plot(zs, 'r', label='$z(t)$')
legend()
"""
Explanation: Again, we can perform a posterior predictive check, that our data are likely given the fit model
End of explanation
"""
|
YosefLab/scVI | tests/notebooks/autotune_advanced_notebook.ipynb | bsd-3-clause | import sys
sys.path.append("../../")
sys.path.append("../")
%matplotlib inline
import logging
import os
import pickle
import scanpy
import anndata
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from hyperopt import hp
import scvi
from scvi.data import cortex, pbmc_dataset, brainlarge_dataset, annotation_simulation
from scvi.inference import auto_tune_scvi_model
logger = logging.getLogger("scvi.inference.autotune")
logger.setLevel(logging.WARNING)
def allow_notebook_for_test():
print("Testing the autotune advanced notebook")
test_mode = False
def if_not_test_else(x, y):
if not test_mode:
return x
else:
return y
save_path = "data/"
n_epochs = if_not_test_else(1000, 1)
n_epochs_brain_large = if_not_test_else(50, 1)
max_evals = if_not_test_else(100, 1)
reserve_timeout = if_not_test_else(180, 5)
fmin_timeout = if_not_test_else(300, 10)
"""
Explanation: Advanced autotune tutorial
DISCLAIMER: Most experiments in this notebook require one or more GPUs to keep their runtime a matter of hours.
DISCLAIMER: To use our new autotune feature in parallel mode, you need to install MongoDb first.
In this notebook, we give an in-depth tutorial on scVI's new autotune module.
Overall, the new module enables users to perform parallel hyperparameter search for any scVI model and on any number of GPUs/CPUs. Although the search may be performed sequentially using only one GPU/CPU, we will focus on the parallel case.
Note that GPUs provide a much faster approach as they are particularly suitable for neural networks gradient back-propagation.
Additionally, we provide the code used to generate the results presented in our Hyperoptimization blog post. For an in-depth analysis of the results obtained on three gold standard scRNAseq datasets (Cortex, PBMC and BrainLarge), please refer to the above blog post. In the blog post, we also suggest guidelines on how and when to use our auto-tuning feature.
End of explanation
"""
cortex_dataset = scvi.data.cortex(save_path=save_path)
best_vae, trials = auto_tune_scvi_model(
gene_dataset=cortex_dataset,
parallel=True,
exp_key="cortex_dataset",
train_func_specific_kwargs={"n_epochs": n_epochs},
max_evals=max_evals,
reserve_timeout=reserve_timeout,
fmin_timeout=fmin_timeout,
)
latent = best_vae.get_latent_representation()
"""
Explanation: Default usage
For the sake of principled simplicity, we provide an all-default approach to hyperparameter search for any scVI model.
The few lines below present an example of how to perform hyper-parameter search for scVI on the Cortex dataset.
Note that, by default, the model used is scVI's VAE and the trainer is the UnsupervisedTrainer
Also, the default search space is as follows:
n_latent: [5, 15]
n_hidden: {64, 128, 256}
n_layers: [1, 5]
dropout_rate: {0.1, 0.3, 0.5, 0.7}
reconstruction_loss: {"zinb", "nb"}
lr: {0.01, 0.005, 0.001, 0.0005, 0.0001}
On a more practical note, verbosity varies in the following way:
logger.setLevel(logging.WARNING) will show a progress bar.
logger.setLevel(logging.INFO) will show global logs including the number of jobs done.
logger.setLevel(logging.DEBUG) will show detailed logs for each training (e.g the parameters tested).
This function's behaviour can be customized, please refer to the rest of this tutorial as well as its documentation for information about the different parameters available.
Running the hyperoptimization process.
End of explanation
"""
space = {
"model_tunable_kwargs": {"dropout_rate": hp.uniform("dropout_rate", 0.1, 0.3)},
"train_func_tunable_kwargs": {"lr": hp.loguniform("lr", -4.0, -3.0)},
}
best_vae, trials = auto_tune_scvi_model(
gene_dataset=cortex_dataset,
space=space,
parallel=True,
exp_key="cortex_dataset_custom_space",
train_func_specific_kwargs={"n_epochs": n_epochs},
max_evals=max_evals,
reserve_timeout=reserve_timeout,
fmin_timeout=fmin_timeout,
)
"""
Explanation: Returned objects
The trials object contains detailed information about each run.
trials.trials is an Iterable in which each element corresponds to a single run. It can be used as a dictionary whose "result" key yields a dictionary containing the outcome of the run as defined in our default objective function (or the user's custom version). For example, it will contain information on the hyperparameters used (under the "space" key), the resulting metric (under the "loss" key) or the status of the run.
The best_trainer object can be used directly as an scVI Trainer object. It is the result of a training on the whole dataset provided using the optimal set of hyperparameters found.
Custom hyperparameter space
Although our default can be a good one in a number of cases, we still provide an easy way to use custom values for the hyperparameters search space.
These are broken down in three categories:
Hyperparameters for the Trainer instance. (if any)
Hyperparameters for the Trainer instance's train method. (e.g lr)
Hyperparameters for the model instance. (e.g n_layers)
To build your own hyperparameter space follow the scheme used in scVI's codebase as well as the sample below.
Note that the various spaces you define have to follow the hyperopt syntax, for which you can find a detailed description here.
For example, if you wanted to search over a continuous range of dropout rates varying in [0.1, 0.3] and a continuous learning rate varying in [0.0001, 0.001], you could use the following search space.
End of explanation
"""
pbmc_dataset = pbmc_dataset(save_path=os.path.join(save_path, "10X/"))
# best_trainer, trials = auto_tune_scvi_model(
# gene_dataset=pbmc_dataset,
# metric_name="entropy_batch_mixing",
# data_loader_name="train_set",
# parallel=True,
# exp_key="pbmc_entropy_batch_mixing",
# train_func_specific_kwargs={"n_epochs": n_epochs},
# max_evals=max_evals,
# reserve_timeout=reserve_timeout,
# fmin_timeout=fmin_timeout,
# )
"""
Explanation: Custom objective metric
By default, our autotune process tracks the marginal negative log likelihood of the best state of the model according to the held-out Evidence Lower Bound (ELBO). But, if you want to track a different early stopping metric and optimize a different loss, you can use auto_tune_scvi_model's parameters.
For example, if for some reason, you had a dataset coming from two batches (i.e two merged datasets) and wanted to optimize the hyperparameters for the batch mixing entropy. You could use the code below, which makes use of the metric_name argument of auto_tune_scvi_model. This can work for any metric that is implemented in the ScviDataLoader class you use. You may also specify the name of the ScviDataLoader attribute you want to use (e.g "train_set").
End of explanation
"""
from notebooks.utils.autotune_advanced_notebook import custom_objective_hyperopt
synthetic_dataset = annotation_simulation(1, save_path=os.path.join(save_path, "simulation/"))
objective_kwargs = dict(dataset=synthetic_dataset, n_epochs=n_epochs)
best_trainer, trials = auto_tune_scvi_model(
custom_objective_hyperopt=custom_objective_hyperopt,
objective_kwargs=objective_kwargs,
parallel=True,
exp_key="synthetic_dataset_scanvi",
max_evals=max_evals,
reserve_timeout=reserve_timeout,
fmin_timeout=fmin_timeout,
)
"""
Explanation: Custom objective function
Below, we describe, using one of our Synthetic dataset, how to tune our annotation model SCANVI for, e.g, better accuracy on a 20% subset of the labelled data. Note that the model is trained in a semi-supervised framework, that is why we have a labelled and unlabelled dataset. Please, refer to the original paper for details on SCANVI!
In this case, as described in our annotation notebook we may want to form the labelled/unlabelled sets using batch indices. Unfortunately, that requires a little "by hand" work. Even in that case, we are able to leverage the new autotune module to perform hyperparameter tuning. In order to do so, one has to write his own objective function and feed it to auto_tune_scvi_model.
One can proceed as described below.
Note three important conditions:
Since it is going to be pickled, the objective should not be implemented in the "main" module, i.e. an executable script or a notebook.
The objective should take the search space as its first argument and a boolean is_best_training as its second.
If not using a custom search space, the search space should be expected to take the form of a dictionary with the following keys:
"model_tunable_kwargs"
"trainer_tunable_kwargs"
"train_func_tunable_kwargs"
End of explanation
"""
# brain_large_dataset_path = os.path.join(save_path, 'brainlarge_dataset_test.h5ad')
# best_trainer, trials = auto_tune_scvi_model(
# gene_dataset=brain_large_dataset_path,
# parallel=True,
# exp_key="brain_large_dataset",
# max_evals=max_evals,
# trainer_specific_kwargs={
# "early_stopping_kwargs": {
# "early_stopping_metric": "elbo",
# "save_best_state_metric": "elbo",
# "patience": 20,
# "threshold": 0,
# "reduce_lr_on_plateau": True,
# "lr_patience": 10,
# "lr_factor": 0.2,
# }
# },
# train_func_specific_kwargs={"n_epochs": n_epochs_brain_large},
# reserve_timeout=reserve_timeout,
# fmin_timeout=fmin_timeout,
# )
"""
Explanation: Delayed populating, for very large datasets.
DISCLAIMER: We don't actually need this for the BrainLarge dataset with 720 genes; this is just an example.
After building the objective function and feeding it to hyperopt, it is pickled and sent to the MongoWorkers. Thus, if you pass a loaded dataset as a partial argument to the objective function and this dataset exceeds 4 GB, you'll get a PickleError (objects larger than 4 GB can't be pickled).
To remedy this issue, in case you have a very large dataset for which you want to perform hyperparameter optimization, you should subclass scVI's DownloadableDataset or use one of its many existing subclasses, so that the dataset can be populated inside the objective function, which is called by each worker.
End of explanation
"""
adata = scvi.data.pbmcs_10x_cite_seq(
save_path=save_path, run_setup_anndata=False
)
adata = if_not_test_else(adata, adata[:75, :50].copy())
scvi.data.setup_anndata(
adata, batch_key="batch", protein_expression_obsm_key="protein_expression"
)
space = {
"model_tunable_kwargs": {
"n_latent": 5 + hp.randint("n_latent", 11), # [5, 15]
"n_hidden": hp.choice("n_hidden", [64, 128, 256]),
"n_layers_encoder": 1 + hp.randint("n_layers", 5),
"dropout_rate_encoder": hp.choice("dropout_rate", [0.1, 0.3, 0.5, 0.7]),
"gene_likelihood": hp.choice("gene_likelihood", ["zinb", "nb"]),
},
"train_func_tunable_kwargs": {
"lr": hp.choice("lr", [0.01, 0.005, 0.001, 0.0005, 0.0001])
},
}
best_vae, trials = auto_tune_scvi_model(
gene_dataset=adata,
space=space,
parallel=True,
model_class=scvi.model.TOTALVI,
exp_key="totalvi_adata",
train_func_specific_kwargs={"n_epochs": n_epochs},
max_evals=max_evals,
reserve_timeout=reserve_timeout,
fmin_timeout=fmin_timeout,
save_path=save_path, # temp dir, see conftest.py
)
best_vae.get_latent_representation()
# def get_param_df(self):
# ddd = {}
# for i, trial in enumerate(self.trials):
# dd = {}
# dd["marginal_ll"] = trial["result"]["loss"]
# for item in trial["result"]["space"].values():
# for key, value in item.items():
# dd[key] = value
# ddd[i] = dd
# df_space = pd.DataFrame(ddd)
# df_space = df_space.T
# n_params_dataset = np.vectorize(
# partial(
# n_params, self.trainer.adata.uns["_scvi"]["summary_stats"]["n_vars"]
# )
# )
# df_space["n_params"] = n_params_dataset(
# df_space["n_layers"], df_space["n_hidden"], df_space["n_latent"]
# )
# df_space = df_space[
# [
# "marginal_ll",
# "n_layers",
# "n_hidden",
# "n_latent",
# "reconstruction_loss",
# "dropout_rate",
# "lr",
# "n_epochs",
# "n_params",
# ]
# ]
# df_space = df_space.sort_values(by="marginal_ll")
# df_space["run index"] = df_space.index
# df_space.index = np.arange(1, df_space.shape[0] + 1)
# return df_space
# def n_params(n_vars, n_layers, n_hidden, n_latent):
# if n_layers == 0:
# res = 2 * n_vars * n_latent
# else:
# res = 2 * n_vars * n_hidden
# for i in range(n_layers - 1):
# res += 2 * n_hidden * n_hidden
# res += 2 * n_hidden * n_latent
# return res
"""
Explanation: Working with totalVI
End of explanation
"""
|
tensorflow/model-optimization | tensorflow_model_optimization/g3doc/guide/quantization/training_comprehensive_guide.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
! pip uninstall -y tensorflow
! pip install -q tf-nightly
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model = setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def setup_pretrained_model():
model = setup_model()
pretrained_weights = setup_pretrained_weights()
model.load_weights(pretrained_weights)
return model
setup_model()
pretrained_weights = setup_pretrained_weights()
"""
Explanation: Quantization aware training comprehensive guide
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Welcome to the comprehensive guide for Keras quantization aware training.
This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the
API docs.
If you want to see the benefits of quantization aware training and what's supported, see the overview.
For a single end-to-end example, see the quantization aware training example.
The following use cases are covered:
Deploy a model with 8-bit quantization with these steps.
Define a quantization aware model.
For Keras HDF5 models only, use special checkpointing and
deserialization logic. Training is otherwise standard.
Create a quantized model from the quantization aware one.
Experiment with quantization.
Anything for experimentation has no supported path to deployment.
Custom Keras layers fall under experimentation.
Setup
For finding the APIs you need and understanding purposes, you can run but skip reading this section.
End of explanation
"""
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
quant_aware_model.summary()
"""
Explanation: Define quantization aware model
By defining models in the following ways, there are available paths to deployment to backends listed in the overview page. By default, 8-bit quantization is used.
Note: a quantization aware model is not actually quantized. Creating a quantized model is a separate step.
Quantize whole model
Your use case:
* Subclassed models are not supported.
Tips for better model accuracy:
Try "Quantize some layers" to skip quantizing the layers that reduce accuracy the most.
It's generally better to finetune with quantization aware training as opposed to training from scratch.
To make the whole model aware of quantization, apply tfmot.quantization.keras.quantize_model to the model.
End of explanation
"""
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `quantize_annotate_layer` to annotate that only the
# Dense layers should be quantized.
def apply_quantization_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.quantization.keras.quantize_annotate_layer(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
base_model,
clone_function=apply_quantization_to_dense,
)
# Now that the Dense layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
"""
Explanation: Quantize some layers
Quantizing a model can have a negative effect on accuracy. You can selectively quantize layers of a model to explore the trade-off between accuracy, speed, and model size.
Your use case:
* To deploy to a backend that only works well with fully quantized models (e.g. EdgeTPU v1, most DSPs), try "Quantize whole model".
Tips for better model accuracy:
* It's generally better to finetune with quantization aware training as opposed to training from scratch.
* Try quantizing the later layers instead of the first layers.
* Avoid quantizing critical layers (e.g. attention mechanism).
In the example below, quantize only the Dense layers.
End of explanation
"""
print(base_model.layers[0].name)
"""
Explanation: While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its name property, and look for that name in the clone_function.
End of explanation
"""
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
i = tf.keras.Input(shape=(20,))
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
annotated_model = tf.keras.Model(inputs=i, outputs=o)
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
# For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the
# quantized model can take in float inputs instead of only uint8.
quant_aware_model.summary()
"""
Explanation: More readable but potentially lower model accuracy
This is not compatible with finetuning with quantization aware training, which is why it may be less accurate than the above examples.
Functional example
End of explanation
"""
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
annotated_model = tf.keras.Sequential([
tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
"""
Explanation: Sequential example
End of explanation
"""
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
quant_aware_model.save(keras_model_file)
# `quantize_scope` is needed for deserializing HDF5 models.
with tfmot.quantization.keras.quantize_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
"""
Explanation: Checkpoint and deserialize
Your use case: this code is only needed for the HDF5 model format (not HDF5 weights or other formats).
End of explanation
"""
base_model = setup_pretrained_model()
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Typically you train the model here.
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
"""
Explanation: Create and deploy quantized model
In general, reference the documentation for the deployment backend that you
will use.
This is an example for the TFLite backend.
End of explanation
"""
LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer
class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
# Configure how to quantize weights.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]
# Configure how to quantize activations.
def get_activations_and_quantizers(self, layer):
return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]
def set_quantize_weights(self, layer, quantize_weights):
# Add this line for each item returned in `get_weights_and_quantizers`
# , in the same order
layer.kernel = quantize_weights[0]
def set_quantize_activations(self, layer, quantize_activations):
# Add this line for each item returned in `get_activations_and_quantizers`
# , in the same order.
layer.activation = quantize_activations[0]
# Configure how to quantize outputs (may be equivalent to activations).
def get_output_quantizers(self, layer):
return []
def get_config(self):
return {}
"""
Explanation: Experiment with quantization
Your use case: using the following APIs means that there is no
supported path to deployment. For instance, TFLite conversion
and kernel implementations only support 8-bit quantization.
The features are also experimental and not subject to backward compatibility.
* tfmot.quantization.keras.QuantizeConfig
* tfmot.quantization.keras.quantizers.Quantizer
* tfmot.quantization.keras.quantizers.LastValueQuantizer
* tfmot.quantization.keras.quantizers.MovingAverageQuantizer
Setup: DefaultDenseQuantizeConfig
Experimenting requires using tfmot.quantization.keras.QuantizeConfig, which describes how to quantize the weights, activations, and outputs of a layer.
Below is an example that defines the same QuantizeConfig used for the Dense layer in the API defaults.
During the forward propagation in this example, the LastValueQuantizer returned in get_weights_and_quantizers is called with layer.kernel as the input, producing an output. The output replaces layer.kernel
in the original forward propagation of the Dense layer, via the logic defined in set_quantize_weights. The same idea applies to the activations and outputs.
End of explanation
"""
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class CustomLayer(tf.keras.layers.Dense):
pass
model = quantize_annotate_model(tf.keras.Sequential([
quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
{'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
'CustomLayer': CustomLayer}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
"""
Explanation: Quantize custom Keras layer
This example uses the DefaultDenseQuantizeConfig to quantize the CustomLayer.
Applying the configuration is the same across
the "Experiment with quantization" use cases.
* Apply tfmot.quantization.keras.quantize_annotate_layer to the CustomLayer and pass in the QuantizeConfig.
* Use
tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
"""
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4-bit instead of 8-bits.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]
"""
Explanation: Modify quantization parameters
Common mistake: quantizing the bias to fewer than 32-bits usually harms model accuracy too much.
This example modifies the Dense layer to use 4-bits for its weights instead
of the default 8-bits. The rest of the model continues to use API defaults.
End of explanation
"""
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
"""
Explanation: Applying the configuration is the same across
the "Experiment with quantization" use cases.
* Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
* Use
tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
"""
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
def get_activations_and_quantizers(self, layer):
# Skip quantizing activations.
return []
def set_quantize_activations(self, layer, quantize_activations):
# Empty since `get_activations_and_quantizers` returns
# an empty list.
return
"""
Explanation: Modify parts of layer to quantize
This example modifies the Dense layer to skip quantizing the activation. The rest of the model continues to use API defaults.
End of explanation
"""
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
"""
Explanation: Applying the configuration is the same across
the "Experiment with quantization" use cases.
* Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
* Use
tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
"""
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer):
"""Quantizer which forces outputs to be between -1 and 1."""
def build(self, tensor_shape, name, layer):
# Not needed. No new TensorFlow variables needed.
return {}
def __call__(self, inputs, training, weights, **kwargs):
return tf.keras.backend.clip(inputs, -1.0, 1.0)
def get_config(self):
# Not needed. No __init__ parameters to serialize.
return {}
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4-bit instead of 8-bits.
def get_weights_and_quantizers(self, layer):
# Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer.
return [(layer.kernel, FixedRangeQuantizer())]
"""
Explanation: Use custom quantization algorithm
The tfmot.quantization.keras.quantizers.Quantizer class is a callable that
can apply any algorithm to its inputs.
In this example, the inputs are the weights, and we apply the math in the
FixedRangeQuantizer __call__ function to the weights. Instead of the original
weights values, the output of the
FixedRangeQuantizer is now passed to whatever would have used the weights.
End of explanation
"""
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this `Dense` layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
"""
Explanation: Applying the configuration is the same across
the "Experiment with quantization" use cases.
* Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
* Use
tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
"""
|
YuriyGuts/kaggle-quora-question-pairs | notebooks/feature-phrase-embedding.ipynb | mit | from pygoose import *
from gensim.models.wrappers.fasttext import FastText
from scipy.spatial.distance import cosine, euclidean, cityblock
"""
Explanation: Feature: Phrase Embedding Distances
Based on the pre-trained word embeddings, we'll calculate the mean embedding vector of each question (as well as the unit-length normalized sum of word embeddings), and compute vector distances between these aggregate vectors.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
"""
project = kg.Project.discover()
"""
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
"""
feature_list_id = 'phrase_embedding'
"""
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
"""
tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_train.pickle')
tokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_test.pickle')
tokens = tokens_train + tokens_test
"""
Explanation: Read Data
Preprocessed and tokenized questions.
End of explanation
"""
embedding_model = FastText.load_word2vec_format(project.aux_dir + 'fasttext_vocab.vec')
"""
Explanation: Pretrained word vector database.
End of explanation
"""
def get_phrase_embedding_distances(pair):
q1_vectors = [embedding_model[token] for token in pair[0] if token in embedding_model]
q2_vectors = [embedding_model[token] for token in pair[1] if token in embedding_model]
if len(q1_vectors) == 0:
q1_vectors.append(np.zeros(word_vector_dim))
if len(q2_vectors) == 0:
q2_vectors.append(np.zeros(word_vector_dim))
q1_mean = np.mean(q1_vectors, axis=0)
q2_mean = np.mean(q2_vectors, axis=0)
q1_sum = np.sum(q1_vectors, axis=0)
q2_sum = np.sum(q2_vectors, axis=0)
q1_norm = q1_sum / np.sqrt((q1_sum ** 2).sum())
q2_norm = q2_sum / np.sqrt((q2_sum ** 2).sum())
return [
cosine(q1_mean, q2_mean),
np.log(cityblock(q1_mean, q2_mean) + 1),
euclidean(q1_mean, q2_mean),
cosine(q1_norm, q2_norm),
np.log(cityblock(q1_norm, q2_norm) + 1),
euclidean(q1_norm, q2_norm),
]
distances = kg.jobs.map_batch_parallel(
tokens,
item_mapper=get_phrase_embedding_distances,
batch_size=1000,
)
distances = np.array(distances)
X_train = distances[:len(tokens_train)]
X_test = distances[len(tokens_train):]
print('X_train:', X_train.shape)
print('X_test: ', X_test.shape)
"""
Explanation: Build Features
End of explanation
"""
feature_names = [
'phrase_emb_mean_cosine',
'phrase_emb_mean_cityblock_log',
'phrase_emb_mean_euclidean',
'phrase_emb_normsum_cosine',
'phrase_emb_normsum_cityblock_log',
'phrase_emb_normsum_euclidean',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
"""
Explanation: Save features
End of explanation
"""
|
vbarua/PythonWorkshop | Code/An Interlude on Input and Output/1 - Reading and Writing Data.ipynb | mit | f = open("basicOutput.txt", 'w') # Open/create the basicOutput.txt file for writing ('w')
f.write("Hello World\n") # Write the string to the basicOutput.txt file.
f.write("Goodbye World\n")
f.close() # Close the file.
"""
Explanation: Reading and Writing Data
Basic I/O
Reading and writing to files is referred to as I/O (Input/Output) in some circles.
In Python, the open function is used to open a file for reading and writing. The write function is used to write strings to the file. Finally, the close function is used to close the file.
End of explanation
"""
with open('withFile.txt', 'w') as f:
f.write("A better way of opening a file.\n")
"""
Explanation: It's a good idea to close every file you open. It is recommended that you deal with files using the with keyword which ensures that the file is closed after use.
End of explanation
"""
with open("basicOutput.txt", 'a') as f:
f.write("Are you... ")
f.write("still here?\n")
"""
Explanation: Opening a file with 'w' will overwrite the existing contents of a file. If you want to append more information to the file use 'a'.
End of explanation
"""
with open('basicOutput.txt', 'r') as f: # Use 'r' for reading.
contents = f.read()
print(contents)
"""
Explanation: You can read the contents of a file as a single string using the read function.
End of explanation
"""
with open('basicOutput.txt', 'r') as f:
lines = f.readlines()
print(lines)
"""
Explanation: Notice how Are you... still here? is all on a single line even though it was written using two separate calls to write. This happened because the "Are you... " string didn't terminate in \n (known as the newline character).
You can also read each line of a file into a list directly.
End of explanation
"""
with open('basicOutput.txt', 'r') as f:
for line in f:
print(len(line))
"""
Explanation: or read them in a for loop
End of explanation
"""
import numpy as np
time = [0, 10, 20, 30]
temperature = [300, 314, 323, 331]
zippedData = zip(time, temperature)
np.savetxt('temperatureData.csv', zippedData,
delimiter=',', header="Time (s), Temperature (K)")
zippedData
"""
Explanation: This last method is useful when dealing with very large files. In such cases reading the whole file into Python may be very slow.
Comma Separated Values (CSV)
CSV files are a very basic way of storing data. For example, the following mock file has two columns, one for Time and another for Temperature. Each row represents an entry, with commas used to separate values.
Time (s), Temperature (K)
0,300
10,314
20,323
30,331
Python has a csv library that can be used to read and write CSV files. It's a very general-purpose library that works in many different scenarios. For our purposes, though, it's more straightforward to use two NumPy functions, np.savetxt and np.loadtxt.
End of explanation
"""
with open("temperatureData.csv", 'r') as f:
print(f.read())
"""
Explanation: The above writes the time and temperature data to a file, along with a header describing the data. We can verify this by opening and reading the file
End of explanation
"""
data = np.loadtxt("temperatureData.csv", delimiter=",", skiprows=1)
data
"""
Explanation: The np.loadtxt function can be used to read in csv files
End of explanation
"""
timeFromFile, temperatureFromFile = np.loadtxt("temperatureData.csv", delimiter=",", skiprows=1,
unpack=True)
timeFromFile, temperatureFromFile
"""
Explanation: The skiprows keyword tells loadtxt to skip the first line (which contains the header). It's also possible to read the columns directly into variables.
End of explanation
"""
|
trangel/Data-Science | reinforcement_learning/dqn_atari.ipynb | gpl-3.0 | #XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
"""
Explanation: Deep Q-Network implementation
This notebook shamelessly demands that you implement a DQN - an approximate q-learning algorithm with experience replay and target networks - and see if it works any better this way.
End of explanation
"""
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Frameworks - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
End of explanation
"""
from gym.core import ObservationWrapper
from gym.spaces import Box
from scipy.misc import imresize
from skimage import color
class PreprocessAtari(ObservationWrapper):
def __init__(self, env):
"""A gym wrapper that crops, scales image into the desired shapes and optionally grayscales it."""
ObservationWrapper.__init__(self,env)
self.img_size = (64, 64)
self.observation_space = Box(0.0, 1.0, (self.img_size[0], self.img_size[1], 1))
def observation(self, img):
"""what happens to each observation"""
# Here's what you need to do:
# * crop image, remove irrelevant parts
# * resize image to self.img_size
# (use imresize imported above or any library you want,
# e.g. opencv, skimage, PIL, keras)
# * cast image to grayscale
# * convert image pixels to (0,1) range, float32 type
#top = self.img_size[0]
#bottom = self.img_size[0] - 18
#left = self.img_size[1]
#right = self.img_size[1] - left
#crop = img[top:bottom, left:right, :]
#print(top, bottom, left, right)
img2 = imresize(img, self.img_size)
img2 = color.rgb2gray(img2)
s = (img2.shape[0], img2.shape[1], 1 )
img2 = img2.reshape(s)
img2 = img2.astype('float32') / img2.max()
return img2
import gym
#spawn game instance for tests
env = gym.make("BreakoutDeterministic-v0") #create raw env
env = PreprocessAtari(env)
observation_shape = env.observation_space.shape
n_actions = env.action_space.n
obs = env.reset()
#test observation
assert obs.ndim == 3, "observation must be [batch, time, channels] even if there's just one channel"
assert obs.shape == observation_shape
assert obs.dtype == 'float32'
assert len(np.unique(obs))>2, "your image must not be binary"
assert 0 <= np.min(obs) and np.max(obs) <=1, "convert image pixels to (0,1) range"
print("Formal tests seem fine. Here's an example of what you'll get.")
plt.title("what your network gonna see")
plt.imshow(obs[:, :, 0], interpolation='none',cmap='gray');
"""
Explanation: Let's play some old videogames
This time we're gonna apply approximate q-learning to an atari game called Breakout. It's not the hardest thing out there, but it's definitely way more complex than anything we tried before.
Processing game image
Raw atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn them.
We can thus save a lot of time by preprocessing the game image, including
* Resizing to a smaller shape, 64 x 64
* Converting to grayscale
* Cropping irrelevant image parts (top & bottom)
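As a side note, scipy.misc.imresize is deprecated in recent SciPy releases. Purely as an illustration of the steps above (not the wrapper's actual internals — the function name, nearest-neighbor resize, and ITU-R 601 grayscale weights are my own choices), here is a dependency-light sketch:

```python
import numpy as np

def preprocess_frame(img, out_size=(64, 64)):
    """Grayscale + nearest-neighbor resize to out_size, float32 in [0, 1]."""
    # luminosity grayscale with standard ITU-R 601 weights (sum to 1.0)
    gray = img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114
    # nearest-neighbor resize via index arrays; cv2/skimage interpolate more smoothly
    h, w = gray.shape
    rows = np.arange(out_size[0]) * h // out_size[0]
    cols = np.arange(out_size[1]) * w // out_size[1]
    small = gray[rows][:, cols]
    return (small / 255.0).astype('float32')[..., None]
```

In practice you would reach for cv2.resize or skimage.transform.resize instead; the point here is only the shape and dtype contract the tests below check for.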
End of explanation
"""
from framebuffer import FrameBuffer
def make_env():
env = gym.make("BreakoutDeterministic-v4")
env = PreprocessAtari(env)
env = FrameBuffer(env, n_frames=4, dim_order='tensorflow')
return env
env = make_env()
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
for _ in range(50):
obs, _, _, _ = env.step(env.action_space.sample())
plt.title("Game image")
plt.imshow(env.render("rgb_array"))
plt.show()
plt.title("Agent observation (4 frames left to right)")
plt.imshow(obs.transpose([0,2,1]).reshape([state_dim[0],-1]));
"""
Explanation: Frame buffer
Our agent can only process one observation at a time, so we've got to make sure it contains enough information to find optimal actions. For instance, the agent has to react to moving objects, so it must be able to measure an object's velocity.
To do so, we introduce a buffer that stores the last 4 images. This time everything is pre-implemented for you.
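Conceptually, the buffer is just a fixed-length queue whose contents get concatenated along the channel axis. A minimal sketch of the idea (not the actual FrameBuffer implementation — the class and method names here are made up for illustration):

```python
import numpy as np
from collections import deque

class SimpleFrameBuffer:
    """Stacks the last n_frames observations along the channel axis."""
    def __init__(self, n_frames=4):
        self.n_frames = n_frames
        self.frames = deque(maxlen=n_frames)

    def reset(self, first_frame):
        # pad with copies of the first frame so the stack is full from step 0
        for _ in range(self.n_frames):
            self.frames.append(first_frame)
        return self.observation()

    def push(self, frame):
        self.frames.append(frame)      # oldest frame falls off automatically
        return self.observation()

    def observation(self):
        return np.concatenate(list(self.frames), axis=-1)
```

With 64x64x1 frames and n_frames=4, the agent sees a 64x64x4 tensor, enough to infer velocity from consecutive frames.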
End of explanation
"""
import tensorflow as tf
tf.reset_default_graph()
sess = tf.InteractiveSession()
from keras.layers import Conv2D, Dense, Flatten
from keras.models import Sequential
class DQNAgent:
def __init__(self, name, state_shape, n_actions, epsilon=0, reuse=False):
"""A simple DQN agent"""
with tf.variable_scope(name, reuse=reuse):
#< Define your network body here. Please make sure you don't use any layers created elsewhere >
self.network = Sequential()
self.network.add(Conv2D(filters=16, kernel_size=(3, 3), strides=(2, 2), activation='relu'))
self.network.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(2, 2), activation='relu'))
self.network.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(2, 2), activation='relu'))
self.network.add(Flatten())
self.network.add(Dense(256, activation='relu'))
self.network.add(Dense(n_actions, activation='linear'))
# prepare a graph for agent step
self.state_t = tf.placeholder('float32', [None,] + list(state_shape))
self.qvalues_t = self.get_symbolic_qvalues(self.state_t)
self.weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=name)
self.epsilon = epsilon
def get_symbolic_qvalues(self, state_t):
"""takes agent's observation, returns qvalues. Both are tf Tensors"""
#< apply your network layers here >
qvalues = self.network(state_t) #< symbolic tensor for q-values >
assert tf.is_numeric_tensor(qvalues) and qvalues.shape.ndims == 2, \
"please return 2d tf tensor of qvalues [you got %s]" % repr(qvalues)
assert int(qvalues.shape[1]) == n_actions
return qvalues
def get_qvalues(self, state_t):
"""Same as symbolic step except it operates on numpy arrays"""
sess = tf.get_default_session()
return sess.run(self.qvalues_t, {self.state_t: state_t})
def sample_actions(self, qvalues):
"""pick actions given qvalues. Uses epsilon-greedy exploration strategy. """
epsilon = self.epsilon
batch_size, n_actions = qvalues.shape
random_actions = np.random.choice(n_actions, size=batch_size)
best_actions = qvalues.argmax(axis=-1)
should_explore = np.random.choice([0, 1], batch_size, p = [1-epsilon, epsilon])
return np.where(should_explore, random_actions, best_actions)
agent = DQNAgent("dqn_agent", state_dim, n_actions, epsilon=0.5)
sess.run(tf.global_variables_initializer())
"""
Explanation: Building a network
We now need to build a neural network that can map images to state q-values. This network will be called on every agent's step so it better not be resnet-152 unless you have an array of GPUs. Instead, you can use strided convolutions with a small number of features to save time and memory.
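To see why a few stride-2 convolutions are cheap, you can work out the feature-map sizes with the usual 'valid' convolution formula out = floor((in - kernel) / stride) + 1. A quick sanity check for a 64x64 input and a stack of three 3x3/stride-2 layers (matching the reference architecture below):

```python
def conv2d_out(size, kernel, stride):
    # output length of a 'valid' convolution along one dimension
    return (size - kernel) // stride + 1

size = 64
for kernel, stride in [(3, 2), (3, 2), (3, 2)]:
    size = conv2d_out(size, kernel, stride)
# 64 -> 31 -> 15 -> 7, so Flatten() sees 7 * 7 * 64 = 3136 features
```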
You can build any architecture you want, but for reference, here's something that will more or less work:
End of explanation
"""
def evaluate(env, agent, n_games=1, greedy=False, t_max=10000):
""" Plays n_games full games. If greedy, picks actions as argmax(qvalues). Returns mean reward. """
rewards = []
for _ in range(n_games):
s = env.reset()
reward = 0
for _ in range(t_max):
qvalues = agent.get_qvalues([s])
action = qvalues.argmax(axis=-1)[0] if greedy else agent.sample_actions(qvalues)[0]
s, r, done, _ = env.step(action)
reward += r
if done:
break
rewards.append(reward)
return np.mean(rewards)
evaluate(env, agent, n_games=1)
"""
Explanation: Now let's try out our agent to see if it raises any errors.
End of explanation
"""
from replay_buffer import ReplayBuffer
exp_replay = ReplayBuffer(10)
for _ in range(30):
exp_replay.add(env.reset(), env.action_space.sample(), 1.0, env.reset(), done=False)
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(5)
assert len(exp_replay) == 10, "experience replay size should be 10 because that's what maximum capacity is"
def play_and_record(agent, env, exp_replay, n_steps=1):
"""
Play the game for exactly n steps, record every (s,a,r,s', done) to replay buffer.
Whenever game ends, add record with done=True and reset the game.
:returns: return sum of rewards over time
Note: please do not env.reset() unless env is done.
It is guaranteed that env has done=False when passed to this function.
"""
# State at the beginning of rollout
s = env.framebuffer
# Play the game for n_steps as per instructions above
last_info = None
total_reward = 0
for _ in range(n_steps):
qvalues = agent.get_qvalues([s])
a = agent.sample_actions(qvalues)[0]
next_s, r, done, info = env.step(a)
r = -10 if ( last_info is not None and last_info['ale.lives'] > info['ale.lives'] ) else r
last_info = info
# Experience Replay
exp_replay.add(s, a, r, next_s, done)
total_reward += r
s = next_s
if done:
s = env.reset()
return total_reward
# testing your code. This may take a minute...
exp_replay = ReplayBuffer(20000)
play_and_record(agent, env, exp_replay, n_steps=10000)
# if you're using your own experience replay buffer, some of those tests may need correction.
# just make sure you know what your code does
assert len(exp_replay) == 10000, "play_and_record should have added exactly 10000 steps, "\
"but instead added %i"%len(exp_replay)
is_dones = list(zip(*exp_replay._storage))[-1]
assert 0 < np.mean(is_dones) < 0.1, "Please make sure you restart the game whenever it is 'done' and record the is_done correctly into the buffer."\
"Got %f is_done rate over %i steps. [If you think it's your tough luck, just re-run the test]"%(np.mean(is_dones), len(exp_replay))
for _ in range(100):
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(10)
assert obs_batch.shape == next_obs_batch.shape == (10,) + state_dim
assert act_batch.shape == (10,), "actions batch should have shape (10,) but is instead %s"%str(act_batch.shape)
assert reward_batch.shape == (10,), "rewards batch should have shape (10,) but is instead %s"%str(reward_batch.shape)
assert is_done_batch.shape == (10,), "is_done batch should have shape (10,) but is instead %s"%str(is_done_batch.shape)
assert [int(i) in (0,1) for i in is_dones], "is_done should be strictly True or False"
assert [0 <= a <= n_actions for a in act_batch], "actions should be within [0, n_actions]"
print("Well done!")
"""
Explanation: Experience replay
For this assignment, we provide you with an experience replay buffer. If you implemented an experience replay buffer in last week's assignment, you can copy-paste it here to get 2 bonus points.
The interface is fairly simple:
exp_replay.add(obs, act, rw, next_obs, done) - saves (s,a,r,s',done) tuple into the buffer
exp_replay.sample(batch_size) - returns observations, actions, rewards, next_observations and is_done for batch_size random samples.
len(exp_replay) - returns number of elements stored in replay buffer.
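If you'd rather roll your own than use the provided replay_buffer module, the interface above is easy to mimic with a deque. A minimal sketch (it samples with replacement for simplicity; the provided buffer may differ in details):

```python
import random
import numpy as np
from collections import deque

class MinimalReplayBuffer:
    def __init__(self, capacity):
        self._storage = deque(maxlen=capacity)  # old transitions fall off the left

    def __len__(self):
        return len(self._storage)

    def add(self, obs, action, reward, next_obs, done):
        self._storage.append((obs, action, reward, next_obs, done))

    def sample(self, batch_size):
        batch = random.choices(self._storage, k=batch_size)  # with replacement
        obs, act, rew, next_obs, done = map(np.array, zip(*batch))
        return obs, act, rew, next_obs, done
```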
End of explanation
"""
target_network = DQNAgent("target_network", state_dim, n_actions)
def load_weigths_into_target_network(agent, target_network):
""" assign target_network.weights variables to their respective agent.weights values. """
assigns = []
for w_agent, w_target in zip(agent.weights, target_network.weights):
assigns.append(tf.assign(w_target, w_agent, validate_shape=True))
tf.get_default_session().run(assigns)
load_weigths_into_target_network(agent, target_network)
# check that it works
sess.run([tf.assert_equal(w, w_target) for w, w_target in zip(agent.weights, target_network.weights)]);
print("It works!")
"""
Explanation: Target networks
We also employ the so called "target network" - a copy of neural network weights to be used for reference Q-values:
The network itself is an exact copy of the agent network, but its parameters are not trained. Instead, they are copied over from the agent's actual network every so often.
$$ Q_{reference}(s,a) = r + \gamma \cdot \max_{a'} Q_{target}(s',a') $$
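Stripped of the TensorFlow graph machinery, the 'hard update' is just an in-place copy of every parameter array. A toy numpy version of the idea:

```python
import numpy as np

def hard_update(agent_weights, target_weights):
    # copy every agent parameter into the target network, in place
    for w_agent, w_target in zip(agent_weights, target_weights):
        np.copyto(w_target, w_agent)

agent_w = [np.random.randn(4, 4), np.random.randn(4)]
target_w = [np.zeros((4, 4)), np.zeros(4)]
hard_update(agent_w, target_w)  # target now matches the agent exactly
```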
End of explanation
"""
# placeholders that will be fed with exp_replay.sample(batch_size)
obs_ph = tf.placeholder(tf.float32, shape=(None,) + state_dim)
actions_ph = tf.placeholder(tf.int32, shape=[None])
rewards_ph = tf.placeholder(tf.float32, shape=[None])
next_obs_ph = tf.placeholder(tf.float32, shape=(None,) + state_dim)
is_done_ph = tf.placeholder(tf.float32, shape=[None])
is_not_done = 1 - is_done_ph
gamma = 0.99
"""
Explanation: Learning with... Q-learning
Here we write a function similar to agent.update from tabular q-learning.
End of explanation
"""
current_qvalues = agent.get_symbolic_qvalues(obs_ph)
current_action_qvalues = tf.reduce_sum(tf.one_hot(actions_ph, n_actions) * current_qvalues, axis=1)
"""
Explanation: Take q-values for actions agent just took
End of explanation
"""
# compute q-values for NEXT states with target network
next_qvalues_target = target_network.get_symbolic_qvalues(next_obs_ph) #<your code>
# compute state values by taking max over next_qvalues_target for all actions
next_state_values_target = tf.reduce_max(next_qvalues_target, axis=1) #<YOUR CODE>
# compute Q_reference(s,a) as per formula above.
reference_qvalues = rewards_ph + gamma * next_state_values_target * is_not_done  # no bootstrapping past terminal states
# Define loss function for sgd.
td_loss = (current_action_qvalues - reference_qvalues) ** 2
td_loss = tf.reduce_mean(td_loss)
train_step = tf.train.AdamOptimizer(1e-3).minimize(td_loss, var_list=agent.weights)
sess.run(tf.global_variables_initializer())
for chk_grad in tf.gradients(reference_qvalues, agent.weights):
error_msg = "Reference q-values should have no gradient w.r.t. agent weights. Make sure you used target_network qvalues! "
error_msg += "If you know what you're doing, ignore this assert."
assert chk_grad is None or np.allclose(sess.run(chk_grad), sess.run(chk_grad * 0)), error_msg
assert tf.gradients(reference_qvalues, is_not_done)[0] is not None, "make sure you used is_not_done"
assert tf.gradients(reference_qvalues, rewards_ph)[0] is not None, "make sure you used rewards"
assert tf.gradients(reference_qvalues, next_obs_ph)[0] is not None, "make sure you used next states"
assert tf.gradients(reference_qvalues, obs_ph)[0] is None, "reference qvalues shouldn't depend on current observation!" # ignore if you're certain it's ok
print("Splendid!")
"""
Explanation: Compute Q-learning TD error:
$$ L = { 1 \over N} \sum_i [ Q_{\theta}(s,a) - Q_{reference}(s,a) ] ^2 $$
With Q-reference defined as
$$ Q_{reference}(s,a) = r(s,a) + \gamma \cdot \max_{a'} Q_{target}(s', a') $$
Where
* $Q_{target}(s',a')$ denotes q-value of next state and next action predicted by target_network
* $s, a, r, s'$ are current state, action, reward and next state respectively
* $\gamma$ is a discount factor defined two cells above.
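Here is the same computation on toy numpy arrays (numbers made up), including the (1 - is_done) mask that standard Q-learning uses to stop bootstrapping at episode ends:

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0, 1.0])
is_done = np.array([0.0, 0.0, 1.0])          # last transition ends the episode
q_sa = np.array([2.0, 0.5, 1.5])             # Q_theta(s, a) for the taken actions
next_q_target = np.array([[1.0, 3.0],        # Q_target(s', a') for each action
                          [0.5, 0.2],
                          [9.0, 9.0]])       # ignored: the episode is over

next_v = next_q_target.max(axis=1)           # max over a'
q_ref = rewards + gamma * next_v * (1 - is_done)
td_loss = np.mean((q_sa - q_ref) ** 2)
```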
End of explanation
"""
from tqdm import trange
from IPython.display import clear_output
import matplotlib.pyplot as plt
from pandas import DataFrame
moving_average = lambda x, span, **kw: DataFrame({'x':np.asarray(x)}).x.ewm(span=span, **kw).mean().values
%matplotlib inline
mean_rw_history = []
td_loss_history = []
#exp_replay = ReplayBuffer(10**5)
exp_replay = ReplayBuffer(10**4)
play_and_record(agent, env, exp_replay, n_steps=10000)
def sample_batch(exp_replay, batch_size):
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(batch_size)
return {
obs_ph:obs_batch, actions_ph:act_batch, rewards_ph:reward_batch,
next_obs_ph:next_obs_batch, is_done_ph:is_done_batch
}
for i in trange(10**5):
# play
play_and_record(agent, env, exp_replay, 10)
# train
_, loss_t = sess.run([train_step, td_loss], sample_batch(exp_replay, batch_size=64))
td_loss_history.append(loss_t)
# adjust agent parameters
if i % 500 == 0:
load_weigths_into_target_network(agent, target_network)
agent.epsilon = max(agent.epsilon * 0.99, 0.01)
mean_rw_history.append(evaluate(make_env(), agent, n_games=3))
if i % 100 == 0:
clear_output(True)
print("buffer size = %i, epsilon = %.5f" % (len(exp_replay), agent.epsilon))
plt.figure(figsize=[12, 4])  # create the figure before drawing the first subplot
plt.subplot(1,2,1)
plt.title("mean reward per game")
plt.plot(mean_rw_history)
plt.grid()
assert not np.isnan(loss_t)
plt.subplot(1,2,2)
plt.title("TD loss history (moving average)")
plt.plot(moving_average(np.array(td_loss_history), span=100, min_periods=100))
plt.grid()
plt.show()
if np.mean(mean_rw_history[-10:]) > 10.:
break
assert np.mean(mean_rw_history[-10:]) > 10.
print("That's good enough for tutorial.")
"""
Explanation: Main loop
It's time to put everything together and see if it learns anything.
End of explanation
"""
agent.epsilon=0 # Don't forget to reset epsilon back to previous value if you want to go on training
#record sessions
import gym.wrappers
env_monitor = gym.wrappers.Monitor(make_env(),directory="videos",force=True)
sessions = [evaluate(env_monitor, agent, n_games=1) for _ in range(100)]
env_monitor.close()
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
"""
Explanation: How to interpret plots:
This ain't supervised learning, so don't expect anything to improve monotonically.
* TD loss is the MSE between the agent's current Q-values and target Q-values. It may slowly increase or decrease, that's ok. The "not ok" behavior includes going NaN or staying at exactly zero before the agent has perfect performance.
* mean reward is the expected sum of r(s,a) the agent gets over the full game session. It will oscillate, but on average it should get higher over time (after a few thousand iterations...).
* In a basic q-learning implementation it takes 5-10k steps to "warm up" the agent before it starts to get better.
* buffer size - this one is simple. It should go up and cap at max size.
* epsilon - the agent's willingness to explore. If you see that the agent is already at 0.01 epsilon before its average reward is above 0, it means you need to increase epsilon. Set it back to somewhere around 0.2 - 0.5 and decrease the pace at which it goes down.
* Also please ignore the first 100-200 steps of each plot - they're just oscillations because of the way the moving average works.
At first your agent will lose quickly. Then it will learn to suck less and at least hit the ball a few times before it loses. Finally it will learn to actually score points.
Training will take time. A lot of it actually. An optimistic estimate is to say it's gonna start winning (average reward > 10) after 10k steps.
But hey, look on the bright side of things:
Video
End of explanation
"""
from submit import submit_breakout
env = make_env()
submit_breakout(agent, env, evaluate, "tonatiuh_rangel@hotmail.com", "CGeZfHhZ10uqx4Ud")
"""
Explanation: More
If you want to play with DQN a bit more, here's a list of things you can try with it:
Easy:
Implementing double q-learning shouldn't be a problem if you've already have target networks in place.
You will probably need tf.argmax to select best actions
Here's an original article
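Numerically, the only change versus vanilla DQN is which network picks the argmax: the online network chooses the next action, the target network evaluates it. A toy numpy illustration of the double-Q target (made-up numbers):

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0])
q_online_next = np.array([[1.0, 5.0],   # the online network chooses the action...
                          [2.0, 0.1]])
q_target_next = np.array([[0.5, 3.0],   # ...the target network evaluates it
                          [4.0, 9.0]])

best_actions = q_online_next.argmax(axis=1)              # [1, 0]
double_q = q_target_next[np.arange(2), best_actions]     # gather: [3.0, 4.0]
reference = rewards + gamma * double_q
```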
Dueling architecture is also quite straightforward if you have standard DQN.
You will need to change network architecture, namely the q-values layer
It must now contain two heads: V(s) and A(s,a), both dense layers
You should then add them up via elemwise sum layer.
Here's an article
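The usual way to combine the two heads subtracts the mean advantage, which makes V and A identifiable. A sketch of just the aggregation step:

```python
import numpy as np

def dueling_q(v, advantages):
    # Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)
    return v[:, None] + advantages - advantages.mean(axis=1, keepdims=True)

v = np.array([1.0])
adv = np.array([[2.0, 0.0, -2.0]])
q = dueling_q(v, adv)   # mean advantage is 0 here, so q = [[3., 1., -1.]]
```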
Hard: Prioritized experience replay
In this section, you're invited to implement prioritized experience replay
You will probably need to provide a custom data structure
Once pool.update is called, collect the pool.experience_replay.observations, actions, rewards and is_alive and store them in your data structure
You can now sample such transitions in proportion to the error (see article) for training.
It's probably more convenient to explicitly declare inputs for "sample observations", "sample actions" and so on to plug them into q-learning.
Prioritized (and even normal) experience replay should greatly reduce amount of game sessions you need to play in order to achieve good performance.
While its effect on runtime is limited for atari, more complicated envs (further in the course) will certainly benefit from it.
There is even more out there - see this overview article.
End of explanation
"""
|
mohsinhaider/pythonbootcampacm | Objects and Data Structures/.ipynb_checkpoints/List Comprehensions-checkpoint.ipynb | mit | # Store even numbers from 0 to 20
even_lst = [num for num in range(21) if num % 2 == 0]
print(even_lst)
"""
Explanation: List Comprehensions and Generators
Python comes with more than just a programming language, it also includes a way to write elegant code. Pythonic code is syntax that wishes to emulate natural constructs of programming.
Let's look at the basic syntax for a list comprehension:
some_list = [item for item in domain if .... ]
We store the item that exists in the domain into the list, as long as it passes any conditions in the if statement.
Example 1 Extract the even numbers from a given range.
End of explanation
"""
cash_value = 20
rsu_dict = {"Max":20, "Willie":13, "Joanna":14}
lst = [rsu_dict[name]*cash_value for name in rsu_dict]
print(lst)
my_dict = {"Ross":19, "Bernie":13, "Micah":15}
cash_value = 20
# [19*20, 13*20, 15*20]
cash_lst = [my_dict[key]*20 for key in my_dict]
print(cash_lst)
"""
Explanation: Example 2 Convert the reserved stock units (RSUs) an employee has in a company to the current cash value.
End of explanation
"""
rows = 'ABC'
cols = '123'
vowels = ('a', 'e', 'i', 'o', 'u')
sentence = 'cogito ergo sum'
words = sentence.split()
# Produce [A3, B2, C1]
number_letter_lst = [rows[element]+cols[2-element] for element in range(3)]
print(number_letter_lst)
# Produce [A1, B1, C1, A2, B2, C2, A3, B3, C3]
letter_number_lst = [r+c for c in cols for r in rows]
print(letter_number_lst)
x = [s1 + ' x ' + s2
for s1 in (rows[i]+cols[i] for i in range(3))
for s2 in (rows[2-j]+cols[j] for j in range(3))]
print(x)
"""
Explanation: Let's take a look at some values and see how we can produce certain outputs.
End of explanation
"""
# Simply accessing rows and cols in a comprehension: [A1, A2, A3, B1, B2, B3, C1, C2, C3]
# Non-Pythonic
lst = []
for r in rows:
for c in cols:
lst.append(r+c)
# Pythonic
lst = [r+c for r in rows for c in cols]
print(lst)
"""
Explanation: Generators
We can do more complex things than just basic list comprehensions. We can create more complex, succinct comprehensions by introducing generators into our very statements. We have the following data to work with:
rows = 'ABC'
cols = '123'
In general, we want to access both rows and cols at the same time... How did we do that?
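The inner parentheses in the examples below create generator expressions. Unlike a list comprehension, a generator produces items lazily, one at a time, and can only be consumed once:

```python
gen = (r + c for r in 'ABC' for c in '123')  # nothing is computed yet
first = next(gen)        # pulls a single item: 'A1'
rest = list(gen)         # consumes everything that's left
```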
End of explanation
"""
# let's figure this list out with normal syntax
lst = []
for r in (rows[i]+cols[i] for i in range(3)):
for c in (rows[2-i]+cols[i] for i in range(3)):
lst.append(r + ' x ' + c)
print(lst)
# shortened
crossed_list = [x + " x " + y for x in (rows[i]+cols[i] for i in range(3)) for y in (rows[2-i]+cols[i] for i in range(3))]
print(crossed_list)
x = sorted(words, key=len)
print(x)
"""
Explanation: Let's try creating:
['A1 x C1', 'A1 x B2', 'A1 x A3', 'B2 x C1', 'B2 x B2', 'B2 x A3', 'C3 x C1', 'C3 x B2', 'C3 x A3']
Thought process: the first element only changes every 3 items, so we keep it constant while the second cycles through its three values...
End of explanation
"""
|
Unidata/unidata-python-workshop | notebooks/Siphon/Siphon Overview.ipynb | mit | from datetime import datetime, timedelta
from siphon.catalog import TDSCatalog
date = datetime.utcnow() - timedelta(days=1)
cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/nexrad/level3/'
f'N0Q/LRX/{date:%Y%m%d}/catalog.xml')
"""
Explanation: <a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Siphon Overview</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="https://unidata.github.io/siphon/latest/_static/siphon_150x150.png" alt="TDS" style="height: 200px;"></div>
Overview:
Teaching: 10 minutes
Exercises: 10 minutes
Questions
What is a THREDDS Data Server (TDS)?
How can I use Siphon to access a TDS?
Objectives
<a href="#threddsintro">Use siphon to access a THREDDS catalog</a>
<a href="#filtering">Find data within the catalog that we wish to access</a>
<a href="#dataaccess">Use siphon to perform remote data access</a>
<a name="threddsintro"></a>
1. What is THREDDS?
Server for providing remote access to datasets
Variety of services for accesing data:
HTTP Download
Web Mapping/Coverage Service (WMS/WCS)
OPeNDAP
NetCDF Subset Service
CDMRemote
Provides a more uniform way to access different types/formats of data
THREDDS Demo
http://thredds.ucar.edu
THREDDS Catalogs
XML descriptions of data and metadata
Access methods
Easily handled with siphon.catalog.TDSCatalog
End of explanation
"""
request_time = date.replace(hour=18, minute=30, second=0, microsecond=0)
ds = cat.datasets.filter_time_nearest(request_time)
ds
"""
Explanation: <a href="#top">Top</a>
<hr style="height:2px;">
<a name="filtering"></a>
2. Filtering data
We could manually figure out what dataset we're looking for and generate that name (or index). Siphon provides some helpers to simplify this process, provided the names of the dataset follow a pattern with the timestamp in the name:
End of explanation
"""
datasets = cat.datasets.filter_time_range(request_time, request_time + timedelta(hours=1))
print(datasets)
"""
Explanation: We can also find the list of datasets within a time range:
End of explanation
"""
# YOUR CODE GOES HERE
"""
Explanation: Exercise
Starting from http://thredds.ucar.edu/thredds/catalog/satellite/SFC-T/SUPER-NATIONAL_1km/catalog.html, find the composites for the previous day.
Grab the URL and create a TDSCatalog instance.
Using Siphon, find the data available in the catalog between 12Z and 18Z on the previous day.
End of explanation
"""
# %load solutions/datasets.py
"""
Explanation: Solution
End of explanation
"""
ds = datasets[0]
"""
Explanation: <a href="#top">Top</a>
<hr style="height:2px;">
<a name="dataaccess"></a>
3. Accessing data
Accessing catalogs is only part of the story; Siphon is much more useful if you're trying to access/download datasets.
For instance, using our data that we just retrieved:
End of explanation
"""
ds.download()
import os; os.listdir()
"""
Explanation: We can ask Siphon to download the file locally:
End of explanation
"""
fobj = ds.remote_open()
data = fobj.read()
print(len(data))
"""
Explanation: Or better yet, get a file-like object that lets us read from the file as if it were local:
End of explanation
"""
nc = ds.remote_access()
"""
Explanation: This is handy if you have Python code to read a particular format.
It's also possible to get access to the file through services that provide netCDF4-like access, but for the remote file. This access allows downloading information only for variables of interest, or for (index-based) subsets of that data:
End of explanation
"""
print(list(nc.variables))
"""
Explanation: By default this uses CDMRemote (if available), but it's also possible to ask for OPeNDAP (using netCDF4-python).
End of explanation
"""
|
yedivanseven/bestPy | examples/06.1_BenchmarkSplitData.ipynb | gpl-3.0 | import sys
sys.path.append('../..')
"""
Explanation: CHAPTER 6
6.1 Benchmark: Split Data into Training and Test Sets
Now that we have a convenient way to make recommendations, we still need to make an informed choice as to which of bestPy's algorithms we should pick and how we should set its parameters to achieve the highest possible fidelity in our recommendations.
The only way of telling how well we are doing with our recommendations is to see how well we can predict future purchases from past purchases. This means that, instead of using all of our data to train an algorithm, we have to hold out the last couple of purchases of each user and only use the rest. Then, we can test the recommendations produced by our algorithm against what customers actually did buy next.
To conveniently perform this split of our data into training and test sets, bestPy offers the advanced data class TrainTest.
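The underlying idea is a simple leave-last-k-out split per user. Purely to illustrate the concept (this is not bestPy's actual implementation, and it drops users with too few purchases for brevity):

```python
def leave_last_k_out(purchases, hold_out):
    """Split each user's chronologically ordered unique purchases into
    train (all but the last hold_out items) and test (the last hold_out)."""
    train, test = {}, {}
    for user, items in purchases.items():
        if len(items) > hold_out:
            train[user] = items[:-hold_out]
            test[user] = items[-hold_out:]
    return train, test

purchases = {'alice': ['a', 'b', 'c', 'd', 'e'], 'bob': ['x', 'y']}
train, test = leave_last_k_out(purchases, hold_out=2)
```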
Preliminaries
We only need this because the examples folder is a subdirectory of the bestPy package.
End of explanation
"""
from bestPy import write_log_to
from bestPy.datastructures import TrainTest # Additionally import RecoBasedOn
logfile = 'logfile.txt'
write_log_to(logfile, 20)
"""
Explanation: Imports and logging
No algorithm or recommender is needed for now as we are focusing solely on the data structure TrainTest, which is, naturally, accessible through the sub-package bestPy.datastructures.
End of explanation
"""
file = 'examples_data.csv'
data = TrainTest.from_csv(file)
"""
Explanation: Read TrainTest data
Reading in TrainTest data works in pretty much the same way as reading in Transactions data. Again, two data sources are available, a postgreSQL database and a CSV file. For the former, we again need a fully configured PostgreSQLparams instance (let's call it database) before we can read in the data with:
data = TrainTest.from_postgreSQL(database)
Reading from then works like so:
End of explanation
"""
print(data.number_of_corrupted_records)
print(data.number_of_transactions)
"""
Explanation: NOTE: There is only one difference to reading Transactions data. The from_csv() class method has an addtional argument fmt. If it is not given, then the timestamps in the CSV file are assumed to be UNIX timestamp since epoch, i.e., integer numbers.
If, on the other hand, it is given, then it must be a valid format string specifying the format in which the timestamps are written in the CSV file. To tell bestPy that, for example, the timestamps in your CSV file look like
2012-03-09 16:18:02
i.e., year-month-day hour:minute:second, you would have to set fmt to the string:
'%Y-%m-%d %H:%M:%S'
With the documentation of the datetime package, it should be easy to assemble the correct format string for just about any way a timestamp could possibly be composed.
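For example, parsing the timestamp above with Python's standard datetime module:

```python
from datetime import datetime

fmt = '%Y-%m-%d %H:%M:%S'
stamp = datetime.strptime('2012-03-09 16:18:02', fmt)
```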
Initial attributes of TrainTest data objects
Inspecting the new data object with Tab completion reveals several attributes that we already know from Transactions data. Notably, these are:
End of explanation
"""
data.max_hold_out
"""
Explanation: There is also an additional attribute that tells us the maximum numbers of purchases we can possibly hold out as test data for each customer.
End of explanation
"""
data.split(4, False)
"""
Explanation: Splitting the data into training and test sets
Also present is a method called split(), which indeed does exactly what you think it should. It has two arguments, hold_out and only_new. Naturally, the former tells bestPy how many unique purchases to hold out (i.e., put aside) for each customer. Customers who bought fewer than hold_out articles cannot be tested at all, and customers who bought exactly hold_out articles will be treated as new customers in testing.
The second argument, only_new, tells bestPy whether only new articles will be recommended in the benchmark run or whether recommendations will also include articles that customers bought before. If True, then all previous buys of any of the hold_out last unique items need to be deleted from the training data for each customer. Let's try.
End of explanation
"""
print(type(data.train))
print(data.train.user.count)
"""
Explanation: Attributes of split TrainTest data objects
Inspecting the TrainTest data object with Tab completion again reveals two more attributes that magically appeared, train and test. The former is an instance of Transactions with all the attributes we already know.
End of explanation
"""
data.test.data
"""
Explanation: So we have 2141 customers that bought 4 items or more and whose next 4 purchases can therefore be compared to our recommendations. I suggest you make it a habbit of checking that you have a decent number of customers left in your training set.
NOTE: Should you, for some reason, chose to hold out max_hold_out purchases, you might well end up with a single customer in your training set and, therefore, obtain spurious benchmark results.
The test attribute of a split TrainTest instance is a new, auxiliary data type with a very simple structure. Its data attribute contains the test data in the form of a python dictionary with customer IDs as keys and the artcile IDs of their hold_out last unique purchases as values.
End of explanation
"""
print(data.test.hold_out)
print(data.test.only_new)
print(data.test.number_of_cases)
"""
Explanation: Its attributes hold_out and only_new simply reflect the respective arguments from the last call to the split() method and, hard to guess, number_of_cases yields the number of test cases.
End of explanation
"""
data.split(6, True)
print(data.train.user.count)
print(data.test.number_of_cases)
"""
Explanation: NOTE: Should you wish to split the same data again, but this time with different settings, no need to read it in again. Just call split() again.
End of explanation
"""
|
Benedicto/ML-Learning | document-retrieval.ipynb | gpl-3.0 | import graphlab
"""
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
"""
people = graphlab.SFrame('people_wiki.gl/')
"""
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
"""
people.head()
len(people)
"""
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
"""
Explanation: Explore the dataset and checkout the text it contains
Exploring the entry for president Obama
End of explanation
"""
clooney = people[people['name'] == 'George Clooney']
clooney['text']
"""
Explanation: Exploring the entry for actor George Clooney
End of explanation
"""
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
"""
Explanation: Get the word counts for Obama article
End of explanation
"""
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
"""
Explanation: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
End of explanation
"""
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
"""
Explanation: Sorting the word counts to show most common words at the top
End of explanation
"""
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
# Earlier versions of GraphLab Create returned an SFrame rather than a single SArray
# This notebook was created using Graphlab Create version 1.7.1
if graphlab.version <= '1.6.1':
tfidf = tfidf['docs']
tfidf
people['tfidf'] = tfidf
"""
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
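The weighting idea can be sketched in plain Python (a toy example with made-up word counts; this is a minimal TF-IDF variant, not GraphLab's exact formula):

```python
import math

# Toy "documents" as word-count dictionaries (made up for illustration)
docs = [
    {"the": 3, "president": 2, "law": 1},
    {"the": 4, "goal": 1, "football": 2},
    {"the": 2, "president": 1, "election": 1},
]

def tf_idf(docs):
    n = len(docs)
    # document frequency: how many documents contain each word
    df = {}
    for d in docs:
        for w in d:
            df[w] = df.get(w, 0) + 1
    # weight each raw count by log(N / df)
    return [{w: c * math.log(n / df[w]) for w, c in d.items()} for d in docs]

weighted = tf_idf(docs)
# "the" appears in every document, so its idf is log(3/3) = 0
print(weighted[0]["the"])  # 0.0
```

Note how the uninformative word "the" gets weight zero, while rarer words keep a positive weight.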
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
"""
Explanation: Examine the TF-IDF for the Obama article
End of explanation
"""
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
"""
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
"""
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
"""
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
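The cosine distance itself is easy to verify with plain NumPy (a sketch, independent of GraphLab's implementation):

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity: 0 for identical directions, 1 for orthogonal vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 0.0])
print(cosine_distance(a, a))                          # ~0.0 (identical)
print(cosine_distance(a, np.array([0.0, 0.0, 3.0])))  # 1.0 (orthogonal)
```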
End of explanation
"""
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
"""
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
"""
knn_model.query(obama)
"""
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
"""
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
elton = people[people['name'] == 'Elton John']
elton[['word_count']].stack('word_count', new_column_name=['word', 'count']).sort('count', ascending=False)
elton[['tfidf']].stack('tfidf', new_column_name=['word', 'tfidf']).sort('tfidf', ascending=False)
victoria = people[people['name'] == 'Victoria Beckham']
paul = people[people['name'] == 'Paul McCartney']
graphlab.distances.cosine(elton['tfidf'][0], victoria['tfidf'][0])
graphlab.distances.cosine(elton['tfidf'][0], paul['tfidf'][0])
knn_tfidf = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name',distance='cosine')
knn_tfidf.query(elton)
knn_tfidf.query(victoria)
knn_word_count = graphlab.nearest_neighbors.create(people,features=['word_count'],label='name',distance='cosine')
knn_word_count.query(victoria)
knn_word_count.query(elton)
"""
Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval
End of explanation
"""
|
RTHMaK/RPGOne | scipy-2017-sklearn-master/notebooks/15 Pipelining Estimators.ipynb | apache-2.0 | import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
"""
Explanation: SciPy 2016 Scikit-learn Tutorial
Pipelining estimators
In this section we study how different estimators may be chained.
A simple example: feature extraction and selection before an estimator
Feature extraction: vectorizer
For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features.
To illustrate we load the SMS spam dataset we used earlier.
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
"""
Explanation: Previously, we applied the feature extraction manually, like so:
End of explanation
"""
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
"""
Explanation: The situation where we learn a transformation and then apply it to the test data is very common in machine learning.
Therefore scikit-learn has a shortcut for this, called pipelines:
End of explanation
"""
# This illustrates a common mistake. Don't use this code!
from sklearn.model_selection import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
"""
Explanation: As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same as above is happening. When calling fit on the pipeline, it will call fit on each step in turn.
After the first step is fit, it will use the transform method of the first step to create a new representation.
This will then be fed to the fit of the next step, and so on.
Finally, on the last step, only fit is called.
If we call score, only transform will be called on each step - this could be the test set after all! Then, on the last step, score is called with the new representation. The same goes for predict.
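The mechanics described above can be sketched as a toy class (purely illustrative; this is not scikit-learn's actual implementation, and the AddOne/Mean steps are made up):

```python
class MiniPipeline:
    """Toy sketch of pipeline mechanics: fit/transform each intermediate step
    in turn, and only fit (or score) the final estimator."""
    def __init__(self, *steps):
        self.steps = steps

    def fit(self, X, y):
        for step in self.steps[:-1]:
            step.fit(X, y)
            X = step.transform(X)      # feed the new representation onward
        self.steps[-1].fit(X, y)       # last step: only fit
        return self

    def score(self, X, y):
        for step in self.steps[:-1]:
            X = step.transform(X)      # transform only - never refit here
        return self.steps[-1].score(X, y)

# Two dummy steps, made up for illustration
class AddOne:
    def fit(self, X, y): return self
    def transform(self, X): return [x + 1 for x in X]

class Mean:
    def fit(self, X, y): self.mean_ = sum(X) / len(X); return self
    def score(self, X, y): return self.mean_

pipe = MiniPipeline(AddOne(), Mean()).fit([1, 2, 3], None)
print(pipe.score([1, 2, 3], None))  # 3.0 (mean of the transformed [2, 3, 4])
```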
Building pipelines not only simplifies the code, it is also important for model selection.
Say we want to grid-search C to tune our Logistic Regression above.
Let's say we do it like this:
End of explanation
"""
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
"""
Explanation: 2.1.2 What did we do wrong?
Here, we did grid-search with cross-validation on X_train. However, when applying TfidfVectorizer, it saw all of the X_train,
not only the training folds! So it could use knowledge of the frequency of the words in the test-folds. This is called "contamination" of the test set, and leads to too optimistic estimates of generalization performance, or badly selected parameters.
We can fix this with the pipeline, though:
End of explanation
"""
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
"""
Explanation: Note that we need to tell the pipeline at which step we want to set the parameter C.
We can do this using the special __ syntax. The name before the __ is simply the lowercased name of the step's class; the part after __ is the parameter we want to set with grid-search.
<img src="figures/pipeline_cross_validation.svg" width="50%">
Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with GridSearchCV:
End of explanation
"""
#%load solutions/15A_ridge_grid.py
"""
Explanation: Exercise
Create a pipeline out of a StandardScaler and Ridge regression and apply it to the Boston housing dataset (load using sklearn.datasets.load_boston). Try adding the sklearn.preprocessing.PolynomialFeatures transformer as a second preprocessing step, and grid-search the degree of the polynomials (try 1, 2 and 3).
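A possible sketch of a solution (hedged: synthetic regression data is used here in place of the Boston dataset, which is unavailable in recent scikit-learn versions):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Synthetic stand-in for the Boston housing data
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# Scale, expand to polynomial features, then fit Ridge regression
pipe = make_pipeline(StandardScaler(), PolynomialFeatures(), Ridge())

# Grid-search the polynomial degree via the stepname__parameter syntax
grid = GridSearchCV(pipe,
                    param_grid={'polynomialfeatures__degree': [1, 2, 3]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_)
```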
End of explanation
"""
|
AtmaMani/pyChakras | udemy_ml_bootcamp/Python-for-Data-Analysis/Pandas/Pandas Exercises/SF Salaries Exercise- Solutions.ipynb | mit | import pandas as pd
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>
SF Salaries Exercise - Solutions
Welcome to a quick exercise for you to practice your pandas skills! We will be using the SF Salaries Dataset from Kaggle! Just follow along and complete the tasks outlined in bold below. The tasks will get harder and harder as you go along.
Import pandas as pd.
End of explanation
"""
sal = pd.read_csv('Salaries.csv')
"""
Explanation: Read Salaries.csv as a dataframe called sal.
End of explanation
"""
sal.head()
"""
Explanation: Check the head of the DataFrame.
End of explanation
"""
sal.info() # 148654 Entries
"""
Explanation: Use the .info() method to find out how many entries there are.
End of explanation
"""
sal['BasePay'].mean()
"""
Explanation: What is the average BasePay?
End of explanation
"""
sal['OvertimePay'].max()
"""
Explanation: What is the highest amount of OvertimePay in the dataset?
End of explanation
"""
sal[sal['EmployeeName']=='JOSEPH DRISCOLL']['JobTitle']
"""
Explanation: What is the job title of JOSEPH DRISCOLL ? Note: Use all caps, otherwise you may get an answer that doesn't match up (there is also a lowercase Joseph Driscoll).
End of explanation
"""
sal[sal['EmployeeName']=='JOSEPH DRISCOLL']['TotalPayBenefits']
"""
Explanation: How much does JOSEPH DRISCOLL make (including benefits)?
End of explanation
"""
sal[sal['TotalPayBenefits']== sal['TotalPayBenefits'].max()] #['EmployeeName']
# or
# sal.loc[sal['TotalPayBenefits'].idxmax()]
"""
Explanation: What is the name of the highest paid person (including benefits)?
End of explanation
"""
sal[sal['TotalPayBenefits']== sal['TotalPayBenefits'].min()] #['EmployeeName']
# or
# sal.loc[sal['TotalPayBenefits'].idxmax()]['EmployeeName']
## ITS NEGATIVE!! VERY STRANGE
"""
Explanation: What is the name of the lowest paid person (including benefits)? Do you notice something strange about how much he or she is paid?
End of explanation
"""
sal.groupby('Year').mean()['BasePay']
"""
Explanation: What was the average (mean) BasePay of all employees per year? (2011-2014) ?
End of explanation
"""
sal['JobTitle'].nunique()
"""
Explanation: How many unique job titles are there?
End of explanation
"""
sal['JobTitle'].value_counts().head(5)
"""
Explanation: What are the top 5 most common jobs?
End of explanation
"""
sum(sal[sal['Year']==2013]['JobTitle'].value_counts() == 1) # pretty tricky way to do this...
"""
Explanation: How many Job Titles were represented by only one person in 2013? (e.g. Job Titles with only one occurrence in 2013?)
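The idiom can be seen on a toy Series (made-up data): value_counts() tallies each title, comparing with == 1 gives booleans, and summing counts the True values:

```python
import pandas as pd

titles = pd.Series(['Chief', 'Clerk', 'Clerk', 'Mayor'])
counts = titles.value_counts()   # Clerk: 2, Chief: 1, Mayor: 1
print(sum(counts == 1))          # 2 titles occur exactly once
```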
End of explanation
"""
def chief_string(title):
if 'chief' in title.lower():
return True
else:
return False
sum(sal['JobTitle'].apply(lambda x: chief_string(x)))
"""
Explanation: How many people have the word Chief in their job title? (This is pretty tricky)
End of explanation
"""
sal['title_len'] = sal['JobTitle'].apply(len)
sal[['title_len','TotalPayBenefits']].corr() # No correlation.
"""
Explanation: Bonus: Is there a correlation between length of the Job Title string and Salary?
End of explanation
"""
|
twosigma/beakerx | doc/python/KernelMagics.ipynb | apache-2.0 | %%groovy
println("stdout works")
f = {it + " work"}
f("results")
%%groovy
new Plot(title:"plots work", initHeight: 200)
%%groovy
[a:"tables", b:"work"]
%%groovy
"errors work"/1
%%groovy
HTML("<h1>HTML works</h1>")
%%groovy
def p = new Plot(title : 'Plots Work', xLabel: 'Horizontal', yLabel: 'Vertical');
p << new Line(x: [0, 1, 2, 3, 4, 5], y: [0, 1, 6, 5, 2, 8])
"""
Explanation: Magics to Access the JVM Kernels from Python
BeakerX has magics for Python so you can run cells in the other languages.
The first few cells below show how complete the implementation is with Groovy, then we have just one cell in each other language.
There are also Polyglot Magics magics for accessing Python from the JVM.
You can communicate between languages with Autotranslation.
Groovy
End of explanation
"""
%%java
import java.util.List;
import java.util.Arrays;
import com.twosigma.beakerx.chart.xychart.Plot;
import com.twosigma.beakerx.chart.xychart.plotitem.Bars;
import com.twosigma.beakerx.chart.xychart.plotitem.Line;
Plot p = new Plot();
p.setTitle("Java Works");
p.setXLabel("Horizontal");
p.setYLabel("Vertical");
Bars b = new Bars();
List<Object> x = Arrays.asList(0, 1, 2, 3, 4, 5);
List<Number> y = Arrays.asList(0, 1, 6, 5, 2, 8);
Line line = new Line();
line.setX(x);
line.setY(y);
p.add(line);
return p;
"""
Explanation: Java
End of explanation
"""
%%scala
val plot = new Plot { title = "Scala Works"; xLabel="Horizontal"; yLabel="Vertical" }
val line = new Line {x = Seq(0, 1, 2, 3, 4, 5); y = Seq(0, 1, 6, 5, 2, 8)}
plot.add(line)
"""
Explanation: Scala
End of explanation
"""
%%kotlin
val x: MutableList<Any> = mutableListOf(0, 1, 2, 3, 4, 5)
val y: MutableList<Number> = mutableListOf(0, 1, 6, 5, 2, 8)
val line = Line()
line.setX(x)
line.setY(y)
val plot = Plot()
plot.setTitle("Kotlin Works")
plot.setXLabel("Horizontal")
plot.setYLabel("Vertical")
plot.add(line)
plot
"""
Explanation: Kotlin
End of explanation
"""
%%clojure
(import '[com.twosigma.beakerx.chart.xychart Plot]
'[com.twosigma.beakerx.chart.xychart.plotitem Line])
(doto (Plot.)
(.setTitle "Clojure Works")
(.setXLabel "Horizontal")
(.setYLabel "Vertical")
(.add (doto (Line.)
(.setX [0, 1, 2, 3, 4, 5])
(.setY [0, 1, 6, 5, 2, 8]))))
"""
Explanation: Clojure
End of explanation
"""
%%sql
%defaultDatasource jdbc:h2:mem:db
DROP TABLE IF EXISTS cities;
CREATE TABLE cities(
zip_code varchar(5),
latitude float,
longitude float,
city varchar(100),
state varchar(2),
county varchar(100),
        PRIMARY KEY (zip_code)
) AS SELECT
zip_code,
latitude,
longitude,
city,
state,
county
FROM CSVREAD('../resources/data/UScity.csv')
%%sql
SELECT * FROM cities WHERE state = 'NY'
"""
Explanation: SQL
End of explanation
"""
|
kettlewell/pipeline | Input/notebooks/kafkaSendDataPy.ipynb | mit | import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--conf spark.ui.port=4041 --packages org.apache.kafka:kafka_2.11:0.10.0.0,org.apache.kafka:kafka-clients:0.10.0.0 pyspark-shell'
"""
Explanation: kafkaSendDataPy
This notebook sends data to Kafka on the topic 'test'. A message that gives the current time is sent every second
Add dependencies
End of explanation
"""
from pyspark import SparkContext
sc = SparkContext("local[1]", "KafkaSendStream")
from kafka import KafkaProducer
import time
"""
Explanation: Load modules and start SparkContext
Note that SparkContext must be started to effectively load the package dependencies. One core is used.
End of explanation
"""
producer = KafkaProducer(bootstrap_servers='localhost:9092')
while True:
message=time.strftime("%Y-%m-%d %H:%M:%S")
producer.send('test', message)
time.sleep(1)
"""
Explanation: Start Kafka producer
One message giving current time is sent every second to the topic test
End of explanation
"""
|
nansencenter/nansat-lectures | notebooks/15 Django-Geo-SPaaS.ipynb | gpl-3.0 | import os, sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'geospaas_project.settings'
sys.path.insert(0, '/vagrant/shared/course_vm/geospaas_project/')
import django
django.setup()
from django.conf import settings
"""
Explanation: Django-Geo-SPaaS - GeoDjango framework for Satellite Data Management
First of all, we need to initialize Django. Let's do some 'magic'
End of explanation
"""
from geospaas.catalog.models import Dataset
from geospaas.catalog.models import DatasetURI
"""
Explanation: Now we can import our models
End of explanation
"""
# find all images
datasets = Dataset.objects.all()
"""
Explanation: Now we can use the model Dataset to search for datasets
End of explanation
"""
print datasets.count()
# print info about each image
for ds in datasets:
print ds
"""
Explanation: What is happening:
A SQL query is generated
The query is sent to the database (local database driven by SpatiaLite)
The query is executed by the database engine
The result is sent back to Python
The results is wrapped into a QuerySet object
End of explanation
"""
# get just one Dataset
ds0 = datasets.first()
print ds0.time_coverage_start
# print joined fields (Foreign key)
for ds in datasets:
print ds.source.instrument.short_name,
print ds.source.platform.short_name
# get infromation from Foreign key in the opposite direction
print ds0.dataseturi_set.first().uri
"""
Explanation: Use the complex structure of the catalog models
End of explanation
"""
# search by time
ds = Dataset.objects.filter(time_coverage_start='2012-03-03 09:38:10.423969')
print ds
ds = Dataset.objects.filter(time_coverage_start__gte='1900-03-01')
print ds
# search by instrument
ds = Dataset.objects.filter(source__instrument__short_name='MODIS')
print ds
# search by spatial location
ds0 = Dataset.objects.first()
ds0_geom = ds0.geographic_location.geometry
ds_ovlp = Dataset.objects.filter(
geographic_location__geometry__intersects=ds0_geom,
time_coverage_start__gte='2015-05-02',
source__platform__short_name='AQUA')
print ds_ovlp
"""
Explanation: Search for data
End of explanation
"""
dsovlp0 = ds_ovlp.first()
uri0 = dsovlp0.dataseturi_set.first().uri
print uri0
from nansat import Nansat
n = Nansat(uri0.replace('file://localhost', ''))
print n[1]
"""
Explanation: Finally, get data
End of explanation
"""
|
bakanchevn/DBCourseMirea2017 | Неделя 2/Задание в классе/Лекция-2-1.ipynb | gpl-3.0 | a = 'Pop'
%sql select * from genres where Name = :a
"""
Explanation: Passing Python variables into SQL
You can pass a variable from Python into a SQL query
End of explanation
"""
a = %sql select * from genres
type(a)
print(a)
"""
Explanation: You can assign the result of a query to a variable
End of explanation
"""
import sqlite3
# Create a DB in RAM
db=sqlite3.connect(':memory:')
# Don't forget to close the connection when finished
db.close()
# Create or open a DB
db=sqlite3.connect('testdb')
# Close the DB
db.close()
"""
Explanation: Another way to connect
using the sqlite3 library
End of explanation
"""
db=sqlite3.connect('testdb')
# Get a cursor object
cursor = db.cursor()
cursor.execute('''
DROP TABLE IF EXISTS users
''');
cursor.execute('''
CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT,
phone TEXT, email TEXT UNIQUE, password TEXT);
''')
db.commit()
"""
Explanation: Creating (CREATE) and dropping (DROP) tables.
To perform any operation on the database, you need to create a cursor object and pass a SQL statement to the cursor to execute it. At the end, you need to commit (note that commit is called on the db object, not the cursor object)
End of explanation
"""
cursor=db.cursor()
name1 = 'Andrew'
phone1 = '123232'
email1 = 'user@example.com'
password1 = '12345'
name2 = 'John'
phone2 = '234241'
email2 = 'john@example.com'
password2 = 'abcdef'
# Insert user 1
cursor.execute('''INSERT INTO users(name, phone, email, password)
VALUES(?,?,?,?)''', (name1, phone1, email1, password1))
print('First user inserted')
# Insert user 2
cursor.execute('''INSERT INTO users(name, phone, email, password)
VALUES(?,?,?,?)''', (name2, phone2, email2, password2))
print('Second user inserted')
db.commit()
"""
Explanation: Inserting (INSERT) data into the database
To insert data, we use the cursor to execute the query. To insert values from Python, you can use "?" placeholders. Do not use string formatting or concatenation to build queries, because it is not safe.
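To see why string formatting is dangerous, compare the two approaches on a throwaway in-memory database (a sketch; the crafted input is made up):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT)')
db.execute("INSERT INTO users(name) VALUES ('alice')")

# Unsafe: string formatting lets a crafted value change the meaning of the query
evil_name = "x' OR '1'='1"
query = "SELECT * FROM users WHERE name = '%s'" % evil_name
rows_unsafe = db.execute(query).fetchall()   # matches EVERY row

# Safe: the '?' placeholder treats the value strictly as data, never as SQL
rows_safe = db.execute('SELECT * FROM users WHERE name = ?',
                       (evil_name,)).fetchall()

print(rows_unsafe)  # [(1, 'alice')] - the injection succeeded
print(rows_safe)    # [] - no injection possible
db.close()
```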
End of explanation
"""
name3 = 'Nikita'
phone3 = '323232'
email3 = 'nikita@example.com'
password3 = '123'
cursor = db.cursor()
cursor.execute('''INSERT INTO users(name, phone, email, password)
VALUES(:name, :phone, :email, :password)''',
{'name':name3, 'phone':phone3, 'email':email3, 'password':password3})
print('Third user inserted')
db.commit()
"""
Explanation: Python variable values are substituted via a tuple. Another way is via a dictionary, using ':' named placeholders
End of explanation
"""
name4 = 'Ann'
phone4 = '490904'
email4 = 'ann@example.com'
password4 = '345'
name5 = 'Jane'
phone5 = '809908'
email5 = 'jane@example.com'
password5 = '785'
users = [(name4, phone4, email4, password4),
(name5, phone5, email5, password5)]
cursor.executemany('''INSERT INTO users(name, phone, email, password) VALUES (?,?,?,?)''', users)
db.commit()
"""
Explanation: If you want to insert several users into the table, use executemany and a list of tuples
End of explanation
"""
id = cursor.lastrowid
print('Last row id: %d' % id)
"""
Explanation: If you need the id of the row you just inserted, use lastrowid
End of explanation
"""
cursor.execute('''SELECT name, email, phone FROM users''')
user1 = cursor.fetchone() # fetch one row
print(user1[0])
all_rows = cursor.fetchall()
for row in all_rows:
    # row[0] returns the first column - name, row[1] - email, row[2] - phone
print('{0} : {1}, {2}'.format(row[0], row[1], row[2]))
"""
Explanation: Fetching data (SELECT) with SQLite
To fetch data, call fetchone to get a single row or fetchall to get all rows
End of explanation
"""
cursor.execute('''SELECT name, email, phone FROM users''')
for row in cursor:
print('{0} : {1}, {2}'.format(row[0], row[1], row[2]))
"""
Explanation: The cursor object works as an iterator, calling fetchall() automatically
End of explanation
"""
user_id=3
cursor.execute('''SELECT name, email, phone FROM users WHERE id=?''', (user_id,))
user=cursor.fetchone()
print (user[0], user[1], user[2])
"""
Explanation: To fetch data with conditions, use '?' placeholders
End of explanation
"""
# Update the user with id = 1
newphone = '77777'
userid = 1
cursor.execute('''UPDATE users SET phone = ? WHERE id = ?''', (newphone, userid))
# Delete the user with id = 2
delete_userid = 2
cursor.execute('''DELETE FROM users WHERE id = ?''', (delete_userid,))
db.commit()
"""
Explanation: Updating (UPDATE) and deleting (DELETE) data
The procedure is analogous to inserting data
End of explanation
"""
cursor.execute('''UPDATE users SET phone = ? WHERE id = ? ''', (newphone, userid))
db.commit()
"""
Explanation: Using SQLite transactions
Transactions are a very important database feature. They ensure the atomicity of the database. Use commit to save your changes.
End of explanation
"""
cursor.execute('''UPDATE users SET phone = ? WHERE id = ?''', (newphone, userid))
db.rollback()
"""
Explanation: Or rollback to undo your changes
End of explanation
"""
db.close()
"""
Explanation: Remember that you always need to commit your changes. If you close the connection, or the connection is lost, your changes will not be applied
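This can be demonstrated with a throwaway file-based database (a sketch using a temporary file):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')

db = sqlite3.connect(path)
db.execute('CREATE TABLE t(x INTEGER)')
db.commit()                       # the table itself is committed
db.execute('INSERT INTO t VALUES (1)')
db.close()                        # closed WITHOUT commit - the insert is lost

db = sqlite3.connect(path)
rows = db.execute('SELECT * FROM t').fetchall()
print(rows)                       # [] - the uncommitted insert was discarded
db.close()
```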
End of explanation
"""
import sqlite3
try:
db=sqlite3.connect('testdb')
cursor=db.cursor()
    cursor.execute('''CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT,
                      email TEXT UNIQUE, password TEXT)''')
db.commit()
except Exception as e:
db.rollback()
print('we are here')
raise e
finally:
db.close()
"""
Explanation: SQLite exceptions
As a best practice, always wrap database operations in a try block or a context manager
End of explanation
"""
name1 = 'Andres'
phone1 = '333658'
email1 = 'user@example.com'
password1 = '12345'
try:
db=sqlite3.connect('testdb')
with db:
db.execute('''INSERT INTO users(name, phone, email, password)
VALUES(?,?,?,?)''', (name1, phone1, email1, password1))
except sqlite3.IntegrityError:
print('Record already exists')
finally:
db.close()
"""
Explanation: In this example we use try/except/finally to "catch" an exception in the code. The finally keyword is very important, because it ensures that the database connection is closed correctly. More details here.
Using except as Exception, we catch all exceptions. In production code you should usually "catch" a specific exception. Link
You can use the Connection object for automatic commit and rollback
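For instance, the commit-on-success / rollback-on-exception behavior of `with db:` can be sketched like this (an illustrative in-memory example):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE t(x INTEGER)')

# The connection as a context manager commits on success...
with db:
    db.execute('INSERT INTO t VALUES (1)')

# ...and rolls back automatically if the block raises
try:
    with db:
        db.execute('INSERT INTO t VALUES (2)')
        raise RuntimeError('something went wrong')
except RuntimeError:
    pass

rows = db.execute('SELECT x FROM t').fetchall()
print(rows)  # [(1,)] - the second insert was rolled back
db.close()
```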
End of explanation
"""
%%sql
select company
FROM invoices
join customers
ON invoices.customerid = customers.customerid
WHERE customers.company <> 'None'
group by customers.customerid, customers.company
having count(*)
in
(
select min(cnt) from
(
select count(*) as cnt
FROM invoices
group by customerid
) A
UNION ALL
select max(cnt) from
(
select count(*) as cnt
FROM invoices
group by customerid
) A
)
"""
Explanation: In the example above, if the insert raises an exception, the transaction is rolled back and a message is printed; otherwise the transaction is committed. Note that in this case we call execute on the db object.
End of explanation
"""
|
google/jax-md | notebooks/customizing_potentials_cookbook.ipynb | apache-2.0 | #@title Imports & Utils
!pip install -q git+https://www.github.com/google/jax-md
import numpy as onp
import jax.numpy as np
from jax import random
from jax import jit, grad, vmap, value_and_grad
from jax import lax
from jax import ops
from jax.config import config
config.update("jax_enable_x64", True)
from jax_md import space, smap, energy, minimize, quantity, simulate, partition
from functools import partial
import time
f32 = np.float32
f64 = np.float64
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 16})
#import seaborn as sns
#sns.set_style(style='white')
def format_plot(x, y):
plt.grid(True)
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 0.7)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
def calculate_bond_data(displacement_or_metric, R, dr_cutoff, species=None):
if( not(species is None)):
assert(False)
metric = space.map_product(space.canonicalize_displacement_or_metric(displacement))
dr = metric(R,R)
dr_include = np.triu(np.where(dr<dr_cutoff, 1, 0)) - np.eye(R.shape[0],dtype=np.int32)
index_list=np.dstack(np.meshgrid(np.arange(N), np.arange(N), indexing='ij'))
i_s = np.where(dr_include==1, index_list[:,:,0], -1).flatten()
j_s = np.where(dr_include==1, index_list[:,:,1], -1).flatten()
ij_s = np.transpose(np.array([i_s,j_s]))
bonds = ij_s[(ij_s!=np.array([-1,-1]))[:,1]]
lengths = dr.flatten()[(ij_s!=np.array([-1,-1]))[:,1]]
return bonds, lengths
def plot_system(R,box_size,species=None,ms=20):
R_plt = onp.array(R)
if(species is None):
plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms)
else:
for ii in range(np.amax(species)+1):
Rtemp = R_plt[species==ii]
plt.plot(Rtemp[:, 0], Rtemp[:, 1], 'o', markersize=ms)
plt.xlim([0, box_size])
plt.ylim([0, box_size])
plt.xticks([], [])
plt.yticks([], [])
finalize_plot((1,1))
key = random.PRNGKey(0)
"""
Explanation: <a href="https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/customizing_potentials_cookbook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Customizing Potentials in JAX MD
This cookbook was contributed by Carl Goodrich.
End of explanation
"""
def harmonic_morse(dr, D0=5.0, alpha=5.0, r0=1.0, k=50.0, **kwargs):
U = np.where(dr < r0,
0.5 * k * (dr - r0)**2 - D0,
D0 * (np.exp(-2. * alpha * (dr - r0)) - 2. * np.exp(-alpha * (dr - r0)))
)
return np.array(U, dtype=dr.dtype)
"""
Explanation: Prerequisites
This cookbook assumes a working knowledge of Python and Numpy. The concept of broadcasting is particularly important both in this cookbook and in JAX MD.
We also assume a basic knowledge of JAX, which JAX MD is built on top of. Here we briefly review a few JAX basics that are important for us:
jax.vmap allows for automatic vectorization of a function. What this means is that if you have a function that takes an input x and returns an output y, i.e. y = f(x), then vmap will transform this function to act on an array of x's and return an array of y's, i.e. Y = vmap(f)(X), where X=np.array([x1,x2,...,xn]) and Y=np.array([y1,y2,...,yn]).
jax.grad employs automatic differentiation to transform a function into a new function that calculates its gradient, for example: dydx = grad(f)(x).
jax.lax.scan allows for efficient for-loops that can be compiled and differentiated over. See here for more details.
Random numbers are different in JAX. The details aren't necessary for this cookbook, but if things look a bit different, this is why.
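As a minimal illustration of the first two transformations (assuming JAX is installed):

```python
import jax.numpy as np
from jax import grad, vmap

f = lambda x: x ** 2

# grad transforms f into a function that computes df/dx
dfdx = grad(f)
print(dfdx(3.0))       # 6.0

# vmap promotes the scalar function to act on an array of inputs
X = np.array([0.0, 1.0, 2.0])
print(vmap(dfdx)(X))   # [0. 2. 4.]
```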
The basics of user-defined potentials
Create a user defined potential function to use throughout this cookbook
Here we create a custom potential that has a short-ranged, non-diverging repulsive interaction and a medium-ranged Morse-like attractive interaction. It takes the following form:
\begin{equation}
V(r) =
\begin{cases}
\frac{1}{2} k (r-r_0)^2 - D_0,& r < r_0\\
D_0\left( e^{-2\alpha (r-r_0)} -2 e^{-\alpha(r-r_0)}\right), & r \geq r_0
\end{cases}
\end{equation}
and has 4 parameters: $D_0$, $\alpha$, $r_0$, and $k$.
End of explanation
"""
drs = np.arange(0,3,0.01)
U = harmonic_morse(drs)
plt.plot(drs,U)
format_plot(r'$r$', r'$V(r)$')
finalize_plot()
"""
Explanation: plot $V(r)$.
End of explanation
"""
N = 50
dimension = 2
box_size = 6.8
key, split = random.split(key)
R = random.uniform(split, (N,dimension), minval=0.0, maxval=box_size, dtype=f64)
plot_system(R,box_size)
"""
Explanation: Calculate the energy of a system of interacting particles
We now want to calculate the energy of a system of $N$ spheres in $d$ dimensions, where each particle interacts with every other particle via our user-defined function $V(r)$. The total energy is
\begin{equation}
E_\text{total} = \sum_{i<j}V(r_{ij}),
\end{equation}
where $r_{ij}$ is the distance between particles $i$ and $j$.
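Written out naively, the double sum is just a loop over pairs; here is a plain-NumPy sketch with a toy harmonic pair potential (for illustration only; it is not the machinery JAX MD uses below):

```python
import numpy as onp

def pair_energy(r, r0=1.0, k=50.0):
    # toy harmonic pair potential, for illustration only
    return 0.5 * k * (r - r0) ** 2

def total_energy(R):
    E = 0.0
    N = R.shape[0]
    for i in range(N):
        for j in range(i + 1, N):       # each pair counted once (i < j)
            r_ij = onp.linalg.norm(R[i] - R[j])
            E += pair_energy(r_ij)
    return E

R_demo = onp.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(total_energy(R_demo))  # only the (1,2) pair at r = sqrt(2) contributes
```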
Our first task is to set up the system by specifying $N$, $d$, and the size of the simulation box. We then use JAX's internal random number generator to pick positions for each particle.
End of explanation
"""
def setup_periodic_box(box_size):
def displacement_fn(Ra, Rb, **unused_kwargs):
dR = Ra - Rb
return np.mod(dR + box_size * f32(0.5), box_size) - f32(0.5) * box_size
def shift_fn(R, dR, **unused_kwargs):
return np.mod(R + dR, box_size)
return displacement_fn, shift_fn
displacement, shift = setup_periodic_box(box_size)
"""
Explanation: At this point, we could manually loop over all particle pairs and calculate the energy, keeping track of boundary conditions, etc. Fortunately, JAX MD has machinery to automate this.
First, we must define two functions, displacement and shift, which contain all the information of the simulation box, boundary conditions, and underlying metric. displacement is used to calculate the vector displacement between particles, and shift is used to move particles. For most cases, it is recommended to use JAX MD's built in functions, which can be called using:
* displacement, shift = space.free()
* displacement, shift = space.periodic(box_size)
* displacement, shift = space.periodic_general(T)
For demonstration purposes, we will define these manually for a square periodic box, though without proper error handling, etc. The following should have the same functionality as displacement, shift = space.periodic(box_size).
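The minimum-image convention implemented below can be checked numerically with plain NumPy (a standalone sketch; the box side `L` mirrors the `box_size` used above):

```python
import numpy as onp

L = 6.8  # box size, matching the value used in this notebook

def periodic_displacement(Ra, Rb):
    # minimum-image displacement in a periodic square box of side L
    dR = Ra - Rb
    return onp.mod(dR + 0.5 * L, L) - 0.5 * L

# Two particles near opposite walls are actually close neighbors
Ra = onp.array([0.1, 0.0])
Rb = onp.array([6.7, 0.0])
print(periodic_displacement(Ra, Rb))  # ~[0.2, 0.], not [-6.6, 0.]
```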
End of explanation
"""
def harmonic_morse_pair(
displacement_or_metric, species=None, D0=5.0, alpha=10.0, r0=1.0, k=50.0):
D0 = np.array(D0, dtype=f32)
alpha = np.array(alpha, dtype=f32)
r0 = np.array(r0, dtype=f32)
k = np.array(k, dtype=f32)
return smap.pair(
harmonic_morse,
space.canonicalize_displacement_or_metric(displacement_or_metric),
species=species,
D0=D0,
alpha=alpha,
r0=r0,
k=k)
"""
Explanation: We now set up a function to calculate the total energy of the system. The JAX MD function smap.pair takes a given potential and promotes it to act on all particle pairs in a system. smap.pair does not actually return an energy, rather it returns a function that can be used to calculate the energy.
For convenience and readability, we wrap smap.pair in a new function called harmonic_morse_pair. For now, ignore the species keyword; we will return to it later.
End of explanation
"""
# Create a function to calculate the total energy with specified parameters
energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0)
# Use this to calculate the total energy
print(energy_fn(R))
# Use grad to calculate the net force
force = -grad(energy_fn)(R)
print(force[:5])
"""
Explanation: Our helper function can be used to construct a function to compute the energy of the entire system as follows.
End of explanation
"""
def run_minimization(energy_fn, R_init, shift, num_steps=5000):
dt_start = 0.001
dt_max = 0.004
init,apply=minimize.fire_descent(jit(energy_fn),shift,dt_start=dt_start,dt_max=dt_max)
apply = jit(apply)
@jit
def scan_fn(state, i):
return apply(state), 0.
state = init(R_init)
state, _ = lax.scan(scan_fn,state,np.arange(num_steps))
return state.position, np.amax(np.abs(-grad(energy_fn)(state.position)))
"""
Explanation: We are now in a position to use our energy function to manipulate the system. As an example, we perform energy minimization using JAX MD's implementation of the FIRE algorithm.
We start by defining a function that takes an energy function, a set of initial positions, and a shift function, and runs a specified number of steps of the minimization algorithm. The function returns the final set of positions and the largest absolute component of the force. We will use this function throughout this cookbook.
End of explanation
"""
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
"""
Explanation: Now run the minimization with our custom energy function.
End of explanation
"""
dr = np.arange(0,3,0.01)
S = energy.multiplicative_isotropic_cutoff(lambda dr: 1, r_onset=1.5, r_cutoff=2.0)(dr)
ngradS = vmap(grad(energy.multiplicative_isotropic_cutoff(lambda dr: 1, r_onset=1.5, r_cutoff=2.0)))(dr)
plt.plot(dr,S,label=r'$S(r)$')
plt.plot(dr,ngradS,label=r'$\frac{dS(r)}{dr}$')
plt.legend()
format_plot(r'$r$','')
finalize_plot()
"""
Explanation: Create a truncated potential
It is often desirable to have a potential that is strictly zero beyond a well-defined cutoff distance. In addition, MD simulations require the energy and force (i.e. first derivative) to be continuous. To easily modify an existing potential $V(r)$ to have this property, JAX MD follows the approach taken by HOOMD Blue.
Consider the function
\begin{equation}
S(r) =
\begin{cases}
1,& r<r_\mathrm{on} \
\frac{(r_\mathrm{cut}^2-r^2)^2 (r_\mathrm{cut}^2 + 2r^2 - 3 r_\mathrm{on}^2)}{(r_\mathrm{cut}^2-r_\mathrm{on}^2)^3},& r_\mathrm{on} \leq r < r_\mathrm{cut}\
0,& r \geq r_\mathrm{cut}
\end{cases}
\end{equation}
Here we plot both $S(r)$ and $\frac{dS(r)}{dr}$, both of which are smooth and strictly zero above $r_\mathrm{cut}$.
End of explanation
"""
harmonic_morse_cutoff = energy.multiplicative_isotropic_cutoff(
harmonic_morse, r_onset=1.5, r_cutoff=2.0)
dr = np.arange(0,3,0.01)
V = harmonic_morse(dr)
V_cutoff = harmonic_morse_cutoff(dr)
F = -vmap(grad(harmonic_morse))(dr)
F_cutoff = -vmap(grad(harmonic_morse_cutoff))(dr)
plt.plot(dr,V, label=r'$V(r)$')
plt.plot(dr,V_cutoff, label=r'$\tilde V(r)$')
plt.plot(dr,F, label=r'$-\frac{d}{dr} V(r)$')
plt.plot(dr,F_cutoff, label=r'$-\frac{d}{dr} \tilde V(r)$')
plt.legend()
format_plot('$r$', '')
plt.ylim(-13,5)
finalize_plot()
"""
Explanation: We then use $S(r)$ to create a new function
\begin{equation}\tilde V(r) = V(r) S(r),
\end{equation}
which is exactly $V(r)$ below $r_\mathrm{on}$, strictly zero above $r_\mathrm{cut}$ and is continuous in its first derivative.
This is implemented in JAX MD through energy.multiplicative_isotropic_cutoff, which takes in a potential function $V(r)$ (e.g. our harmonic_morse function) and returns a new function $\tilde V(r)$.
End of explanation
"""
def harmonic_morse_cutoff_pair(
displacement_or_metric, D0=5.0, alpha=5.0, r0=1.0, k=50.0,
r_onset=1.5, r_cutoff=2.0):
D0 = np.array(D0, dtype=f32)
alpha = np.array(alpha, dtype=f32)
r0 = np.array(r0, dtype=f32)
k = np.array(k, dtype=f32)
return smap.pair(
energy.multiplicative_isotropic_cutoff(
harmonic_morse, r_onset=r_onset, r_cutoff=r_cutoff),
space.canonicalize_displacement_or_metric(displacement_or_metric),
D0=D0,
alpha=alpha,
r0=r0,
k=k)
"""
Explanation: As before, we can use smap.pair to promote this to act on an entire system.
End of explanation
"""
# Create a function to calculate the total energy
energy_fn = harmonic_morse_cutoff_pair(displacement, D0=5.0, alpha=10.0, r0=1.0,
k=500.0, r_onset=1.5, r_cutoff=2.0)
# Use this to calculate the total energy
print(energy_fn(R))
# Use grad to calculate the net force
force = -grad(energy_fn)(R)
print(force[:5])
# Minimize the energy using the FIRE algorithm
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
"""
Explanation: This is implemented as before
End of explanation
"""
print(energy_fn(Rfinal))
print(-grad(energy_fn)(Rfinal)[:5])
"""
Explanation: Specifying parameters
Dynamic parameters
In the above examples, the strategy is to create a function energy_fn that takes a set of positions and calculates the energy of the system with all the parameters (e.g. D0, alpha, etc.) baked in. However, JAX MD allows you to override these baked-in values dynamically, i.e. when energy_fn is called.
For example, we can print out the minimized energy and force of the above system with the truncated potential:
End of explanation
"""
print(energy_fn(Rfinal, D0=0))
"""
Explanation: This uses the baked-in values of the 4 parameters: D0=5.0, alpha=10.0, r0=1.0, k=500.0. If, for example, we want to dynamically turn off the attractive part of the potential, we simply pass D0=0 to energy_fn:
End of explanation
"""
print(-grad(energy_fn)(Rfinal, D0=0)[:5])
"""
Explanation: Since changing the potential moves the minimum, the force will not be zero:
End of explanation
"""
def run_brownian(energy_fn, R_init, shift, key, num_steps):
init, apply = simulate.brownian(energy_fn, shift,
dt=0.00001, kT=0.0, gamma=0.1)
apply = jit(apply)
# Define how r0 changes for each step
r0_initial = 1.0
r0_final = .5
def get_r0(t):
return r0_final + (r0_initial-r0_final)*(num_steps-t)/num_steps
@jit
def scan_fn(state, t):
# Dynamically pass r0 to apply, which passes it on to energy_fn
return apply(state, r0=get_r0(t)), 0
key, split = random.split(key)
state = init(split, R_init)
state, _ = lax.scan(scan_fn,state,np.arange(num_steps))
return state.position, np.amax(np.abs(-grad(energy_fn)(state.position)))
"""
Explanation: This ability to dynamically pass parameters is very powerful. For example, if you want to shrink particles each step during a simulation, you can simply specify a different r0 each step.
This is demonstrated below, where we run a Brownian dynamics simulation at zero temperature with continuously decreasing r0. The details of simulate.brownian are beyond the scope of this cookbook, but the idea is that we pass a new value of r0 to the function apply each time it is called. The function apply takes a step of the simulation, and internally it passes any extra parameters like r0 to energy_fn.
End of explanation
"""
key, split = random.split(key)
Rfinal2, max_force_component = run_brownian(energy_fn, Rfinal, shift, split,
num_steps=6000)
plot_system( Rfinal2, box_size )
"""
Explanation: If we use the previous result as the starting point for the Brownian Dynamics simulation, we find exactly what we would expect, the system contracts into a finite cluster, held together by the attractive part of the potential.
End of explanation
"""
# Draw the radii from a uniform distribution
key, split = random.split(key)
radii = random.uniform(split, (N,), minval=1.0, maxval=2.0, dtype=f64)
# Rescale to match the initial volume fraction
radii = np.array([radii * np.sqrt(N/(4.*np.dot(radii,radii)))])
# Turn this into a matrix of sums
r0_matrix = radii+radii.transpose()
# Create the energy function using r0_matrix
energy_fn = harmonic_morse_pair(displacement, D0=5.0, alpha=10.0, r0=r0_matrix,
k=500.0)
# Minimize the energy using the FIRE algorithm
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
"""
Explanation: Particle-specific parameters
Our example potential has 4 parameters: D0, alpha, r0, and k. The usual way to pass these parameters is as a scalar (e.g. D0=5.0), in which case that parameter is fixed for every particle pair. However, Python broadcasting allows for these parameters to be specified separately for every different particle pair by passing an $(N,N)$ array rather than a scalar.
As an example, let's do this for the parameter r0, which is an effective way of generating a system with continuous polydispersity in particle size. Note that the polydispersity disrupts the crystalline order after minimization.
End of explanation
"""
# Create the energy function using the radii array
energy_fn = harmonic_morse_pair(displacement, D0=5.0, alpha=10.0, r0=2.*radii,
k=500.0)
# Minimize the energy using the FIRE algorithm
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
"""
Explanation: In addition to standard Python broadcasting, JAX MD allows for the special case of additive parameters. If a parameter is passed as a (N,) array p_vector, JAX MD will convert this into a (N,N) array p_matrix where p_matrix[i,j] = 0.5 (p_vector[i] + p_vector[j]). This is a JAX MD specific ability and not a feature of Python broadcasting.
As it turns out, our above polydisperse example falls into this category. Therefore, we could achieve the same result by passing r0=2.0*radii.
End of explanation
"""
N_0 = N // 2 # Half the particles in species 0
N_1 = N - N_0 # The rest in species 1
species = np.array([0] * N_0 + [1] * N_1, dtype=np.int32)
print(species)
"""
Explanation: Species
It is often important to specify parameters differently for different particle pairs, but doing so with full ($N$,$N$) matrices is both inefficient and obnoxious. JAX MD allows users to create species, i.e. $N_s$ groups of particles that are identical to each other, so that parameters can be passed as much smaller ($N_s$,$N_s$) matrices.
First, create an array that specifies which particles belong in which species. We will divide our system into two species.
End of explanation
"""
rsmall=0.41099747 # Match the total volume fraction
rlarge=1.4*rsmall
r0_species_matrix = np.array([[2*rsmall, rsmall+rlarge],
[rsmall+rlarge, 2*rlarge]])
print(r0_species_matrix)
energy_fn = harmonic_morse_pair(displacement, species=species, D0=5.0,
alpha=10.0, r0=r0_species_matrix, k=500.0)
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system(Rfinal, box_size, species=species )
"""
Explanation: Next, create the $(2,2)$ matrix of r0's, which are set so that the overall volume fraction matches our monodisperse case.
End of explanation
"""
D0_species_matrix = np.array([[ 5.0, 0.0],
[0.0, 0.0]])
energy_fn = harmonic_morse_pair(displacement,
species=2,
D0=D0_species_matrix,
alpha=10.0,
r0=0.5,
k=500.0)
"""
Explanation: Dynamic Species
Just like standard parameters, the species list can be passed dynamically as well. However, unlike standard parameters, you have to tell smap.pair that the species will be specified dynamically. To do this, set species to the total number of particle types (here, species=2) when creating your energy function.
The following sets up an energy function where the attractive part of the interaction only exists between members of the first species, but where the species will be defined dynamically.
End of explanation
"""
def run_brownian(energy_fn, R_init, shift, key, num_steps):
init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, kT=1.0, gamma=0.1)
# apply = jit(apply)
# Define a function to recalculate the species each step
def get_species(R):
return np.where(R[:,0] < box_size / 2, 0, 1)
@jit
def scan_fn(state, t):
# Recalculate the species list
species = get_species(state.position)
# Dynamically pass species to apply, which passes it on to energy_fn
return apply(state, species=species, species_count=2), 0
key, split = random.split(key)
state = init(split, R_init)
state, _ = lax.scan(scan_fn,state,np.arange(num_steps))
return state.position,np.amax(np.abs(-grad(energy_fn)(state.position,
species=get_species(state.position),
species_count=2)))
"""
Explanation: Now we set up a finite temperature Brownian Dynamics simulation where, at every step, particles on the left half of the simulation box are assigned to species 0, while particles on the right half are assigned to species 1.
End of explanation
"""
key, split = random.split(key)
Rfinal, max_force_component = run_brownian(energy_fn, R, shift, split, num_steps=10000)
plot_system( Rfinal, box_size )
"""
Explanation: When we run this, we see that particles on the left side form clusters while particles on the right side do not.
End of explanation
"""
def harmonic_morse_cutoff_neighbor_list(
displacement_or_metric,
box_size,
species=None,
D0=5.0,
alpha=5.0,
r0=1.0,
k=50.0,
r_onset=1.0,
r_cutoff=1.5,
dr_threshold=2.0,
format=partition.OrderedSparse,
**kwargs):
D0 = np.array(D0, dtype=np.float32)
alpha = np.array(alpha, dtype=np.float32)
r0 = np.array(r0, dtype=np.float32)
k = np.array(k, dtype=np.float32)
r_onset = np.array(r_onset, dtype=np.float32)
r_cutoff = np.array(r_cutoff, np.float32)
dr_threshold = np.float32(dr_threshold)
neighbor_fn = partition.neighbor_list(
displacement_or_metric,
box_size,
r_cutoff,
dr_threshold,
format=format)
energy_fn = smap.pair_neighbor_list(
energy.multiplicative_isotropic_cutoff(harmonic_morse, r_onset, r_cutoff),
space.canonicalize_displacement_or_metric(displacement_or_metric),
species=species,
D0=D0,
alpha=alpha,
r0=r0,
k=k)
return neighbor_fn, energy_fn
"""
Explanation: Efficiently calculating neighbors
The most computationally expensive part of most MD programs is calculating the force between all pairs of particles. Generically, this scales with $N^2$. However, for systems with isotropic pairwise interactions that are strictly zero beyond a cutoff, there are techniques to dramatically improve the efficiency. The two most common methods are cell lists and neighbor lists.
Cell lists
The technique here is to divide space into small cells that are just larger than the largest interaction range in the system. Thus, if particle $i$ is in cell $c_i$ and particle $j$ is in cell $c_j$, $i$ and $j$ can only interact if $c_i$ and $c_j$ are neighboring cells. Rather than searching all $N^2$ combinations of particle pairs for non-zero interactions, you only have to search the particles in the neighboring cells.
Neighbor lists
Here, for each particle $i$, we make a list of potential neighbors: particles $j$ that are within some threshold distance $r_\mathrm{threshold}$. If $r_\mathrm{threshold} = r_\mathrm{cutoff} + \Delta r_\mathrm{threshold}$ (where $r_\mathrm{cutoff}$ is the largest interaction range in the system and $\Delta r_\mathrm{threshold}$ is an appropriately chosen buffer size), then all interacting particles will appear in this list as long as no particle moves by more than $\Delta r_\mathrm{threshold}/2$. There is a tradeoff here: smaller $\Delta r_\mathrm{threshold}$ means fewer particles to search over each MD step but the list must be recalculated more often, while larger $\Delta r_\mathrm{threshold}$ means slower force calculations but less frequent neighbor list recalculations.
In practice, the most efficient technique is often to use cell lists to calculate neighbor lists. In JAX MD, this occurs under the hood, and so only calls to neighbor-list functionality are necessary.
To implement neighbor lists, we need two functions: 1) a function to create and update the neighbor list, and 2) an energy function that uses a neighbor list rather than operating on all particle pairs. We create these functions with partition.neighbor_list and smap.pair_neighbor_list, respectively.
partition.neighbor_list takes basic box information as well as the maximum interaction range r_cutoff and the buffer size dr_threshold.
End of explanation
"""
r_onset = 1.5
r_cutoff = 2.0
dr_threshold = 1.0
neighbor_fn, energy_fn = harmonic_morse_cutoff_neighbor_list(
displacement, box_size, D0=5.0, alpha=10.0, r0=1.0, k=500.0,
r_onset=r_onset, r_cutoff=r_cutoff, dr_threshold=dr_threshold)
energy_fn_comparison = harmonic_morse_cutoff_pair(
displacement, D0=5.0, alpha=10.0, r0=1.0, k=500.0,
r_onset=r_onset, r_cutoff=r_cutoff)
"""
Explanation: To test this, we generate our new neighbor_fn and energy_fn, as well as a comparison energy function using the default approach.
End of explanation
"""
nbrs = neighbor_fn.allocate(R)
"""
Explanation: Next, we use neighbor_fn.allocate and the current set of positions to populate the neighbor list.
End of explanation
"""
print(energy_fn(R, neighbor=nbrs))
print(energy_fn_comparison(R))
"""
Explanation: To calculate the energy, we pass nbrs to energy_fn. The energy matches the comparison.
End of explanation
"""
def run_brownian_neighbor_list(energy_fn, neighbor_fn, R_init, shift, key, num_steps):
nbrs = neighbor_fn.allocate(R_init)
init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, kT=1.0, gamma=0.1)
def body_fn(state, t):
state, nbrs = state
nbrs = nbrs.update(state.position)
state = apply(state, neighbor=nbrs)
return (state, nbrs), 0
key, split = random.split(key)
state = init(split, R_init)
step = 0
step_inc=100
while step < num_steps/step_inc:
rtn_state, _ = lax.scan(body_fn, (state, nbrs), np.arange(step_inc))
new_state, nbrs = rtn_state
# If the neighbor list overflowed, rebuild it and repeat part of
# the simulation.
if nbrs.did_buffer_overflow:
print('Buffer overflow.')
nbrs = neighbor_fn.allocate(state.position)
else:
state = new_state
step += 1
return state.position
"""
Explanation: Note that by default neighbor_fn uses a cell list internally to populate the neighbor list. This approach fails when the box size in any dimension is less than 3 times $r_\mathrm{threshold} = r_\mathrm{cutoff} + \Delta r_\mathrm{threshold}$. In this case, neighbor_fn automatically turns off the use of cell lists and instead searches over all particle pairs. This can also be done manually by passing disable_cell_list=True to partition.neighbor_list, which can be useful for debugging or for small systems where the overhead of cell lists outweighs the benefit.
Updating neighbor lists
The function neighbor_fn has two different usages, depending on how it is called. When used as above, i.e. nbrs = neighbor_fn.allocate(R), a new neighbor list is generated from scratch. Internally, JAX MD uses the given positions R to estimate a maximum capacity, i.e. the maximum number of neighbors any particle will have at any point during the use of the neighbor list. This estimate can be adjusted by passing a value of capacity_multiplier to partition.neighbor_list, which defaults to capacity_multiplier=1.25.
Since the maximum capacity is not known ahead of time, this construction of the neighbor list cannot be compiled. However, once a neighbor list is created in this way, repopulating the list with the same maximum capacity is a simpler operation that can be compiled. This is done by calling nbrs = nbrs.update(R). Internally, this checks whether any particle has moved more than $\Delta r_\mathrm{threshold}/2$ and, if so, recomputes the neighbor list. If the new neighbor list exceeds the maximum capacity for any particle, the boolean variable nbrs.did_buffer_overflow is set to True.
These two uses together allow for safe and efficient neighbor list calculations. The example below demonstrates a typical simulation loop that uses neighbor lists.
End of explanation
"""
Nlarge = 100*N
box_size_large = 10*box_size
displacement_large, shift_large = setup_periodic_box(box_size_large)
key, split1, split2 = random.split(key,3)
Rlarge = random.uniform(split1, (Nlarge,dimension), minval=0.0, maxval=box_size_large, dtype=f64)
dr_threshold = 1.5
neighbor_fn, energy_fn = harmonic_morse_cutoff_neighbor_list(
displacement_large, box_size_large, D0=5.0, alpha=10.0, r0=1.0, k=500.0,
r_onset=r_onset, r_cutoff=r_cutoff, dr_threshold=dr_threshold)
energy_fn = jit(energy_fn)
start_time = time.process_time()
Rfinal = run_brownian_neighbor_list(energy_fn, neighbor_fn, Rlarge, shift_large, split2, num_steps=4000)
end_time = time.process_time()
print('run time = {}'.format(end_time-start_time))
plot_system( Rfinal, box_size_large, ms=2 )
"""
Explanation: To run this, we consider a much larger system than we have used up to this point. Warning: running this may take a few minutes.
End of explanation
"""
def bistable_spring(dr, r0=1.0, a2=2, a4=5, **kwargs):
return a4*(dr-r0)**4 - a2*(dr-r0)**2
"""
Explanation: Bonds
Bonds are a way of specifying potentials between specific pairs of particles that are "on" regardless of separation. For example, it is common to employ a two-sided spring potential between specific particle pairs, but JAX MD allows the user to specify arbitrary potentials with static or dynamic parameters.
Create and implement a bond potential
We start by creating a custom potential that corresponds to a bistable spring, taking the form
\begin{equation}
V(r) = a_4(r-r_0)^4 - a_2(r-r_0)^2.
\end{equation}
$V(r)$ has two minima, at $r = r_0 \pm \sqrt{\frac{a_2}{2a_4}}$.
End of explanation
"""
drs = np.arange(0,2,0.01)
U = bistable_spring(drs)
plt.plot(drs,U)
format_plot(r'$r$', r'$V(r)$')
finalize_plot()
"""
Explanation: Plot $V(r)$
End of explanation
"""
def bistable_spring_bond(
displacement_or_metric, bond, bond_type=None, r0=1, a2=2, a4=5):
"""Convenience wrapper to compute energy of particles bonded by springs."""
r0 = np.array(r0, f32)
a2 = np.array(a2, f32)
a4 = np.array(a4, f32)
return smap.bond(
bistable_spring,
space.canonicalize_displacement_or_metric(displacement_or_metric),
bond,
bond_type,
r0=r0,
a2=a2,
a4=a4)
"""
Explanation: The next step is to promote this function to act on a set of bonds. This is done via smap.bond, which takes our bistable_spring function, our displacement function, and a list of the bonds. It returns a function that calculates the energy for a given set of positions.
End of explanation
"""
R_temp, max_force_component = run_minimization(harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0), R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( R_temp, box_size )
"""
Explanation: However, in order to implement this, we need a list of bonds. We will do this by taking a system minimized under our original harmonic_morse potential:
End of explanation
"""
bonds, lengths = calculate_bond_data(displacement, R_temp, 1.3)
print(bonds[:5]) # list of particle index pairs that form bonds
print(lengths[:5]) # list of the current length of each bond
"""
Explanation: We now place a bond between all particle pairs that are separated by less than 1.3. calculate_bond_data returns a list of such bonds, as well as a list of the corresponding current length of each bond.
End of explanation
"""
bond_energy_fn = bistable_spring_bond(displacement, bonds, r0=lengths)
"""
Explanation: We use this length as the r0 parameter, meaning that initially each bond is at the unstable local maximum $r=r_0$.
End of explanation
"""
Rfinal, max_force_component = run_minimization(bond_energy_fn, R_temp, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
"""
Explanation: We now use our new bond_energy_fn to minimize the energy of the system. The expectation is that nearby particles should either move closer together or further apart, and the choice of which to do should be made collectively due to the constraint of constant volume. This is exactly what we see.
End of explanation
"""
# Specifying the bonds dynamically ADDS additional bonds.
# Here, we dynamically pass the same bonds that were passed statically, which
# has the effect of doubling the energy
print(bond_energy_fn(R))
print(bond_energy_fn(R,bonds=bonds, r0=lengths))
"""
Explanation: Specifying bonds dynamically
As with species or parameters, bonds can be specified dynamically, i.e. when the energy function is called. Importantly, note that this does NOT override bonds that were specified statically in smap.bond.
End of explanation
"""
# Note, the code in the "Bonds" section must be run prior to this.
energy_fn = harmonic_morse_pair(displacement,D0=0.,alpha=10.0,r0=1.0,k=1.0)
bond_energy_fn = bistable_spring_bond(displacement, bonds, r0=lengths)
def combined_energy_fn(R):
return energy_fn(R) + bond_energy_fn(R)
"""
Explanation: We won't go thorugh a further example as the implementation is exactly the same as specifying species or parameters dynamically, but the ability to employ bonds both statically and dynamically is a very powerful and general framework.
Combining potentials
Most JAX MD functionality (e.g. simulations, energy minimizations) relies on a function that calculates energy for a set of positions. Importantly, while this cookbook focuses on simple and robust ways of defining such functions, JAX MD is not limited to these methods; users can implement energy functions however they like.
As an important example, here we consider the case where the energy includes both a pair potential and a bond potential. Specifically, we combine harmonic_morse_pair with bistable_spring_bond.
End of explanation
"""
drs = np.arange(0,2,0.01)
U = harmonic_morse(drs,D0=0.,alpha=10.0,r0=1.0,k=1.0)+bistable_spring(drs)
plt.plot(drs,U)
format_plot(r'$r$', r'$V(r)$')
finalize_plot()
"""
Explanation: Here, we have set $D_0=0$, so the pair potential is just a one-sided repulsive harmonic potential. For particles connected with a bond, this raises the energy of the "contracted" minimum relative to the "extended" minimum.
End of explanation
"""
Rfinal, max_force_component = run_minimization(combined_energy_fn, R_temp, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
"""
Explanation: This new energy function can be passed to the minimization routine (or any other JAX MD simulation routine) in the usual way.
End of explanation
"""
N_0 = N // 2 # Half the particles in species 0
N_1 = N - N_0 # The rest in species 1
species = np.array([0]*N_0 + [1]*N_1, dtype=np.int32)
print(species)
"""
Explanation: Specifying forces instead of energies
So far, we have defined functions that calculate the energy of the system, which we then pass to JAX MD. Internally, JAX MD uses automatic differentiation to convert these into functions that calculate forces, which are necessary to evolve a system under a given dynamics. However, JAX MD has the option to pass force functions directly, rather than energy functions. This creates additional flexibility because some forces cannot be represented as the gradient of a potential.
As a simple example, we create a custom force function that zeros out the force of some particles. During energy minimization, where there is no stochastic noise, this has the effect of fixing the position of these particles.
First, we break the system up into two species, as before.
End of explanation
"""
energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0)
force_fn = quantity.force(energy_fn)
def custom_force_fn(R, **kwargs):
return vmap(lambda a,b: a*b)(force_fn(R),species)
"""
Explanation: Next, we create our custom force function. Starting with our harmonic_morse pair potential, we calculate the force manually (i.e. using built-in automatic differentiation), and then multiply the force by the species id, which has the desired effect.
End of explanation
"""
def run_minimization_general(energy_or_force, R_init, shift, num_steps=5000):
dt_start = 0.001
dt_max = 0.004
init,apply=minimize.fire_descent(jit(energy_or_force),shift,dt_start=dt_start,dt_max=dt_max)
apply = jit(apply)
@jit
def scan_fn(state, i):
return apply(state), 0.
state = init(R_init)
state, _ = lax.scan(scan_fn,state,np.arange(num_steps))
return state.position, np.amax(np.abs(quantity.canonicalize_force(energy_or_force)(state.position)))
"""
Explanation: Running simulations with custom forces is as easy as passing this force function to the simulation.
End of explanation
"""
key, split = random.split(key)
Rfinal, _ = run_minimization_general(custom_force_fn, R, shift)
plot_system( Rfinal, box_size, species )
"""
Explanation: We run this as usual,
End of explanation
"""
plot_system( R, box_size, species )
"""
Explanation: After the above minimization, the blue particles have the same positions as they did initially:
End of explanation
"""
energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=0.5,k=500.0)
def spring_energy_fn(Rall, k_spr=50.0, **kwargs):
metric = vmap(space.canonicalize_displacement_or_metric(displacement), (0, 0), 0)
dr = metric(Rall[0],Rall[1])
return 0.5*k_spr*np.sum((dr)**2)
def total_energy_fn(Rall, **kwargs):
return np.sum(vmap(energy_fn)(Rall)) + spring_energy_fn(Rall)
"""
Explanation: Note, this method for fixing particles only works when there is no stochastic noise (e.g. in Langevin or Brownian dynamics) because such noise affects particles whether or not they have a net force. A safer way to fix particles is to create a custom shift function.
Coupled ensembles
For a final example that demonstrates the flexibility within JAX MD, lets do something that is particularly difficult in most standard MD packages. We will create a "coupled ensemble" -- i.e. a set of two identical systems that are connected via a $Nd$ dimensional spring. An extension of this idea is used, for example, in the Doubly Nudged Elastic Band method for finding transition states.
If the "normal" energy of each system is
\begin{equation}
U(R) = \sum_{i,j} V( r_{ij} ),
\end{equation}
where $r_{ij}$ is the distance between the $i$th and $j$th particles in $R$ and the $V(r)$ is a standard pair potential, and if the two sets of positions, $R_0$ and $R_1$, are coupled via the potential
\begin{equation}
U_\mathrm{spr}(R_0,R_1) = \frac 12 k_\mathrm{spr} \left| R_1 - R_0 \right|^2,
\end{equation}
so that the total energy of the system is
\begin{equation}
U_\mathrm{total} = U(R_0) + U(R_1) + U_\mathrm{spr}(R_0,R_1).
\end{equation}
End of explanation
"""
def shift_all(Rall, dRall, **kwargs):
return vmap(shift)(Rall, dRall)
Rall = np.array([R,R])
"""
Explanation: We now have to define a new shift function that can handle arrays of shape $(2,N,d)$. In addition, we make two copies of our initial positions R, one for each system.
End of explanation
"""
def run_brownian_simple(energy_or_force, R_init, shift, key, num_steps):
init, apply = simulate.brownian(energy_or_force, shift, dt=0.00001, kT=1.0, gamma=0.1)
apply = jit(apply)
@jit
def scan_fn(state, t):
return apply(state), 0
key, split = random.split(key)
state = init(split, R_init)
state, _ = lax.scan(scan_fn, state, np.arange(num_steps))
return state.position
"""
Explanation: Now, all we have to do is pass our custom energy and shift functions, as well as the $(2,N,d)$ dimensional initial position, to JAX MD, and proceed as normal.
As a demonstration, we define a simple and general Brownian Dynamics simulation function, similar to the simulation routines above except without the special cases (e.g. changing r0 or species).
End of explanation
"""
key, split = random.split(key)
Rall_final = run_brownian_simple(total_energy_fn, Rall, shift_all, split, num_steps=10000)
"""
Explanation: Note that nowhere in this function is there any indication that we are simulating an ensemble of systems. This comes entirely from the inputs: i.e. the energy function, the shift function, and the set of initial positions.
End of explanation
"""
for Ri in Rall_final:
plot_system( Ri, box_size )
finalize_plot((0.5,0.5))
"""
Explanation: The output also has shape $(2,N,d)$. If we display the results, we see that the two systems are in similar, but not identical, positions, showing that we have succeeded in simulating a coupled ensemble.
End of explanation
"""
vanheck/blog-notes | SquareMath/2020-04-10-SquareMathLevels-Backtest-example-ZN-1min-30M-128.ipynb | mit | SQUARE = 128
SQUARE_MULTIPLIER = 1.5
# how many bars back to use as the high/low reference window
BARS_BACK_TO_REFERENCE = np.int(np.ceil(SQUARE * SQUARE_MULTIPLIER))
# set higher timeframe for getting SquareMathLevels
MINUTES = 30 # range 0-59
PD_RESAMPLE_RULE = f'{MINUTES}Min'
# set the period of PD_RESAMPLE_RULE will be started. E.g. PD_RESAMPLE_RULE == '30min':
# PD_GROUPER_BASE = 5, periods will be: 8:05:00, 8:35:00, 9:05:00, etc...
# PD_GROUPER_BASE = 0, means 8:00:00, 8:30:00, 9:00:00, etc...
PD_GROUPER_BASE = 0
"""
Explanation: Backtest SquareMathLevels
Goal
Verify the hypothesis that SquareMath Levels act as S/R (support/resistance) levels, i.e. that the market tends to bounce off them.
Verification on ZN 1min statistics, SML - 30min, SQUARE 128
Data preparation
Settings for the SquareMath calculation
End of explanation
"""
TICK_SIZE_STR = f'{1/32*0.5}'
TICK_SIZE = float(TICK_SIZE_STR)
#SYMBOL = 'ZN'
TICK_SIZE_STR
DATA_FILE = '../../Data/ZN-1s.csv'
read_cols = ['Date', 'Time', 'Open', 'High', 'Low', 'Last']
data = pd.read_csv(DATA_FILE, index_col=0, skipinitialspace=True, usecols=read_cols, parse_dates={'Datetime': [0, 1]})
data.rename(columns={"Last": "Close"}, inplace=True)
data.index.name = 'Datetime'
data['Idx'] = np.arange(data.shape[0])
df = data
df
"""
Explanation: The data to be analyzed
End of explanation
"""
# calculate the running max High for the current record within its higher-timeframe period
df_helper_gr = df[['High']].groupby(pd.Grouper(freq=PD_RESAMPLE_RULE, base=PD_GROUPER_BASE))
df_helper = df_helper_gr.rolling(PD_RESAMPLE_RULE, min_periods=1).max().dropna() # cummax() with new index
df_helper['bigCumMaxHigh'] = df_helper.assign(l=df_helper_gr.max().dropna().rolling(BARS_BACK_TO_REFERENCE-1).max().shift().loc[df_helper.index.get_level_values(0)].to_numpy()).max(axis=1, skipna=False)
df_helper.set_index(df_helper.index.get_level_values(1), inplace=True) # drop multiindex
df['SMLHighLimit'] = df_helper.bigCumMaxHigh
df
"""
Explanation: Maximum high and minimum low over the last BARS_BACK_TO_REFERENCE candles of the higher timeframe.
High
End of explanation
"""
# calculate the running min Low for the current record within its higher-timeframe period
df_helper_gr = df[['Low']].groupby(pd.Grouper(freq=PD_RESAMPLE_RULE, base=PD_GROUPER_BASE))
df_helper = df_helper_gr.rolling(PD_RESAMPLE_RULE, min_periods=1).min().dropna() # cummin() with new index
df_helper['bigCumMinLow'] = df_helper.assign(l=df_helper_gr.min().dropna().rolling(BARS_BACK_TO_REFERENCE-1).min().shift().loc[df_helper.index.get_level_values(0)].to_numpy()).min(axis=1, skipna=False)
df_helper.set_index(df_helper.index.get_level_values(1), inplace=True) # drop multiindex
df['SMLLowLimit'] = df_helper.bigCumMinLow
df
"""
Explanation: Low
End of explanation
"""
del df_helper
del df_helper_gr
df.dropna(inplace=True)
df
"""
Explanation: Discard helper objects that are no longer needed and the NaN records that cannot be analyzed
End of explanation
"""
from vhat.squaremath.funcs import calculate_octave
SML_INDEXES = np.arange(-2, 10+1, dtype=np.int) # from -2/8 to +2/8
def round_to_tick_size(values, tick_size):
return np.round(values / tick_size) * tick_size
def get_smlines(r):
tick_size = TICK_SIZE
lowLimit = r.SMLLowLimit
highLimit = r.SMLHighLimit
zeroLine, frameSize = calculate_octave(lowLimit, highLimit)
spread = frameSize * 0.125
sml = SML_INDEXES * spread + zeroLine
sml = round_to_tick_size(sml, tick_size)
return [sml, zeroLine, frameSize, spread]
temp = df.apply(get_smlines, axis=1, result_type='expand')
temp.columns = ['SML', 'zeroLine', 'framSize', 'spread']
df = df.join(temp)
del temp
df
"""
Explanation: Calculate the SMLevels for every record
End of explanation
"""
df['prevSML'] = df.SML.shift()
df.dropna(inplace=True)
df
df['SMLTouch'] = df.apply(lambda r: np.bitwise_and(r.Low<=r.prevSML, r.prevSML<=r.High), axis=1)
df['SMLTouchCount'] = df.SMLTouch.apply(lambda v: sum(v))
df
from dataclasses import dataclass
from typing import List
@dataclass
class Trade:
tId: int
    # entry data known in advance
entry_idx: int
entry_sml_number: int
entry_sml_spread: float
entry_price: float
entry_lots: int # -1 short, 1 long
profit_target: float
stop_loss: float
    # market development while the trade is open
max_running_profit_price: float
max_running_loss_price: float
    # exit data, filled in only at the end
exit_idx: int = -1
exit_price: float = 0.0
exit_sml_number: int = 9999
    # set when the trade ends in a way where it is impossible to tell whether PT or SL was realized first
unrecognizable_trade: bool = False
@dataclass
class TradeList:
trades: List[Trade]
def check_open_trades(opened_trades, finished_trades, r):
    # TODO: add the indices of the current candle
trades_to_close = []
for tid, trade in opened_trades.items():
        # running statistics
        if trade.entry_lots == 0: raise Exception('Invalid state - an open trade has entry_lots == 0')
long_trade = trade.entry_lots>0
if long_trade:
trade.max_running_profit_price = max(min(r.High, trade.profit_target), trade.max_running_profit_price)
trade.max_running_loss_price = min(max(r.Low, trade.stop_loss), trade.max_running_loss_price)
else: # short trade
trade.max_running_profit_price = min(max(r.Low, trade.profit_target), trade.max_running_profit_price)
trade.max_running_loss_price = max(min(r.High, trade.stop_loss), trade.max_running_loss_price)
        # profit-target or stop-loss hit within this candle's range
        hit_pt = r.Low <= trade.profit_target <= r.High
        hit_sl = r.Low <= trade.stop_loss <= r.High
hits = (hit_pt, hit_sl)
if all(hits):
            # bad state - cannot tell whether SL or PT was hit first
trade.unrecognizable_trade = True
trades_to_close.append(tid)
elif hit_pt:
trade.exit_idx = r.Idx
trade.exit_price = trade.profit_target
trade.exit_sml_number = trade.entry_sml_number+(1 if long_trade else -1)
trades_to_close.append(tid)
elif hit_sl:
trade.exit_idx = r.Idx
trade.exit_price = trade.stop_loss
trade.exit_sml_number = trade.entry_sml_number-(1 if long_trade else -1)
trades_to_close.append(tid)
    # Close the finished trades
for tid in trades_to_close:
finished_trades.append(opened_trades[tid])
del opened_trades[tid]
def entry_logic(opened_trades, finished_trades, r, prev_r, last_level, tick_size, rr_multiplier=1):
    if r.SMLTouchCount != 1:
        # TODO: this is not entirely true -- if the open is below both broken
        # levels, it is clear they were activated in an unambiguous order,
        # but that is probably not so important.
        return  # cannot determine which level was activated first
    # find which level was activated => it must come from the previous levels
price_level_hit = r.prevSML[r.SMLTouch][0]
for trade in opened_trades.values():
if trade.entry_price == price_level_hit:
            return  # do not open a new trade, one is already open at this level
newtid = len(opened_trades) + len(finished_trades) + 1
idx_level_hit = SML_INDEXES[r.SMLTouch][0]
    # open a trade at the broken level
    if price_level_hit < last_level:
        # touched from above == long
lots = 1
pt = round_to_tick_size(price_level_hit + prev_r.spread * rr_multiplier, tick_size)
sl = round_to_tick_size(price_level_hit - prev_r.spread, tick_size)
running_profit_price = r.High
running_loss_price = r.Low
else:
# short
lots = -1
pt = round_to_tick_size(price_level_hit - prev_r.spread * rr_multiplier, tick_size)
sl = round_to_tick_size(price_level_hit + prev_r.spread, tick_size)
running_profit_price = r.Low
running_loss_price = r.High
new_trade = Trade(newtid, r.Idx, idx_level_hit, prev_r.spread, price_level_hit, lots, pt, sl, running_profit_price, running_loss_price)
opened_trades[newtid] = new_trade
last_level = None # price of last SML for predicting
opened_trades = {}
finished_trades = []
for idxdt, r in df.iterrows():
if not last_level:
last_level = r.Close
prev_r = r
continue
check_open_trades(opened_trades, finished_trades, r)
entry_logic(opened_trades, finished_trades, r, prev_r, last_level, TICK_SIZE)
    # store the latest market state for the next candle's calculation
prev_r = r
if r.SMLTouchCount == 1:
last_level = r.prevSML[r.SMLTouch][0]
elif r.SMLTouchCount > 1:
last_level = r.Close
from dataclasses import astuple
finished_trades = TradeList(finished_trades)
opened_trades = TradeList(list(opened_trades.values()))
cols = ['id', 'entryIdx', 'entrySmLvl', 'entrySmlSpread', 'entryPrice', 'lots', 'pt', 'sl', 'runningProfit', 'runningRisk', 'exitIdx', 'exitPrice', 'exitSmLvl', 'unrecognizableTrade']
stats_opened = pd.DataFrame(astuple(opened_trades)[0], columns=cols)
stats = pd.DataFrame(astuple(finished_trades)[0], columns=cols)
stats
"""
Explanation: SML touch calculation
The touch has to be computed against the previous candle's levels because of the frame shift.
End of explanation
"""
print('From:', df.iloc[0].name)
print('To:', df.iloc[-1].name)
print('Time span:', df.iloc[-1].name - df.iloc[0].name)
print('Number of trading days:', df.Close.resample('1D').ohlc().shape[0])
print('Number of fine-timeframe records:', df.shape[0])
"""
Explanation: Result statistics
Basic backtest info
End of explanation
"""
touchCounts = df.SMLTouchCount.value_counts().to_frame(name='Occurences')
touchCounts['Occ%'] = touchCounts / df.shape[0]*100
print(f'Records crossing more than one SML: {(df.SMLTouchCount>1).sum()} cases ({(df.SMLTouchCount>1).sum()/df.shape[0]*100:.3f}%) of {df.shape[0]} total\n')
touchCounts
"""
Explanation: Validity of the low timeframe for the backtest - possible introduced error
Check whether the chosen SQUARE on the higher timeframe is sufficient for a backtest on this low timeframe. I.e., with Square=32 on the higher timeframe='30min', we can tell whether the timeframe='1min' records are suitable for the backtest.
If the resolution error of the low timeframe were high (e.g. above 5%), relevant results would require either a finer backtest resolution, e.g. '30s' or '1s', or a larger SQUARE=64, or a higher timeframe for the SML calculation: '1h, 2h, 4h, 8h, 1d, ...'.
Number of breakouts per single small candle
This indicates whether this small frame is sufficient for computing the trades and can carry information about the error rate.
End of explanation
"""
spread_stats = df.spread.value_counts().to_frame(name='Occurences')
spread_stats['Occ%'] = spread_stats / df.shape[0]*100
spread_stats['Ticks'] = spread_stats.index / TICK_SIZE  # the index holds the spread price values
print(f'Records with an SML spread smaller than 2 ticks: {(df.spread/TICK_SIZE<2).sum()} cases ({(df.spread/TICK_SIZE<2).sum()/df.shape[0]*100:.3f}%) of {df.shape[0]} total\n')
spread_stats
"""
Explanation: Very small SML spread
End of explanation
"""
chybovost = df.spread[(df.spread/TICK_SIZE<2) | (df.SMLTouchCount>1)].shape[0]
print(f'The total possible error rate on the low timeframe is {chybovost} cases ({chybovost/df.shape[0]*100:.3f}%) of {df.shape[0]} total')
"""
Explanation: Resulting possible error rate on the low TF for the backtest
End of explanation
"""
finishedCount = stats.shape[0]
print('Total finished trades:', finishedCount)
# if there are really many 'unrecognizableTrade' cases, the SquareMath levels resolution is too low (small square)
unrec_trades = stats.unrecognizableTrade.sum()
print('Unrecognizable trades:', unrec_trades, f'({unrec_trades/finishedCount *100:.3f}%)')
print('Opened trades:', stats_opened.shape[0])
"""
Explanation: Validity of the trade results
End of explanation
"""
stats.drop(stats[stats.unrecognizableTrade].index, inplace=True)
shorts_mask = stats.lots<0
longs_mask = stats.lots>0
stats.loc[shorts_mask, 'PnL'] = ((stats[shorts_mask].entryPrice - stats[shorts_mask].exitPrice) / TICK_SIZE).round()
stats.loc[longs_mask, 'PnL'] = ((stats[longs_mask].exitPrice - stats[longs_mask].entryPrice) / TICK_SIZE).round()
stats.PnL = stats.PnL.astype(int)
stats['runPTicks'] = ((stats.entryPrice - stats.runningProfit).abs() / TICK_SIZE).round().astype(int)
stats['runLTicks'] = ((stats.entryPrice - stats.runningRisk).abs() * -1 / TICK_SIZE).round().astype(int)
stats['ptTicks'] = ((stats.entryPrice - stats.pt).abs() / TICK_SIZE).round().astype(int)
stats['slTicks'] = ((stats.entryPrice - stats.sl).abs() * -1 / TICK_SIZE).round().astype(int)
stats['tradeTime'] = stats.exitIdx - stats.entryIdx
stats
"""
Explanation: The unrecognizable trades are no longer needed from here on
End of explanation
"""
# masks
shorts_mask = stats.lots<0
longs_mask = stats.lots>0
profit_mask = stats.PnL>0
loss_mask = stats.PnL<0
breakeven_mask = stats.PnL==0
total_trades = stats.shape[0]
profit_trades_count = stats.PnL[profit_mask].shape[0]
loss_trades_count = stats.PnL[loss_mask].shape[0]
breakeven_trades_count = stats.PnL[breakeven_mask].shape[0]
print(f'Winning trades: {profit_trades_count} ({profit_trades_count/total_trades*100:.2f}%) of {total_trades} total')
print(f'Losing trades: {loss_trades_count} ({loss_trades_count/total_trades*100:.2f}%) of {total_trades} total')
print(f'Break-even trades: {breakeven_trades_count} ({breakeven_trades_count/total_trades*100:.2f}%) of {total_trades} total')
print('---')
print(f'Number of long trades = {stats[longs_mask].shape[0]} ({stats[longs_mask].shape[0]/stats.shape[0]*100:.2f}%) of {total_trades} total')
print(f'Number of short trades = {stats[shorts_mask].shape[0]} ({stats[shorts_mask].shape[0]/stats.shape[0]*100:.2f}%) of {total_trades} total')
print('---')
print(f'Sum of profits = {stats.PnL[profit_mask].sum()} Ticks')
print(f'Sum of losses = {stats.PnL[loss_mask].sum()} Ticks')
print(f'Total = {stats.PnL.sum()} Ticks')
"""
Explanation: Overall results
End of explanation
"""
selected_stats = stats[loss_mask]
selected_pnl_stats = selected_stats.PnL.value_counts().to_frame(name='PnLOccurences')
selected_pnl_stats['Occ%'] = selected_pnl_stats / selected_stats.shape[0]*100
selected_pnl_stats['Ticks'] = selected_pnl_stats.index / TICK_SIZE
selected_pnl_stats
"""
Explanation: Losing trades
End of explanation
"""
sns.distplot(selected_stats.runPTicks, color="g");
"""
Explanation: Maximum favorable excursion in losing trades
End of explanation
"""
sns.distplot(selected_stats.runPTicks/selected_stats.ptTicks, color="g");
"""
Explanation: Favorable excursion relative to the set PT in losing trades.
End of explanation
"""
sns.distplot(selected_stats.runLTicks, color="r");
sns.distplot(selected_stats.runLTicks/selected_stats.slTicks, color="r");
"""
Explanation: Maximum adverse excursion in losing trades
End of explanation
"""
selected_stats = stats[profit_mask]
selected_pnl_stats = selected_stats.PnL.value_counts().to_frame(name='PnLOccurences')
selected_pnl_stats['Occ%'] = selected_pnl_stats / selected_stats.shape[0]*100
selected_pnl_stats['Ticks'] = selected_pnl_stats.index / TICK_SIZE
selected_pnl_stats
"""
Explanation: Winning trades
End of explanation
"""
sns.distplot(selected_stats.runPTicks, color="g");
"""
Explanation: PT adjustment in winning trades - maximum favorable excursion
End of explanation
"""
sns.distplot(selected_stats.runPTicks/selected_stats.ptTicks, color="g");
"""
Explanation: Favorable excursion relative to the PT in winning trades.
End of explanation
"""
sns.distplot(selected_stats.runLTicks, color="r");
"""
Explanation: Maximum adverse excursion in winning trades
End of explanation
"""
sns.distplot(selected_stats.runLTicks/selected_stats.slTicks, color="r");
"""
Explanation: Ratio of the adverse excursion to the set SL in winning trades
End of explanation
"""
selected_stats = stats[longs_mask]
print('Number of trades:', selected_stats.shape[0], f'({selected_stats.shape[0]/stats.shape[0]*100:.2f}%) of {stats.shape[0]}')
print('Wins:', selected_stats[selected_stats.PnL>0].shape[0], f'({selected_stats[selected_stats.PnL>0].shape[0]/selected_stats.shape[0]*100:.2f}%) of {selected_stats.shape[0]}')
print('Losses:', selected_stats[selected_stats.PnL<0].shape[0], f'({selected_stats[selected_stats.PnL<0].shape[0]/selected_stats.shape[0]*100:.2f}%) of {selected_stats.shape[0]}')
print('Break-even:', selected_stats[selected_stats.PnL==0].shape[0], f'({selected_stats[selected_stats.PnL==0].shape[0]/selected_stats.shape[0]*100:.2f}%) of {selected_stats.shape[0]}')
print('---')
print(f'Average profit: {selected_stats.PnL[selected_stats.PnL>0].mean():.3f}')
print(f'Average loss: {selected_stats.PnL[selected_stats.PnL<0].mean():.3f}')
print('---')
selected_pnl_stats = selected_stats.PnL.value_counts().to_frame(name='PnLOccurences')
selected_pnl_stats['Occ%'] = selected_pnl_stats / selected_stats.shape[0]*100
selected_pnl_stats['Ticks'] = selected_pnl_stats.index / TICK_SIZE
selected_pnl_stats
"""
Explanation: Long trades
End of explanation
"""
sns.distplot(selected_stats[selected_stats.PnL<0].runPTicks, color="g");
"""
Explanation: PT adjustment in losing long trades - maximum favorable excursion
End of explanation
"""
sns.distplot(selected_stats[selected_stats.PnL<0].runPTicks/selected_stats[selected_stats.PnL<0].ptTicks, color="g");
"""
Explanation: Favorable excursion relative to the set PT in losing trades.
End of explanation
"""
sns.distplot(selected_stats[selected_stats.PnL<0].runLTicks, color="r");
sns.distplot(selected_stats[selected_stats.PnL<0].runLTicks/selected_stats[selected_stats.PnL<0].slTicks, color="r"); # check
"""
Explanation: SL adjustment in losing trades - maximum adverse excursion
End of explanation
"""
sns.distplot(selected_stats[selected_stats.PnL>0].runPTicks, color="g");
"""
Explanation: PT adjustment in winning long trades - maximum favorable excursion
End of explanation
"""
sns.distplot(selected_stats[selected_stats.PnL>0].runPTicks/selected_stats[selected_stats.PnL>0].ptTicks, color="g");
"""
Explanation: Favorable excursion relative to the set PT in winning trades.
End of explanation
"""
sns.distplot(selected_stats[selected_stats.PnL>0].runLTicks, color="r");
sns.distplot(selected_stats[selected_stats.PnL>0].runLTicks/selected_stats[selected_stats.PnL>0].slTicks, color="r"); # check
"""
Explanation: SL adjustment in winning trades - maximum adverse excursion
End of explanation
"""
#smlvl_stats = stats.entrySmLvl.value_counts().to_frame(name='entrySmLvlOcc')
smlvl_stats = stats[['entrySmLvl', 'lots']].groupby(['entrySmLvl']).count()
smlvl_stats.sort_values(by='lots', ascending=False, inplace=True)
smlvl_stats.rename(columns={'lots':'entrySmLvlOcc'}, inplace=True)
smlvl_stats['Occ%'] = smlvl_stats.entrySmLvlOcc / stats.shape[0] * 100
print(f'Trade entries from the 3 most frequent levels: {smlvl_stats.iloc[:3].index.to_list()} {smlvl_stats["Occ%"].iloc[:3].sum():.2f}%')
print(f'Trade entries from the 5 most frequent levels: {smlvl_stats.iloc[:5].index.to_list()} {smlvl_stats["Occ%"].iloc[:5].sum():.2f}%')
print('---')
print(f'Trade entries from the 7 most frequent levels: {smlvl_stats.iloc[:7].index.to_list()} {smlvl_stats["Occ%"].iloc[:7].sum():.2f}%')
print(f'Trade entries from the 9 most frequent levels: {smlvl_stats.iloc[:9].index.to_list()} {smlvl_stats["Occ%"].iloc[:9].sum():.2f}%')
print(f'Trade entries from the 11 most frequent levels: {smlvl_stats.iloc[:11].index.to_list()} {smlvl_stats["Occ%"].iloc[:11].sum():.2f}%')
print('---')
smlvl_stats
"""
Explanation: SML analysis
Total number of entries at the individual SMLs
End of explanation
"""
sns.barplot(x=smlvl_stats.entrySmLvlOcc.sort_index().index, y=smlvl_stats.entrySmLvlOcc.sort_index());
"""
Explanation: Entries at the individual levels
End of explanation
"""
stats.lots.replace({1: 'Long', -1: 'Short'}, inplace=True)
smlvl_stats_buy_sell = stats[['entrySmLvl', 'PnL', 'lots']].groupby(['entrySmLvl', 'lots']).count()
smlvl_stats_buy_sell.sort_index(ascending=False, inplace=True)
smlvl_stats_buy_sell.rename(columns={'PnL':'LongShortCount'}, inplace=True)
smlvl_stats_buy_sell
smlvl_stats_buy_sell['LongShortTotal%'] = smlvl_stats_buy_sell.LongShortCount / smlvl_stats_buy_sell.LongShortCount.sum() *100
smlvl_stats_buy_sell['SMLlongOrShort%'] = smlvl_stats_buy_sell[['LongShortCount']].groupby(level=0).apply(lambda x: 100 * x / float(x.sum()))
smlvl_stats_buy_sell
"""
Explanation: Number of buy and sell entries at each SML
End of explanation
"""
stats['Win']=profit_mask
stats['Win'] = stats['Win'].mask(~profit_mask)  # groupby will count only the wins
smlvl_stats_buy_sell['WinCount'] = stats[['entrySmLvl', 'PnL', 'lots', 'Win']].groupby(['entrySmLvl', 'lots', 'Win']).count().droplevel(2)
smlvl_stats_buy_sell['Win%'] = smlvl_stats_buy_sell.WinCount / smlvl_stats_buy_sell.LongShortCount * 100
smlvl_stats_buy_sell
"""
Explanation: Win rate of long trades at each SML
End of explanation
"""
# stats['Win'] = profit_mask
# smlvl_stats_buy_sell2 = stats[['entrySmLvl', 'PnL', 'lots', 'Win']].groupby(['entrySmLvl', 'lots', 'Win']).sum()
# smlvl_stats_buy_sell2.sort_index(ascending=False, inplace=True)
# smlvl_stats_buy_sell2.rename(columns={'PnL':'WinLossCount'}, inplace=True)
# smlvl_stats_buy_sell2
"""
Explanation: Just as a check. Win == True, Loss == False
End of explanation
"""
smlvl_stats_buy_sell.sort_values('Win%', ascending=False)
"""
Explanation: Results sorted by win rate:
End of explanation
"""
BjornFJohansson/pydna-examples | notebooks/simple_examples/Dseqrecord.ipynb | bsd-3-clause | from pydna.dseqrecord import Dseqrecord
"""
Explanation: Demonstration of the Dseqrecord object
End of explanation
"""
mysequence = Dseqrecord("GGATCCAAA")
"""
Explanation: A small Dseqrecord object can be created directly. The Dseqrecord class is a double stranded version of the Biopython SeqRecord class.
End of explanation
"""
mysequence
"""
Explanation: The representation below indicates the size of the sequence and the fact that it is linear (the "-" symbol).
End of explanation
"""
mysequence.seq
from pydna.readers import read
"""
Explanation: The Dseqrecord class is the main pydna data type together with the Dseq class. The sequence information is actually held by an internal Dseq object that is accessible from the .seq property:
End of explanation
"""
read_from_fasta = read("fastaseq.fasta")
read_from_gb = read("gbseq.gb")
read_from_embl = read("emblseq.emb")
"""
Explanation: Dseqrecords can be read from local files in several formats
End of explanation
"""
print(read_from_fasta.seq)
print(read_from_gb.seq)
print(read_from_embl.seq)
"""
Explanation: The sequence files above all contain the same sequence. We can print the sequence via the .seq property.
End of explanation
"""
read_from_string = read('''
>seq_from_string
GGATCCAAA
''')
"""
Explanation: We can also read from a string defined directly in the code:
End of explanation
"""
from pydna.genbank import Genbank
gb = Genbank("bjornjobb@gmail.com")
pUC19 = gb.nucleotide("L09137")
"""
Explanation: We can also download sequences from Genbank if we know the accession number. The plasmid pUC19 has the accession number L09137. We have to give pydna a valid email address before we use Genbank in this way. Please change the email address to your own when executing this script. Genbank requires contact information so it can reach its users if there is a problem.
End of explanation
"""
pUC19
from pydna.download import download_text
"""
Explanation: This molecule is circular so the representation below begins with a "o". The size is 2686 bp.
End of explanation
"""
text = download_text("https://gist.githubusercontent.com/BjornFJohansson/e445e5039d61bdcdf933c435438b4585/raw/a6d57a8d5cffcbf0ab76307c82746e5b7265d0c8/YEPlac181snapgene.gb")
YEplac181 = read(text)
YEplac181
"""
Explanation: We can also read sequences remotely from other web sites for example this sequence for YEplac181:
End of explanation
"""
from Bio.Restriction import BamHI
a, b = mysequence.cut(BamHI)
a
b
a.seq
b.seq
a+b
"""
Explanation: Dseqrecord supports the same kind of digestion / ligation functionality as shown for Dseq.
End of explanation
"""
mysequence.write("new_sequence.gb")
"""
Explanation: Finally, we can save Dseqrecords to a local file. The default format is Genbank.
End of explanation
"""
open-forcefield-group/openforcefield | examples/deprecated/chemicalEnvironments/create_move_types_and_weights.ipynb | mit | # generic scientific/ipython header
from __future__ import print_function
from __future__ import division
import os, sys
import copy
import numpy as np
"""
Explanation: Creating Weighted Moves
This notebook was created in August 2016 during exploration of how to bias different types of moves in chemical space for the development of smirky and future chemical perception move proposal tools.
Authors
* Christopher I. Bayly OpenEye Scientific Software (on sabbatical with Mobley Group UC Irvine)
* Commentary added by Caitlin C. Bannan (Mobley Group UC Irvine) in April 2017
Generate List of Moves
The end goal of this notebook was to generate files with a list of moves in chemical space. Each parameter type (VdW, Bonds, Angles, Proper or Improper torsions) have weighted decisions for how to make changes. These lists of moves are used in the smirksEnvMoves notebook in this directory.
End of explanation
"""
# Parent dictionary of in-common Movetypes-with-Odds to be used as the basis for each parameter's moves
parentMovesWithOdds = {}
parentMovesWithOdds['atmOrBond'] = [ ('atom',10), ('bond',1)]
parentMovesWithOdds['actionChoices'] = [('add',1), ('swap',1), ('delete',1), ('joinAtom',1)]
parentMovesWithOdds['ORorANDType'] = [('ORtype',3), ('ANDtype',1)]
"""
Explanation: Biasing Moves:
Using odds to make some moves more likely than others within a class such as atomLabel,
i.e. which atom to change; in e.g. atmOrBond the odds for changing an atom versus a bond are 10:1.
We also follow the biasing probability so that after a series of biased choices we know
the overall probability.
End of explanation
"""
def movesWithWeightsFromOdds( MovesWithOdds):
'''Processes a dictionary of movesWithOdds (lists of string/integer tuples)
into a dictionary of movesWithWeights usable to perform weighted
random choices with numpy's random.choice() function.
Argument: a MovesWithOdds dictionary of lists of string/integer tuples
Returns: a MovesWithWeights dictionary of pairs of a moveType-list with a
probabilites-list, the latter used by numpy's random.choice() function.'''
movesWithWeights = {}
for key in MovesWithOdds.keys():
moves = [ item[0] for item in MovesWithOdds[key] ]
odds = [ item[1] for item in MovesWithOdds[key] ]
weights = odds/np.sum(odds)
#print( key, moves, odds, weights)
movesWithWeights[key] = ( moves, weights)
return movesWithWeights
# make 'moves with weights' dictionary for vdW
movesWithOddsVdW = copy.deepcopy( parentMovesWithOdds)
movesWithOddsVdW['atomLabel'] = [ ('unIndexed',10), ('atom1',1)]
movesWithOddsVdW['bondLabel'] = [ ('unIndexed',1)]
movesWithWeightsVdW = movesWithWeightsFromOdds( movesWithOddsVdW)
# make 'moves with weights' dictionary for bonds
movesWithOddsBonds = copy.deepcopy( parentMovesWithOdds)
movesWithOddsBonds['atomLabel'] = [ ('unIndexed',10), ('atom1',1),('atom2',1)]
movesWithOddsBonds['bondLabel'] = [ ('unIndexed',10), ('bond1',1)]
movesWithWeightsBonds = movesWithWeightsFromOdds( movesWithOddsBonds)
# make 'moves with weights' dictionary for angles
movesWithOddsAngles = copy.deepcopy( parentMovesWithOdds)
movesWithOddsAngles['atomLabel'] = [ ('unIndexed',20), ('atom1',10),('atom2',1), ('atom3',10)]
movesWithOddsAngles['bondLabel'] = [ ('unIndexed',10), ('bond1',1),('bond2',1)]
movesWithWeightsAngles = movesWithWeightsFromOdds( movesWithOddsAngles)
# make 'moves with weights' dictionary for torsions
movesWithOddsTorsions = copy.deepcopy( parentMovesWithOdds)
movesWithOddsTorsions['atomLabel'] = [ ('unIndexed',20), ('atom1',10),('atom2',1), ('atom3',1),('atom4',10)]
movesWithOddsTorsions['bondLabel'] = [ ('unIndexed',20), ('bond1',10),('bond2',1), ('bond3',10)]
movesWithWeightsTorsions = movesWithWeightsFromOdds( movesWithOddsTorsions)
# make 'moves with weights' dictionary for impropers
movesWithOddsImpropers = copy.deepcopy( parentMovesWithOdds)
movesWithOddsImpropers['atomLabel'] = [ ('unIndexed',20), ('atom1',10),('atom2',1), ('atom3',10),('atom4',10)]
movesWithOddsImpropers['bondLabel'] = [ ('unIndexed',20), ('bond1',1),('bond2',1), ('bond3',1)]
movesWithWeightsImpropers = movesWithWeightsFromOdds( movesWithOddsImpropers)
testWeights = movesWithWeightsImpropers
for key in testWeights.keys():
print( key, testWeights[key][0], testWeights[key][1])
"""
Explanation: Make 'moves with weights' dictionaries specialized for each parameter type
Here we give the odds for performing a specific move within its class, where we will
make it less probable to perform some moves in preference to others within a class. For example,
within the bond angle parameter, we will set odds in the atomLabel moveType to make it
less probable to change the central atom of the angle compared to the end atoms, and those
in turn less probable compared to changing attached substituent atoms.
The movesWithOdds data structure is a list of tuples so that it is easier for a
human to read and modify. It is then processed by the function movesWithWeightsFromOdds
to turn it into a probabilities-based format usable by numpy's random.choice() function.
End of explanation
"""
# 'VdW', 'Bond', 'Angle', 'Torsion', 'Improper'
movesWithWeightsMaster = {}
movesWithWeightsMaster['VdW'] = movesWithWeightsVdW
movesWithWeightsMaster['Bond'] = movesWithWeightsBonds
movesWithWeightsMaster['Angle'] = movesWithWeightsAngles
movesWithWeightsMaster['Torsion'] = movesWithWeightsTorsions
movesWithWeightsMaster['Improper'] = movesWithWeightsImpropers
def PickMoveItemWithProb( moveType, moveWithWeights):
'''Picks a moveItem based on a moveType and a dictionary of moveTypes with associated probabilities
Arguments:
        moveType: string corresponding to a key in the moveWithWeights dictionary, e.g. 'atomLabel'
moveWithWeights: a dictionary based on moveType keys which each point to a list of probabilites
associated with the position in the list
Returns:
the randomly-chosen position in the list, based on the probability, together with the probability'''
listOfIndexes = range(0, len( moveWithWeights[moveType][1]) )
listIndex = np.random.choice(listOfIndexes, p= moveWithWeights[moveType][1])
return moveWithWeights[moveType][0][listIndex], moveWithWeights[moveType][1][listIndex]
"""
Explanation: Make master dict-of-dicts so that parameter type can choose the correct movesWithWeights dict
End of explanation
"""
# NBVAL_SKIP
movesWithWeightsTest = movesWithWeightsMaster['Torsion']
key = np.random.choice( list(movesWithWeightsTest.keys()) )
nSamples = 10000
print( nSamples, 'samples on moveType', key)
print( key, ' Moves: ', movesWithWeightsTest[key][0])
print( key, ' Weights:', movesWithWeightsTest[key][1])
counts = [0]*len(movesWithWeightsTest[key][1])
for turn in range(0, nSamples):
choice, prob = PickMoveItemWithProb( key, movesWithWeightsTest)
idx = movesWithWeightsTest[key][0].index(choice)
counts[ idx] += 1
print( key, ' Counts: ', counts)
def PropagateMoveTree( moveType, movesWithWeights, accumMoves, accumProb):
'''Expands a moveList by the input moveType, randomly picking a move
of that type from the list in movesWithWeights, biased by the weight
(probability) also from movesWithWeights. It incorporates that probability
into the accumulated probability that was passed it with the existing list
Arguments:
moveType: a string which is a key in the movesWithWeights dictionary
movesWithWeights: a dictionary of a list of allowed moves of a certain
moveType paired with a list of a probability associated with each move.
accumMoves: the list of moves (being strings) to be expanded by this function.
accumProb: the accumulated probability so far of the moves in accumMoves
Returns:
accumMoves: the list of moves (being strings) expanded by this function
accumProb: the revised accumulated probability of the moves in accumMoves
'''
choice, prob = PickMoveItemWithProb( moveType, movesWithWeights)
#print( 'before', choice, prob, accumProb)
accumMoves.append( choice)
accumProb *= prob
#print( 'after', choice, prob, accumProb)
return accumMoves, accumProb
def GenerateMoveTree( parameterType):
'''Generates a list of micro-moves describing how to attempt to change the chemical
graph associated with a parameter type. Each micro-move makes a weighted random
decision on some aspect of the overall move, which will be made by effecting each
of the micro-moves in the list.
Argument:
parameterType: this string refers to a force-field parameter type (e.g. 'Torsion')
and determines what kind of moveTypes, moves, and weights will be used in
        weighted random micro-moves
Returns:
moveTree: the list of micro-moves describing how to attempt to change the chemical
graph associated with a parameter type.
cumProb: the weights-biased probability of making the overall move, i.e. effecting
the list of micro-moves.'''
    movesWithWeights = movesWithWeightsMaster[parameterType]
    cumProb = 1.0
    moveTree = []
moveFlow = ['actionChoices', 'atmOrBond', 'whichLabel', 'ORorANDType']
for stage in moveFlow:
if stage=='whichLabel' and moveTree[-1]=='atom':
moveTree, cumProb = PropagateMoveTree( 'atomLabel', movesWithWeights, moveTree, cumProb)
elif stage=='whichLabel' and moveTree[-1]=='bond':
moveTree, cumProb = PropagateMoveTree( 'bondLabel', movesWithWeights, moveTree, cumProb)
else:
moveTree, cumProb = PropagateMoveTree( stage, movesWithWeights, moveTree, cumProb)
return moveTree, cumProb
"""
Explanation: Test to see if actual picks by PickMoveItemWithProb match target probabilities
End of explanation
"""
parameterType = 'Torsion'
nSamples = 10
for i in range(0,nSamples):
print( GenerateMoveTree( parameterType) )
"""
Explanation: Test GenerateMoveTree
End of explanation
"""
parameterType = 'Torsion'
nSamples = 10000
ofs = open('moveTrees.'+parameterType+'.txt','w')
for i in range(0,nSamples):
moveTree, prob = GenerateMoveTree( parameterType)
ofs.write( '%.6f ' % prob )
for microMove in moveTree:
ofs.write( '%s ' % microMove )
ofs.write( '\n' )
ofs.close()
"""
Explanation: Write a bunch of GenerateMoveTree moves to a file
End of explanation
"""
|
jorisroovers/machinelearning-playground | machine-learning/keras/simple.ipynb | apache-2.0 | # Imports
import numpy
import pandas
def generate_data():
# Generate Random Data
cluster_size = 1000 # number of data points in a cluster
dimensions = 2
# Cluster A random numbers
cA_offset = (5,5)
cA = pandas.DataFrame(numpy.random.rand(cluster_size, dimensions) + cA_offset, columns=["x", "y"])
# Cluster B random numbers
cB_offset = (1,1)
cB = pandas.DataFrame(numpy.random.rand(cluster_size, dimensions) + cB_offset, columns=["x", "y"])
# Cluster C random numbers
cC_offset = (5,1)
cC = pandas.DataFrame(numpy.random.rand(cluster_size, dimensions) + cC_offset, columns=["x", "y"])
# Assign targets to the clusters
cA["target"] = 0
cB["target"] = 1
cC["target"] = 2
return cA, cB, cC
cA, cB, cC = generate_data()
# Show sample data
cA.head()
# Plot a small subset of the data
%matplotlib inline
import matplotlib.pyplot as plt
# Use head() to only plot some points
plt.scatter(cA['x'].head(), cA['y'].head())
plt.scatter(cB['x'].head(), cB['y'].head())
plt.scatter(cC['x'].head(), cC['y'].head())
plt.xlim([0, 7]) # show x-scale from 0 to 7
plt.ylim([0, 7])
# Concat all the input data (required for our neural network)
# ignore_index=True reindexes instead of keeping the existing indexes
# This makes sure we don't have repeating indexes
input_data = pandas.concat([cA, cB, cC], ignore_index=True)
# Shuffle data set: https://stackoverflow.com/questions/29576430/shuffle-dataframe-rows
# This makes sure we don't bias our neural network on the order things are inputted
input_data = input_data.sample(frac=1).reset_index(drop=True)
input_data.head() # Show sample from input_data
"""
Explanation: Generating Sample Data
End of explanation
"""
import keras
# Before we build our model, we first need to manipulate our dataset a bit so that we can easily use it with Keras.
# For input layer, keras expects our dataset to be in the format: [[x1, y1], [x2, y2], [x3, y3], ...]
training_input = input_data[["x", "y"]].values
# For the output layer, keras expects the target classes to be one-hot encoded (https://en.wikipedia.org/wiki/One-hot)
# This basically means converting our list of target classes to a list with a vector for each of the integer classes,
# with a 1 on the position that corresponds to the integer representing the class
# e.g. [0, 1, 2, 0, ...] -> [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0], ...]
# Keras has a utility function to_categorical(...) that does exactly this
# Note: assumption is made that classes are labeled 0->n (starting at 0, no gaps)
training_output = keras.utils.to_categorical(input_data["target"].values)
training_input, training_output
from keras.models import Sequential, Model
from keras.layers import Input, Dense
# 1. BUILD MODEL
# Build model with 2 inputs and 3 outputs
# Use the softmax activation function to make sure our output vector has values between 0 and 1
# (= probabilities of belonging to the respective class)
# Without the softmax function, the outputs wouldn't be squeezed between 0 and 1.
a = Input(shape=(2,))
b = Dense(3, activation='softmax')(a)
model = Model(inputs=a, outputs=b)
# Alternative way to build model:
# model = Sequential()
# model.add(Dense(units=3, input_dim=2))
# 2. COMPILE MODEL
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# Loss options:
# - categorical_crossentropy -> For a multi-class classification problem
# - binary_crossentropy -> For a binary classification problem
# - mse -> For a mean squared error regression problem
# Optimizer options:
# - sgd -> Stochastic Gradient Descent
keras.utils.print_summary(model)
# Training model
model.fit(training_input, training_output, epochs=20)
# dimensions and cC_offset are local to generate_data(), so re-define them here
dimensions, cC_offset = 2, (5, 1)
test_dataset = numpy.random.rand(10, dimensions) + cC_offset
classes = model.predict(test_dataset)
classes
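Each row of `classes` is a probability vector over the three target classes. To turn those into hard class labels, take the per-row argmax; a small self-contained sketch (the array below stands in for any softmax output of `model.predict`):

```python
import numpy

# Stand-in for model.predict(...) output: each row sums to 1
classes = numpy.array([
    [0.1, 0.2, 0.7],
    [0.8, 0.1, 0.1],
])
# Index of the highest-probability class per sample
predicted_labels = numpy.argmax(classes, axis=1)
print(predicted_labels)  # e.g. class 2 for the first sample, class 0 for the second
```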
"""
Explanation: Building a Model
We're going to build a very simple neural network that looks like this
2 inputs -> [no hidden nodes] -> 3 outputs
The first input will correspond to the x-coordinate of each point, the second input to the y-coordinate.
The outputs will represent each of our target classes. In other words, the output will be a vector of dimension 3 containing probabilities that the given input corresponds to each of the target classes.
Example:
End of explanation
"""
# Generate a dataset to evaluate our model
evaluation_dataset = generate_data()
# Concat, shuffle (same as before, but with new dataset)
evaluation_dataset = pandas.concat([evaluation_dataset[0], evaluation_dataset[1] , evaluation_dataset[2]], ignore_index=True) # concat dataset
evaluation_dataset = evaluation_dataset.sample(frac=1).reset_index(drop=True) # Shuffle dataset
evaluation_dataset.head()
# Determine input, output for model
evaluation_input = evaluation_dataset[["x", "y"]].values
evaluation_output = keras.utils.to_categorical(evaluation_dataset["target"].values)
loss_and_metrics = model.evaluate(evaluation_input, evaluation_output, batch_size=128)
print(model.metrics_names[0], "=", loss_and_metrics[0])
print(model.metrics_names[1], "=", loss_and_metrics[1])
"""
Explanation: Model Evaluation
End of explanation
"""
|
zoltanctoth/bigdata-training | spark/Logistic Regression Example - without output.ipynb | gpl-2.0 | training = sqlContext.read.parquet("data/training.parquet")
test = sqlContext.read.parquet("data/test.parquet")
test.printSchema()
test.first()
"""
Explanation: Spark ML
Read training and test data. In this case test data is labeled as well (we will generate our label based on the arrdelay field)
End of explanation
"""
from pyspark.sql.types import DoubleType
from pyspark.sql.functions import udf
is_late = udf(lambda delay: 1.0 if delay > 0 else 0.0, DoubleType())
training = training.withColumn("is_late",is_late(training.arrdelay))
"""
Explanation: Generate label column for the training data
End of explanation
"""
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.ml import Pipeline
# Create feature vectors. Ignore arr_delay and it's derivate, is_late
feature_assembler = VectorAssembler(
inputCols=[x for x in training.columns if x not in ["is_late","arrdelay"]],
outputCol="features")
reg = LogisticRegression().setParams(
maxIter = 100,
labelCol="is_late",
predictionCol="prediction")
model = Pipeline(stages=[feature_assembler, reg]).fit(training)
[x for x in training.columns if x not in ["is_late","arrdelay"]]
"""
Explanation: Create and fit Spark ML model
End of explanation
"""
predicted = model.transform(test)
predicted.show()
predicted.select("is_late", "prediction").show()
"""
Explanation: Predict whether the aircraft will be late
End of explanation
"""
predicted = predicted.withColumn("is_late",is_late(predicted.arrdelay))
predicted.select("is_late", "prediction").show()
predicted.crosstab("is_late","prediction").show()
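Beyond the crosstab, overall accuracy is just the fraction of rows where label and prediction agree. A pure-Python sketch of that computation on a small stand-in sample (on the real Spark DataFrame the same ratio could be computed with `predicted.filter(predicted.is_late == predicted.prediction).count() / predicted.count()`):

```python
# Stand-in for a few collected (is_late, prediction) rows
is_late    = [1.0, 0.0, 1.0, 0.0]
prediction = [1.0, 0.0, 0.0, 0.0]

# Count rows where the label matches the prediction
correct = sum(a == b for a, b in zip(is_late, prediction))
accuracy = correct / len(is_late)
print(accuracy)  # fraction of correctly classified rows
```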
"""
Explanation: Check model performance
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/ca1574468d033ed7a4e04f129164b25b/20_cluster_1samp_spatiotemporal.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD-3-Clause
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
"""
Explanation: Permutation t-test on source data with spatio-temporal clustering
This example tests if the evoked response is significantly different between
two conditions across subjects. Here just for demonstration purposes
we simulate data from multiple subjects using one subject's data.
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
End of explanation
"""
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path / 'subjects'
src_fname = subjects_dir / 'fsaverage' / 'bem' / 'fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
"""
Explanation: Set parameters
End of explanation
"""
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
"""
Explanation: Read epochs for all channels, removing a bad one
End of explanation
"""
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50, npad='auto')
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50, npad='auto')
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep * 1000 # convert to milliseconds
"""
Explanation: Transform to source space
End of explanation
"""
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 6
print(f'Simulating data for {n_subjects} subjects.')
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
"""
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph
them to the same cortical space (e.g., fsaverage). For example purposes,
we will simulate this by just having each "subject" have the same
response (just noisy in source space) here.
<div class="alert alert-info"><h4>Note</h4><p>Note that for 6 subjects with a two-sided statistical test, the minimum
significance under a permutation test is only
   ``p = 1/(2 ** 6) = 0.015625``, which is large.</p></div>
End of explanation
"""
# Read the source space we are morphing to
src = mne.read_source_spaces(src_fname)
fsave_vertices = [s['vertno'] for s in src]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir).morph_mat
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
"""
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately (and you might want to use morph_data
instead), but here since all estimates are on 'sample' we can use one
morph matrix for all the heavy lifting.
End of explanation
"""
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
"""
Explanation: Finally, we want to compare the overall activity levels in each condition,
the diff is taken along the last axis (condition). The negative sign makes
it so condition1 > condition2 shows up as "red blobs" (instead of blue).
End of explanation
"""
print('Computing adjacency.')
adjacency = mne.spatial_src_adjacency(src)
"""
Explanation: Find adjacency matrix
For cluster-based permutation testing, we must define adjacency relations
that govern which points can become members of the same cluster. While
these relations are rather obvious for dimensions such as time or frequency
they require a bit more work for spatial dimension such as channels or
source vertices.
Here, to use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal).
But note that clustering still takes place along the
temporal dimension and can be
controlled via the max_step parameter in
:func:mne.stats.spatio_temporal_cluster_1samp_test.
If we wanted to specify an adjacency matrix for both space and time
explicitly we would have to use :func:mne.stats.combine_adjacency,
however for the present case, this is not needed.
End of explanation
"""
# Note that X needs to be a multi-dimensional array of shape
# observations (subjects) × time × space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Here we set a cluster forming threshold based on a p-value for
# the cluster based permutation test.
# We use a two-tailed threshold, the "1 - p_threshold" is needed
# because for two-tailed tests we must specify a positive threshold.
p_threshold = 0.001
df = n_subjects - 1 # degrees of freedom for the test
t_threshold = stats.distributions.t.ppf(1 - p_threshold / 2, df=df)
# Now let's actually do the clustering. This can take a long time...
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, adjacency=adjacency, n_jobs=None,
threshold=t_threshold, buffer_size=None,
verbose=True)
"""
Explanation: Compute statistic
End of explanation
"""
# Select the clusters that are statistically significant at p < 0.05
good_clusters_idx = np.where(cluster_p_values < 0.05)[0]
good_clusters = [clusters[idx] for idx in good_clusters_idx]
"""
Explanation: Selecting "significant" clusters
After performing the cluster-based permutationt test, you may wish to
select the observed clusters that can be considered statistically
significant under the permutation distribution. This can easily be
done using the code snippet below.
However, it is crucial to be aware that a statistically significant
observed cluster does not directly translate into statistical
significance of the channels, time points, frequency bins, etc. that
form the cluster!
For more information, see the FieldTrip tutorial.
.. include:: ../../links.inc
End of explanation
"""
print('Visualizing clusters.')
# Now let's build a convenient representation of our results, where consecutive
# cluster spatial maps are stacked in the time dimension of a SourceEstimate
# object. This way by moving through the time dimension we will be able to see
# subsequent cluster maps.
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration.
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(
hemi='both', views='lateral', subjects_dir=subjects_dir,
time_label='temporal extent (ms)', size=(800, 800),
smoothing_steps=5, clim=dict(kind='value', pos_lims=[0, 1, 40]))
# We could save this via the following:
# brain.save_image('clusters.png')
"""
Explanation: Visualize the clusters
End of explanation
"""
|
keras-team/keras-io | examples/vision/ipynb/nnclr.ipynb | apache-2.0 | !pip install tensorflow-datasets
"""
Explanation: Self-supervised contrastive learning with NNCLR
Author: Rishit Dagli<br>
Date created: 2021/09/13<br>
Last modified: 2021/09/13<br>
Description: Implementation of NNCLR, a self-supervised learning method for computer vision.
Introduction
Self-supervised learning
Self-supervised representation learning aims to obtain robust representations of samples
from raw data without expensive labels or annotations. Early methods in this field
focused on defining pretraining tasks which involved a surrogate task on a domain with ample
weak supervision labels. Encoders trained to solve such tasks are expected to
learn general features that might be useful for other downstream tasks requiring
expensive annotations like image classification.
Contrastive Learning
A broad category of self-supervised learning techniques are those that use contrastive
losses, which have been used in a wide range of computer vision applications like
image similarity,
dimensionality reduction (DrLIM)
and face verification/identification.
These methods learn a latent space that clusters positive samples together while
pushing apart negative samples.
NNCLR
In this example, we implement NNCLR as proposed in the paper
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations,
by Google Research and DeepMind.
NNCLR learns self-supervised representations that go beyond single-instance positives, which
allows for learning better features that are invariant to different viewpoints, deformations,
and even intra-class variations.
Clustering based methods offer a great approach to go beyond single instance positives,
but assuming the entire cluster to be positives could hurt performance due to early
over-generalization. Instead, NNCLR uses nearest neighbors in the learned representation
space as positives.
In addition, NNCLR increases the performance of existing contrastive learning methods like
SimCLR(Keras Example)
and reduces the reliance of self-supervised methods on data augmentation strategies.
Here is a great visualization by the paper authors showing how NNCLR builds on ideas from
SimCLR:
We can see that SimCLR uses two views of the same image as the positive pair. These two
views, which are produced using random data augmentations, are fed through an encoder to
obtain the positive embedding pair, we end up using two augmentations. NNCLR instead
keeps a support set of embeddings representing the full data distribution, and forms
the positive pairs using nearest-neighbours. A support set is used as memory during
training, similar to a queue (i.e. first-in-first-out) as in
MoCo.
This example requires TensorFlow 2.6 or higher, as well as tensorflow_datasets, which can
be installed with this command:
End of explanation
"""
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
"""
Explanation: Setup
End of explanation
"""
AUTOTUNE = tf.data.AUTOTUNE
shuffle_buffer = 5000
# The below two values are taken from https://www.tensorflow.org/datasets/catalog/stl10
labelled_train_images = 5000
unlabelled_images = 100000
temperature = 0.1
queue_size = 10000
contrastive_augmenter = {
"brightness": 0.5,
"name": "contrastive_augmenter",
"scale": (0.2, 1.0),
}
classification_augmenter = {
"brightness": 0.2,
"name": "classification_augmenter",
"scale": (0.5, 1.0),
}
input_shape = (96, 96, 3)
width = 128
num_epochs = 25
steps_per_epoch = 200
"""
Explanation: Hyperparameters
A greater queue_size most likely means better performance as shown in the original
paper, but introduces significant computational overhead. The authors show that the best
results of NNCLR are achieved with a queue size of 98,304 (the largest queue_size they
experimented on). We here use 10,000 to show a working example.
End of explanation
"""
dataset_name = "stl10"
def prepare_dataset():
unlabeled_batch_size = unlabelled_images // steps_per_epoch
labeled_batch_size = labelled_train_images // steps_per_epoch
batch_size = unlabeled_batch_size + labeled_batch_size
unlabeled_train_dataset = (
tfds.load(
dataset_name, split="unlabelled", as_supervised=True, shuffle_files=True
)
.shuffle(buffer_size=shuffle_buffer)
.batch(unlabeled_batch_size, drop_remainder=True)
)
labeled_train_dataset = (
tfds.load(dataset_name, split="train", as_supervised=True, shuffle_files=True)
.shuffle(buffer_size=shuffle_buffer)
.batch(labeled_batch_size, drop_remainder=True)
)
test_dataset = (
tfds.load(dataset_name, split="test", as_supervised=True)
.batch(batch_size)
.prefetch(buffer_size=AUTOTUNE)
)
train_dataset = tf.data.Dataset.zip(
(unlabeled_train_dataset, labeled_train_dataset)
).prefetch(buffer_size=AUTOTUNE)
return batch_size, train_dataset, labeled_train_dataset, test_dataset
batch_size, train_dataset, labeled_train_dataset, test_dataset = prepare_dataset()
"""
Explanation: Load the Dataset
We load the STL-10 dataset from
TensorFlow Datasets, an image recognition dataset for developing unsupervised
feature learning, deep learning, self-taught learning algorithms. It is inspired by the
CIFAR-10 dataset, with some modifications.
End of explanation
"""
class RandomResizedCrop(layers.Layer):
def __init__(self, scale, ratio):
super(RandomResizedCrop, self).__init__()
self.scale = scale
self.log_ratio = (tf.math.log(ratio[0]), tf.math.log(ratio[1]))
def call(self, images):
batch_size = tf.shape(images)[0]
height = tf.shape(images)[1]
width = tf.shape(images)[2]
random_scales = tf.random.uniform((batch_size,), self.scale[0], self.scale[1])
random_ratios = tf.exp(
tf.random.uniform((batch_size,), self.log_ratio[0], self.log_ratio[1])
)
new_heights = tf.clip_by_value(tf.sqrt(random_scales / random_ratios), 0, 1)
new_widths = tf.clip_by_value(tf.sqrt(random_scales * random_ratios), 0, 1)
height_offsets = tf.random.uniform((batch_size,), 0, 1 - new_heights)
width_offsets = tf.random.uniform((batch_size,), 0, 1 - new_widths)
bounding_boxes = tf.stack(
[
height_offsets,
width_offsets,
height_offsets + new_heights,
width_offsets + new_widths,
],
axis=1,
)
images = tf.image.crop_and_resize(
images, bounding_boxes, tf.range(batch_size), (height, width)
)
return images
"""
Explanation: Augmentations
Other self-supervised techniques like SimCLR,
BYOL, SwAV etc.
rely heavily on a well-designed data augmentation pipeline to get the best performance.
However, NNCLR is less dependent on complex augmentations as nearest-neighbors already
provide richness in sample variations. A few common techniques often included
in augmentation pipelines are:
Random resized crops
Multiple color distortions
Gaussian blur
Since NNCLR is less dependent on complex augmentations, we will only use random
crops and random brightness for augmenting the input images.
Random Resized Crops
End of explanation
"""
class RandomBrightness(layers.Layer):
def __init__(self, brightness):
super(RandomBrightness, self).__init__()
self.brightness = brightness
def blend(self, images_1, images_2, ratios):
return tf.clip_by_value(ratios * images_1 + (1.0 - ratios) * images_2, 0, 1)
def random_brightness(self, images):
# random interpolation/extrapolation between the image and darkness
return self.blend(
images,
0,
tf.random.uniform(
(tf.shape(images)[0], 1, 1, 1), 1 - self.brightness, 1 + self.brightness
),
)
def call(self, images):
images = self.random_brightness(images)
return images
"""
Explanation: Random Brightness
End of explanation
"""
def augmenter(brightness, name, scale):
return keras.Sequential(
[
layers.Input(shape=input_shape),
layers.Rescaling(1 / 255),
layers.RandomFlip("horizontal"),
RandomResizedCrop(scale=scale, ratio=(3 / 4, 4 / 3)),
RandomBrightness(brightness=brightness),
],
name=name,
)
"""
Explanation: Prepare augmentation module
End of explanation
"""
def encoder():
return keras.Sequential(
[
layers.Input(shape=input_shape),
layers.Conv2D(width, kernel_size=3, strides=2, activation="relu"),
layers.Conv2D(width, kernel_size=3, strides=2, activation="relu"),
layers.Conv2D(width, kernel_size=3, strides=2, activation="relu"),
layers.Conv2D(width, kernel_size=3, strides=2, activation="relu"),
layers.Flatten(),
layers.Dense(width, activation="relu"),
],
name="encoder",
)
"""
Explanation: Encoder architecture
Using a ResNet-50 as the encoder architecture
is standard in the literature. In the original paper, the authors use ResNet-50 as
the encoder architecture and spatially average the outputs of ResNet-50. However, keep in
mind that more powerful models will not only increase training time but will also
require more memory and will limit the maximal batch size you can use. For the purpose of
this example, we just use four convolutional layers.
End of explanation
"""
class NNCLR(keras.Model):
def __init__(
self, temperature, queue_size,
):
super(NNCLR, self).__init__()
self.probe_accuracy = keras.metrics.SparseCategoricalAccuracy()
self.correlation_accuracy = keras.metrics.SparseCategoricalAccuracy()
self.contrastive_accuracy = keras.metrics.SparseCategoricalAccuracy()
self.probe_loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
self.contrastive_augmenter = augmenter(**contrastive_augmenter)
self.classification_augmenter = augmenter(**classification_augmenter)
self.encoder = encoder()
self.projection_head = keras.Sequential(
[
layers.Input(shape=(width,)),
layers.Dense(width, activation="relu"),
layers.Dense(width),
],
name="projection_head",
)
self.linear_probe = keras.Sequential(
[layers.Input(shape=(width,)), layers.Dense(10)], name="linear_probe"
)
self.temperature = temperature
feature_dimensions = self.encoder.output_shape[1]
self.feature_queue = tf.Variable(
tf.math.l2_normalize(
tf.random.normal(shape=(queue_size, feature_dimensions)), axis=1
),
trainable=False,
)
def compile(self, contrastive_optimizer, probe_optimizer, **kwargs):
super(NNCLR, self).compile(**kwargs)
self.contrastive_optimizer = contrastive_optimizer
self.probe_optimizer = probe_optimizer
def nearest_neighbour(self, projections):
support_similarities = tf.matmul(
projections, self.feature_queue, transpose_b=True
)
nn_projections = tf.gather(
self.feature_queue, tf.argmax(support_similarities, axis=1), axis=0
)
return projections + tf.stop_gradient(nn_projections - projections)
def update_contrastive_accuracy(self, features_1, features_2):
features_1 = tf.math.l2_normalize(features_1, axis=1)
features_2 = tf.math.l2_normalize(features_2, axis=1)
similarities = tf.matmul(features_1, features_2, transpose_b=True)
batch_size = tf.shape(features_1)[0]
contrastive_labels = tf.range(batch_size)
self.contrastive_accuracy.update_state(
tf.concat([contrastive_labels, contrastive_labels], axis=0),
tf.concat([similarities, tf.transpose(similarities)], axis=0),
)
def update_correlation_accuracy(self, features_1, features_2):
features_1 = (
features_1 - tf.reduce_mean(features_1, axis=0)
) / tf.math.reduce_std(features_1, axis=0)
features_2 = (
features_2 - tf.reduce_mean(features_2, axis=0)
) / tf.math.reduce_std(features_2, axis=0)
batch_size = tf.shape(features_1, out_type=tf.float32)[0]
cross_correlation = (
tf.matmul(features_1, features_2, transpose_a=True) / batch_size
)
feature_dim = tf.shape(features_1)[1]
correlation_labels = tf.range(feature_dim)
self.correlation_accuracy.update_state(
tf.concat([correlation_labels, correlation_labels], axis=0),
tf.concat([cross_correlation, tf.transpose(cross_correlation)], axis=0),
)
def contrastive_loss(self, projections_1, projections_2):
projections_1 = tf.math.l2_normalize(projections_1, axis=1)
projections_2 = tf.math.l2_normalize(projections_2, axis=1)
similarities_1_2_1 = (
tf.matmul(
self.nearest_neighbour(projections_1), projections_2, transpose_b=True
)
/ self.temperature
)
similarities_1_2_2 = (
tf.matmul(
projections_2, self.nearest_neighbour(projections_1), transpose_b=True
)
/ self.temperature
)
similarities_2_1_1 = (
tf.matmul(
self.nearest_neighbour(projections_2), projections_1, transpose_b=True
)
/ self.temperature
)
similarities_2_1_2 = (
tf.matmul(
projections_1, self.nearest_neighbour(projections_2), transpose_b=True
)
/ self.temperature
)
batch_size = tf.shape(projections_1)[0]
contrastive_labels = tf.range(batch_size)
loss = keras.losses.sparse_categorical_crossentropy(
tf.concat(
[
contrastive_labels,
contrastive_labels,
contrastive_labels,
contrastive_labels,
],
axis=0,
),
tf.concat(
[
similarities_1_2_1,
similarities_1_2_2,
similarities_2_1_1,
similarities_2_1_2,
],
axis=0,
),
from_logits=True,
)
self.feature_queue.assign(
tf.concat([projections_1, self.feature_queue[:-batch_size]], axis=0)
)
return loss
def train_step(self, data):
(unlabeled_images, _), (labeled_images, labels) = data
images = tf.concat((unlabeled_images, labeled_images), axis=0)
augmented_images_1 = self.contrastive_augmenter(images)
augmented_images_2 = self.contrastive_augmenter(images)
with tf.GradientTape() as tape:
features_1 = self.encoder(augmented_images_1)
features_2 = self.encoder(augmented_images_2)
projections_1 = self.projection_head(features_1)
projections_2 = self.projection_head(features_2)
contrastive_loss = self.contrastive_loss(projections_1, projections_2)
gradients = tape.gradient(
contrastive_loss,
self.encoder.trainable_weights + self.projection_head.trainable_weights,
)
self.contrastive_optimizer.apply_gradients(
zip(
gradients,
self.encoder.trainable_weights + self.projection_head.trainable_weights,
)
)
self.update_contrastive_accuracy(features_1, features_2)
self.update_correlation_accuracy(features_1, features_2)
preprocessed_images = self.classification_augmenter(labeled_images)
with tf.GradientTape() as tape:
features = self.encoder(preprocessed_images)
class_logits = self.linear_probe(features)
probe_loss = self.probe_loss(labels, class_logits)
gradients = tape.gradient(probe_loss, self.linear_probe.trainable_weights)
self.probe_optimizer.apply_gradients(
zip(gradients, self.linear_probe.trainable_weights)
)
self.probe_accuracy.update_state(labels, class_logits)
return {
"c_loss": contrastive_loss,
"c_acc": self.contrastive_accuracy.result(),
"r_acc": self.correlation_accuracy.result(),
"p_loss": probe_loss,
"p_acc": self.probe_accuracy.result(),
}
def test_step(self, data):
labeled_images, labels = data
preprocessed_images = self.classification_augmenter(
labeled_images, training=False
)
features = self.encoder(preprocessed_images, training=False)
class_logits = self.linear_probe(features, training=False)
probe_loss = self.probe_loss(labels, class_logits)
self.probe_accuracy.update_state(labels, class_logits)
return {"p_loss": probe_loss, "p_acc": self.probe_accuracy.result()}
"""
Explanation: The NNCLR model for contrastive pre-training
We train an encoder on unlabeled images with a contrastive loss. A nonlinear projection
head is attached on top of the encoder, as it improves the quality of the encoder's
representations.
End of explanation
"""
model = NNCLR(temperature=temperature, queue_size=queue_size)
model.compile(
contrastive_optimizer=keras.optimizers.Adam(),
probe_optimizer=keras.optimizers.Adam(),
)
pretrain_history = model.fit(
train_dataset, epochs=num_epochs, validation_data=test_dataset
)
"""
Explanation: Pre-train NNCLR
We train the network using a temperature of 0.1, as suggested in the paper, and
a queue_size of 10,000 as explained earlier. We use Adam as both our contrastive and probe
optimizer. For this example we train the model for only 30 epochs, but it should be
trained longer for better performance.
The following two metrics can be used for monitoring the pretraining performance,
which we also log (taken from
this Keras example):
Contrastive accuracy: a self-supervised metric, the ratio of cases in which the
representation of an image is more similar to the representation of its differently
augmented version than to the representation of any other image in the current batch.
Self-supervised metrics can be used for hyperparameter tuning even when there are no
labeled examples.
Linear probing accuracy: linear probing is a popular metric for evaluating self-supervised
classifiers. It is computed as the accuracy of a logistic regression classifier trained
on top of the encoder's features. In our case, this is done by training a single dense
layer on top of the frozen encoder. Note that, contrary to the traditional approach where
the classifier is trained after the pretraining phase, in this example we train it during
pretraining. This might slightly decrease its accuracy, but that way we can monitor its
value during training, which helps with experimentation and debugging.
End of explanation
"""
finetuning_model = keras.Sequential(
[
layers.Input(shape=input_shape),
augmenter(**classification_augmenter),
model.encoder,
layers.Dense(10),
],
name="finetuning_model",
)
finetuning_model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
)
finetuning_history = finetuning_model.fit(
labeled_train_dataset, epochs=num_epochs, validation_data=test_dataset
)
"""
Explanation: Evaluate our model
A popular way to evaluate an SSL method in computer vision (or, for that matter, any
other pre-training method) is to learn a linear classifier on the frozen features of the
trained backbone model and evaluate the classifier on unseen images. Other methods often
include fine-tuning on the source dataset, or even on a target dataset with only 5% or 10%
of the labels present. You can use the backbone we just trained for any downstream task
such as image classification (as we do here), segmentation or detection, where the
backbone models are usually pre-trained with supervised learning.
End of explanation
"""
|
peterwittek/qml-rg | Archiv_Session_Spring_2017/Exercises/10_CIFAR with sklearn.ipynb | gpl-3.0 | import math
import os
from matplotlib import pyplot as plt
import numpy as np
from six.moves import cPickle
from sklearn import manifold
from tools import CifarLoader
# General parameters for classification
n_neighbors = 30
n_components = 2
"""
Explanation: CIFAR embedding through sklearn
A simple program to check how the different algorithms available through sklearn do at classifying images of cats and dogs. As expected, the problem is not simple enough for these programs to work. To be able to distinguish between the two, a more complex routine will be needed.
End of explanation
"""
def plotting(X, Y):
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(X[:, 0],X[:, 1], c='b', label='Cats')
ax1.scatter(Y[:, 0], Y[:, 1], c='r', label='Dogs')
plt.legend(loc='upper left');
return plt.show()
"""
Explanation: A little function for plotting
End of explanation
"""
catsdogs = CifarLoader(path = 'data_batch_1')
cats_and_dogs = catsdogs.cats + catsdogs.dogs
n_cats, n_dogs = len(catsdogs.cats), len(catsdogs.dogs)
dims = cats_and_dogs[0].shape
cats_and_dogs = np.array(cats_and_dogs).reshape((n_cats + n_dogs, np.prod(dims)))  # flatten each image to one row
"""
Explanation: Let's use Patrick's program to load the CIFAR images. We load the images and reshape them into flat numpy arrays, as needed for the following steps.
End of explanation
"""
print("Computing Isomap embedding")
iso = manifold.Isomap(n_neighbors, n_components).fit_transform(cats_and_dogs)
print("Done.")
plotting(iso[:n_cats], iso[n_cats:])
"""
Explanation: Let's first try the Isomap embedding
End of explanation
"""
print("Computing Spectral embedding")
emb = manifold.LocallyLinearEmbedding(n_neighbors, n_components, eigen_solver='auto',
method = 'standard').fit_transform(cats_and_dogs)
print("Done.")
plotting(emb[:n_cats], emb[n_cats:])
"""
Explanation: Now let's do locally linear embedding (the code uses LocallyLinearEmbedding). There are different methods for this routine; however, 'standard' is the one giving the fewest problems!
End of explanation
"""
print("Computing t-SNE")
tSNE = manifold.TSNE(n_components=n_components, init='pca', random_state=0).fit_transform(cats_and_dogs)
print("Done.")
plotting(tSNE[:n_cats], tSNE[n_cats:])
"""
Explanation: Finally t-SNE
End of explanation
"""
|
ShinjiKatoA16/UCSY-sw-eng | Python-7 Input and Output.ipynb | mit | fd = open('README.md', 'r')
print(fd.readline(), end='') # \n is included in input string
for s in fd: # file object(descriptor) is iterable, and can be used in for loop
print(s.strip()) # strip() removes extra space and \n
# print(s.split()) # convert string to List
"""
Explanation: I/O
File I/O
Similar to fopen() in C, the open(), close(), read(), write() and seek() calls are supported; beyond these, readline() and readlines() are also available.
A file object can be treated as an iterator, which is perhaps the more common usage.
The print() function can write to a file with the file= option. open(), close(), readline() and print() are the most commonly used functions for text I/O.
The readline() function returns one line of data including the trailing \n, and returns '' (a zero-length string) at EOF (End Of File).
End of explanation
"""
s = '100'
print(int(s)+1)
s = '1 2 3'
for i in map(int, s.split()):
print(i)
s = '1 2 3 4'
x = list(map(int, s.split()))
print(x)
y = list()
for i in s.split(): # ['1', '2', '3', '4']
y.append(int(i))
print(y)
"""
Explanation: Data conversion
The int() function converts a string or float to an integer, str() converts to a string, and float() converts to a float.
map(func, iterable) applies the function to each element of the iterable and returns an iterable object.
End of explanation
"""
# need to import sys module to use standard I/O file descriptor
import sys
print('input something: ')
# jupyter does not handle stdin ?
# sys.stdin is used as a file-descriptor
s = sys.stdin.readline()
print(s.split()) # convert input to List
"""
Explanation: Standard In, Standard Out and Standard Error
sys.stdin, sys.stdout and sys.stderr are file descriptors which are already opened for application programs.
Standard-in is assigned to the keyboard, and standard-out and standard-error are assigned to the display by default. These assignments can be overridden when executing the program from a shell:
$ program_name <input_file
$ program_name >output_file
$ program_name <input_file >output_file
$ program_name 2>error_file
End of explanation
"""
|
sisnkemp/deep-learning | intro-to-rnns/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold: if the overall gradient norm is larger than the threshold, the gradients are scaled down so that their norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network will predict the next character. We can then use the new character to predict the one after it, and keep doing this to generate all-new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
!pip install watermark matplotlib numpy
%load_ext watermark
%watermark -v -m -a "Lilian Besson" -g -p matplotlib,numpy
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#A-short-study-of-Rényi-entropy" data-toc-modified-id="A-short-study-of-Rényi-entropy-1"><span class="toc-item-num">1 </span>A short study of Rényi entropy</a></div><div class="lev2 toc-item"><a href="#Requirements" data-toc-modified-id="Requirements-11"><span class="toc-item-num">1.1 </span>Requirements</a></div><div class="lev2 toc-item"><a href="#Utility-functions" data-toc-modified-id="Utility-functions-12"><span class="toc-item-num">1.2 </span>Utility functions</a></div><div class="lev2 toc-item"><a href="#Definition,-common-and-special-cases" data-toc-modified-id="Definition,-common-and-special-cases-13"><span class="toc-item-num">1.3 </span>Definition, common and special cases</a></div><div class="lev2 toc-item"><a href="#Plotting-some-values" data-toc-modified-id="Plotting-some-values-14"><span class="toc-item-num">1.4 </span>Plotting some values</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-15"><span class="toc-item-num">1.5 </span>Conclusion</a></div>
# A short study of Rényi entropy
I want to study here the Rényi entropy, using [Python](https://www.python.org/).
I will define a function implementing $H_{\alpha}(X)$, from the given formula, for discrete random variables, and check the influence of the parameter $\alpha$,
$$ H_{\alpha}(X) := \frac{1}{1-\alpha} \log_2\left(\sum_{i=1}^n p_i^{\alpha}\right),$$
where $X$ has $n$ possible values, and the $i$-th outcome has probability $p_i\in[0,1]$.
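As a quick self-contained sanity check of the formula: for a fair coin, $p=(1/2,1/2)$, the Rényi entropy is exactly 1 bit for every order $\alpha \neq 1$:

```python
import numpy as np

# Fair coin: sum_i p_i^alpha = 2 * (1/2)^alpha, so
# H_alpha = log2(2^(1-alpha)) / (1 - alpha) = 1 bit for any alpha != 1.
p = np.array([0.5, 0.5])
for alpha in (0.5, 2.0, 5.0):
    H = np.log2(np.sum(p ** alpha)) / (1.0 - alpha)
    assert abs(H - 1.0) < 1e-12
```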
- *Reference*: [this blog post by John D. Cook](https://www.johndcook.com/blog/2018/11/21/renyi-entropy/), [this Wikipedia page](https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy) and [this page on MathWorld](http://mathworld.wolfram.com/RenyiEntropy.html),
- *Author*: [Lilian Besson](https://perso.crans.org/besson/)
- *License*: [MIT License](https://lbesson.mit-license.org/)
- *Date*: 22nd of November, 2018
## Requirements
End of explanation
"""
X1 = np.array([0.25, 0.5, 0.25])
X2 = np.array([0.1, 0.25, 0.3, 0.35])  # fixed: with 0.45 the sum was 1.1, not 1
X3 = np.array([0.0, 0.5, 0.5])
X4 = np.full(100, 1/100)
X5 = np.full(1000, 1/1000)
X6 = np.arange(100, dtype=float)
X6 /= np.sum(X6)
"""
Explanation: Utility functions
We start by giving six examples of such vectors $X=(p_i)_{1\leq i \leq n}$, i.e. discrete probability distributions on $n$ values.
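Since these are probability vectors, each should have non-negative entries summing to 1. A self-contained sanity check, redefining a few of the vectors so the snippet runs on its own:

```python
import numpy as np

X1 = np.array([0.25, 0.5, 0.25])
X3 = np.array([0.0, 0.5, 0.5])
X4 = np.full(100, 1 / 100)

for X in (X1, X3, X4):
    assert np.all(X >= 0) and np.isclose(np.sum(X), 1.0)
```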
End of explanation
"""
np.seterr(all="ignore")
def x_log2_x(x):
    """ Return x * log2(x), and 0 if x is 0."""
    results = x * np.log2(x)
    if np.size(x) == 1:
        if np.isclose(x, 0.0):
            results = 0.0
    else:
        results[np.isclose(x, 0.0)] = 0.0
    return results
"""
Explanation: We need a function to safely compute $x \mapsto x \log_2(x)$, with special care in case $x=0$. This one will accept a numpy array or a single value as argument:
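The special-casing matters because, in IEEE floating-point arithmetic, the naive expression `0 * log2(0)` evaluates to `0 * (-inf)`, which is `nan` rather than the limit value `0`:

```python
import numpy as np

with np.errstate(all="ignore"):   # silence the divide/invalid warnings
    naive = 0.0 * np.log2(0.0)    # 0 * (-inf) -> nan, not 0

assert np.isnan(naive)
```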
End of explanation
"""
x_log2_x(0)
x_log2_x(0.5)
x_log2_x(1)
x_log2_x(2)
x_log2_x(10)
"""
Explanation: For examples:
End of explanation
"""
x_log2_x(X1)
x_log2_x(X2)
x_log2_x(X3)
x_log2_x(X4)[:10]
x_log2_x(X5)[:10]
x_log2_x(X6)[:10]
"""
Explanation: and with vectors, slots with $p_i=0$ are handled without error:
End of explanation
"""
def renyi_entropy(alpha, X):
    """Compute the Rényi entropy H_alpha(X), in bits, of a discrete distribution X."""
    assert alpha >= 0, "Error: renyi_entropy only accepts values of alpha >= 0, but alpha = {}.".format(alpha)  # DEBUG
    X = np.asarray(X, dtype=float)  # accept plain lists as well as numpy arrays
    if np.isinf(alpha):
        # XXX Min entropy!
        return - np.log2(np.max(X))
    elif np.isclose(alpha, 0):
        # XXX Max entropy!
        return np.log2(len(X))
    elif np.isclose(alpha, 1):
        # XXX Shannon entropy!
        return - np.sum(x_log2_x(X))
    else:
        return (1.0 / (1.0 - alpha)) * np.log2(np.sum(X ** alpha))
# Curried version
def renyi_entropy_2(alpha):
    def re(X):
        return renyi_entropy(alpha, X)
    return re
# Vectorized version: evaluate the entropy for a whole array of alpha values
def renyi_entropy_3(alphas, X):
    res = np.zeros_like(alphas)
    for i, alpha in enumerate(alphas):
        res[i] = renyi_entropy(alpha, X)
    return res
"""
Explanation: Definition, common and special cases
From the mathematical definition, an issue arises when $\alpha=1$ (division by zero) or $\alpha=\infty$, so we handle these cases, along with $\alpha=0$, manually.
$X$ is here given as the vector of $(p_i)_{1\leq i \leq n}$.
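These special cases are the standard limits of $H_{\alpha}(X)$:

```latex
\begin{aligned}
H_0(X) &= \log_2 n
  && \text{(max-entropy, or Hartley entropy; strictly, $n$ should count only outcomes with $p_i > 0$)} \\
H_1(X) &= \lim_{\alpha \to 1} H_\alpha(X) = -\sum_{i=1}^n p_i \log_2 p_i
  && \text{(Shannon entropy)} \\
H_\infty(X) &= \lim_{\alpha \to \infty} H_\alpha(X) = -\log_2 \max_{1 \leq i \leq n} p_i
  && \text{(min-entropy)}
\end{aligned}
```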
End of explanation
"""
alphas = np.linspace(0, 10, 1000)
renyi_entropy_3(alphas, X1)[:10]
def plot_renyi_entropy(alphas, X):
    fig = plt.figure()
    plt.plot(alphas, renyi_entropy_3(alphas, X))
    plt.xlabel(r"Value for $\alpha$")
    plt.ylabel(r"Value for $H_{\alpha}(X)$")
    plt.title(r"Rényi entropy for $X={}$".format(X[:10]))
    plt.show()
    # return fig
plot_renyi_entropy(alphas, X1)
plot_renyi_entropy(alphas, X2)
plot_renyi_entropy(alphas, X3)
plot_renyi_entropy(alphas, X4)
plot_renyi_entropy(alphas, X5)
plot_renyi_entropy(alphas, X6)
"""
Explanation: Plotting some values
End of explanation
"""
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM5-0
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:04
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution of the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition, then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
billzhao1990/CS231n-Spring-2017 | assignment2/Dropout.ipynb | mit | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
"""
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
"""
np.random.seed(231)
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
"""
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
End of explanation
"""
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error: ', rel_error(dx, dx_num))
"""
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
"""
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
"""
# Train identical nets with different amounts of dropout (0 means no dropout)
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.5, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Regularization experiment
As an experiment, we will train two-layer networks on 500 training examples with dropout values of 0, 0.5, and 0.75. We will then visualize the training and validation accuracies of these networks over time.
End of explanation
"""
|
GoogleCloudPlatform/dialogflow-email-agent-demo | Training_Data_for_Signature_Extraction.ipynb | apache-2.0 | ! pip install bs4 lxml
from bs4 import BeautifulSoup
import lxml
import html
import pandas as pd
import random
import re
import json
"""
Explanation: This Colab uses the BC3: British Columbia Conversation Corpora to generate a training dataset for Google Cloud Vertex AI Entity Extraction to train an email signature extraction model.
Load Python Modules
First, let's load some packages to help with parsing the XML file provided by the University of British Columbia
End of explanation
"""
! wget https://www.cs.ubc.ca/cs-research/lci/research-groups/natural-language-processing/bc3/bc3.1.0.zip
! unzip bc3.1.0.zip
"""
Explanation: Download Email Data
End of explanation
"""
with open("./corpus.xml", "r") as file:
soup = BeautifulSoup(file, "lxml")
# regex to remove prior thread
regex = re.compile(r'On .* wrote: .*', flags=re.DOTALL)
email_soup = soup.find_all('text')
emails = []
for email in email_soup:
# email_text = email.text.replace("\n"," ") # remove new lines
email_text = re.sub(regex, '', email.text) # remove prior thread
emails.append(html.unescape(email_text))
"""
Explanation: Load Email Data from XML
End of explanation
"""
with open(f'./bc3_emails.jsonl', 'w') as outfile:
for e in emails:
json.dump(
{
"textContent": e
}, outfile)
outfile.write('\n')
"""
Explanation: Write Email Data to JSONL
Vertex AI will allow you to import a jsonl file as training data automatically, and it also handles the train/test split.
End of explanation
"""
|
karlstroetmann/Algorithms | Python/Chapter-08/Heapsort-Performance.ipynb | gpl-2.0 | def swap(A, i, j):
A[i], A[j] = A[j], A[i]
"""
Explanation: This notebook implements an array-based version of Heapsort.
Heapsort
The function call swap(A, i, j) takes an array A and two indexes i and j and exchanges the elements at these indexes.
End of explanation
"""
def sink(A, k, n):
while 2 * k + 1 <= n:
j = 2 * k + 1
if j + 1 <= n and A[j] > A[j + 1]:
j += 1
if A[k] < A[j]:
return
swap(A, k, j)
k = j
"""
Explanation: The procedure sink takes three arguments.
- A is the array representing the heap.
- k is an index into the array A.
- n is the upper bound of the part of this array that has to be transformed into a heap.
The array A itself might actually have more than $n+1$ elements, but for the
purpose of the method sink we restrict our attention to the subarray
A[k:n+1].
When calling sink, the assumption is that A[k:n+1] should represent a heap
that possibly has its heap condition violated at its root, i.e. at index k. The
purpose of the procedure sink is to restore the heap condition at index k.
- We compute the index j of the left child of the node at index k.
- We check whether there also is a right child at position j + 1.
This is the case if j + 1 <= n. If the right child has the higher
priority, i.e. if it is smaller, we set j to j + 1, so that j now
points at the smaller child.
- Next, we check whether the heap condition is satisfied at index k,
i.e. whether A[k] < A[j]. If it is, there is nothing left to do and the
procedure returns. Otherwise, the element at position k is swapped with
the element at position j.
Of course, after this swap it is possible that the heap condition is
violated at position j. Therefore, k is set to j and the while-loop continues
as long as the node at position k has at least one child, i.e. as long as
2 * k + 1 <= n.
End of explanation
"""
def heap_sort(A):
n = len(A) - 1
for k in range((n + 1) // 2 - 1, -1, -1):
sink(A, k, n)
while n >= 1:
swap(A, 0, n)
n -= 1
sink(A, 0, n)
"""
Explanation: The function call heap_sort(A) has the task to sort the array A and proceeds in two phases.
- In phase one our goal is to transform the array A into a heap that is stored in A.
In order to do so, we traverse the array A in reverse in a loop.
The invariant of this loop is that before
sink is called, all trees rooted at an index greater than
k satisfy the heap condition. Initially this is true because the trees that
are rooted at indices greater than $(n + 1) // 2 - 1$ are trivial, i.e. they only
consist of their root node.
In order to maintain the invariant for index k, sink is called with
argument k, since at this point, the tree rooted at index k satisfies
the heap condition except possibly at the root. It is then the job of $\texttt{sink}$ to
establish the heap condition at index k. If the element at the root has a
priority that is too low, sink ensures that this element sinks down in the tree
as far as necessary.
- In phase two we remove the elements from the heap one-by-one and insert them at the end of
the array.
When the while-loop starts, the array A contains a heap. Therefore,
the smallest element is found at the root of the heap. Since we want to sort the
array A in descending order, we move this element to the end of the array A and,
in exchange, move the element from the end of the array A to the front.
After this exchange, the sublist A[0:n] represents a heap, except that the
heap condition might now be violated at the root. Next, we decrement n, since the
last element of the array A is already in its correct position.
In order to reestablish the heap condition at the root, we call sink with index
0.
End of explanation
"""
import random as rnd
def isOrdered(L):
for i in range(len(L) - 1):
assert L[i] >= L[i+1]
from collections import Counter
def sameElements(L, S):
assert Counter(L) == Counter(S)
"""
Explanation: Testing
End of explanation
"""
def testSort(n, k):
for i in range(n):
L = [ rnd.randrange(2*k) for x in range(k) ]
oldL = L[:]
heap_sort(L)
isOrdered(L)
sameElements(L, oldL)
assert len(L) == len(oldL)
print('.', end='')
print()
print("All tests successful!")
%%time
testSort(100, 20_000)
"""
Explanation: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
End of explanation
"""
%%time
k = 1000_000
L = [ rnd.randrange(2 * k) for x in range(k) ]
heap_sort(L)
"""
Explanation: Next, we sort a million random integers. It is not as fast as merge sort, but we do not need an auxiliary array and hence we don't need additional storage.
End of explanation
"""
|
ewulczyn/ewulczyn.github.io | ipython/ab_testing_with_multinomial_data/ab_testing_with_multinomial_data.ipynb | mit | def plot_donation_amounts(counts):
keys = list(counts.keys())
values = list(counts.values())
fig = plt.figure(figsize=(15, 6))
ind = 1.5*np.arange(len(keys)) # the x locations for the groups
a_rects = plt.bar(ind, values, align='center', facecolor ='yellow', edgecolor='gray')
plt.xticks(ind)
def autolabel(rects):
# attach some text labels
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x()+rect.get_width()/2., 1.01*height, '%d'%int(height),
ha='center', va='bottom')
autolabel(a_rects)
plt.xlabel('Donation Amounts')
plt.ylabel('Amount Frequencies')
amounts = {3.0: 419, 5.0: 307, 10.0: 246, 20.0: 163, 30.0: 89, 50.0: 38, 100.0: 23, 1.0: 9, 15.0: 7, 25.0: 3, 2.0: 2, 200.0: 1, 9.0: 1, 18.0: 1, 48.0: 1, 35.0: 1, 52.0: 1, 36.0: 1}
plot_donation_amounts(amounts)
"""
Explanation: Beautifully Bayesian: How to do AB Testing with Discrete Rewards
In previous posts, I have discussed methods for AB testing when we are interested only in the success rate of a design. It is often the case, however, that success rates don't tell the whole story. In the case of banner ads, a successful conversion can lead to different amounts of reward. So while one design might give a high success rate but low reward per success, another design might have a low success rate but very high reward per success. We often care most about the expected reward of a design. If we just compare success rates and choose the design with the higher rate, we are not necessarily choosing the option with the higher expected reward. This post discusses how to do AB testing when the reward is limited to a set of fixed values. In particular, I demonstrate Bayesian computational techniques for determining:
- the probability that one design gives a higher expected reward than another
- confidence intervals for the percent difference in expected reward between two designs
A Motivating Example
To motivate the methods, let me give you a concrete use case. The Wikimedia Foundation (WMF) does extensive AB testing on their fundraising banners. Usually, the fundraising team only compares banners based on the fraction of users who made a donation (i.e. the success rate) because they are optimizing for a broad donor base.
As the foundation's budget continues to grow, it has become more important to increase the donation amount (i.e. the reward) per user as well. There are several factors that influence how much people give, including the ask string, the set of suggested donation amounts and the payment processor options. In order to increase the revenue per user in a principled way, it is necessary to AB test the expected donation amount per user for different designs.
Fundraising banners offer a potential donor a small set of suggested donation amounts. The image below shows a sample banner:
<!--
include sample banner
-->
In this case, there are 7 discrete amount choices given. There is, of course, the implicit choice of \$0, which a user exercises when they do not donate at all. In addition to the set of fixed choices, there is an option to enter a custom amount. In practice, only 2% of donors use the custom amount option. The image below gives an example of the empirical distribution over positive donation amounts from a recent test. Even visually, we get the impression that the fixed amounts dominate the character of the distribution.
End of explanation
"""
import numpy as np
from numpy.random import dirichlet
from numpy.random import multinomial
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd  # needed later for pd.DataFrame in the boxplot cell
# Set of reward values for banner A
values_a = np.array([0.0, 2.0, 3.0, 4.0])
# True probability of each reward for banner A
p_a = np.array([0.4, 0.3, 0.2, 0.1])
# True expected return per user for banner A
return_a = p_a.dot(values_a.transpose())
def run_banner(p, n):
"""
Simulate running a banner for n users
with true probability vector over rewards p.
Returns a vector of counts for how often each
reward occurred.
"""
return multinomial(n, p, size=1)[0]
def get_posterior_expected_reward_sample(alpha, counts, values, n=40000):
"""
To sample from the posterior distribution over revenue per user,
first draw a sample from the posterior distribution over the the
vector of probabilities for each reward value. Second, take the dot product
with the vector of reward values.
"""
dirichlet_sample = dirichlet(counts + alpha, n)
return dirichlet_sample.dot(values.transpose())
#lets assume we know nothing about our banners and take a uniform prior
alpha = np.ones(values_a.shape)
#simulate running the banner 1000 times
counts = run_banner(p_a, 1000)
# get a sample from the distribution over expected revenue per user
return_distribution = get_posterior_expected_reward_sample(alpha, counts, values_a)
# plot the posterior distribution agains the true value
fig, ax = plt.subplots()
ax.hist(return_distribution, bins = 40, alpha = 0.6, normed = True, label = 'n = 1000')
ax.axvline(x=return_a, color = 'b')
plt.xlabel('Expected Reward per User')
"""
Explanation: A clear choice for modeling the distribution over fixed donation amounts is the multinomial distribution. For modeling custom donations amounts, we could consider a continuous distribution over the set of positive numbers such as the log-normal distribution. Then we could model the distribution of over both fixed and custom amounts as a mixture between the two. For now, I will focus on just modeling the distribution over fixed amounts as they make up the vast majority of donations.
A primer on Bayesian Stats and Monte Carlo Methods
The key to doing Bayesian AB testing is to be able to sample from the joint posterior distribution $\mathcal P \left({p_a, p_b | Data}\right)$ over the parameters $p_a$, $p_b$ governing the data generating distributions. Note that in the Bayesian setting, we consider the unknown parameters of our data generating distribution as random variables. In our case study, the parameters of interest are the vectors parameterizing the multinomial distributions over donation amounts for each banner.
You can get a numeric representation of the distribution over a function $f$ of your parameters $\mathcal P \left({ f(p_a, p_b) | Data}\right)$ by sampling from $\mathcal P \left({p_a, p_b | Data}\right)$ and applying the function $f$ to each sample $(p_a, p_b)$. With these numeric representations, it is possible to find confidence intervals over $f(p_a, p_b|Data)$ and do numerical integration over regions of the parameter space that are of interest. <!-- such as $\mathcal P \left({f(p_a) > f(p_b) | Data }\right)$ . --> In many cases, $p_a$ is independent of $p_b$. This is true in our fundraising example where users are split into non-overlapping treatment groups and the users in one treatment group do not affect users in the other. A common scenario where independence does not hold, even for separate treatment groups, is when the groups share a common pool of resources to choose from. To give a concrete example, say you are a hotel booking site. AB testing aspects of the booking process can be hard since booking outcomes between the two groups interact: if a user in one group books a vacancy, that booking is no longer available to any other user.
When the parameters of interest are independent, then the joint posterior distribution factors into the product of the individual posterior distributions:
\[
\mathcal P \left({ p_a, p_b | Data }\right) = \mathcal P \left({p_a | Data_a }\right) * \mathcal P \left({p_b | Data_b }\right)
\]
This means we can generate a draw from the joint posterior distribution by drawing once from each individual posterior distribution.
Getting a Distribution Over the Expected Revenue per User
When running a banner, we observe data from a multinomial distribution parameterized by a vector $p$ where $p_i$ corresponds to the probability of getting a reward $r_i$ from a user. We do not know the value of $p$. Our goal is to estimate $p$ based on the data we observe. The Bayesian way is not to find a point estimate of $p$, but to define a distribution over $p$ that captures our level of uncertainty in the true value of $p$ after taking into account the data that we observe.
The Dirichlet distribution can be interpreted as a distribution over multinomial distributions and is the way we characterize our uncertainty in $p$. The Dirichlet distribution is also a conjugate prior of the multinomial distribution. This means that if our prior distribution over $p$ is $Dirichlet(\alpha)$, then after observing count $c_i$ for each reward value, our posterior distribution over $p$ is $Dirichlet(\alpha + c)$.
The expected reward of a banner is given by $R = p^T \cdot v$, the dot product between the vector of reward values and the vector of probabilities over reward values. We can sample from the posterior distribution over the expected revenue per user by sampling from the $Dirichlet(\alpha + c)$ distribution and then taking a dot product between that sample and the vector of values $v$.
From Theory to Code
To make the above theory more concrete, let's simulate running a single banner and use the ideas from above to model the expected return per user.
End of explanation
"""
counts = run_banner(p_a, 10000)
return_distribution = get_posterior_expected_reward_sample(alpha, counts, values_a)
ax.hist(return_distribution, bins = 40, alpha = 0.6, normed = True, label = 'n = 10000')
ax.axvline(x=return_a, color = 'b')
ax.legend()
fig
"""
Explanation: As we run the banner longer and more users enter the experiment, we should expect the distribution to concentrate in a tighter interval around the true return:
End of explanation
"""
def get_credible_interval(dist, confidence):
lower = np.percentile(dist, (100.0 - confidence) /2.0)
upper = np.percentile(dist, 100.0 - (100.0 - confidence) /2.0)
return lower, upper
def interval_covers(interval,true_value):
"""
Check if the credible interval covers the true parameter
"""
if interval[1] < true_value:
return 0
if interval[0] > true_value:
return 0
return 1
# Simulate multiple runs and count what fraction of the time the interval covers
confidence = 95
iters = 10000
cover_count = 0.0
for i in range(iters):
#simulate running the banner
counts = run_banner(p_a, 1000)
# get the posterior distribution over rewards per user
return_distribution = get_posterior_expected_reward_sample(alpha, counts, values_a)
# form a credible interval over reward per user
return_interval = get_credible_interval(return_distribution, confidence)
# record if the interval covered the true reward per user
cover_count+= interval_covers(return_interval, return_a)
print ("%d%% credible interval covers true return %.3f%% of the time" %(confidence, 100*(cover_count/iters)))
"""
Explanation: Introducing Credible Intervals
One of the most useful things we can do with our posterior distribution over revenue per user is to build credible intervals. A 95% credible interval represents an interval that contains the true expected revenue per user with 95% certainty. The simplest way to generate credible intervals from a sample is to use quantiles. The quantile-based 95% probability interval is merely the 2.5% and 97.5% quantiles of the sample. We could get more sophisticated by trying to find the tightest interval that contains 95% of the samples. This may give quite different intervals than the quantile method when the distribution is asymmetric. For now we will stick with the quantile-based method. As a sanity check, let's see if the method I describe will lead to 95% credible intervals that cover the true expected revenue per user 95% of the time. We will do this by repeatedly simulating user choices, building 95% credible intervals and recording if the interval covers the true expected revenue per user.
End of explanation
"""
# Set of returns for treatment B
values_b = np.array([0.0, 2.5, 3.0, 5.0])
# True probability of each reward for treatment B
p_b = np.array([0.60, 0.10, 0.12, 0.18])
# True expected reward per user for banner B
return_b = p_b.dot(values_b)
# simulate running both banners
counts_a = run_banner(p_a, 1000)
return_distribution_a = get_posterior_expected_reward_sample(alpha, counts_a, values_a)
counts_b = run_banner(p_b, 1000)
return_distribution_b = get_posterior_expected_reward_sample(alpha, counts_b, values_b)
#plot the posterior distributions
plt.figure()
plt.hist(return_distribution_a, bins = 40, alpha = 0.4, normed = True, label = 'A')
plt.axvline(x=return_a, color = 'b')
plt.hist(return_distribution_b, bins = 40, alpha = 0.4, normed = True, label = 'B')
plt.axvline(x=return_b, color = 'g')
plt.xlabel('Expected Revenue per User')
plt.legend()
#compute the probability that banner A is better than banner B
prob_a_better = (return_distribution_a > return_distribution_b).mean()
print ("P(R_A > R_B) = %0.4f" % prob_a_better)
"""
Explanation: Now you know how to generate the posterior distribution $\mathcal P \left({R |Data}\right)$ over the expected revenue per user of a single banner and sample from it. This is the basic building block for the next section on comparing the performance of banners.
My Banner Is Better Than Yours
The simplest question one can ask in an AB test is: What is the probability that design A is better than design B? More specifically, what is the probability that $R_a$, the expected reward of A, is greater than $R_b$, the expected reward of B? Mathematically, this corresponds to integrating the joint posterior distribution over expected reward per user $\mathcal P \left({R_a, R_b | Data}\right)$ over the region $R_a > R_b$. Computationally, this corresponds to sampling from $\mathcal P \left({R_a, R_b | Data}\right)$ and computing for what fraction of samples $(r_a, r_b)$ we find that $r_a > r_b$. Since $R_a$ and $R_b$ are independent, we sample from $\mathcal P \left({R_a, R_b | Data}\right)$ by sampling from $\mathcal P \left({R_a | Data}\right)$ and $\mathcal P \left({R_b | Data}\right)$ individually. In the code below, I introduce a banner B, simulate running both banners on 1000 users and then compute the probability of A being the winner.
End of explanation
"""
results = []
for sample_size in range(500, 5000, 500):
for i in range(1000):
counts_a = run_banner(p_a, sample_size)
return_distribution_a = get_posterior_expected_reward_sample(alpha, counts_a, values_a)
counts_b = run_banner(p_b, sample_size)
return_distribution_b = get_posterior_expected_reward_sample(alpha, counts_b, values_b)
prob_a_better = (return_distribution_a > return_distribution_b).mean()
results.append({'n':sample_size, 'p': prob_a_better})
sns.boxplot(x='n', y = 'p', data = pd.DataFrame(results))
plt.xlabel('sample size')
plt.ylabel('P(R_A > R_B)')
"""
Explanation: On this particular run, banner A did better than expected and banner B did worse than expected. Judging from the data we observe, we are 97% certain that banner A is better than banner B. Looking at one particular run, is not particularly instructive. For a given true difference $R_a - R_b$ in expected rewards, the key factor that influences $\mathcal P \left({R_a > R_b | Data}\right)$ is the sample size of the test. The following plot characterizes the the distribution over our estimates of $\mathcal P \left({R_a > R_b | Data}\right)$ for different sample sizes.
End of explanation
"""
true_percent_difference = 100 * ((return_a - return_b) / return_b)
# simulate running both banners
counts_a = run_banner(p_a, 4000)
return_distribution_a = get_posterior_expected_reward_sample(alpha, counts_a, values_a)
counts_b = run_banner(p_b, 4000)
return_distribution_b = get_posterior_expected_reward_sample(alpha, counts_b, values_b)
#compute distribution over percent differences
percent_difference_distribution = 100* ((return_distribution_a - return_distribution_b) / return_distribution_b)
#plot the posterior distributions
plt.figure()
plt.hist(percent_difference_distribution, bins = 40, alpha = 0.6, normed = True)
plt.axvline(x=true_percent_difference, color = 'b')
plt.xlabel('Percent Difference')
#compute the probability that banner A is better than banner B
lower, upper = get_credible_interval(percent_difference_distribution, 95)
print ("The percent lift that A has over B lies in the interval (%0.3f, %0.3f) with 95%% certainty" % (lower, upper))
"""
Explanation: The plot illustrates an intuitive fact: the larger your sample, the more sure you will be about which banner is better. For the given ground truth distributions over rewards, we see that after 500 samples per treatment, the median certainty that banner A is better is 80%, with an interquartile range between 55% and 95%. After 4500 samples per treatment, the median certainty is 99.5% and the interquartile range spans 96.8% to 99.93%. Being able to assign a level of certainty about whether one design is better than another is a great first step. The next question you will likely want to answer is how much better it is.
How much better?
One approach is to compare the absolute difference in expected rewards per user. If you need to communicate your results to other people, it is preferable to have a scale-free measure of difference, such as the percent difference. In this section, I will describe how to build confidence intervals for the percent difference between designs. We can get a sample from the distribution over the percent difference $\mathcal P \left({ 100 * \frac{R_a - R_b}{ R_b}| Data }\right)$ by taking a sample $(r_a, r_b)$ from $\mathcal P \left({R_a, R_b| Data }\right)$ and applying the percent difference function to it. Once we have a distribution, we can form confidence intervals using quantiles as described above.
End of explanation
"""
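The `get_credible_interval` helper called in the cells above is not defined in this excerpt. A minimal quantile-based version consistent with how it is used might look like this (an assumption for illustration, not the author's original code):

```python
import numpy as np

def get_credible_interval(samples, confidence):
    # central interval: trim (100 - confidence) / 2 percent off each tail
    tail = (100.0 - confidence) / 2.0
    lower = np.percentile(samples, tail)
    upper = np.percentile(samples, 100.0 - tail)
    return lower, upper
```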
# Simulate multiple runs and count what fraction of the time the interval covers
confidence = 95
iters = 10000
cover_count = 0.0
for i in range(iters):
#simulate running the banner
counts_a = run_banner(p_a, 4000)
counts_b = run_banner(p_b, 4000)
# get the posterior distribution over percent difference
return_distribution_a = get_posterior_expected_reward_sample(alpha, counts_a, values_a)
return_distribution_b = get_posterior_expected_reward_sample(alpha, counts_b, values_b)
percent_difference_distribution = 100* ((return_distribution_a - return_distribution_b) / return_distribution_b)
# get credible interval
interval = get_credible_interval(percent_difference_distribution, confidence)
# record if the interval covered the true reward per user
cover_count+= interval_covers(interval, true_percent_difference)
print ("%d%% credible interval covers true percent difference %.3f%% of the time" %(confidence, 100*(cover_count/iters)))
"""
Explanation: To test the accuracy of the method, we can repeat the exercise from above: repeatedly generate confidence intervals and check whether our x% intervals cover the true percent difference x% of the time. You will see that they do:
End of explanation
"""
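The `interval_covers` helper used in the coverage loop above is also not defined in this excerpt; a minimal version consistent with its usage (again an assumption):

```python
def interval_covers(interval, value):
    # True when value lies inside the (lower, upper) interval, inclusive
    lower, upper = interval
    return lower <= value <= upper
```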
|
statsmodels/statsmodels.github.io | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | bsd-3-clause | %matplotlib inline
from statsmodels.compat import lmap
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import statsmodels.api as sm
"""
Explanation: M-Estimators for Robust Linear Modeling
End of explanation
"""
norms = sm.robust.norms
def plot_weights(support, weights_func, xlabels, xticks):
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(support, weights_func(support))
ax.set_xticks(xticks)
ax.set_xticklabels(xlabels, fontsize=16)
ax.set_ylim(-0.1, 1.1)
return ax
"""
Explanation: An M-estimator minimizes the function
$$Q(e_i, \rho) = \sum_i~\rho \left (\frac{e_i}{s}\right )$$
where $\rho$ is a symmetric function of the residuals
The effect of $\rho$ is to reduce the influence of outliers
$s$ is an estimate of scale.
The robust estimates $\hat{\beta}$ are computed by the iteratively re-weighted least squares algorithm
We have several choices available for the weighting functions to be used
End of explanation
"""
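The iteratively re-weighted least squares idea can be sketched in miniature for the simplest case, a robust location estimate with Huber weights. This is an illustrative toy, not statsmodels' actual implementation (here the scale is simply held fixed at the standardized MAD):

```python
import numpy as np

def huber_weights(e, t=1.345):
    # Huber's t: weight 1 for |e| <= t, t/|e| beyond that
    a = np.abs(e)
    return np.where(a <= t, 1.0, t / np.maximum(a, 1e-12))

def irls_location(x, n_iter=50):
    # iteratively re-weighted least squares for a robust location estimate;
    # the scale s stays fixed at the standardized MAD for simplicity
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    s = np.median(np.abs(x - mu)) / 0.6744897501960817  # MAD / Phi^{-1}(0.75)
    for _ in range(n_iter):
        w = huber_weights((x - mu) / s)
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

With an extreme outlier such as 500 in [1, 2, 3, 4, 500], the plain mean is dragged to 102 while this estimate stays near 3.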
help(norms.AndrewWave.weights)
a = 1.339
support = np.linspace(-np.pi * a, np.pi * a, 100)
andrew = norms.AndrewWave(a=a)
plot_weights(
support, andrew.weights, ["$-\pi*a$", "0", "$\pi*a$"], [-np.pi * a, 0, np.pi * a]
)
"""
Explanation: Andrew's Wave
End of explanation
"""
help(norms.Hampel.weights)
c = 8
support = np.linspace(-3 * c, 3 * c, 1000)
hampel = norms.Hampel(a=2.0, b=4.0, c=c)
plot_weights(support, hampel.weights, ["-3*c", "0", "3*c"], [-3 * c, 0, 3 * c])
"""
Explanation: Hampel's 17A
End of explanation
"""
help(norms.HuberT.weights)
t = 1.345
support = np.linspace(-3 * t, 3 * t, 1000)
huber = norms.HuberT(t=t)
plot_weights(support, huber.weights, ["-3*t", "0", "3*t"], [-3 * t, 0, 3 * t])
"""
Explanation: Huber's t
End of explanation
"""
help(norms.LeastSquares.weights)
support = np.linspace(-3, 3, 1000)
lst_sq = norms.LeastSquares()
plot_weights(support, lst_sq.weights, ["-3", "0", "3"], [-3, 0, 3])
"""
Explanation: Least Squares
End of explanation
"""
help(norms.RamsayE.weights)
a = 0.3
support = np.linspace(-3 * a, 3 * a, 1000)
ramsay = norms.RamsayE(a=a)
plot_weights(support, ramsay.weights, ["-3*a", "0", "3*a"], [-3 * a, 0, 3 * a])
"""
Explanation: Ramsay's Ea
End of explanation
"""
help(norms.TrimmedMean.weights)
c = 2
support = np.linspace(-3 * c, 3 * c, 1000)
trimmed = norms.TrimmedMean(c=c)
plot_weights(support, trimmed.weights, ["-3*c", "0", "3*c"], [-3 * c, 0, 3 * c])
"""
Explanation: Trimmed Mean
End of explanation
"""
help(norms.TukeyBiweight.weights)
c = 4.685
support = np.linspace(-3 * c, 3 * c, 1000)
tukey = norms.TukeyBiweight(c=c)
plot_weights(support, tukey.weights, ["-3*c", "0", "3*c"], [-3 * c, 0, 3 * c])
"""
Explanation: Tukey's Biweight
End of explanation
"""
x = np.array([1, 2, 3, 4, 500])
"""
Explanation: Scale Estimators
Robust estimates of the location
End of explanation
"""
x.mean()
"""
Explanation: The mean is not a robust estimator of location
End of explanation
"""
np.median(x)
"""
Explanation: The median, on the other hand, is a robust estimator with a breakdown point of 50%
End of explanation
"""
x.std()
"""
Explanation: Analogously for the scale
The standard deviation is not robust
End of explanation
"""
stats.norm.ppf(0.75)
print(x)
sm.robust.scale.mad(x)
np.array([1, 2, 3, 4, 5.0]).std()
"""
Explanation: Median Absolute Deviation
$$\operatorname{median}_i \left| X_i - \operatorname{median}_j(X_j) \right|$$
The standardized Median Absolute Deviation is a consistent estimator $\hat{\sigma}$ for the standard deviation
$$\hat{\sigma}=K \cdot MAD$$
where $K$ depends on the distribution. For the normal distribution, for example,
$$K = \frac{1}{\Phi^{-1}(.75)} \approx 1.4826$$
End of explanation
"""
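Written out by hand, the standardized MAD is just the raw MAD divided by $\Phi^{-1}(.75)$. A small illustrative sketch (equivalent in spirit to `sm.robust.scale.mad`, which remains the authoritative implementation):

```python
import numpy as np
from scipy import stats

def standardized_mad(x):
    # K * MAD with K = 1 / Phi^{-1}(0.75), consistent for sigma under normality
    x = np.asarray(x, dtype=float)
    mad = np.median(np.abs(x - np.median(x)))
    return mad / stats.norm.ppf(0.75)
```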
sm.robust.scale.iqr(x)
"""
Explanation: Another robust estimator of scale is the Interquartile Range (IQR)
$$\left(\hat{X}_{0.75} - \hat{X}_{0.25}\right),$$
where $\hat{X}_{p}$ is the sample p-th quantile.
The standardized IQR, given by $K \cdot \text{IQR}$ for
$$K = \frac{1}{\Phi^{-1}(.75) - \Phi^{-1}(.25)} \approx 0.74,$$
is a consistent estimator of the standard deviation for normal data.
End of explanation
"""
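The standardized IQR can likewise be written out directly; a small sketch for illustration (assuming NumPy's default linear quantile interpolation, and deferring to `sm.robust.scale.iqr` in practice):

```python
import numpy as np
from scipy import stats

def standardized_iqr(x):
    # K * IQR with K = 1 / (Phi^{-1}(0.75) - Phi^{-1}(0.25)) ~= 0.7413
    q25, q75 = np.percentile(x, [25, 75])
    K = 1.0 / (stats.norm.ppf(0.75) - stats.norm.ppf(0.25))
    return K * (q75 - q25)
```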
sm.robust.scale.qn_scale(x)
"""
Explanation: The IQR is less robust than the MAD in the sense that it has a lower breakdown point: it can withstand 25\% outlying observations before being completely ruined, whereas the MAD can withstand 50\% outlying observations. However, the IQR is better suited for asymmetric distributions.
Yet another robust estimator of scale is the $Q_n$ estimator, introduced in Rousseeuw & Croux (1993), 'Alternatives to the Median Absolute Deviation'. The $Q_n$ estimator is given by
$$
Q_n = K \left\lbrace \vert X_{i} - X_{j}\vert : i<j\right\rbrace_{(h)}
$$
where $h\approx (1/4){{n}\choose{2}}$ and $K$ is a given constant. In words, the $Q_n$ estimator is the normalized $h$-th order statistic of the absolute differences of the data. The normalizing constant $K$ is usually chosen as 2.219144, to make the estimator consistent for the standard deviation in the case of normal data. The $Q_n$ estimator has a 50\% breakdown point and an 82\% asymptotic efficiency at the normal distribution, much higher than the 37\% efficiency of the MAD.
End of explanation
"""
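A naive $O(n^2)$ version of $Q_n$ can be written straight from the definition above. This is a sketch for illustration only; it takes $h$ as Rousseeuw & Croux's $\binom{\lfloor n/2\rfloor + 1}{2}$, and the efficient `sm.robust.scale.qn_scale` should be preferred in practice:

```python
import numpy as np
from math import comb

def qn_naive(x, K=2.219144):
    # K times the h-th smallest of all pairwise absolute differences |x_i - x_j|
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    h = comb(n // 2 + 1, 2)  # Rousseeuw & Croux's h = C(floor(n/2) + 1, 2)
    return K * np.sort(diffs)[h - 1]  # h-th smallest, 1-indexed
```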
np.random.seed(12345)
fat_tails = stats.t(6).rvs(40)
kde = sm.nonparametric.KDEUnivariate(fat_tails)
kde.fit()
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.density)
print(fat_tails.mean(), fat_tails.std())
print(stats.norm.fit(fat_tails))
print(stats.t.fit(fat_tails, f0=6))
huber = sm.robust.scale.Huber()
loc, scale = huber(fat_tails)
print(loc, scale)
sm.robust.mad(fat_tails)
sm.robust.mad(fat_tails, c=stats.t(6).ppf(0.75))
sm.robust.scale.mad(fat_tails)
"""
Explanation: The default for Robust Linear Models is MAD
another popular choice is Huber's proposal 2
End of explanation
"""
from statsmodels.graphics.api import abline_plot
from statsmodels.formula.api import ols, rlm
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
print(prestige.head(10))
fig = plt.figure(figsize=(12, 12))
ax1 = fig.add_subplot(211, xlabel="Income", ylabel="Prestige")
ax1.scatter(prestige.income, prestige.prestige)
xy_outlier = prestige.loc["minister", ["income", "prestige"]]
ax1.annotate("Minister", xy_outlier, xy_outlier + 1, fontsize=16)
ax2 = fig.add_subplot(212, xlabel="Education", ylabel="Prestige")
ax2.scatter(prestige.education, prestige.prestige)
ols_model = ols("prestige ~ income + education", prestige).fit()
print(ols_model.summary())
infl = ols_model.get_influence()
student = infl.summary_frame()["student_resid"]
print(student)
print(student.loc[np.abs(student) > 2])
print(infl.summary_frame().loc["minister"])
sidak = ols_model.outlier_test("sidak")
sidak.sort_values("unadj_p", inplace=True)
print(sidak)
fdr = ols_model.outlier_test("fdr_bh")
fdr.sort_values("unadj_p", inplace=True)
print(fdr)
rlm_model = rlm("prestige ~ income + education", prestige).fit()
print(rlm_model.summary())
print(rlm_model.weights)
"""
Explanation: Duncan's Occupational Prestige data - M-estimation for outliers
End of explanation
"""
dta = sm.datasets.get_rdataset("starsCYG", "robustbase", cache=True).data
from matplotlib.patches import Ellipse
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(
111,
xlabel="log(Temp)",
ylabel="log(Light)",
title="Hertzsprung-Russell Diagram of Star Cluster CYG OB1",
)
ax.scatter(*dta.values.T)
# highlight outliers
e = Ellipse((3.5, 6), 0.2, 1, alpha=0.25, color="r")
ax.add_patch(e)
ax.annotate(
"Red giants",
xy=(3.6, 6),
xytext=(3.8, 6),
arrowprops=dict(facecolor="black", shrink=0.05, width=2),
horizontalalignment="left",
verticalalignment="bottom",
clip_on=True, # clip to the axes bounding box
fontsize=16,
)
# annotate these with their index
for i, row in dta.loc[dta["log.Te"] < 3.8].iterrows():
ax.annotate(i, row, row + 0.01, fontsize=14)
xlim, ylim = ax.get_xlim(), ax.get_ylim()
from IPython.display import Image
Image(filename="star_diagram.png")
y = dta["log.light"]
X = sm.add_constant(dta["log.Te"], prepend=True)
ols_model = sm.OLS(y, X).fit()
abline_plot(model_results=ols_model, ax=ax)
rlm_mod = sm.RLM(y, X, sm.robust.norms.TrimmedMean(0.5)).fit()
abline_plot(model_results=rlm_mod, ax=ax, color="red")
"""
Explanation: Hertzsprung-Russell data for Star Cluster CYG OB1 - Leverage Points
Data is on the luminosity and temperature of 47 stars in the direction of Cygnus.
End of explanation
"""
infl = ols_model.get_influence()
h_bar = 2 * (ols_model.df_model + 1) / ols_model.nobs
hat_diag = infl.summary_frame()["hat_diag"]
hat_diag.loc[hat_diag > h_bar]
sidak2 = ols_model.outlier_test("sidak")
sidak2.sort_values("unadj_p", inplace=True)
print(sidak2)
fdr2 = ols_model.outlier_test("fdr_bh")
fdr2.sort_values("unadj_p", inplace=True)
print(fdr2)
"""
Explanation: Why? Because M-estimators are not robust to leverage points.
End of explanation
"""
l = ax.lines[-1]
l.remove()
del l
weights = np.ones(len(X))
weights[X[X["log.Te"] < 3.8].index.values - 1] = 0
wls_model = sm.WLS(y, X, weights=weights).fit()
abline_plot(model_results=wls_model, ax=ax, color="green")
"""
Explanation: Let's delete that line
End of explanation
"""
yy = y.values[:, None]
xx = X["log.Te"].values[:, None]
"""
Explanation: MM-estimators are good for this type of problem; unfortunately, we do not have these in statsmodels yet.
It's being worked on, but it gives a good excuse to look at the R cell magics in the notebook.
End of explanation
"""
params = [-4.969387980288108, 2.2531613477892365] # Computed using R
print(params[0], params[1])
abline_plot(intercept=params[0], slope=params[1], ax=ax, color="red")
"""
Explanation: Note: The R code and the results in this notebook have been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and robustbase 0.93.
```ipython
%load_ext rpy2.ipython
%R library(robustbase)
%Rpush yy xx
%R mod <- lmrob(yy ~ xx);
%R params <- mod$coefficients;
%Rpull params
```
ipython
%R print(mod)
Call:
lmrob(formula = yy ~ xx)
\--> method = "MM"
Coefficients:
(Intercept) xx
-4.969 2.253
End of explanation
"""
np.random.seed(12345)
nobs = 200
beta_true = np.array([3, 1, 2.5, 3, -4])
X = np.random.uniform(-20, 20, size=(nobs, len(beta_true) - 1))
# stack a constant in front
X = sm.add_constant(X, prepend=True) # np.c_[np.ones(nobs), X]
mc_iter = 500
contaminate = 0.25 # percentage of response variables to contaminate
all_betas = []
for i in range(mc_iter):
y = np.dot(X, beta_true) + np.random.normal(size=200)
random_idx = np.random.randint(0, nobs, size=int(contaminate * nobs))
y[random_idx] = np.random.uniform(-750, 750)
beta_hat = sm.RLM(y, X).fit().params
all_betas.append(beta_hat)
all_betas = np.asarray(all_betas)
se_loss = lambda x: np.linalg.norm(x, ord=2) ** 2
se_beta = lmap(se_loss, all_betas - beta_true)
"""
Explanation: Exercise: Breakdown points of M-estimator
End of explanation
"""
np.array(se_beta).mean()
all_betas.mean(0)
beta_true
se_loss(all_betas.mean(0) - beta_true)
"""
Explanation: Squared error loss
End of explanation
"""
|
google/rba | Standard Regression (BQML).ipynb | apache-2.0 | ###########################################################################
#
# Copyright 2021 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This solution, including any related sample code or data, is made available
# on an “as is,” “as available,” and “with all faults” basis, solely for
# illustrative purposes, and without warranty or representation of any kind.
# This solution is experimental, unsupported and provided solely for your
# convenience. Your use of it is subject to your agreements with Google, as
# applicable, and may constitute a beta feature as defined under those
# agreements. To the extent that you make any data available to Google in
# connection with your use of the solution, you represent and warrant that you
# have all necessary and appropriate rights, consents and permissions to permit
# Google to use and process that data. By using any portion of this solution,
# you acknowledge, assume and accept all risks, known and unknown, associated
# with its usage, including with respect to your deployment of any portion of
# this solution in your systems, or usage in connection with your business,
# if at all.
###########################################################################
"""
Explanation: Standard Regression (BQML)
End of explanation
"""
################################################################################
######################### CHANGE BQ PROJECT NAME BELOW #########################
################################################################################
project_name = '' #add proj name and dataset
# Google credentials authentication libraries
from google.colab import auth
auth.authenticate_user()
# BigQuery Magics
'''
BigQuery magics are used to run BigQuery SQL queries in a python environment.
These queries can also be run in the BigQuery UI
'''
from google.cloud import bigquery
from google.cloud.bigquery import magics
magics.context.project = project_name #update project name
client = bigquery.Client(project=magics.context.project)
%load_ext google.cloud.bigquery
bigquery.USE_LEGACY_SQL = False
# data processing libraries
import numpy as np
import pandas as pd
# modeling and metrics
from statsmodels.stats.stattools import durbin_watson
import statsmodels.api as sm
!pip install relativeImp
from relativeImp import relativeImp
# visualization
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: 0) Dependencies
End of explanation
"""
################################################################################
######################### CHANGE BQ PROJECT NAME BELOW #########################
################################################################################
%%bigquery df
SELECT *
FROM `.RBA_demo.cleaned_data`; #update project name
df.columns
df.describe()
"""
Explanation: 1) Import dataset
Import the data using the BigQuery magics (the %% command).
This pulls all of the data from the cleaned data table and stores it in a dataframe, "df".
End of explanation
"""
################################################################################
######################### CHANGE BQ PROJECT NAME BELOW #########################
################################################################################
%%bigquery
CREATE OR REPLACE MODEL `.RBA_demo.RBA_model` #update project name
OPTIONS (model_type='linear_reg',
input_label_cols = ['y1'])
AS SELECT *
FROM `.RBA_demo.cleaned_data`; #update project name
"""
Explanation: 2) Run the RBA Model in BQML
End of explanation
"""
%%bigquery model_coefficients_results
SELECT
*
FROM
ML.WEIGHTS(MODEL `.RBA_demo.RBA_model`,
STRUCT(true AS standardize)) #update project name
model_coefficients_results
"""
Explanation: 2.1) Print the model coefficient results
Call the model coefficient weights from the model and save to a dataframe "model_coefficients_results".
The standardize parameter is an optional parameter that determines whether the model weights should be standardized to assume that all features have a mean of zero and a standard deviation of one. Standardizing the weights allows the absolute magnitude of the weights to be compared to each other.
End of explanation
"""
################################################################################
######################### CHANGE BQ PROJECT NAME BELOW #########################
################################################################################
"""
Explanation: 2.2) Print the model evaluation metrics
End of explanation
"""
%%bigquery evaluation_metrics
SELECT *
FROM ML.EVALUATE(MODEL `.RBA_demo.RBA_model`) #update project name
evaluation_metrics
"""
Explanation: Call the model evaluation metrics from the model and save to a dataframe "evaluation_metrics".
For linear regression models, the ML.EVALUATE function returns: mean absolute error, mean squared error, mean squared log error, median absolute error, r-squared, and explained variance metrics.
End of explanation
"""
conversions = 'y1'
tactics = df[df.columns[df.columns != conversions]].columns.to_list()
relative_importance_results = relativeImp(df,
outcomeName = conversions,
driverNames = tactics)
relative_importance_results
"""
Explanation: 3) Calculate contribution of each digital media tactic on conversions
Use the relativeImp package
to conduct key driver analysis and generate relative importance values by feature in the model.
The relativeImp function produces a raw relative importance and a normalized relative importance value.
- Raw relative importance sums to the r-squared of the linear model.
- Normalized relative importance is scaled to sum to 1
End of explanation
"""
################################################################################
######################### CHANGE BQ PROJECT NAME BELOW #########################
################################################################################
"""
Explanation: 4) Validate Linear Regression Model Assumptions
For any statistical model it is important to validate model assumptions.
With RBA, we validate the standard linear model assumptions of:
Linearity
Normality of errors
Absence of multicollinearity
Homoscedasticity
Absence of autocorrelation of residuals
If any of the model assumptions fail, a different model specification, as well as re-examination of the data should be considered
Incorrect model use can lead to unreliable results
4.1) Generate model predictions and residuals
End of explanation
"""
%%bigquery model_predictions
SELECT
*
FROM
ML.PREDICT(MODEL `.RBA_demo.RBA_model`, #update project name
(
SELECT
*
FROM
`.RBA_demo.cleaned_data`)); #update project name
"""
Explanation: Select the predicted conversions (y1) of the model and actual conversions from the data (y1) using the ML.PREDICT function
End of explanation
"""
model_predictions['residuals'] = model_predictions.predicted_y1 - model_predictions.y1
"""
Explanation: Calculate model residuals as the difference from predicted y1 values and actual y1 values
End of explanation
"""
plt.plot(model_predictions.predicted_y1,model_predictions.y1,'o',alpha=0.5)
plt.show()
"""
Explanation: 4.2) Linearity
Visually inspect linearity between target variable (y1) and predictions
End of explanation
"""
fig = sm.qqplot(model_predictions.residuals)
plt.xlabel('Model Residuals'); plt.ylabel('Density'); plt.title('Distribution of Residuals');
"""
Explanation: 4.3) Normality of Errors
Visually inspect the residuals to confirm normality
End of explanation
"""
plt.plot(model_predictions.residuals,'o',alpha=0.5)
plt.show()
"""
Explanation: 4.4) Absence of Multicollinearity
Multicollinearity was checked and handled during the data pre-processing stage.
4.5) Homoscedasticity
Visually inspect residuals to confirm constant variance
End of explanation
"""
dw = durbin_watson(model_predictions.residuals)
print('Durbin-Watson',dw)
if dw < 1.5:
print('Positive autocorrelation', '\n')
elif dw > 2.5:
print('Negative autocorrelation', '\n')
else:
print('Little to no autocorrelation', '\n')
"""
Explanation: 4.6) Absence of Autocorrelation of the residuals
The Durbin-Watson test is a statistical test for detecting autocorrelation in the model residuals
End of explanation
"""
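The Durbin-Watson statistic used above is simple enough to write out by hand; a minimal sketch that matches the standard formula computed by `statsmodels.stats.stattools.durbin_watson` for 1-D residuals:

```python
import numpy as np

def durbin_watson_stat(resid):
    # DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); values near 2 suggest no autocorrelation
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
```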
|
vitojph/kschool-nlp | notebooks-py2/pos-tagger-es.ipynb | gpl-3.0 | import nltk
from nltk.corpus import cess_esp
cess_esp = cess_esp.tagged_sents()
print(cess_esp[0])
"""
Explanation: PoS tagging in Spanish
In this first exercise we are going to play with one of the Spanish corpora available from NLTK: CESS_ESP, an annotated treebank built from a collection of news texts in Spanish.
This corpus is currently included in a broader resource, the AnCora corpus developed at the Universitat de Barcelona. For more information, you can read the paper by M. Taulé, M. A. Martí and M. Recasens, "AnCora: Multilevel Annotated Corpora for Catalan and Spanish". Proceedings of 6th International Conference on Language Resources and Evaluation (LREC 2008). 2008. Marrakesh (Morocco).
Before anything else, run the following cell to access the corpus and the other tools we will use in this exercise.
End of explanation
"""
# write your code here
"""
Explanation: Notice that the tags used in the Spanish treebank are different from the tags we saw for English. To begin with, Spanish is a language with a richer morphology: if we want to reflect the gender and number of adjectives, for example, a plain JJ will not do.
Take a look at the morphological tags and try to work out their meaning. Among these first 50 words we find:
da0ms0: determiner, definite article, masculine singular
ncms000: common noun, masculine singular
aq0cs0: qualifying adjective, common gender, singular
np00000: proper noun
sps00: preposition
vmis3s0: main verb, indicative, simple past, 3rd person singular
Here you have the explanation of the tags and the complete catalogue of features used for Spanish tagging in this corpus. Based on what you learn from the link above:
Print to screen only the words tagged as verb forms in the 3rd person plural of the simple past (pretérito perfecto simple) indicative.
Compute what percentage of the CESS_ESP corpus is made up of words tagged as verb forms in the 3rd person plural of the simple past indicative.
End of explanation
"""
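One possible sketch of the exercise, assuming (per the EAGLES scheme linked above) that `vmis3p0` is the tag for 3rd-person-plural simple-past indicative main verbs; the helper names here are my own:

```python
def filter_by_tag(tagged_sents, tag):
    # all words in the corpus carrying exactly this morphological tag
    return [w for sent in tagged_sents for (w, t) in sent if t == tag]

def tag_share(tagged_sents, tag):
    # percentage of corpus tokens carrying the tag
    total = sum(len(sent) for sent in tagged_sents)
    return 100.0 * len(filter_by_tag(tagged_sents, tag)) / total

# with the corpus loaded above, something like:
# print(filter_by_tag(cess_esp, 'vmis3p0'))
# print(tag_share(cess_esp, 'vmis3p0'))
```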
!cp ../data/universal_tagset-ES.map ~/nltk_data/taggers/universal_tagset/es-ancora.map
"""
Explanation: The morphological tags we have just seen are fairly complex, since they encode the inflectional features of Spanish. Fortunately, NLTK can load tagged corpora with a universal, simplified tag set (all the details are in the paper) by using the option tagset='universal'. For this to work, make sure you have stored, inside your nltk resources directory, the mapping from the corpus's original tags to their simplified versions. This file is called universal_tagset-ES.map and you can find it in the data folder of the repository. It is advisable to rename it, for example:
End of explanation
"""
from nltk.corpus import cess_esp
cess_esp._tagset = 'es-ancora'
oraciones = cess_esp.tagged_sents(tagset='universal')
print(oraciones[0])
"""
Explanation: Then run the following cell and notice how we have loaded a list of sentences tagged with this new version of the tags.
End of explanation
"""
# write your code here
# try your trigram-based tagger on the following sentences which,
# quite certainly, do not appear in the corpus
print(trigramTagger.tag("Este banco está ocupado por un padre y por un hijo. El padre se llama Juan y el hijo ya te lo he dicho".split()))
print(trigramTagger.tag("""El presidente del gobierno por fin ha dado la cara para anunciar aumentos de presupuesto en Educación y Sanidad a costa de dejar de subvencionar las empresas de los amigotes.""".split()))
print(trigramTagger.tag("El cacique corrupto y la tonadillera se comerán el turrón en prisión.".split()))
"""
Explanation: These tags are simpler, right? Basically we have DET for determiner, NOUN for noun, VERB for verb, ADJ for adjective, ADP for adposition (preposition), etc.
We are going to use this corpus to train several n-gram-based taggers, just as we did in class and as explained in the nltk-pos slides.
Build, incrementally, four taggers:
a default tagger that assumes any unknown word is a masculine singular common noun and assigns the corresponding tag to every word.
a unigram-based tagger that learns from the oraciones list and uses the previous tagger as backoff.
a bigram-based tagger that learns from the oraciones list and uses the previous tagger as backoff.
a trigram-based tagger that learns from the oraciones list and uses the previous tagger as backoff.
End of explanation
"""
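One possible sketch of the four backoff taggers described above (a hypothetical helper of my own; with the corpus loaded, `trigramTagger = build_backoff_tagger(oraciones)` would give the tagger used in the test cell, and 'NOUN' stands in for the common-noun default under the universal tag set):

```python
import nltk

def build_backoff_tagger(train_sents, default_tag='NOUN'):
    # default -> unigram -> bigram -> trigram, each backing off to the previous
    t0 = nltk.DefaultTagger(default_tag)
    t1 = nltk.UnigramTagger(train_sents, backoff=t0)
    t2 = nltk.BigramTagger(train_sents, backoff=t1)
    return nltk.TrigramTagger(train_sents, backoff=t2)
```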
|
RaspberryJamBe/ipython-notebooks | notebooks/nl-be/Communicatie - Mail verzenden.ipynb | cc0-1.0 | MAIL_SERVER = "mail.****.com"
FROM_ADDRESS = "noreply@****.com"
TO_ADDRESS = "my_friend@****.com"
"""
Explanation: Prerequisite:
To send mail you need an outgoing mail server (which, in the case of this script, must also allow unauthenticated outgoing communication). Fill in the required details in the following variables:
End of explanation
"""
from sender import Mail
mail = Mail(MAIL_SERVER)
mail.fromaddr = ("Geheime aanbidder", FROM_ADDRESS)
mail.send_message("Raspberry Pi heeft een boontje voor je", to=TO_ADDRESS, body="Hey lekker ding! Zin in een smoothie?")
"""
Explanation: Sending an email is, once the right library has been loaded, a piece of cake...
End of explanation
"""
APPKEY = "******"
mail.fromaddr = ("Uw deurbel", FROM_ADDRESS)
mail_to_addresses = {
"Donald Duck":"dd@****.com",
"Maleficent":"mf@****.com",
"BozeWolf":"bw@****.com"
}
def on_message(sender, channel, message):
boodschap = "{}: Er is aangebeld bij {}".format(channel, message)
print(boodschap)
mail.send_message("Raspberry Pi alert!", to=mail_to_addresses[message], body=boodschap)
import ortc
oc = ortc.OrtcClient()
oc.cluster_url = "http://ortc-developers.realtime.co/server/2.1"
def on_connected(sender):
print('Connected')
oc.subscribe('deurbel', True, on_message)
oc.set_on_connected_callback(on_connected)
oc.connect(APPKEY)
"""
Explanation: ... but if we push things a little further, we can hook our doorbell project up, via the cloud, to sending an email!
APPKEY is the Application Key for a (free) http://www.realtime.co/ "Realtime Messaging Free" subscription.
See "104 - Remote deurbel - Een cloud API gebruiken om berichten te sturen" (104 - Remote doorbell - Using a cloud API to send messages) for more detailed info.
End of explanation
"""
|
jmhsi/justin_tinker | data_science/courses/deeplearning2/DCGAN.ipynb | apache-2.0 | %matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *
from tqdm import tqdm
"""
Explanation: Generative Adversarial Networks in Keras
End of explanation
"""
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
n = len(X_train)
X_train = X_train.reshape(n, -1).astype(np.float32)
X_test = X_test.reshape(len(X_test), -1).astype(np.float32)
X_train /= 255.; X_test /= 255.
"""
Explanation: The original GAN!
See this paper for details of the approach we'll try for our first GAN. We'll see if we can generate hand-drawn numbers based on MNIST, so let's load that dataset first.
We'll be referring to the discriminator as 'D' and the generator as 'G'.
End of explanation
"""
def plot_gen(G, n_ex=16):
plot_multi(G.predict(noise(n_ex)).reshape(n_ex, 28,28), cmap='gray')
"""
Explanation: Train
This is just a helper to plot a bunch of generated images.
End of explanation
"""
def noise(bs): return np.random.rand(bs,100)
"""
Explanation: Create some random data for the generator.
End of explanation
"""
def data_D(sz, G):
real_img = X_train[np.random.randint(0,n,size=sz)]
X = np.concatenate((real_img, G.predict(noise(sz))))
return X, [0]*sz + [1]*sz
def make_trainable(net, val):
net.trainable = val
for l in net.layers: l.trainable = val
"""
Explanation: Create a batch of some real and some generated data, with appropriate labels, for the discriminator.
End of explanation
"""
def train(D, G, m, nb_epoch=5000, bs=128):
dl,gl=[],[]
for e in tqdm(range(nb_epoch)):
X,y = data_D(bs//2, G)
dl.append(D.train_on_batch(X,y))
make_trainable(D, False)
gl.append(m.train_on_batch(noise(bs), np.zeros([bs])))
make_trainable(D, True)
return dl,gl
"""
Explanation: Train a few epochs, and return the losses for D and G. In each epoch we:
Train D on one batch from data_D()
Train G to create images that the discriminator predicts as real.
End of explanation
"""
MLP_G = Sequential([
Dense(200, input_shape=(100,), activation='relu'),
Dense(400, activation='relu'),
Dense(784, activation='sigmoid'),
])
MLP_D = Sequential([
Dense(300, input_shape=(784,), activation='relu'),
Dense(300, activation='relu'),
Dense(1, activation='sigmoid'),
])
MLP_D.compile(Adam(1e-4), "binary_crossentropy")
MLP_m = Sequential([MLP_G,MLP_D])
MLP_m.compile(Adam(1e-4), "binary_crossentropy")
dl,gl = train(MLP_D, MLP_G, MLP_m, 8000)
"""
Explanation: MLP GAN
We'll keep things simple by making D & G plain ole' MLPs.
End of explanation
"""
plt.plot(dl[100:])
plt.plot(gl[100:])
"""
Explanation: The loss plots for most GANs are nearly impossible to interpret - which is one of the things that make them hard to train.
End of explanation
"""
plot_gen()
"""
Explanation: This is what's known in the literature as "mode collapse".
End of explanation
"""
X_train = X_train.reshape(n, 28, 28, 1)
X_test = X_test.reshape(len(X_test), 28, 28, 1)
"""
Explanation: OK, so that didn't work. Can we do better?...
DCGAN
There are lots of ideas out there for making GANs train better, since they are notoriously painful to get working. The paper introducing DCGANs is the main basis for our next section. Also see https://github.com/soumith/ganhacks for many tips!
Because we're using a CNN from now on, we'll reshape our digits into proper images.
End of explanation
"""
CNN_G = Sequential([
Dense(512*7*7, input_dim=100, activation=LeakyReLU()),
BatchNormalization(mode=2),
Reshape((7, 7, 512)),
UpSampling2D(),
Convolution2D(64, 3, 3, border_mode='same', activation=LeakyReLU()),
BatchNormalization(mode=2),
UpSampling2D(),
Convolution2D(32, 3, 3, border_mode='same', activation=LeakyReLU()),
BatchNormalization(mode=2),
Convolution2D(1, 1, 1, border_mode='same', activation='sigmoid')
])
"""
Explanation: Our generator uses a number of upsampling steps as suggested in the above papers. We use nearest neighbor upsampling rather than fractionally strided convolutions, as discussed in our style transfer notebook.
End of explanation
"""
CNN_D = Sequential([
Convolution2D(256, 5, 5, subsample=(2,2), border_mode='same',
input_shape=(28, 28, 1), activation=LeakyReLU()),
Convolution2D(512, 5, 5, subsample=(2,2), border_mode='same', activation=LeakyReLU()),
Flatten(),
Dense(256, activation=LeakyReLU()),
Dense(1, activation = 'sigmoid')
])
CNN_D.compile(Adam(1e-3), "binary_crossentropy")
"""
Explanation: The discriminator uses a few downsampling steps through strided convolutions.
End of explanation
"""
sz = n//200
x1 = np.concatenate([np.random.permutation(X_train)[:sz], CNN_G.predict(noise(sz))])
CNN_D.fit(x1, [0]*sz + [1]*sz, batch_size=128, nb_epoch=1, verbose=2)
CNN_m = Sequential([CNN_G, CNN_D])
CNN_m.compile(Adam(1e-4), "binary_crossentropy")
K.set_value(CNN_D.optimizer.lr, 1e-3)
K.set_value(CNN_m.optimizer.lr, 1e-3)
"""
Explanation: We train D a "little bit" so it can at least tell a real image from random noise.
End of explanation
"""
dl,gl = train(CNN_D, CNN_G, CNN_m, 2500)
plt.plot(dl[10:])
plt.plot(gl[10:])
"""
Explanation: Now we can train D & G iteratively.
End of explanation
"""
plot_gen(CNN_G)
"""
Explanation: Better than our first effort, but it still leaves a lot to be desired...
End of explanation
"""
|
kcyu1993/ML_course_kyu | labs/ex03/template/ex03.ipynb | mit | def compute_cost_MSE(y, tx, beta):
"""compute the loss by mse."""
e = y - tx.dot(beta)
mse = e.dot(e) / (2 * len(e))
return mse
def compute_cost_MAE(y, tx, w):
y = np.array(y)
return np.sum(abs(y - np.dot(tx, w))) / y.shape[0]
def least_squares(y, tx):
"""calculate the least squares solution."""
# ***************************************************
# INSERT YOUR CODE HERE
# least squares: TODO
# returns mse, and optimal weights
# ***************************************************
weight = np.linalg.solve(np.dot(tx.T,tx), np.dot(tx.T,y))
return least_square_mse(y,tx, weight),weight
def least_square_mse(y, tx, w):
return compute_cost_MSE(y, tx, w)
def rmse(y, tx, w):
    # RMSE = sqrt(2 * MSE), matching the convention used later in this notebook.
    return np.sqrt(2 * compute_cost_MSE(y, tx, w))
"""
Explanation: Least squares and linear basis functions models
Least squares
End of explanation
"""
from helpers import *
def test_your_least_squares():
height, weight, gender = load_data_from_ex02(sub_sample=False, add_outlier=False)
x, mean_x, std_x = standardize(height)
y, tx = build_model_data(x, weight)
# ***************************************************
# INSERT YOUR CODE HERE
# least square or grid search: TODO
# this code should compare the optimal weights obtained
# by least squares vs. grid search
# ***************************************************
mse, lsq_w = least_squares(y,tx)
print(lsq_w)
test_your_least_squares()
"""
Explanation: Load the data
Here we will reuse the dataset height_weight_genders.csv from the previous exercise to check the correctness of your implementation. Please compare it with your previous result.
End of explanation
"""
# load dataset
x, y = load_data()
print("shape of x {}".format(x.shape))
print("shape of y {}".format(y.shape))
def build_poly(x, degree):
"""polynomial basis functions for input data x, for j=0 up to j=degree."""
# ***************************************************
# INSERT YOUR CODE HERE
# polynomial basis function: TODO
# this function should return the matrix formed
# by applying the polynomial basis to the input data
# ***************************************************
x = np.array(x)
res = x
for d in range(2, degree + 1):
res = np.concatenate((res, x ** d), axis=-1)
# print(len(x),degree)
# print(res)
res = np.reshape(res, (degree, len(x)))
res = np.c_[np.ones((len(res.T), 1)),res.T]
return res
def build_poly2(x, degree):
"""polynomial basis function."""
X = np.ones((x.shape[0], degree + 1))
for i in range(degree):
X[:, i + 1:degree + 1] *= x[:, np.newaxis]
return X
test = np.array(range(10))
build_poly2(test,2)
"""
Explanation: Least squares with a linear basis function model
Starting from this section, we will use the dataset dataEx3.csv.
Implement polynomial basis functions
End of explanation
"""
from plots import *
# from .build_polynomial import *
def polynomial_regression():
"""Constructing the polynomial basis function expansion of the data,
and then running least squares regression."""
# define parameters
degrees = [1, 3, 7, 12]
# define the structure of figure
num_row = 2
num_col = 2
f, axs = plt.subplots(num_row, num_col)
for ind, degree in enumerate(degrees):
# ***************************************************
# INSERT YOUR CODE HERE
# form the data to do polynomial regression.: TODO
# ***************************************************
x_degree = build_poly(x,degree)
# ***************************************************
# INSERT YOUR CODE HERE
# least square and calculate rmse: TODO
# ***************************************************
lsq_degree, weight = least_squares(y,x_degree)
# print(weight)
rmse = np.sqrt(2*lsq_degree)
print("Processing {i}th experiment, degree={d}, rmse={loss}".format(
i=ind + 1, d=degree, loss=rmse))
# plot fit
plot_fitted_curve(
y, x, weight, degree, axs[ind // num_col][ind % num_col])
plt.tight_layout()
plt.savefig("visualize_polynomial_regression")
plt.show()
polynomial_regression()
"""
Explanation: Let us play with polynomial regression. Note that we will use your implemented function compute_mse. Please copy and paste your implementation from exercise02.
End of explanation
"""
def split_data(x, y, ratio, seed=1):
"""split the dataset based on the split ratio."""
# set seed
np.random.seed(seed)
# ***************************************************
# INSERT YOUR CODE HERE
# split the data based on the given ratio: TODO
# ***************************************************
# Random shuffle the index by enumerate.
pair = np.c_[x,y]
np.random.shuffle(pair)
index = np.round(x.size * ratio,0).astype('int16')
p1, p2 = np.split(pair,[index])
x1,y1 = zip(*p1)
x2,y2 = zip(*p2)
return x1,y1,x2,y2
def split_data2(x, y, ratio, seed=1):
"""split the dataset based on the split ratio."""
# set seed
np.random.seed(seed)
ntr = round(y.shape[0] * ratio)
ind = np.random.permutation(range(y.shape[0]))
x_tr = x[ind[:ntr]]
x_te = x[ind[ntr:]]
y_tr = y[ind[:ntr]]
y_te = y[ind[ntr:]]
return (x_tr, y_tr , x_te , y_te)
test_x = np.array( range(0,10))
test_y = np.array(range(0,10))
print(split_data(test_x, test_y, 0.5))
print(split_data2(test_x, test_y, 0.5))
"""
Explanation: Evaluating model predication performance
Let us show the train and test splits for various polynomial degrees. First of all, please fill in the function split_data()
End of explanation
"""
def train_test_split_demo(x, y, degree, ratio, seed):
"""polynomial regression with different split ratios and different degrees."""
# ***************************************************
# INSERT YOUR CODE HERE
# split the data, and return train and test data: TODO
# ***************************************************
trainX,trainY,testX,testY = split_data(x,y,ratio,seed)
# ***************************************************
# INSERT YOUR CODE HERE
# form train and test data with polynomial basis function: TODO
# ***************************************************
# print(len(trainX))
# trainX = np.c_[np.ones((len(trainX),1)), build_poly(trainX,degree)]
# testX = np.c_[np.ones((len(testX),1)), build_poly(testX,degree)]
trainX = build_poly(trainX, degree)
testX = build_poly(testX, degree)
# ***************************************************
# INSERT YOUR CODE HERE
    # calculate weight through least squares: TODO
# ***************************************************
mse, weight = least_squares(trainY,trainX)
# ***************************************************
# INSERT YOUR CODE HERE
# calculate RMSE for train and test data,
# and store them in rmse_tr and rmse_te respectively: TODO
# ***************************************************
mse_test = np.sum((testY-np.dot(testX,weight))**2)/len(testY)
rmse_tr = np.sqrt(2*mse)
rmse_te = np.sqrt(2*mse_test)
print("proportion={p}, degree={d}, Training RMSE={tr:.3f}, Testing RMSE={te:.3f}".format(
p=ratio, d=degree, tr=rmse_tr, te=rmse_te))
seed = 6
degrees = [1,3, 7, 12]
split_ratios = [0.9, 0.5, 0.1]
for split_ratio in split_ratios:
for degree in degrees:
train_test_split_demo(x, y, degree, split_ratio, seed)
"""
Explanation: Then, test your split_data function below.
End of explanation
"""
def ridge_regression(y, tx, lamb):
"""implement ridge regression."""
# ***************************************************
# INSERT YOUR CODE HERE
# ridge regression: TODO
# ***************************************************
# Hes = tx.T * tx + 2*N*lambda * I_m
G = np.eye(tx.shape[1])
G[0,0] = 0
hes = np.dot(tx.T,tx) + lamb * G
weight = np.linalg.solve(hes,np.dot(tx.T,y))
mse = compute_cost_MSE(y, tx, weight)
return mse,weight
def ridge_regression_demo(x, y, degree, ratio, seed):
"""ridge regression demo."""
# define parameter
lambdas = np.logspace(-3, 1, 10)
trainX,trainY,testX,testY = split_data(x,y,ratio,seed)
trainX = build_poly(trainX,degree)
testX = build_poly(testX,degree)
_rmse_te = []
_rmse_tr = []
# define the structure of figure
# num_row = 6
# num_col = 2
# f, axs = plt.subplots(num_row, num_col)
for ind, lamb in enumerate(lambdas):
mse, weight = ridge_regression(trainY,trainX,lamb)
# ***************************************************
# INSERT YOUR CODE HERE
# calculate RMSE for train and test data,
# and store them in rmse_tr and rmse_te respectively: TODO
# ***************************************************
mse_test = compute_cost_MSE(testY, testX, weight)
rmse_tr = np.sqrt(2*mse)
rmse_te = np.sqrt(2*mse_test)
_rmse_te.append(rmse_te)
_rmse_tr.append(rmse_tr)
print("lambda={l}, proportion={p}, degree={d}, weight={w}, Training RMSE={tr:.3f}, Testing RMSE={te:.3f}".format(
l=ind, p=ratio, d=degree, w=len(weight), tr=rmse_tr, te=rmse_te))
# plot fit
# plot_fitted_curve(
# y, x, weight, degree, axs[ind // num_col][ind % num_col])
print(_rmse_te, _rmse_tr)
# plt.hold(False)
# rmse_tr_plt, = plt.plot(lambdas, _rmse_tr, 's-b', label="train error")
# plt.semilogx()
# plt.hold(True)
# rmse_te_plt, = plt.plot(lambdas, _rmse_te, 's-r', label="test error")
# plt.xlabel('lambda')
# plt.ylabel('rmse')
# plt.title('ridge regression for polynomial degree {deg}'.format(deg=degree))
# plt.legend(handles=[rmse_tr_plt, rmse_te_plt])
# plt.show()
plot_train_test(_rmse_tr, _rmse_te, lambdas, degree)
seed = 11
degree = 7
split_ratio = 0.5
ridge_regression_demo(x, y, degree, split_ratio, seed)
def polynomial_regression():
"""Constructing the polynomial basis function expansion of the data,
and then running least squares regression."""
# define parameters
degrees = [7]
# define the structure of figure
num_row = 2
num_col = 2
f, axs = plt.subplots(num_row, num_col)
for ind, degree in enumerate(degrees):
# ***************************************************
# INSERT YOUR CODE HERE
# form the data to do polynomial regression.: TODO
# ***************************************************
x_degree = build_poly(x,degree)
# ***************************************************
# INSERT YOUR CODE HERE
# least square and calculate rmse: TODO
# ***************************************************
lsq_degree, weight = least_squares(y,x_degree)
# print(weight)
rmse = np.sqrt(2*lsq_degree)
print("Processing {i}th experiment, degree={d}, rmse={loss}".format(
i=ind + 1, d=degree, loss=rmse))
# plot fit
plot_fitted_curve(
y, x, weight, degree, axs[ind // num_col][ind % num_col])
plt.tight_layout()
plt.savefig("visualize_polynomial_regression")
plt.show()
polynomial_regression()
"""
Explanation: Ridge Regression
Please fill in the function below.
End of explanation
"""
|
analysiscenter/dataset | examples/experiments/augmentation/augmentation.ipynb | apache-2.0 | import sys
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqn
%matplotlib inline
sys.path.append('../../..')
sys.path.append('../../utils')
import utils
from secondbatch import MnistBatch
from simple_conv_model import ConvModel
from batchflow import V, B
from batchflow.opensets import MNIST
"""
Explanation: Is Augmentation Necessary?
In this notebook, we will check how a network trained on ordinary data copes with augmented data, and what happens if it is trained on the augmented data instead.
You can see how the class with the neural network is implemented in this file.
End of explanation
"""
mnistset = MNIST(batch_class=MnistBatch)
"""
Explanation: Create the dataset, with MnistBatch as its batch class
End of explanation
"""
normal_train_ppl = (
mnistset.train.p
.init_model('dynamic',
ConvModel,
'conv',
config={'inputs': dict(images={'shape': (28, 28, 1)},
labels={'classes': (10),
'transform': 'ohe',
'name': 'targets'}),
'loss': 'ce',
'optimizer':'Adam',
'input_block/inputs': 'images',
'head/units': 10,
'output': dict(ops=['labels',
'proba',
'accuracy'])})
.train_model('conv',
feed_dict={'images': B('images'),
'labels': B('labels')})
)
normal_test_ppl = (
mnistset.test.p
.import_model('conv', normal_train_ppl)
.init_variable('test_accuracy', init_on_each_run=int)
.predict_model('conv',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('test_accuracy'),
mode='w'))
"""
Explanation: Here is the already familiar construction for creating pipelines. These pipelines train the NN on plain MNIST images, without shifts.
End of explanation
"""
batch_size = 400
for i in tqn(range(600)):
normal_train_ppl.next_batch(batch_size, n_epochs=None)
normal_test_ppl.next_batch(batch_size, n_epochs=None)
"""
Explanation: Train the model by using next_batch method
End of explanation
"""
acc = normal_test_ppl.get_variable('test_accuracy')
print('Accuracy on normal data: {:.2%}'.format(acc))
"""
Explanation: Get the variable from the pipeline and print the accuracy on data without shifts
End of explanation
"""
shift_test_ppl= (
mnistset.test.p
.import_model('conv', normal_train_ppl)
.shift_flattened_pic()
.init_variable('predict', init_on_each_run=int)
.predict_model('conv',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('predict'),
mode='w')
.run(batch_size, n_epochs=1)
)
print('Accuracy with shift: {:.2%}'.format(shift_test_ppl.get_variable('predict')))
"""
Explanation: Now check how the accuracy changes when the first model is tested on shifted data
End of explanation
"""
shift_train_ppl = (
mnistset.train.p
.shift_flattened_pic()
.init_model('dynamic',
ConvModel,
'conv',
config={'inputs': dict(images={'shape': (28, 28, 1)},
labels={'classes': (10),
'transform': 'ohe',
'name': 'targets'}),
'loss': 'ce',
'optimizer':'Adam',
'input_block/inputs': 'images',
'head/units': 10,
'output': dict(ops=['labels',
'proba',
'accuracy'])})
.train_model('conv',
feed_dict={'images': B('images'),
'labels': B('labels')})
)
for i in tqn(range(600)):
shift_train_ppl.next_batch(batch_size, n_epochs=None)
"""
Explanation: In order for the model to be able to make predictions on augmented data, we will train it on such data
End of explanation
"""
shift_test_ppl = (
mnistset.test.p
.import_model('conv', shift_train_ppl)
.shift_flattened_pic()
.init_variable('acc', init_on_each_run=list)
.init_variable('img', init_on_each_run=list)
.init_variable('predict', init_on_each_run=list)
.predict_model('conv',
fetches=['output_accuracy', 'inputs', 'output_proba'],
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=[V('acc'), V('img'), V('predict')],
mode='a')
.run(1, n_epochs=1)
)
print('Accuracy with shift: {:.2%}'.format(np.mean(shift_test_ppl.get_variable('acc'))))
"""
Explanation: And now check how the accuracy changes on shifted data
End of explanation
"""
acc = shift_test_ppl.get_variable('acc')
img = shift_test_ppl.get_variable('img')
predict = shift_test_ppl.get_variable('predict')
_, ax = plt.subplots(3, 4, figsize=(16, 16))
ax = ax.reshape(-1)
for i in range(12):
index = np.where(np.array(acc) == 0)[0][i]
ax[i].imshow(img[index]['images'].reshape(-1,28))
ax[i].set_xlabel('Predict: {}'.format(np.argmax(predict[index][0])), fontsize=18)
ax[i].grid()
"""
Explanation: That's much better than before.
It is interesting to see which digits we get wrong:
End of explanation
"""
|
egillanton/Udacity-SDCND | 1. Computer Vision and Deep Learning/L2 LeNet Lab/LeNet-Lab.ipynb | mit | from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
"""
Explanation: LeNet Lab
Source: Yann LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
"""
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
"""
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
"""
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
"""
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
"""
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
"""
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
"""
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
"""
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
"""
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
layer_1_filter_shape = [5,5,1,6]
layer_1_weights = tf.Variable(tf.truncated_normal(shape = layer_1_filter_shape, mean = mu, stddev = sigma))
layer_1_bias = tf.Variable(tf.zeros(6))
layer_1_strides = [1, 1, 1, 1]
layer_1_padding = 'VALID'
layer_1 = tf.nn.conv2d(x, layer_1_weights, layer_1_strides, layer_1_padding) + layer_1_bias
# Activation.
layer_1 = tf.nn.relu(layer_1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
p1_filter_shape = [1, 2, 2, 1]
p1_strides = [1, 2, 2, 1]
p1_padding = 'VALID'
layer_1 = tf.nn.max_pool(layer_1, p1_filter_shape, p1_strides, p1_padding)
# Layer 2: Convolutional. Output = 10x10x16.
layer_2_filter_shape = [5,5,6,16]
layer_2_weights = tf.Variable(tf.truncated_normal(shape = layer_2_filter_shape , mean = mu, stddev = sigma))
layer_2_bias = tf.Variable(tf.zeros(16))
layer_2_strides = [1, 1, 1, 1]
layer_2_padding = 'VALID'
layer_2 = tf.nn.conv2d(layer_1, layer_2_weights, layer_2_strides, layer_2_padding) + layer_2_bias
# Activation.
layer_2 = tf.nn.relu(layer_2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
p2_filter_shape = [1, 2, 2, 1]
p2_strides = [1, 2, 2, 1]
p2_padding = 'VALID'
layer_2 = tf.nn.max_pool(layer_2, p2_filter_shape, p2_strides, p2_padding)
# Flatten. Input = 5x5x16. Output = 400.
layer_2 = flatten(layer_2)
# Layer 3: Fully Connected. Input = 400. Output = 120.
layer_3_filter_shape = [400,120]
layer_3_weights = tf.Variable(tf.truncated_normal(shape = layer_3_filter_shape, mean = mu, stddev = sigma))
layer_3_bias = tf.Variable(tf.zeros(120))
layer_3 = tf.matmul(layer_2, layer_3_weights) + layer_3_bias
# Activation.
layer_3 = tf.nn.relu(layer_3)
# Layer 4: Fully Connected. Input = 120. Output = 84.
layer_4_filter_shape = [120, 84]
layer_4_weights = tf.Variable(tf.truncated_normal(shape = layer_4_filter_shape, mean = mu, stddev = sigma))
layer_4_bias = tf.Variable(tf.zeros(84))
layer_4 = tf.matmul(layer_3, layer_4_weights) + layer_4_bias
# Activation.
layer_4 = tf.nn.relu(layer_4)
# Layer 5: Fully Connected. Input = 84. Output = 10.
layer_5_filter_shape = [84, 10]
layer_5_weights = tf.Variable(tf.truncated_normal(shape = layer_5_filter_shape, mean = mu, stddev = sigma))
layer_5_bias = tf.Variable(tf.zeros(10))
logits = tf.matmul(layer_4, layer_5_weights) + layer_5_bias
return logits
"""
Explanation: TODO: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the 2nd fully connected layer.
End of explanation
"""
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
"""
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
"""
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
"""
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
"""
Explanation: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
"""
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
"""
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation
"""
|
fabriziocosta/pyMotif | glam2_example.ipynb | mit | #printing motives as lists
for motif in glam2.motives_list:
for m in motif:
print m
print
"""
Explanation: <h3>Print motives as list</h3>
End of explanation
"""
glam2.display_logo(do_alignment=False)
glam2.display_logo(motif_num=1)
"""
Explanation: <h3>Display Sequence logo of unaligned motives</h3>
End of explanation
"""
glam2.align_motives() #MSA with Muscle
motives1=glam2.aligned_motives_list
for m in motives1:
for i in m:
print i
print
"""
Explanation: <h3>Multiple Sequence Alignment of motives with Muscle</h3>
Note: Motives in this example were already aligned, hence no dashes appear in the alignment
End of explanation
"""
glam2.display_logo(do_alignment=True)
"""
Explanation: <h3>Display sequence logo of aligned motives</h3>
End of explanation
"""
glam2.display()
glam2.matrix()
"""
Explanation: <h3>Position Weight Matrices of Motives</h3>
End of explanation
"""
glam2.display(motif_num=3)
"""
Explanation: <h4>Display PWM of a single motif</h4>
End of explanation
"""
test_seq = 'GGAGAAAATACCGC' * 10
seq_score = glam2.score(motif_num=2, seq=test_seq)
print seq_score
"""
Explanation: <h3>Scoring a single sequence w.r.t a motif</h3>
End of explanation
"""
glam_3 = Glam2(alphabet='dna', gap_in_alphabet=True, scoring_criteria='hmm', alignment_runs=3)
matches = glam_3.fit_transform(fasta_file="seq9.fa", return_match=True)
for m in matches: print m
"""
Explanation: <h3>Transform with HMM as the scoring criterion</h3>
End of explanation
"""
|
patelparth30j/yelp-sentiment-analysis | yelp_03bagOfWords.ipynb | mit | read_filename = os.path.join(yelp_utils.YELP_DATA_CSV_DIR, 'business_review_user' + data_subset + '.csv')
df_data = pd.read_csv(read_filename, engine='c', encoding='utf-8')
"""
Explanation: Read the csv file generated in yelp_datacleaning
End of explanation
"""
df_data_preprocessed_review = df_data.copy();
%time df_data_preprocessed_review['review_text'] = df_data_preprocessed_review['review_text'].apply( \
yelp_utils.lowercase_and_remove_punctuation_and_remove_numbers_and_tokenize_stem_and_restring)
"""
Explanation: Preprocessing
<b>Perform the following NLP tasks on the data:
1. Lower case
2. Remove punctuation
3. Remove numbers
4. Remove stop words
5. Stem the words using PorterStemmer</b>
End of explanation
"""
df_data.review_text[1]
"""
Explanation: Data before preprocessing
End of explanation
"""
df_data_preprocessed_review.review_text[1]
"""
Explanation: Data after preprocessing
End of explanation
"""
vectorizer = CountVectorizer(analyzer = "word",
tokenizer = None,
preprocessor = None,
ngram_range = (1, 1),
strip_accents = 'unicode',
max_features = 1000)
feature_matrix = vectorizer.fit_transform(df_data_preprocessed_review.review_text)
feature_matrix
spare_matrix_file = os.path.join(YELP_DATA_SPARSE_MATRIX_DIR, "bagWords"+ data_subset)
save_sparse_csr(spare_matrix_file, feature_matrix)
test = load_sparse_csr(spare_matrix_file + ".npz")
print np.array_equal(feature_matrix.data, test.data)
print np.array_equal(feature_matrix.indices, test.indices)
print np.array_equal(feature_matrix.indptr, test.indptr)
print np.array_equal(feature_matrix.shape, test.shape)
"""
Explanation: Generating bag of words
End of explanation
"""
|
amitkaps/hackermath | Module_1d_linear_regression_gradient.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (9, 6)
pop = pd.read_csv('data/cars_small.csv')
pop.head()
"""
Explanation: Linear Regression (Gradient Descent)
So far we have looked at the direct matrix method for solving the $Ax = b$ problem. But most machine learning problems cannot be solved directly, so the alternative is to define a cost function to be minimised and to use a gradient descent approach to minimise it.
End of explanation
"""
from sklearn import linear_model
from sklearn import metrics
X = pop['kmpl'].values.reshape(-1,1)
y = pop['price']
model = linear_model.LinearRegression(fit_intercept=True)
model.fit(X,y)
model.coef_
beta0 = round(model.intercept_)
beta1 = round(model.coef_[0])
print("b0 = ", beta0, "b1 =", beta1)
"""
Explanation: Solving using Sklearn
End of explanation
"""
x = np.arange(min(pop.kmpl),max(pop.kmpl),1)
plt.xlabel('kmpl')
plt.ylabel('price')
y_p = beta0 + beta1 * x
plt.plot(x, y_p, '-')
plt.scatter(pop.kmpl, pop.price, s = 150, alpha = 0.5 )
"""
Explanation: Plotting the Regression Line
$$ price = 1158 - 36 * kmpl ~~~~ \textit{(population = 42)}$$
End of explanation
"""
# Solving using Ax = b approach
n = pop.shape[0]
x0 = np.ones(n)
x1 = pop.kmpl
X = np.c_[x0, x1]
X = np.asmatrix(X)
y = np.asmatrix(pop.price.values.reshape(-1,1))
b = np.asmatrix([[beta0],
[beta1]])
# Error calculation
def error_term(X,y,b,n):
M = (y - X*b)
error = M.T*M / n
return error[0,0]
round(error_term(X,y,b,n))
"""
Explanation: Calculating the Error term
The error term as we saw is defined as:
$$ E(\beta)= \frac {1}{n} {||y - X\beta||}^2 $$
End of explanation
"""
# Calculate a range of values for b0 and b1
b0_min, b0_max = -500, 2000
b1_min, b1_max = -100, 100
bb0, bb1 = np.meshgrid(np.arange(b0_min, b0_max, (b0_max - b0_min)/100),
np.arange(b1_min, b1_max, (b1_max - b1_min)/100))
# Calculate a mesh of values for b0 and b1 and error term for each
bb = np.c_[bb0.ravel(), bb1.ravel()]
errors = [error_term(X,y,np.asmatrix(i).T,n) for i in bb]
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
trisurf = ax.plot_trisurf(bb[:,0], bb[:,1], errors, cmap=plt.cm.viridis, linewidth=0)
ax.view_init(azim=20)
fig.colorbar(trisurf, shrink=0.3, aspect=5)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel("Error")
"""
Explanation: Shape of the error term
Let's plot the error term and see that the error surface is CONVEX, with a single absolute minimum point.
End of explanation
"""
# Get the errors
Z = np.asmatrix(errors).reshape(bb0.shape)
# Get the contour
e_min, e_max = np.min(errors), np.max(errors)
levels = np.logspace(0, np.log10(e_max), 40)
# Plot the contour
fig = plt.figure()
cs = plt.contourf(bb0, bb1, Z, cmap=plt.cm.viridis, levels = levels, linewidth=0.3, alpha = 0.5)
fig.colorbar(cs, shrink=0.5, aspect=5)
plt.xlabel('Intercept')
plt.ylabel('Slope')
"""
Explanation: Let's plot the error term as a 2D contour plot to make it clearer.
End of explanation
"""
b_initial = np.asmatrix([[500],
[-100]])
def gradient(X,y,b,n):
g = (2/n)*X.T*(X*b - y)
return g
error_term(X,y,b_initial,n)
gradient(X,y,b_initial,n)
b_next = b_initial - 0.01 * gradient(X,y,b_initial,n)
b_next
"""
Explanation: Gradient Descent
In our linear regression problem, there is only one minimum. Our error surface is convex. So we can start from one point on the error surface and gradually move down in the error surface in the direction of the minimum.
"At a theoretical level, gradient descent is an algorithm that minimizes the cost functions. Given a cost function defined by a set of parameters, gradient descent starts with an initial set of parameter values and iteratively moves toward a set of parameter values that minimize the cost function. This iterative minimization is achieved using calculus, taking steps in the negative direction of the function gradient."
So the gradient in linear regression is:
$$ \nabla E(\beta)= \frac {2}{n} X^T(X\beta−y) $$
Let's start at an arbitrary point for the parameters (here $(500, -100)$) and calculate the gradient
End of explanation
"""
def gradient_descent (eta, epochs, X, y, n):
# Set Initial Values
b = np.asmatrix([[-250],[-50]])
e = error_term(X,y,b,n)
b0s, b1s, errors = [], [], []
b0s.append(b.item(0))
b1s.append(b.item(1))
errors.append(e)
# Run the calculation for those many epochs
for i in range(epochs):
g = gradient(X,y,b,n)
b = b - eta * g
e = error_term(X,y,b,n)
b0s.append(b.item(0))
b1s.append(b.item(1))
errors.append(e)
print('error =', round(errors[-1]), ' b0 =', round(b0s[-1]), 'b1 =', round(b1s[-1]))
return errors, b0s, b1s
"""
Explanation: Learning Rate and Epochs
Now we need to update our parameters in the direction opposite to the gradient, and keep repeating the process.
$$ \beta_{i+1} = \beta_{i} - \eta * \nabla E(\beta)$$
Learning rate ($\eta$) - how far we need to move towards the gradient in each step
Epoch - Number of times we want to execute this process
End of explanation
"""
errors, b0s, b1s = gradient_descent (0.001, 100, X, y, n)
"""
Explanation: Let's see the error with different learning rates and numbers of epochs
End of explanation
"""
def plot_gradient_descent(eta, epoch, gradient_func):
es, b0s, b1s = gradient_func(eta, epoch, X, y, n)
# Plot the intercept and coefficients
plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
#plt.tight_layout()
# Plot the contour
cs = plt.contourf(bb0, bb1, Z, cmap=plt.cm.viridis, levels = levels,
linewidth=0.3, alpha = 0.5)
# Plot the intercept and coefficients
plt.plot(b0s, b1s, '-b', linewidth = 2)
plt.xlim([-500,2000])
plt.ylim([-100,100])
plt.xlabel('Intercept')
plt.ylabel('Slope')
# Plot the error rates
plt.subplot(1, 2, 2)
plt.plot(np.log10(es))
plt.ylim([3,10])
plt.xlabel('Epochs')
plt.ylabel('log(Error)')
plot_gradient_descent(0.0005, 15, gradient_descent)
"""
Explanation: Exercise
Try with learning rate = 0.001, Epochs = 1000
Try with learning rate = 0.02, Epochs = 1000
Try with learning rate = 0.001, Epochs = 50000
Plotting Epochs and Learning Rate
End of explanation
"""
def gradient_descent_line (eta, epochs, X, y, n):
gamma = 0.5
# Set Initial Values
b = np.asmatrix([[-250],[-50]])
es = error_term(X,y,b,n)
b0s, b1s, errors = [], [], []
b0s.append(b.item(0))
b1s.append(b.item(1))
errors.append(es)
# Run the calculation for those many epochs
for i in range(epochs):
es_old = error_term(X,y,b,n)
g = gradient(X,y,b,n)
b = b - eta * g
es = error_term(X,y,b,n)
#print(e,e_old)
if es > es_old - eta/2*(g.T*g):
eta = eta * gamma
b0s.append(b.item(0))
b1s.append(b.item(1))
errors.append(es)
print('error =', round(errors[-1]), ' b0 =', round(b0s[-1]), 'b1 =', round(b1s[-1]))
return errors, b0s, b1s
"""
Explanation: Exercise
Plot with learning rate = 0.001, Epochs = 1000
Plot with learning rate = 0.005, Epochs = 10000
Plot with learning rate = 0.001, Epochs = 50000
Challenges with Simple Gradient Descent
Convexity
In our linear regression problem, there was only one minimum. Our error surface was convex. Regardless of where we started, we would eventually arrive at the absolute minimum. In general, this need not be the case. It’s possible to have a problem with local minima that a gradient search can get stuck in. There are several approaches to mitigate this (e.g., stochastic gradient search).
Performance
We used vanilla gradient descent with a learning rate of 0.001 in the above example, and ran it for 50000 iterations. There are approaches, such as line search, that can reduce the number of iterations required and also avoid the overshooting problem caused by a large learning rate.
Convergence
We didn’t talk about how to determine when the search finds a solution. This is typically done by looking for small changes in error iteration-to-iteration (e.g., where the gradient is near zero).
Line Search Gradient Descent
To avoid the problem of setting the learning rate and overshooting, we can adaptively choose the step size
$$ \beta_{i+1} = \beta_{i} - \eta * \nabla E(\beta)$$
First, fix a parameter $ 0 < \gamma < 1$, then start with $\eta = 1$, and while
$$ E(\beta_{i+1}) > E(\beta_i) - \frac{\eta}{2} * {||\nabla E(\beta_i)||}^2 $$
then
$$ \eta = \gamma * \eta $$
Typically, we take $\gamma \in [0.1, 0.8]$
End of explanation
"""
errors, b0s, b1s = gradient_descent_line (1, 50000, X, y, n)
errors, b0s, b1s = gradient_descent (0.0005, 50000, X, y, n)
"""
Explanation: Avoid Overshooting
Let's see the performance difference between Simple Gradient Descent and Line Search Gradient Descent
End of explanation
"""
plot_gradient_descent(1, 10000, gradient_descent_line)
"""
Explanation: Plotting the Line Search Gradient Descent
End of explanation
"""
|
bbfamily/abu | abupy_lecture/21-A股UMP决策(ABU量化使用文档).ipynb | gpl-3.0 | # 基础库导入
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import ipywidgets
%matplotlib inline
import os
import sys
# Use insert(0, ...) so only the GitHub checkout is used, avoiding version mismatches with a pip-installed abupy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# Use the sandbox data so the data environment matches the book
abupy.env.enable_example_env_ipython()
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak
from abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, ABuProgress, ABuSymbolPd
from abupy import EMarketTargetType, EDataCacheType, EMarketSourceType, EMarketDataFetchMode, EStoreAbu, AbuUmpMainMul
from abupy import AbuUmpMainDeg, AbuUmpMainJump, AbuUmpMainPrice, AbuUmpMainWave, feature, AbuFeatureDegExtend
from abupy import AbuUmpEdgeDeg, AbuUmpEdgePrice, AbuUmpEdgeWave, AbuUmpEdgeFull, AbuUmpEdgeMul, AbuUmpEegeDegExtend
from abupy import AbuUmpMainDegExtend, ump, Parallel, delayed, AbuMulPidProgress
# Disable the sandbox data
abupy.env.disable_example_env_ipython()
"""
Explanation: ABU Quantitative System Documentation
<center>
        <img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>Section 21: UMP Decisions for A-Shares</b></font>
</center>
Author: Abu
Copyright Abu Quant. Reproduction without permission is prohibited.
abu quant system GitHub repository (stars welcome)
This section's ipython notebook
In the previous section we split the A-share market symbols into training and test sets and backtested each split separately; this section demonstrates A-share UMP main-umpire and edge-umpire decisions.
First, import the abupy modules used in this section:
End of explanation
"""
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
abu_result_tuple = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='train_cn')
abu_result_tuple_test = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_cn')
ABuProgress.clear_output()
print('Training set results:')
metrics_train = AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=True ,only_info=True)
print('Test set results:')
metrics_test = AbuMetricsBase.show_general(*abu_result_tuple_test, returns_cmp=True, only_info=True)
"""
Explanation: Next, load the training-set and test-set backtest data stored in the previous section, as shown below:
End of explanation
"""
# The market must be set globally to the A-share market; ump saves/loads umpires according to the market type
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
ump_deg=None
ump_mul=None
ump_price=None
ump_main_deg_extend=None
# Train the main umpires with the training-set trade data
orders_pd_train_cn = abu_result_tuple.orders_pd
def train_main_ump():
print('AbuUmpMainDeg begin...')
AbuUmpMainDeg.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)
print('AbuUmpMainPrice begin...')
AbuUmpMainPrice.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)
print('AbuUmpMainMul begin...')
AbuUmpMainMul.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)
print('AbuUmpMainDegExtend begin...')
AbuUmpMainDegExtend.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)
    # Still call load_main_ump, to avoid large memory copies in the multiprocessing below
load_main_ump()
def load_main_ump():
global ump_deg, ump_mul, ump_price, ump_main_deg_extend
ump_deg = AbuUmpMainDeg(predict=True)
ump_mul = AbuUmpMainMul(predict=True)
ump_price = AbuUmpMainPrice(predict=True)
ump_main_deg_extend = AbuUmpMainDegExtend(predict=True)
print('load main ump complete!')
def select(select):
if select == 'train main ump':
train_main_ump()
else:
load_main_ump()
_ = ipywidgets.interact_manual(select, select=['train main ump', 'load main ump'])
"""
Explanation: 1. Training the main umpires on the A-share training set
Now we train the main umpires with the training-set trade data. The umpire combination uses two of abupy's built-in umpires, AbuUmpMainDeg and AbuUmpMainPrice, plus two external custom umpires, AbuUmpMainMul and AbuUmpMainDegExtend, written in 'Section 18: Custom Umpire Trading Decisions'.
On the first run select 'train main ump' and click run; if training has already been done, select 'load main ump' to load the trained main umpires directly:
End of explanation
"""
# Select the trades that already have a result (order_has_result)
order_has_result = abu_result_tuple_test.orders_pd[abu_result_tuple_test.orders_pd.result != 0]
"""
Explanation: 2. Verifying whether the A-share main umpires are competent
First, filter the test-set trades down to those that already have a trade result, as follows:
End of explanation
"""
order_has_result.filter(regex='^buy(_deg_|_price_|_wave_|_jump)').head()
"""
Explanation: order_has_result的交易单中记录了所买入时刻的交易特征,如下所示:
End of explanation
"""
def apply_ml_features_ump(order, predicter, progress, need_hit_cnt):
if not isinstance(order.ml_features, dict):
import ast
# 低版本pandas dict对象取出来会成为str
ml_features = ast.literal_eval(order.ml_features)
else:
ml_features = order.ml_features
progress.show()
    # Pass the buy-time features from the order to the main-umpire predictor and let each main umpire decide whether to block
return predicter.predict_kwargs(need_hit_cnt=need_hit_cnt, **ml_features)
def pararllel_func(ump_object, ump_name):
with AbuMulPidProgress(len(order_has_result), '{} complete'.format(ump_name)) as progress:
        # Start the multiprocess progress bar and apply over order_has_result
ump_result = order_has_result.apply(apply_ml_features_ump, axis=1, args=(ump_object, progress, 2,))
return ump_name, ump_result
if sys.version_info > (3, 4, 0):
    # On Python >= 3.4, handle the 4 main umpires in parallel, one process per umpire for the blocking decisions
parallel = Parallel(
n_jobs=4, verbose=0, pre_dispatch='2*n_jobs')
out = parallel(delayed(pararllel_func)(ump_object, ump_name)
for ump_object, ump_name in zip([ump_deg, ump_mul, ump_price, ump_main_deg_extend],
['ump_deg', 'ump_mul', 'ump_price', 'ump_main_deg_extend']))
else:
    # Below 3.4, pickling the ump's inner class fails in subprocesses, so for now process them one by one in a single process
out = [pararllel_func(ump_object, ump_name) for ump_object, ump_name in zip([ump_deg, ump_mul, ump_price, ump_main_deg_extend],
['ump_deg', 'ump_mul', 'ump_price', 'ump_main_deg_extend'])]
# Aggregate the blocking decisions from each process
for sub_out in out:
order_has_result[sub_out[0]] = sub_out[1]
"""
Explanation: We can iterate over the orders one by one, passing each order's buy-time features to the main-umpire predictors so that every main umpire decides whether to block. This lets us measure each main umpire's blocking accuracy as well as the overall blocking rate, as shown below:
Notes:
The code below demonstrates abupy's wrapper around joblib for multiprocess scheduling, as well as abupy's multiprocess progress bar
Below Python 3.4, pickling the ump's inner class fails in subprocesses, so for now the umpires are processed one by one in a single process
End of explanation
"""
block_pd = order_has_result.filter(regex='^ump_*')
# Sum the decisions of all main umpires
block_pd['sum_bk'] = block_pd.sum(axis=1)
block_pd['result'] = order_has_result['result']
# Any vote of 1 results in a block
block_pd = block_pd[block_pd.sum_bk > 0]
print('Overall blocking accuracy of the four umpires: {:.2f}%'.format(block_pd[block_pd.result == -1].result.count() / block_pd.result.count() * 100))
block_pd.tail()
"""
Explanation: Summing the decisions of all main umpires, any trade with a vote of 1 is blocked; the overall blocking accuracy of the four umpires:
End of explanation
"""
from sklearn import metrics
def sub_ump_show(block_name):
sub_block_pd = block_pd[(block_pd[block_name] == 1)]
    # A losing trade means the block was correct: -1->1, 1->0
sub_block_pd.result = np.where(sub_block_pd.result == -1, 1, 0)
return metrics.accuracy_score(sub_block_pd[block_name], sub_block_pd.result) * 100, sub_block_pd.result.count()
print('Degree umpire blocking accuracy {:.2f}%, trades blocked {}'.format(*sub_ump_show('ump_deg')))
print('Extended-degree umpire blocking accuracy {:.2f}%, trades blocked {}'.format(*sub_ump_show('ump_main_deg_extend')))
print('Mul (mixed) umpire blocking accuracy {:.2f}%, trades blocked {}'.format(*sub_ump_show('ump_mul')))
print('Price umpire blocking accuracy {:.2f}%, trades blocked {}'.format(*sub_ump_show('ump_price')))
"""
Explanation: Next, compute each main umpire's blocking accuracy:
End of explanation
"""
# The market must be set globally to the A-share market; ump saves/loads umpires according to the market type
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
print('AbuUmpEdgeDeg begin...')
AbuUmpEdgeDeg.ump_edge_clf_dump(orders_pd_train_cn)
edge_deg = AbuUmpEdgeDeg(predict=True)
print('AbuUmpEdgePrice begin...')
AbuUmpEdgePrice.ump_edge_clf_dump(orders_pd_train_cn)
edge_price = AbuUmpEdgePrice(predict=True)
print('AbuUmpEdgeMul begin...')
AbuUmpEdgeMul.ump_edge_clf_dump(orders_pd_train_cn)
edge_mul = AbuUmpEdgeMul(predict=True)
print('AbuUmpEegeDegExtend begin...')
AbuUmpEegeDegExtend.ump_edge_clf_dump(orders_pd_train_cn)
edge_deg_extend = AbuUmpEegeDegExtend(predict=True)
print('fit edge complete!')
"""
Explanation: 3. Training the edge umpires on the A-share training set
Now we train the edge umpires with the training-set trade data. The umpire combination again uses two of abupy's built-in umpires, AbuUmpEdgeDeg and AbuUmpEdgePrice, plus two external custom umpires, AbuUmpEdgeMul and AbuUmpEegeDegExtend, written in 'Section 18: Custom Umpire Trading Decisions', as shown below
Note: because of how edge umpires work, their training is very fast, so we train directly here instead of loading umpire decision data from disk
End of explanation
"""
def apply_ml_features_edge(order, predicter, progress):
if not isinstance(order.ml_features, dict):
import ast
        # In older pandas versions a stored dict comes back as a str
ml_features = ast.literal_eval(order.ml_features)
else:
ml_features = order.ml_features
    # The edge umpire makes its ruling
progress.show()
    # Pass the buy-time features from the order to the edge-umpire predictor and let each edge umpire decide whether to block
edge = predicter.predict(**ml_features)
return edge.value
def edge_pararllel_func(edge, edge_name):
with AbuMulPidProgress(len(order_has_result), '{} complete'.format(edge_name)) as progress:
        # Start the multiprocess progress bar and apply over order_has_result
edge_result = order_has_result.apply(apply_ml_features_edge, axis=1, args=(edge, progress,))
return edge_name, edge_result
if sys.version_info > (3, 4, 0):
    # On Python >= 3.4, handle the 4 edge umpires' decisions in parallel, one process per umpire for the blocking decisions
parallel = Parallel(
n_jobs=4, verbose=0, pre_dispatch='2*n_jobs')
out = parallel(delayed(edge_pararllel_func)(edge, edge_name)
for edge, edge_name in zip([edge_deg, edge_price, edge_mul, edge_deg_extend],
['edge_deg', 'edge_price', 'edge_mul', 'edge_deg_extend']))
else:
    # Below 3.4, pickling the ump's inner class fails in subprocesses, so for now process them one by one in a single process
out = [edge_pararllel_func(edge, edge_name) for edge, edge_name in zip([edge_deg, edge_price, edge_mul, edge_deg_extend],
['edge_deg', 'edge_price', 'edge_mul', 'edge_deg_extend'])]
# Aggregate the blocking decisions from each process
for sub_out in out:
order_has_result[sub_out[0]] = sub_out[1]
"""
Explanation: 4. Verifying whether the A-share edge umpires are competent
In the same way as for the main umpires, iterate over the orders one by one, passing each order's buy-time features to the edge-umpire predictors so that every edge umpire decides whether to block, then measure each edge umpire's blocking accuracy and the overall blocking rate, as shown below:
Note: the code below demonstrates abupy's wrapper around joblib for multiprocess scheduling, as well as abupy's multiprocess progress bar
End of explanation
"""
block_pd = order_has_result.filter(regex='^edge_*')
"""
In the results returned by predict, 1 stands for win top,
but we only care about loss top, so keep only -1 and convert 1 to 0.
"""
block_pd['edge_block'] = \
np.where(np.min(block_pd, axis=1) == -1, -1, 0)
# 拿出真实的交易结果
block_pd['result'] = order_has_result['result']
# Keep the -1 results, i.e. those judged loss top
block_pd = block_pd[block_pd.edge_block == -1]
print('Overall blocking accuracy of the four umpires: {:.2f}%'.format(block_pd[block_pd.result == -1].result.count() /
                                                                      block_pd.result.count() * 100))
print('Total trades blocked by the four edge umpires: {}, blocking rate {:.2f}%'.format(
    block_pd.shape[0],
    block_pd.shape[0] / order_has_result.shape[0] * 100))
block_pd.head()
"""
Explanation: Tally the decisions of all edge umpires; any trade with a -1 vote (judged loss top) is taken together with the actual trade result to form a result set, from which we compute the four edge umpires' overall blocking accuracy and blocking rate, as shown below:
End of explanation
"""
from sklearn import metrics
def sub_edge_show(edge_name):
sub_edge_block_pd = order_has_result[(order_has_result[edge_name] == -1)]
return metrics.accuracy_score(sub_edge_block_pd[edge_name], sub_edge_block_pd.result) * 100, sub_edge_block_pd.shape[0]
print('Degree edge umpire blocking accuracy {0:.2f}%, trades blocked {1:}'.format(*sub_edge_show('edge_deg')))
print('Mul (mixed) edge umpire blocking accuracy {0:.2f}%, trades blocked {1:}'.format(*sub_edge_show('edge_mul')))
print('Price edge umpire blocking accuracy {0:.2f}%, trades blocked {1:}'.format(*sub_edge_show('edge_price')))
print('Extended-degree edge umpire blocking accuracy {0:.2f}%, trades blocked {1:}'.format(*sub_edge_show('edge_deg_extend')))
"""
Explanation: Next, compute each edge umpire's blocking accuracy:
End of explanation
"""
# Enable the built-in main umpires
abupy.env.g_enable_ump_main_deg_block = True
abupy.env.g_enable_ump_main_price_block = True
# Enable the built-in edge umpires
abupy.env.g_enable_ump_edge_deg_block = True
abupy.env.g_enable_ump_edge_price_block = True
# Feature generation must be enabled during backtesting, since the enabled umpires need generated features as input
abupy.env.g_enable_ml_feature = True
# Use the previously split test-set data for the backtest
abupy.env.g_enable_last_split_test = True
abupy.beta.atr.g_atr_pos_base = 0.05
"""
Explanation: 5. Enabling main-umpire blocking mode and edge-umpire blocking mode in the abu system
Enabling the built-in umpires is simple: it only takes the relevant settings in env. Below we enable two built-in main umpires and two built-in edge umpires:
End of explanation
"""
feature.clear_user_feature()
# AbuFeatureDegExtend (10/30/50/90/120-day trend-fit angle features) serves as a new viewpoint to record the game during the backtest
feature.append_user_feature(AbuFeatureDegExtend)
# Turn on the switch for user-defined umpires
ump.manager.g_enable_user_ump = True
# Clear first
ump.manager.clear_user_ump()
# Add the new umpire class AbuUmpEegeDegExtend to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpEegeDegExtend)
# Add the new umpire class AbuUmpMainDegExtend to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpMainDegExtend)
"""
Explanation: Enabling user-defined umpires was also demonstrated in 'Section 18: Custom Umpire Trading Decisions'; it only takes ump.manager.append_user_ump
Note that AbuFeatureDegExtend, with its 10, 30, 50, 90 and 120-day trend-fit angle features, must also be registered as a new viewpoint for recording the game during backtesting (i.e. recording backtest features), because the umpires include AbuUmpEegeDegExtend and AbuUmpMainDegExtend, which need backtest orders that carry those angle features
The code is shown below:
End of explanation
"""
# Initial capital: 5,000,000
read_cash = 5000000
# Buy factors: keep using the upward-breakout factor
buy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
# Sell factors: keep using the factors from the previous section
sell_factors = [
{'stop_loss_n': 1.0, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
"""
Explanation: The buy factors, sell factors and other settings stay the same, as shown below:
End of explanation
"""
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
abu_result_tuple_test_ump = None
def run_loop_back_ump():
global abu_result_tuple_test_ump
abu_result_tuple_test_ump, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
choice_symbols=None,
start='2012-08-08', end='2017-08-08')
    # Save the run results locally for later backtest analysis; the code to store the backtest result data is shown below
abu.store_abu_result_tuple(abu_result_tuple_test_ump, n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_ump_cn')
ABuProgress.clear_output()
def run_load_ump():
global abu_result_tuple_test_ump
abu_result_tuple_test_ump = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_ump_cn')
def select_ump(select):
if select == 'run loop back ump':
run_loop_back_ump()
else:
run_load_ump()
_ = ipywidgets.interact_manual(select_ump, select=['run loop back ump', 'load test ump data'])
"""
Explanation: With the umpire combination enabled, the backtest can begin; the workflow is the same as before:
Start the backtest below: on the first run select 'run loop back ump' and click run; if the backtest has already been run, select 'load test ump data' to read directly from the cached data:
End of explanation
"""
AbuMetricsBase.show_general(*abu_result_tuple_test_ump, returns_cmp=True, only_info=True)
AbuMetricsBase.show_general(*abu_result_tuple_test, returns_cmp=True, only_info=True)
"""
Explanation: Below we compare the A-share test-set trades with main-umpire and edge-umpire blocking enabled versus disabled. The umpires blocked nearly half of the trades, and both the win rate and the profit/loss ratio improved substantially:
End of explanation
"""
|
seanjh/DSRecommendationSystems | task2.ipynb | apache-2.0 | global_mean = ratings_train.map(lambda r: (r[2])).mean()
global_mean
"""
Explanation: Calculate the global mean $\mu$ over all ratings
End of explanation
"""
#convert training data to dataframe with attribute
df = sqlContext.createDataFrame(ratings_train, ['userId', 'movieId', 'ratings'])
#sort the data by movie
df_orderByMovie = df.orderBy(df.movieId)
#group the movie and count each movie
movie_count = df_orderByMovie.groupBy(df_orderByMovie.movieId).count()
#calculate the sum of the ratings of each movie
sum_byMovie = df_orderByMovie.groupBy(['movieId']).sum()
#drop some unrelated column
drop_column1 = sum_byMovie.drop(sum_byMovie[1])
final_drop = drop_column1.drop(drop_column1[1])
#join the sum of count and sum of rating for each movie
movie_sorted = movie_count.join(final_drop, "movieId")
#sorted the dataset by each movie
new_movie_sorted = movie_sorted.orderBy(movie_sorted.movieId)
#calculate item specific bias
item_bias = new_movie_sorted.map(lambda r: [r[0], (r[2] - r[1]*global_mean)/(25+r[1])])
new_item_bias = sqlContext.createDataFrame(item_bias, ['movieId', 'item_bias'])
"""
Explanation: Calculate the item-specific bias. Following the paper we referenced, each item $i$'s bias is the sum of the differences between every rating of that item and the global mean, divided by the sum of a regularization parameter and the number of ratings:
$$ b_i = \frac{\sum_{u \in R(i)} (r_{ui} - \mu)}{\lambda_1 + |R(i)|} $$
End of explanation
"""
#order the training set by user
df_orderByUser = df.orderBy(df.userId)
#join the item bias dataset to with the same movieId
contain_itemBias = df_orderByUser.join(new_item_bias, "movieId")
#sorted the dataset by user
sorted_byUser = contain_itemBias.orderBy(['userId'])
# Calculate the numerator of the user-specific bias
subtraction = sorted_byUser.map(lambda r: [r[1], r[2] - global_mean - r[3]])
user_bias_part1 = sqlContext.createDataFrame(subtraction, ['userId', 'subtraction'])
sum_byUser = user_bias_part1.groupBy(['userId']).sum()
#count the user
sum_UserCollect = user_bias_part1.groupBy(['userId']).count()
#order the data set by user
ordered_sum_UserCollect = sum_UserCollect.orderBy(sum_UserCollect.userId)
drop_column2 = sum_byUser.drop(sum_byUser[1])
final_drop2 = drop_column2.orderBy(drop_column2.userId)
user_bias_table = final_drop2.join(ordered_sum_UserCollect, 'userId')
ordered_userBiaTable = user_bias_table.orderBy(user_bias_table.userId)
user_bias = ordered_userBiaTable.map(lambda r: [r[0], r[1]/(10+r[2])])
user_specific_bias = sqlContext.createDataFrame(user_bias, ['userId', 'user_bias'])
merge1 = df_orderByUser.join(user_specific_bias, 'userId')
merge2 = merge1.join(new_item_bias, 'movieId')
new_ratings_train = merge2.map(lambda r: [r[0], r[1], r[2] - r[3] - r[4]])
temp = sqlContext.createDataFrame(new_ratings_train, ['movieId', 'userId', 'new_ratings'])
final_new_ratings_train = temp.orderBy(temp.userId)
final_new_ratings_train.take(10)
#now, we perform the same procedure as task1
#first, we sort the data by timestamp.
new_ratings_byTime = final_new_ratings_train.join(df, ['userId', 'movieId'])
#example of dataset
new_ratings_byTime.take(20)
new_ratings_byTime = new_ratings_byTime.drop(new_ratings_byTime[3])
def prepare_validation(validation):
return validation.map(lambda p: (p[0], p[1]))
import math
# Evaluate the model on training data
def train_evaluate_als(train, validation, rank, iterations_num, lambda_val):
model = ALS.train(train, rank, iterations_num, lambda_val)
predictions = model.predictAll(prepare_validation(validation)).map(lambda r: ((r[0], r[1]), r[2]))
ratesAndPreds = validation.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
MSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1])**2).mean()
RMSE = math.sqrt(MSE)
return MSE, RMSE
ranks = [10, 20, 30, 40, 50]
lambda_values = [0.01,0.1,1.0,10.0]
ITERATIONS = 10
def report_mse_results(rank, lambda_value, mse, rmse):
print("Rank=%d, Lambda=%0.2f, MSE=%s, RMSE=%s" % (rank, lambda_value, mse, rmse))
def evaluate_parameters(train, validation, ranks, lambda_values):
    for r in ranks:
        for l in lambda_values:
            # Use the passed-in train argument rather than the global DataFrame
            mse, rmse = train_evaluate_als(train, validation, r, ITERATIONS, l)
            report_mse_results(r, l, mse, rmse)
evaluate_parameters(new_ratings_byTime.rdd, ratings_validation, ranks, lambda_values)
"""
Explanation: Calculate the user-specific bias. Analogously, each user $u$'s bias is the sum of the residuals left after removing the global mean and the item bias, damped by a regularization parameter (10 in the code above):
$$ b_u = \frac{\sum_{i \in R(u)} (r_{ui} - \mu - b_i)}{\lambda_2 + |R(u)|} $$
End of explanation
"""
|
jaredleekatzman/DeepSurv | notebooks/DeepSurv Example.ipynb | mit | train_dataset_fp = './example_data.csv'
train_df = pd.read_csv(train_dataset_fp)
train_df.head()
"""
Explanation: Read in dataset
First, I read in the dataset and print the first five elements to get a sense of what the dataset looks like
End of explanation
"""
# event_col is the header in the df that represents the 'Event / Status' indicator
# time_col is the header in the df that represents the event time
def dataframe_to_deepsurv_ds(df, event_col = 'Event', time_col = 'Time'):
# Extract the event and time columns as numpy arrays
e = df[event_col].values.astype(np.int32)
t = df[time_col].values.astype(np.float32)
# Extract the patient's covariates as a numpy array
x_df = df.drop([event_col, time_col], axis = 1)
x = x_df.values.astype(np.float32)
# Return the deep surv dataframe
return {
'x' : x,
'e' : e,
't' : t
}
# If the headers of the csv change, you can replace the values of
# 'event_col' and 'time_col' with the names of the new headers
# You can also use this function on your training dataset, validation dataset, and testing dataset
train_data = dataframe_to_deepsurv_ds(train_df, event_col = 'Event', time_col= 'Time')
"""
Explanation: Transform the dataset to "DeepSurv" format
DeepSurv expects a dataset to be in the form:
{
'x': numpy array of float32
'e': numpy array of int32
't': numpy array of float32
'hr': (optional) numpy array of float32
}
You are providing me a csv, which I read in as a pandas dataframe. Then I convert the pandas dataframe into the DeepSurv dataset format above.
End of explanation
"""
hyperparams = {
'L2_reg': 10.0,
'batch_norm': True,
'dropout': 0.4,
'hidden_layers_sizes': [25, 25],
'learning_rate': 1e-05,
'lr_decay': 0.001,
'momentum': 0.9,
'n_in': train_data['x'].shape[1],
'standardize': True
}
"""
Explanation: Now once you have your dataset all formatted, define you hyper_parameters as a Python dictionary.
I'll provide you with some example hyper-parameters, but you should replace the values once you tune them to your specific dataset
End of explanation
"""
# Create an instance of DeepSurv using the hyperparams defined above
model = deep_surv.DeepSurv(**hyperparams)
# DeepSurv can now leverage TensorBoard to monitor training and validation
# This section of code is optional. If you don't want to use the tensorboard logger
# Uncomment the below line, and comment out the other three lines:
# logger = None
experiment_name = 'test_experiment_sebastian'
logdir = './logs/tensorboard/'
logger = TensorboardLogger(experiment_name, logdir=logdir)
# Now we train the model
update_fn=lasagne.updates.nesterov_momentum # The type of optimizer to use. \
# Check out http://lasagne.readthedocs.io/en/latest/modules/updates.html \
# for other optimizers to use
n_epochs = 2000
# If you have validation data, you can add it as the second parameter to the function
metrics = model.train(train_data, n_epochs=n_epochs, logger=logger, update_fn=update_fn)
"""
Explanation: Once you have prepared your dataset and defined your hyper-parameters, it's time to train DeepSurv!
End of explanation
"""
# Print the final metrics
print('Train C-Index:', metrics['c-index'][-1])
# print('Valid C-Index: ',metrics['valid_c-index'][-1])
# Plot the training / validation curves
viz.plot_log(metrics)
"""
Explanation: There are two different ways to visualize how the model trained:
TensorBoard (install the tensorboard package), which provides realtime metrics. Run this command in a shell:
tensorboard --logdir './logs/tensorboard'
Visualize the training curves post-training (below)
End of explanation
"""
|
fggp/ctcsound | cookbook/drafts/plot_audio_file.ipynb | lgpl-2.1 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import soundfile as sf
"""
Explanation: Plotting an Audio File
The soundfile library is used to convert the audio data to a NumPy array. It is based on libsndfile, which is also used by Csound. Other Python modules like wave have problems reading 24-bit files and are more complicated to use.
Basic plotting
First load the dependencies.
End of explanation
"""
file = '../examples/fox.wav'
amps, sr = sf.read(file)
time = np.linspace(0, len(amps)/sr, num=len(amps))
plt.plot(time,amps)
plt.show()
"""
Explanation: The simplest usage reads the file and computes the time values for the x-axis (otherwise that axis would show samples as units).
End of explanation
"""
file = '../examples/fox.wav'
amps, sr = sf.read(file)
time = np.linspace(0, len(amps)/sr, num=len(amps))
fig,ax = plt.subplots(figsize=(15,4))
ax.hlines(0, 0, len(amps)/sr)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_xlim(0, len(amps)/sr)
ax.set_ylim(-1,1)
ax.plot(time,amps, color='black', lw=1)
plt.show()
"""
Explanation: Plotting adjustments
We follow some settings in iCsound here for the same example.
End of explanation
"""
def plotSF(file, skip=0, end=-1, figsize=(15,4)):
"""Plots a sound file via Matplotlib.
Requires matplotlib.pyplot as plt and
the python soundfile library as sf.
file = file name as string
    skip = start in seconds to read in file
end = end in seconds to read in file
figsize = tuple as in matplotlib.pyplot.plot"""
sr = sf.SoundFile(file).samplerate
start = round(skip*sr)
if end == -1:
stop = None
else:
stop = round(end*sr)
amps, sr = sf.read(file,start=start, stop=stop)
numframes = len(amps)
endsecs = skip + numframes/sr
time = np.linspace(skip, endsecs, numframes)
fig,ax = plt.subplots(figsize=figsize)
ax.hlines(0, skip, endsecs)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_xlim(skip, endsecs)
ax.set_ylim(-1,1)
ax.plot(time,amps, color='black', lw=1)
plt.show()
plotSF('../examples/fox.wav', skip=0.4, end=0.7, figsize=(10,6))
"""
Explanation: Wrap in a function
We add the options to show only a selection of the soundfile by specifying start and end in seconds,
End of explanation
"""
|
jayme-anchante/cv-bio | interview_tests/Teste BI em Python.ipynb | mit | # Links para as bases de dados do R:
mtcars_link = 'https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/mtcars.csv'
quakes_link = 'https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/datasets/quakes.csv'
cars_link = 'https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/datasets/cars.csv'
"""
Explanation: BI Test
Candidate: Jayme Anchante
Basic R questions:
End of explanation
"""
import pandas as pd
mtcars = pd.read_csv(mtcars_link)
mtcars.head()
mtcars.rename(columns = {'Unnamed: 0': 'name'}, inplace = True)
mtcars.head()
x = mtcars.mpg[mtcars.name.str[:4] == 'Merc'].mean()
x
"""
Explanation: 1. Using mtcars, compute the average miles per gallon for the Mercedes brand. Assign it to a variable x.
End of explanation
"""
mtcars[['mpg', 'wt']].corr()
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt; plt.style.use('ggplot')
mpg_wt = mtcars[['mpg', 'wt']]
joint = sns.jointplot('wt', 'mpg', data = mpg_wt, kind = 'reg', size = 12)
plt.subplots_adjust(top=0.95)
joint.fig.suptitle('Correlation between vehicle weight and fuel consumption', fontsize = 28)
plt.xlabel('weight (1000 lbs)', fontsize = 22)
plt.ylabel('consumption (miles/(US) gallon)', fontsize = 22);
"""
Explanation: 2. Test whether there is a correlation between the car's weight and its fuel consumption. Is there one? Why?
End of explanation
"""
quakes = pd.read_csv(quakes_link)
quakes.head()
quakes.rename(columns = {'Unnamed: 0': 'id'}, inplace = True)
print('A maior magnitude de um terremoto é', quakes['mag'].max(), 'na escala Richter!')
print('A magnitude média é de', round(quakes['mag'].mean(), 4), 'na escala Richter')
print('O desvio das magnitudes é de', round(quakes['mag'].std(), 4))
"""
Explanation: <font size="4"> There is a "strong" negative (inverse) linear correlation between vehicle weight and mileage: the heavier the car, the lower the mileage. The engine likely "demands" more fuel from heavier vehicles than from lighter ones in order to move. </font>
3. Using quakes, what is the largest earthquake magnitude? What is the mean magnitude? And the standard deviation of the magnitudes?
End of explanation
"""
cars = pd.read_csv(cars_link)
cars.tail()
del cars['Unnamed: 0']
cars['speed'].max()
joint = sns.jointplot('speed', 'dist', data = cars, kind = 'reg', size = 12)
plt.subplots_adjust(top=0.95)
joint.fig.suptitle('Correlação entre a velocidade e a distância de frenagem', fontsize = 28)
plt.xlabel('velocidade (mph)', fontsize = 22)
plt.ylabel('distância (ft)', fontsize = 22);
speed = cars['speed'].values.reshape(-1, 1)  # Series.reshape was removed in newer pandas
dist = cars['dist'].values.reshape(-1, 1)
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit(X = speed, y = dist)
reg.coef_
print('A distância de frenagem é de', reg.predict([[90]])[0][0], 'ft caso o carro esteja a 90 mph')  # predict expects a 2D array
"""
Explanation: 4. Using cars, what is the braking distance if the car is traveling at 90 miles per hour?
End of explanation
"""
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
# Create table
c.execute('''CREATE TABLE users
(id int, name text)''')
c.execute('''CREATE TABLE tasks
(id int, event text, id_resp int)''')
# Insert a row of data
c.execute('''INSERT INTO users VALUES
(1, 'Igor Sanchez'),
(2, 'Joao Junior'),
(3, 'Rodrigo Pinto'),
(4, 'Amandio Pereira'),
(5, 'Karoline Leal')''')
# Insert a row of data
c.execute('''INSERT INTO tasks VALUES
(1, 'send report', 3),
(2, 'drink coffee', 2),
(3, 'travel CWB', 3),
(4, 'call mkt', 6)''')
# Save (commit) the changes
conn.commit()
for row in c.execute("SELECT * from users"):
print(row)
for row in c.execute("SELECT * from tasks"):
print(row)
"""
Explanation: Basic SQL questions:
1. Given the tables below:
End of explanation
"""
for row in c.execute('''SELECT * FROM users
LEFT JOIN tasks
ON users.id = tasks.id_resp'''):
print(row)
"""
Explanation: What is the result of the query below?
SELECT * FROM users LEFT JOIN tasks ON users.id = tasks.id_resp;
End of explanation
"""
# Create table
c.execute('''CREATE TABLE firmas
(id int, periodo int, estado text, origem text, qtd_users int)''')
# Insert a row of data
c.execute('''INSERT INTO firmas VALUES
(3, 201705, 'PR', 'MGservico', 80),
(1, 201705, 'PR', 'MGservico', 100),
(2, 201705, 'PR', 'MGservico', 110),
(4, 201705, 'RS', 'MGcomercio', 50),
(5, 201706, 'RS', 'MGcomercio', 200),
(6, 201706, 'SP', 'Abertura', 250),
(7, 201706, 'SP', 'Abertura', 400),
(8, 201706, 'SP', 'Abertura', 310)''')
# Save (commit) the changes
conn.commit()
for row in c.execute("SELECT * from firmas"):
print(row)
"""
Explanation: <font size="4"> The query returns all rows from the left table (users) plus the matched records from the right table (tasks). The right-table columns are NULL (NA in R, None in Python) for the unmatched rows.</font>
2. Describe each type of JOIN:
• Left Join: the query returns all rows from the left table plus the matched records from the right table. The right-table columns are NULL (NA in R, None in Python) for the unmatched rows.
• Right Join: the query returns all rows from the right table plus the matched records from the left table. The left-table columns are NULL (NA in R, None in Python) for the unmatched rows.
• Inner Join: the query returns only the records whose values match in both tables.
• Full Join: the query returns all records that match in either the left or the right table, that is, all rows from both tables.
3. What is a table's primary key?
The primary key uniquely identifies each record in a table.
It must contain only unique values and cannot contain NULL values (NA in R, None in Python).
A table can have only one primary key, which may consist of one or multiple fields (columns).
4. Which of these functions are meant for data aggregation with GROUP BY?
LEN(), RIGHT(), SUM(), REPLACE(), COUNT(), CONCAT(), ABS()
The aggregation functions in this sample that are used with GROUP BY are SUM() and COUNT().
5. Given the table:
End of explanation
"""
for row in c.execute('''SELECT * FROM firmas
WHERE periodo = 201705 AND estado = "PR" AND qtd_users > 80 '''):
print(row)
"""
Explanation: a. Write the WHERE clause that returns the quantities for period 201705 and state PR when the quantities are greater than 80.
End of explanation
"""
c = conn.cursor()
c.execute("DROP TABLE users")
c.execute("DROP TABLE tasks")
# Create table
c.execute('''CREATE TABLE users
(id int, name text, status text)''')
c.execute('''CREATE TABLE tasks
(id int, event text, id_resp int, status text)''')
# Insert a row of data
c.execute('''INSERT INTO users VALUES
(1, 'Igor Sanchez', 'ativo'),
(2, 'Joao Junior', 'ativo'),
(3, 'Rodrigo Pinto', 'inativo'),
(4, 'Amandio Pereira', 'inativo'),
(5, 'Karoline Leal', 'ativo')''')
# Insert a row of data
c.execute('''INSERT INTO tasks VALUES
(1, 'send report', 3, 'null'),
(2, 'drink coffee', 2, 'undone'),
(3, 'travel CWB', 3, 'null'),
(4, 'call mkt', 6, 'done'),
(5, 'feed the badger', 2, 'undone'),
(4, 'buy a badger', 6, 'done')''')
# Save (commit) the changes
conn.commit()
for row in c.execute("SELECT * from users"):
print(row)
for row in c.execute("SELECT * FROM tasks"):
print(row)
"""
Explanation: b. Which id rows will be returned?
The rows whose id values are 1 and 2.
6. Given the tables below:
End of explanation
"""
for row in c.execute('''SELECT *, users.status AS funcionario_ativo FROM users
FULL OUTER JOIN tasks ON users.id = tasks.id_resp'''):
print(row)
"""
Explanation: a. Write a query containing the result of the two tables joined, renaming the status field of the users table to funcionario_ativo.
The query would be:
End of explanation
"""
for row in c.execute('''SELECT u.id, u.name, u.status AS funcionário_ativo, t.id, t.event, t.id_resp, t.status
FROM users u
LEFT JOIN tasks t ON u.id = t.id_resp
UNION ALL
SELECT u.id, u.name, u.status, t.id, t.event, t.id_resp, t.status
FROM tasks t
LEFT JOIN users u ON u.id = t.id_resp
WHERE u.status IS NULL'''):
print(row)
"""
Explanation: However, SQLite does not support RIGHT or FULL OUTER JOIN, so I had to "emulate" the FULL OUTER JOIN command using the UNION and LEFT JOIN clauses.
Source: http://www.sqlitetutorial.net/sqlite-full-outer-join/
End of explanation
"""
for row in c.execute('''SELECT users.name, tasks.event, CASE
WHEN users.status = "ativo" AND tasks.status = "done" THEN "sucesso"
WHEN users.status = "ativo" AND tasks.status = "undone" THEN "falha"
WHEN users.status = "inativo" AND tasks.status = "null" then "reatribuir"
END AS status_do_evento FROM tasks
LEFT JOIN users
ON users.id = tasks.id_resp'''):
print(row)
# We can also close the connection if we are done with it.
# Just be sure any changes have been committed or they will be lost.
conn.close()
"""
Explanation: b. Write another query that returns the events with the name of the person responsible. The result must not include the status fields of either table, but it must include a new status_do_evento field built as follows:
• if the employee's status is ativo and the event's status is done, mark it as sucesso
• if the employee's status is ativo and the event's status is undone, mark it as falha
• if the employee's status is inativo and the event's status is null, mark it as reatribuir
End of explanation
"""
|
fastai/course-v3 | nbs/dl2/11_train_imagenette.ipynb | apache-2.0 | path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
size = 128
tfms = [make_rgb, RandomResizedCrop(size, scale=(0.35,1)), np_to_float, PilRandomFlip()]
bs = 64
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
ll.valid.x.tfms = [make_rgb, CenterCrop(size), np_to_float]
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=8)
"""
Explanation: Imagenet(te) training
Jump_to lesson 12 video
End of explanation
"""
#export
def noop(x): return x
class Flatten(nn.Module):
def forward(self, x): return x.view(x.size(0), -1)
def conv(ni, nf, ks=3, stride=1, bias=False):
return nn.Conv2d(ni, nf, kernel_size=ks, stride=stride, padding=ks//2, bias=bias)
#export
act_fn = nn.ReLU(inplace=True)
def init_cnn(m):
if getattr(m, 'bias', None) is not None: nn.init.constant_(m.bias, 0)
if isinstance(m, (nn.Conv2d,nn.Linear)): nn.init.kaiming_normal_(m.weight)
for l in m.children(): init_cnn(l)
def conv_layer(ni, nf, ks=3, stride=1, zero_bn=False, act=True):
bn = nn.BatchNorm2d(nf)
nn.init.constant_(bn.weight, 0. if zero_bn else 1.)
layers = [conv(ni, nf, ks, stride=stride), bn]
if act: layers.append(act_fn)
return nn.Sequential(*layers)
#export
class ResBlock(nn.Module):
def __init__(self, expansion, ni, nh, stride=1):
super().__init__()
nf,ni = nh*expansion,ni*expansion
layers = [conv_layer(ni, nh, 3, stride=stride),
conv_layer(nh, nf, 3, zero_bn=True, act=False)
] if expansion == 1 else [
conv_layer(ni, nh, 1),
conv_layer(nh, nh, 3, stride=stride),
conv_layer(nh, nf, 1, zero_bn=True, act=False)
]
self.convs = nn.Sequential(*layers)
self.idconv = noop if ni==nf else conv_layer(ni, nf, 1, act=False)
self.pool = noop if stride==1 else nn.AvgPool2d(2, ceil_mode=True)
def forward(self, x): return act_fn(self.convs(x) + self.idconv(self.pool(x)))
#export
class XResNet(nn.Sequential):
@classmethod
def create(cls, expansion, layers, c_in=3, c_out=1000):
nfs = [c_in, (c_in+1)*8, 64, 64]
stem = [conv_layer(nfs[i], nfs[i+1], stride=2 if i==0 else 1)
for i in range(3)]
nfs = [64//expansion,64,128,256,512]
res_layers = [cls._make_layer(expansion, nfs[i], nfs[i+1],
n_blocks=l, stride=1 if i==0 else 2)
for i,l in enumerate(layers)]
res = cls(
*stem,
nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
*res_layers,
nn.AdaptiveAvgPool2d(1), Flatten(),
nn.Linear(nfs[-1]*expansion, c_out),
)
init_cnn(res)
return res
@staticmethod
def _make_layer(expansion, ni, nf, n_blocks, stride):
return nn.Sequential(
*[ResBlock(expansion, ni if i==0 else nf, nf, stride if i==0 else 1)
for i in range(n_blocks)])
#export
def xresnet18 (**kwargs): return XResNet.create(1, [2, 2, 2, 2], **kwargs)
def xresnet34 (**kwargs): return XResNet.create(1, [3, 4, 6, 3], **kwargs)
def xresnet50 (**kwargs): return XResNet.create(4, [3, 4, 6, 3], **kwargs)
def xresnet101(**kwargs): return XResNet.create(4, [3, 4, 23, 3], **kwargs)
def xresnet152(**kwargs): return XResNet.create(4, [3, 8, 36, 3], **kwargs)
"""
Explanation: XResNet
Jump_to lesson 12 video
End of explanation
"""
cbfs = [partial(AvgStatsCallback,accuracy), ProgressCallback, CudaCallback,
partial(BatchTransformXCallback, norm_imagenette),
# partial(MixUp, alpha=0.2)
]
loss_func = LabelSmoothingCrossEntropy()
arch = partial(xresnet18, c_out=10)
opt_func = adam_opt(mom=0.9, mom_sqr=0.99, eps=1e-6, wd=1e-2)
#export
def get_batch(dl, learn):
learn.xb,learn.yb = next(iter(dl))
learn.do_begin_fit(0)
learn('begin_batch')
learn('after_fit')
return learn.xb,learn.yb
"""
Explanation: Train
Jump_to lesson 12 video
End of explanation
"""
# export
def model_summary(model, data, find_all=False, print_mod=False):
xb,yb = get_batch(data.valid_dl, learn)
mods = find_modules(model, is_lin_layer) if find_all else model.children()
f = lambda hook,mod,inp,out: print(f"====\n{mod}\n" if print_mod else "", out.shape)
with Hooks(mods, f) as hooks: learn.model(xb)
learn = Learner(arch(), data, loss_func, lr=1, cb_funcs=cbfs, opt_func=opt_func)
learn.model = learn.model.cuda()
model_summary(learn.model, data, print_mod=False)
arch = partial(xresnet34, c_out=10)
learn = Learner(arch(), data, loss_func, lr=1, cb_funcs=cbfs, opt_func=opt_func)
learn.fit(1, cbs=[LR_Find(), Recorder()])
learn.recorder.plot(3)
#export
def create_phases(phases):
phases = listify(phases)
return phases + [1-sum(phases)]
print(create_phases(0.3))
print(create_phases([0.3,0.2]))
lr = 1e-2
pct_start = 0.5
phases = create_phases(pct_start)
sched_lr = combine_scheds(phases, cos_1cycle_anneal(lr/10., lr, lr/1e5))
sched_mom = combine_scheds(phases, cos_1cycle_anneal(0.95, 0.85, 0.95))
cbsched = [
ParamScheduler('lr', sched_lr),
ParamScheduler('mom', sched_mom)]
learn = Learner(arch(), data, loss_func, lr=lr, cb_funcs=cbfs, opt_func=opt_func)
learn.fit(5, cbs=cbsched)
"""
Explanation: We need to replace the old model_summary since it used to take a Runner.
End of explanation
"""
#export
def cnn_learner(arch, data, loss_func, opt_func, c_in=None, c_out=None,
lr=1e-2, cuda=True, norm=None, progress=True, mixup=0, xtra_cb=None, **kwargs):
cbfs = [partial(AvgStatsCallback,accuracy)]+listify(xtra_cb)
if progress: cbfs.append(ProgressCallback)
if cuda: cbfs.append(CudaCallback)
if norm: cbfs.append(partial(BatchTransformXCallback, norm))
if mixup: cbfs.append(partial(MixUp, mixup))
arch_args = {}
if not c_in : c_in = data.c_in
if not c_out: c_out = data.c_out
if c_in: arch_args['c_in' ]=c_in
if c_out: arch_args['c_out']=c_out
return Learner(arch(**arch_args), data, loss_func, opt_func=opt_func, lr=lr, cb_funcs=cbfs, **kwargs)
learn = cnn_learner(xresnet34, data, loss_func, opt_func, norm=norm_imagenette)
learn.fit(5, cbsched)
"""
Explanation: cnn_learner
Jump_to lesson 12 video
End of explanation
"""
!./notebook2script.py 11_train_imagenette.ipynb
"""
Explanation: Imagenet
You can see all this put together in the fastai imagenet training script. It's the same as what we've seen so far, except it also handles multi-GPU training. So how well does this work?
We trained for 60 epochs, and got an error of 5.9%, compared to the official PyTorch resnet which gets 7.5% error in 90 epochs! Our xresnet 50 training even surpasses standard resnet 152, which trains for 50% more epochs and has 3x as many layers.
Export
End of explanation
"""
|
planet-os/notebooks | nasa-opennex/OpenNEX DCP30 Analysis Using Pandas.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
# set default figure size
from pylab import rcParams
rcParams['figure.figsize'] = 16, 8
import pandas as pd
import urllib2
"""
Explanation: OpenNEX DCP30 Analysis Using Pandas
This notebook illustrates how to analyze a subset of OpenNEX DCP30 data using Python and pandas. Specifically, we will be analyzing temperature data in the Chicago area to understand how the CESM1-CAM5 climate model behaves under different RCP scenarios during the course of this century.
A dataset for this example is available at http://opennex.planetos.com/dcp30/k6Lef. On that page you will find a bash script that can be used to deploy a Docker container which will serve the selected data. Deployment of the container is beyond the scope of this example.
<hr>
Import Required Modules
Let's begin by importing the required modules. We'll need pandas for analysis and urllib2 to request data from our access server. We'll use matplotlib to create a chart of our analysis.
End of explanation
"""
def load_data(ip_addr):
data = pd.read_csv(urllib2.urlopen("http://%s:7645/data.csv" % (ip_addr)))
for col in ['Model', 'Scenario', 'Variable']:
data[col] = data[col].astype('category')
data['Date'] = data['Date'].astype('datetime64')
data['Temperature'] = data['Value'] - 273.15
return data
"""
Explanation: Loading Data into a Dataframe
The load_data function reads data directly from your access server's endpoint. It accepts the ip_addr parameter, which must correspond to the IP address of your data access server.
For local deployments, this may be localhost or a local IP address. If you've deployed into an EC2 instance, you'll need to ensure the port is accessible and replace localhost with your instance's public IP address.
It's easier to work with the resulting data if we tell pandas about the date and categorical columns. The function declares these column types, and also converts the temperature from degrees Kelvin to degrees Celsius.
End of explanation
"""
def do_graph(df):
model = df.loc[1,'Model']
df['Year'] = df['Date'].map(lambda d: "%d-01-01" % (d.year)).astype('datetime64')
by_year = df.groupby(['Year', 'Scenario']).max().loc[:,['Temperature']]
groups = by_year.reset_index().set_index('Year').groupby('Scenario')
for key, grp in groups:
plt.plot(grp.index, grp['Temperature'], label=key)
plt.legend(loc='best')
plt.title("Maximum mean temperature for warmest month using model %s" % (model))
plt.xlabel("Year")
plt.ylabel("Temperature [Celsius]")
plt.show()
"""
Explanation: Plotting the Scenarios
After loading that data, we can use matplotlib to visualize what the model predicts over the course of this century. This function reduces the data to show the warmest month for each year and displays the values under each RCP scenario.
End of explanation
"""
# Note: make sure you pass load_data the correct IP address. This is only an example.
data = load_data("localhost")
data.head()
do_graph(data)
"""
Explanation: Putting it all Together
Let's load the data, quickly inspect it using the head method, then use do_graph to visualize it.
End of explanation
"""
|
0x4a50/udacity-0x4a50-deep-learning-nanodegree | intro-to-rnns/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
"""
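The clipping step is easy to see in isolation. A NumPy sketch (my own, with toy gradients and a hypothetical clip value) of what tf.clip_by_global_norm does: if the joint L2 norm of all gradients exceeds the threshold, every gradient is rescaled so the joint norm equals the threshold; otherwise the gradients pass through unchanged.

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    """NumPy sketch of tf.clip_by_global_norm: rescale all gradients
    jointly when their combined L2 norm exceeds clip_norm."""
    global_norm = float(np.sqrt(sum((g ** 2).sum() for g in grads)))
    scale = clip_norm / max(global_norm, clip_norm)   # 1.0 when already under the threshold
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]      # global norm = sqrt(9+16+144) = 13
clipped, gnorm = clip_by_global_norm(grads, clip_norm=5.0)
```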
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
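The "approximate number of parameters" rule of thumb above is easy to estimate yourself. This back-of-envelope sketch is my own (it is not what the training code prints), and the vocabulary size of 83 is just an illustrative number; it counts the four gate matrices and biases of each LSTM layer plus the final fully connected layer:

```python
def count_params(num_classes, lstm_size, num_layers):
    """Back-of-envelope LSTM parameter count (weights + biases)."""
    total = 0
    input_size = num_classes              # layer 1 sees one-hot encoded characters
    for _ in range(num_layers):
        # 4 gates, each an (input + hidden) x hidden weight matrix plus a bias vector
        total += 4 * ((input_size + lstm_size) * lstm_size + lstm_size)
        input_size = lstm_size            # deeper layers see the layer below
    total += lstm_size * num_classes + num_classes   # final fully connected layer
    return total

n = count_params(num_classes=83, lstm_size=512, num_layers=2)
```

With lstm_size=512 and two layers this lands around 3.4 million parameters, which by the advice above would pair well with a dataset of a few megabytes of text.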
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can then feed that prediction back in to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
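The top-N trick is easiest to see on a toy distribution. A small sketch (made-up probabilities) of what pick_top_n does to the prediction vector before drawing a character:

```python
import numpy as np

def pick_top_n_probs(preds, top_n=5):
    """Zero out everything but the top_n probabilities and renormalize,
    as pick_top_n does before sampling a character."""
    p = np.array(np.squeeze(preds), dtype=float)
    p[np.argsort(p)[:-top_n]] = 0.0      # drop all but the top_n entries
    return p / p.sum()

preds = [0.5, 0.2, 0.15, 0.1, 0.03, 0.02]
p = pick_top_n_probs(preds, top_n=3)
```

Only three entries survive, and they are rescaled so they still sum to one; sampling from p then never picks a low-probability character.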
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
JoseGuzman/myIPythonNotebooks | genetics/PrimerDesign.ipynb | gpl-2.0 | %pylab inline
from itertools import product, permutations
from math import pow
"""
Explanation: <H1>PrimerDesign</H1>
We need to define a sequence of 17 bases with the following requirements:
<ul>
<li>Total GC content: 40-60%</li>
<li>GC Clamp: < 3 in the last 5 bases at the 3' end of the primer.</li>
</ul>
End of explanation
"""
pow(4,17) # all possible ATGC combinations in sequences of 17 bases
pow(4,7) # all possible ATCG combinations in sequences of 7 elements
pow(4,5) # all possible ATCG combinations in sequences of 5 elements
"""
Explanation: The function product is what we need to obtain a sequence of x elements drawn from the four nucleotides A, G, C and T. This will give us $4^{x}$ combinations. To compute the product of an iterable with itself, specify the length of the sequence with the optional repeat keyword argument. For example,
product(A, repeat=4) means the same as product(A, A, A, A).
End of explanation
"""
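As a quick sanity check of the $4^{x}$ counting (my own check, not part of the original analysis), here is product on a short length where the full list fits comfortably in memory:

```python
from itertools import product

# every length-3 sequence over the four bases: 4**3 of them
seqs = [''.join(s) for s in product('ATCG', repeat=3)]
n_seqs = len(seqs)
```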
perms = [''.join(p) for p in permutations('ATCG')]
perms
pow(2,5) # only AT combinations in 5 elements
print list(product('AT', repeat=5)) # the 2**5 AT-only sequences of 5 bases
pow(2,5) + 5*pow(2,4) + 5*pow(2,4)
x = [i for i in product('GCAT', repeat=7)]
x[0]
mybase = ('A', 'T', 'C', 'G')
list(product(mybase, repeat=2))
from Bio.Seq import Seq # Biopython
from Bio.SeqUtils import GC
from Bio import pairwise2
from Bio.pairwise2 import format_alignment
mySeq = Seq('ATCG')
GC(mySeq) # returns % of GC content
mySeq
"""
Explanation: In the first and last 5 bases, we need zero, one or two G or C
End of explanation
"""
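That "zero, one or two G or C" count in a 5-base window can be verified by brute force. A small sketch of my own, which also matches the closed-form sum:

```python
from itertools import product

def gc_count(seq):
    return seq.count('G') + seq.count('C')

# brute-force count of 5-base windows with at most two G or C
ends = [''.join(s) for s in product('ATCG', repeat=5)]
n_ok = sum(1 for s in ends if gc_count(s) <= 2)
# closed form: 2**5 * (C(5,0) + C(5,1) + C(5,2)) = 32 * 16 = 512
```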
# this is the number of all possible C or G combinations in a sequence of 9 elements.
pow(2,9)
"""
Explanation: <H2> Generate sequences with around 50% of GC</H2>
We will insert about 50% GC content into a 17-basepair sequence. For that, we will fill the sequence with 9 nucleotides containing either G or C. This will first give us 2^9 sequence combinations.
End of explanation
"""
256*512
# example of joining list
myGC = [''.join(i) for i in list(product('GC', repeat=9))] # 512 sequences
myAT = [''.join(j) for j in list(product('AT', repeat=8))] # 256 sequences
print(myGC[0],myAT[0])
zip(myGC[0],myAT[0])
mystring = str()
for i,j in zip(myGC[0],myAT[100]):
mystring +=i+j
mystring
def generateSeq(listGC, listAT):
"""
Create all possible combinations of the sequences in
listGC and listAT. The only requirement is that the
strings in listGC are one base longer than those in listAT.
Arguments:
==========
listGC -- a list of strings containing GC nucleotides
listAT -- a list of strings containing AT nucleotides
Returns
=======
A list of Seq objects
"""
mySeqList = list()
for list1 in listGC:
for list2 in listAT:
mystring = str()
for i,j in zip(list1, list2):
mystring += i+j
mystring +=list1[-1]# add last element from listGC
mySeqList.append(Seq(mystring))
return mySeqList
generateSeq(myGC[:3],myAT[:3]) #dummy test
mySeq = generateSeq(myGC,myAT)
len(mySeq)
"""
Explanation: For every GC sequence, we will add an AT sequence with the same combinatorial procedure
End of explanation
"""
def GCClamp(seq):
"""
returns the number of G or C within the last five bases of the sequence
"""
return seq[-5:].count('G') + seq[-5:].count('C')
mySeq[0]
GCClamp(mySeq[0])
GC
# count the number of sequences with GC Clamp below three
len([seq for seq in mySeq if GCClamp(seq) < 3])
mySeq[0][-5:].count('G')
'G' in mySeq[0][-5:]
mySeq[0][-5:]
print 'original = ' + mySeq[100000]
print 'complement = ' + mySeq[100000].complement()
alignments = pairwise2.align.globalxx(mySeq[100000], mySeq[100000])
alignments
%timeit
for a in pairwise2.align.globalxx(mySeq[100000].complement(), mySeq[100000].complement()):
print(format_alignment(*a))
al1,al2, score, begin, end = a
print score
"""
Explanation: we will now apply more restrictions to the sequences
<H2>GC Clamp</H2>
This is the number of G or C in the last 5 bases of the sequence
End of explanation
"""
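A self-contained check of the clamp filter on plain strings (toy sequences of my own, standing in for the Seq objects):

```python
def gc_clamp(seq):
    """Number of G or C within the last five bases (plain-string version)."""
    tail = seq[-5:]
    return tail.count('G') + tail.count('C')

toy = ['ATATATATATATAGCGC',   # clamp = 4: last five bases are 'AGCGC'
       'ATATATATATATATAGC',   # clamp = 2: last five bases are 'ATAGC'
       'ATATATATATATATATA']   # clamp = 0
passing = [s for s in toy if gc_clamp(s) < 3]
```

Only the last two toy sequences satisfy the "fewer than three G/C in the clamp" requirement.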
def countScores(seqList, threshold=None):
"""
Counts the number of sequences whose mean alignment score
against the complements of all sequences is less than
the threshold given as an argument.
Arguments:
=========
seqList -- list, this is a list of Seq objects
threshold -- int, the number of complementary bases that bind
Returns:
========
An integer with the number of sequences that fulfill that requirement
"""
#generate complement list
compSeq = [i.complement() for i in seqList]
counter = 0
for seq in seqList:
average = list()
for comp in compSeq:
a = pairwise2.align.globalxx(seq, comp)
average.append(a[0][2]) # append score
if np.mean(average)<threshold:
counter +=1
return counter
countScores(mySeq[:3], threshold=10) # test for a list of three seq three
countScores(mySeq, threshold=10)
for a in pairwise2.align.globalxx(mySeq[0], mySeq[0].complement()):
print(format_alignment(*a))
al1,al2, score, begin, end = a
print score
alignments = pairwise2.align.globalxx("ACCGT", "ACG")
for a in pairwise2.align.globalxx("ACCGT", "ACG"):
print(format_alignment(*a))
print(mySeq[0])
print(mySeq[1])
for a in pairwise2.align.globalxx(mySeq[0], mySeq[1]):
print(format_alignment(*a))
myseq = 'ATCG'
print list(product(myseq, repeat=2))
256*256
"""
Explanation: Count all the possibilities with score less than 10
End of explanation
"""
|
hunterherrin/phys202-2015-work | assignments/assignment09/IntegrationEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
"""
Explanation: Integration Exercise 2
Imports
End of explanation
"""
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print('Numerical:', integral_approx(1.0))
print('Exact:', integral_exact(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Indefinite integrals
Here is a table of definite integrals. Many of these integrals has a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LateX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx function that uses scipy.integrate.quad to perform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
"""
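quad is not the only way to check these results; even a plain trapezoid rule reproduces a finite table integral well. A sketch of my own for $\int_0^a \sqrt{a^2-x^2}\,dx = \pi a^2/4$ with $a = 1$:

```python
import numpy as np

# trapezoid-rule check of a finite table integral:
# int_0^a sqrt(a^2 - x^2) dx = pi * a^2 / 4, with a = 1
a = 1.0
x = np.linspace(0.0, a, 200001)
y = np.sqrt(a**2 - x**2)
numeric = float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))
exact = np.pi * a**2 / 4
```

With 200001 grid points the trapezoid result agrees with $\pi/4$ to better than six decimal places, even though the integrand has a vertical tangent at $x = a$.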
def integrand1(x,p):
return (x**(p-1))/(1+x)
def integral_approx1(p):
I,e=integrate.quad(integrand1, 0, np.inf, args=(p,))
return I
def integral_exact1(p):
return np.pi/(np.sin(p*np.pi))
print('Numerical:', integral_approx1(0.5))
print('Exact:', integral_exact1(0.5))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 1
$$ I_1=\int_0^\infty \frac{x^{p-1}dx}{1+x}=\frac{\pi}{\sin p\pi}, 0<p<1 $$
End of explanation
"""
def integrand2(x,a):
return (a**2-x**2)**(1/2)
def integral_approx2(a):
I,e=integrate.quad(integrand2, 0,a,args=(a,))
return I
def integral_exact2(a):
return np.pi*(a**2)/4
print('Numerical:', integral_approx2(1.0))
print('Exact:', integral_exact2(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 2
$$ I_2=\int_0^a \sqrt{a^{2}-x^{2}}dx=\frac{\pi a^{2}}{4} $$
End of explanation
"""
def integrand3(x,a):
return 1/((a**2-x**2)**(1/2))
def integral_approx3(a):
I,e=integrate.quad(integrand3, 0, a, args=(a,))
return I
def integral_exact3(a):
return np.pi/2
print('Numerical:', integral_approx3(1.0))
print('Exact:', integral_exact3(1.0))
# note: the upper limit must be a (not infinity); for x > a the integrand is no longer real
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 3
$$ I_3=\int_0^a \frac{dx}{\sqrt{a^{2}-x^{2}}}=\frac{\pi}{2} $$
End of explanation
"""
def integrand4(x,p):
return (np.sin(p*x)**2)/(x**2)
def integral_approx4(p):
I,e=integrate.quad(integrand4, 0,np.inf,args=(p,))
return I
def integral_exact4(p):
return np.pi*p/2
print('Numerical:', integral_approx4(1.0))
print('Exact:', integral_exact4(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 4
$$ I_4=\int_0^\infty \frac{\sin^{2}px}{x^{2}}dx=\frac{\pi p}{2} $$
End of explanation
"""
def integrand5(x,p):
return (1-np.cos(p*x))/(x**2)
def integral_approx5(p):
I,e=integrate.quad(integrand5, 0,np.inf,args=(p,))
return I
def integral_exact5(p):
return np.pi*p/2
print('Numerical:', integral_approx5(1.0))
print('Exact:', integral_exact5(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 5
$$ I_5=\int_0^\infty \frac{1-\cos px}{x^{2}}dx=\frac{\pi p}{2} $$
End of explanation
"""
|
arcyfelix/Courses | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/11-Advanced-Quantopian-Topics/00-Pipeline-Example-Walkthrough.ipynb | apache-2.0 | from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
"""
Explanation: Pipeline Example
End of explanation
"""
from quantopian.pipeline.filters import Q1500US
"""
Explanation: Getting the Securities we want.
The Q500US and Q1500US
These groups of tradeable stocks are referred to as "universes", because all your trades will use these stocks as their "universe" of available stock; they won't be trading with anything outside these groups.
End of explanation
"""
universe = Q1500US()
"""
Explanation: There are two main benefits of the Q500US and Q1500US. Firstly, they greatly reduce the risk of an order not being filled. Secondly, they allow for more meaningful comparisons between strategies as now they will be used as the standard universes for algorithms.
End of explanation
"""
from quantopian.pipeline.data import morningstar
sector = morningstar.asset_classification.morningstar_sector_code.latest
"""
Explanation: Filtering the universe further with Classifiers
Let's only grab stocks in the energy sector: https://www.quantopian.com/help/fundamentals#industry-sector
End of explanation
"""
#from quantopian.pipeline.classifiers.morningstar import Sector
#morningstar_sector = Sector()
energy_sector = sector.eq(309)
"""
Explanation: Alternative:
End of explanation
"""
from quantopian.pipeline.factors import SimpleMovingAverage, AverageDollarVolume
# Dollar volume factor
dollar_volume = AverageDollarVolume(window_length = 30)
# High dollar volume filter
high_dollar_volume = dollar_volume.percentile_between(90, 100)
# Top open price filter (high dollar volume securities)
top_open_price = USEquityPricing.open.latest.top(50,
mask = high_dollar_volume)
# Top percentile close price filter (high dollar volume, top 50 open price)
high_close_price = USEquityPricing.close.latest.percentile_between(90, 100,
mask = top_open_price)
"""
Explanation: Masking Filters
Masks can be also be applied to methods that return filters like top, bottom, and percentile_between.
Masks are most useful when we want to apply a filter in the earlier steps of a combined computation. For example, suppose we want to get the 50 securities with the highest open price that are also in the top 10% of dollar volume.
Suppose that we then want the 90th-100th percentile of these securities by close price. We can do this with the following:
End of explanation
"""
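Outside the Quantopian API, this masked-filter chaining is ordinary boolean indexing. A NumPy sketch with made-up volumes and prices ("top volume percentile first, then the 2 highest open prices among those"):

```python
import numpy as np

# hypothetical volumes and open prices for ten securities
volume = np.array([10., 80., 95., 99., 50., 97., 5., 90., 85., 60.])
open_p = np.array([ 3.,  7.,  2.,  9.,  4.,  8.,  1.,  6.,  5., 10.])

# stage 1: keep securities at or above the 60th volume percentile
high_vol = volume >= np.percentile(volume, 60)
# stage 2, "masked": among only those survivors, take the 2 highest open prices
idx = np.where(high_vol)[0]
top2 = idx[np.argsort(open_p[idx])[-2:]]
```

Security 9 has the highest open price overall but is excluded, because the mask from stage 1 is applied before stage 2 ranks anything.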
def make_pipeline():
# Base universe filter.
base_universe = Q1500US()
# Sector Classifier as Filter
energy_sector = sector.eq(309)
# Masking Base Energy Stocks
base_energy = base_universe & energy_sector
# Dollar volume factor
dollar_volume = AverageDollarVolume(window_length = 30)
# Top half of dollar volume filter
high_dollar_volume = dollar_volume.percentile_between(95, 100)
# Final Filter Mask
top_half_base_energy = base_energy & high_dollar_volume
# 10-day close price average.
mean_10 = SimpleMovingAverage(inputs=[USEquityPricing.close],
window_length = 10,
mask = top_half_base_energy)
# 30-day close price average.
mean_30 = SimpleMovingAverage(inputs=[USEquityPricing.close],
window_length = 30,
mask = top_half_base_energy)
# Percent difference factor.
percent_difference = (mean_10 - mean_30) / mean_30
# Create a filter to select securities to short.
shorts = percent_difference < 0
# Create a filter to select securities to long.
longs = percent_difference > 0
# Filter for the securities that we want to trade.
securities_to_trade = (shorts | longs)
return Pipeline(
columns = {
'longs': longs,
'shorts': shorts,
'percent_diff':percent_difference
},
screen=securities_to_trade
)
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
result.info()
"""
Explanation: Applying Filters and Factors
Let's apply our own filters, following along with some of the examples above. Let's select the following securities:
Stocks in Q1500US
Stocks that are in the energy Sector
They must be relatively highly traded stocks in the market (by dollar volume traded, need to be in the top 5% traded)
Then we'll calculate the percent difference as we've done previously. Using this percent difference we'll create an unsophisticated strategy that shorts anything with negative percent difference (the difference between the 10 day mean and the 30 day mean).
End of explanation
"""
from quantopian.algorithm import attach_pipeline,pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume,SimpleMovingAverage
from quantopian.pipeline.filters.morningstar import Q1500US
from quantopian.pipeline.data import morningstar
def initialize(context):
schedule_function(my_rebalance,date_rules.week_start(),time_rules.market_open(hours = 1))
my_pipe = make_pipeline()
attach_pipeline(my_pipe, 'my_pipeline')
def my_rebalance(context,data):
for security in context.portfolio.positions:
if security not in context.longs and security not in context.shorts and data.can_trade(security):
order_target_percent(security,0)
for security in context.longs:
if data.can_trade(security):
order_target_percent(security,context.long_weight)
for security in context.shorts:
if data.can_trade(security):
order_target_percent(security,context.short_weight)
def my_compute_weights(context):
if len(context.longs) == 0:
long_weight = 0
else:
long_weight = 0.5 / len(context.longs)
if len(context.shorts) == 0:
short_weight = 0
else:
short_weight = 0.5 / len(context.shorts)
return (long_weight,short_weight)
def before_trading_start(context,data):
context.output = pipeline_output('my_pipeline')
# LONG
context.longs = context.output[context.output['longs']].index.tolist()
# SHORT
context.shorts = context.output[context.output['shorts']].index.tolist()
context.long_weight,context.short_weight = my_compute_weights(context)
def make_pipeline():
# Universe Q1500US
base_universe = Q1500US()
# Energy Sector
sector = morningstar.asset_classification.morningstar_sector_code.latest
energy_sector = sector.eq(309)
# Make Mask of 1500US and Energy
base_energy = base_universe & energy_sector
# Dollar Volume (30 Days) Grab the Info
dollar_volume = AverageDollarVolume(window_length = 30)
# Grab the top 5% in avg dollar volume
high_dollar_volume = dollar_volume.percentile_between(95, 100)
# Combine the filters
top_five_base_energy = base_energy & high_dollar_volume
# 10 day mean close
mean_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length = 10, mask = top_five_base_energy)
# 30 day mean close
mean_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length = 30, mask = top_five_base_energy)
# Percent Difference
percent_difference = (mean_10-mean_30)/mean_30
# List of Shorts
shorts = percent_difference < 0
# List of Longs
longs = percent_difference > 0
# Final Mask/Filter for anything in shorts or longs
securities_to_trade = (shorts | longs)
# Return Pipeline
return Pipeline(columns={
'longs':longs,
'shorts':shorts,
'perc_diff':percent_difference
},screen=securities_to_trade)
"""
Explanation: Executing this Strategy in the IDE
End of explanation
"""
|
tcmoore3/mdtraj | examples/rmsd-drift.ipynb | lgpl-2.1 | import mdtraj.testing
crystal_fn = mdtraj.testing.get_fn('native.pdb')
trajectory_fn = mdtraj.testing.get_fn('frame0.xtc')
crystal = md.load(crystal_fn)
trajectory = md.load(trajectory_fn, top=crystal) # load the xtc. the crystal structure defines the topology
trajectory
"""
Explanation: Find two files that are distributed with MDTraj for testing purposes --
we can us them to make our plot
End of explanation
"""
rmsds_to_crystal = md.rmsd(trajectory, crystal, 0)
heavy_atoms = [atom.index for atom in crystal.topology.atoms if atom.element.symbol != 'H']
heavy_rmsds_to_crystal = md.rmsd(trajectory, crystal, 0, atom_indices=heavy_atoms)
from matplotlib.pylab import *
figure()
plot(trajectory.time, rmsds_to_crystal, 'r', label='all atom')
plot(trajectory.time, heavy_rmsds_to_crystal, 'b', label='heavy atom')
legend()
title('RMSDs to crystal')
xlabel('simulation time (ps)')
ylabel('RMSD (nm)')
"""
Explanation: RMSD with exchangeable hydrogen atoms is generally not a good idea
so let's take a look at just the heavy atoms
End of explanation
"""
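For intuition, the quantity md.rmsd reports is just a root-mean-square of per-atom displacements, computed after an optimal superposition (which this sketch of mine skips). A minimal NumPy version with toy coordinates:

```python
import numpy as np

def plain_rmsd(xyz_a, xyz_b):
    """RMSD between two (n_atoms, 3) coordinate arrays that are already
    superposed (md.rmsd additionally finds the optimal rotation/translation)."""
    diff = xyz_a - xyz_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

a = np.zeros((4, 3))
b = np.zeros((4, 3))
b[:, 0] = 0.1                  # shift every atom by 0.1 nm along x
rmsd_toy = plain_rmsd(a, b)
```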
|
wmfschneider/CHE30324 | Homework/HW8-soln.ipynb | gpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
r = np.linspace(0,12,100) # r=R/a0
P = (1+r+1/3*r**2)*np.exp(-r)
plt.plot(r,P)
plt.xlim(0)
plt.ylim(0)
plt.xlabel('Internuclear Distance $R/a0$')
plt.ylabel('Overlap S')
plt.title('The Overlap Between Two 1s Orbitals')
plt.show()
"""
Explanation: Chem 30324, Spring 2020, Homework 8
Due April 3, 2020
Chemical bonding
The electron wavefunctions (molecular orbitals) in molecules can be thought of as coming from combinations of atomic orbitals on the constituent atoms. One of the factors that determines whether two atomic orbitals form a bond is their ability to overlap. Consider two atoms, A and B, aligned on the z axis and separated by a distance $R$.
1. The overlap between two 1s orbitals on A and B can be shown to be: $$S = \left\{1+\frac{R}{a_0}+\frac{1}{3}\left(\frac{R}{a_0}\right)^2\right\}e^{-R/a_0}$$ Plot out the overlap as a function of the internuclear distance $R$. Qualitatively explain why it has the shape it has.
End of explanation
"""
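The limiting behaviour of the plotted overlap can be checked directly from the formula: perfect overlap ($S = 1$) at $R = 0$, since the orbitals coincide, and vanishing overlap as $R \rightarrow \infty$. A quick numerical check of my own:

```python
import numpy as np

def overlap_1s(r):
    """1s-1s overlap as a function of r = R/a0 (the formula above)."""
    return (1 + r + r**2 / 3) * np.exp(-r)

s_zero = overlap_1s(0.0)    # coincident nuclei: perfect overlap
s_far = overlap_1s(50.0)    # well-separated nuclei: essentially no overlap
```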
import numpy as np
import matplotlib.pyplot as plt
A = 1
alpha = 1
R0 =[0,.25,.5,.75,1,2]
R = np.linspace(0,10,100)
for i in R0:
V = A*(1-np.exp(-alpha*(R-i)))**2
plt.plot(R,V, label = i)
plt.ylim(0,2)
plt.xlim(0,8)
plt.legend()
plt.xlabel('Internuclear Distance (R)')
plt.ylabel('Potential energy V(R)')
plt.title('Variation in R0')
plt.show()
print('R0 is the equilibrium bond distance.')
import numpy as np
import matplotlib.pyplot as plt
A = 1
alpha = [0,.25,.5,.75,1,2]
R0 = 1
R = np.linspace(0,10,100)
for i in alpha:
V = A*(1-np.exp(-i*(R-R0)))**2
plt.plot(R,V, label = i)
plt.ylim(0,2)
plt.xlim(0,8)
plt.legend()
plt.xlabel('Internuclear Distance (R)')
plt.ylabel('Potential energy V(R)')
plt.title('Variation in alpha')
plt.show()
print('Alpha is the stiffness (spring constant) of the bond between the two atoms.')
import numpy as np
import matplotlib.pyplot as plt
A = [0,.25,.5,.75,1,2]
alpha = 1
R0 = 1
R = np.linspace(0,10,100)
for i in A:
V = i*(1-np.exp(-alpha*(R-R0)))**2
plt.plot(R,V, label = i)
plt.ylim(0,2)
plt.xlim(0,8)
plt.legend()
plt.xlabel('Internuclear Distance (R)')
plt.ylabel('Potential energy V(R)')
plt.title('Variation in A')
plt.show()
print('A is the difference in energy between a molecule and its atoms---the bond dissociation energy.')
"""
Explanation: 2. The overlap functions for other pairs of orbitals are more complicated, but the general features are easily inferred. Neatly sketch the orbital overlap between a 1s orbital on A and 2p$_z$ orbital on B as a function $R$. Carefully indicate the limiting values as $R \rightarrow 0$ and $R \rightarrow \infty$.
3. Choose some other pair of atomic orbitals on A and B and sketch out their overlap as a function of $R$. Carefully indicate the limiting values as $ R \rightarrow 0$ and $ R\rightarrow \infty$.
4. What property besides overlap determines whether two atomic orbitals will form a bond?
The similarity of the energies of the two atomic orbitals, i.e., the value of $\beta = \langle \phi_1 | \hat{f} | \phi_2 \rangle $
5. A chemical bond is a compromise between the electrons trying to get close to both nuclei and the nuclei trying to stay apart. The function below captures this compromise as a function of internuclear distance, $R$. Plot the function for different values of the parameters $A$, $\alpha$, and $R_0$. Provide a physical interpretation of each of the parameters. $$V(R) = A \left ( 1 - e^{(-\alpha(R - R_0))}\right)^2$$
End of explanation
"""
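Taylor-expanding the exponential shows that near $R_0$ the Morse potential reduces to a harmonic well, $V \approx A\alpha^2(R-R_0)^2$, i.e. a force constant $k = 2A\alpha^2$, which ties the "stiffness" interpretation of $\alpha$ back to the plots above. A quick numerical check with arbitrary parameters of my own:

```python
import numpy as np

A, alpha, R0 = 4.0, 2.0, 1.1          # arbitrary illustrative parameters

def morse(R):
    return A * (1 - np.exp(-alpha * (R - R0)))**2

# near the minimum: V ~ A*alpha^2*(R - R0)^2, a harmonic well with k = 2*A*alpha^2
delta = 1e-4
harmonic = A * alpha**2 * delta**2
ratio = morse(R0 + delta) / harmonic
```

The ratio approaches 1 as delta shrinks, and morse(R) approaches the dissociation energy A at large separation.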
# Carbon Monoxide
# From https://cccbdb.nist.gov/bondlengthmodel2.asp?method=12&basis=5, L = 1.128 Angstrom
import numpy as np
import matplotlib.pyplot as plt
E_C = -37.79271 # Ha, energy of single C atom
E_O = -74.98784 # Ha, energy of single O atom
length = [1.00, 1.05, 1.10, 1.15, 1.2, 1.25] # Angstrom
E_CO = [-113.249199,-113.287858,-113.305895,-113.309135,-113.301902,-113.287408] # Ha, energy of CO
E_bond = [] # energy of CO bond
for i in E_CO:
E_bond.append((i-E_C-E_O)*27.212) # eV, Energy[CO - C - O] = Energy[bond]
fit = np.polyfit(length, E_bond, 2) # quadratic fit
print("Fitted result: E = %fx^2 + (%f)x + %f"%(fit[0],fit[1],fit[2]))
# Find E_min
x = np.linspace(0.9, 1.4, 100)
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
E_min_CO = min(z) # Find the minimum in energy array
print('E_min_CO = %feV.'%(E_min_CO))
# Plot E vs length
plt.plot(length, E_bond, '.', label='Webmo Data')
plt.plot(x, z, '--',label='Quadratic Fit')
plt.xlabel('Bond length (Angstrom)')
plt.ylabel('Energy (eV)')
plt.title('CO Molecular Energy vs. Bond Length')
plt.legend()
plt.show()
# Find equilibrium bond length
import sympy as sp
x = sp.symbols('x')
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
l = sp.solve(sp.diff(z,x),x)
print('L_equilibrium = %f A > 1.128 A (in literature).'%(l[0])) # equilibrium bond length
# Boron Nitride
# From https://cccbdb.nist.gov/bondlengthmodel2.asp?method=12&basis=5, L = 1.325 Angstrom
import numpy as np
import matplotlib.pyplot as plt
E_B = -24.61703 # Ha, energy of single B atom
E_N = -54.51279 # Ha, energy of single N atom
length = [1.15, 1.2, 1.25, 1.3, 1.35, 1.4] # Angstrom
E_BN = [-79.359357,-79.376368,-79.383355,-79.382896,-79.377003,-79.367236] # Ha, energy of BN
E_bond = [] # energy of BN bond
for i in E_BN:
E_bond.append((i-E_B-E_N)*27.212)
fit = np.polyfit(length, E_bond, 2) # quadratic fit
print("Fitted result: E = %fx^2 + (%f)x + %f"%(fit[0],fit[1],fit[2]))
# Find E_min
x = np.linspace(1.1, 1.5, 100)
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
E_min_BN = min(z) # Find the minimum in energy array
print('E_min_BN = %feV.'%(E_min_BN))
# Plot E vs length
plt.plot(length, E_bond, '.', label='Webmo Data')
plt.plot(x, z, '--',label='Quadratic Fit')
plt.xlabel('Bond length (Angstrom)')
plt.ylabel('Energy (eV)')
plt.title('BN Molecular Energy vs. Bond Length')
plt.legend()
plt.show()
# Find equilibrium bond length
import sympy as sp
x = sp.symbols('x')
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
l = sp.solve(sp.diff(z,x),x)
print('L_equilibrium = %f A < 1.325 A (in literature).'%(l[0])) # equilibrium bond length
# Beryllium Oxide
# From https://cccbdb.nist.gov/bondlengthmodel2.asp?method=12&basis=5, L = 1.331 Angstrom
import numpy as np
import matplotlib.pyplot as plt
E_Be = -14.64102 # Ha
E_O = -74.98784 # Ha
length = [1.2, 1.25, 1.3, 1.35, 1.4, 1.45] # Angstrom
E_BeO = [-89.880569,-89.893740,-89.899599,-89.899934,-89.896149,-89.889335] # Ha, energy of BeO
E_bond = [] # energy of BeO bond
for i in E_BeO:
E_bond.append((i-E_Be-E_O)*27.212)
fit = np.polyfit(length, E_bond, 2) # quadratic fit
print("Fitted result: E = %fx^2 + (%f)x + %f"%(fit[0],fit[1],fit[2]))
# Find E_min
x = np.linspace(1.1, 1.6, 100)
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
E_min_BeO = min(z) # Find the minimum in energy array
print('E_min_BeO = %feV.'%(E_min_BeO))
# Plot E vs length
plt.plot(length, E_bond, '.', label='Webmo Data')
plt.plot(x, z, '--',label='Quadratic Fit')
plt.xlabel('Bond length (Angstrom)')
plt.ylabel('Energy (eV)')
plt.title('BeO Molecular Energy vs. Bond Length')
plt.legend()
plt.show()
# Find equilibrium bond length
import sympy as sp
x = sp.symbols('x')
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
l = sp.solve(sp.diff(z,x),x)
print('L_equilibrium = %f A > 1.331 A (in literature).'%(l[0])) # equilibrium bond length
"""
Explanation: 6. For each pair, draw a Lewis dot structure. Indicate which bond is stronger in the pair, and give a very brief rationalization:
(a) H$_2$ vs LiH
(b) N$_2$ vs H$_2$
(c) N$_2$ vs CO
(d) H$_2$ vs He$_2$
a) $$H:H$$ $$Li:H$$
$H_2$ has a stronger bond because the two hydrogens have similar energies.
b) $$:N:::N:$$ $$H:H$$
$N_2$ has a stronger bond since there are 3 bonds instead of just one.
c) $$:N:::N:$$ $$:C:::O:$$
An argument can be made for either molecule; there is no agreed-upon answer in the literature as to which bond is stronger.
d) $$H:H$$ $$ :He\quad He:$$
$H_2$ has a stronger bond since $He_2$ doesn't have a bond.
Computational chemistry.
Today, the properties of a molecule are more often than not calculated rather than inferred. Quantitative molecular quantum-mechanical calculations require specialized numerical solvers such as Orca. The following are instructions for using Orca through the Webmo graphical interface.
Now, let’s set up your calculation (you may do this with a partner or partners if you choose):
Log into the Webmo server https://www.webmo.net/demoserver/cgi-bin/webmo/login.cgi using "guest" as your username and password.
Select New Job - Create New Job.
Use the available tools to sketch a molecule.
Use the right arrow at the bottom to proceed to the Computational Engines.
Select Orca
Select "Molecular Energy," “B3LYP” functional and the default def2-SVP basis set.
Select the right arrow to run the calculation.
From the job manager window choose the completed calculation to view the results.
The molecule you are to study depends on your last name. Choose according to the list:
+ A-G: CO
+ H-R: BN
+ S-Z: BeO
For your convenience, here are the total energies (in Hartree, 27.212 eV/Hartree) of the constituent atoms, calculated using the B3LYP DFT treatment of $v_{ee}$ and the def2-SVP basis set:
|Atom|Energy|Atom|Energy|
|-|-|-|-|
|B|–24.61703|N| -54.51279|
|Be|-14.64102|O|-74.98784|
|C|-37.79271|F|-99.60655|
7. Construct a potential energy surface for your molecule. Using covalent radii, guess an approximate equilibrium bond length, and use the Webmo editor to draw the molecule with that length. Specify the “Molecular Energy” option to Orca and the def2-SVP basis set. Calculate and plot total molecular energy vs. bond distance in increments of 0.05 Å about your guessed minimum, including enough points to encompass the actual minimum. (You will find it convenient to subtract off the individual atom energies from the molecular total energy and to convert to more convenient units, like eV or kJ/mol.) By fitting the few points nearest the minimum, determine the equilibrium bond length. How does your result compare to the literature?
End of explanation
"""
print('CO Molecule:')
J = 1.6022e-19 # J, 1 eV = 1.6022e-19 J
L = 1e-10 # m, 1 angstrom = 1e-10 m
# k [=] Energy/Length^2
k_CO = 2*71.30418671*J/L**2 # J/m**2
c = 2.99792e8 # m/s
m_C = 12.0107*1.6605e-27 # kg
m_O = 15.9994*1.6605e-27 # kg
mu_CO = m_C*m_O/(m_C+m_O) # kg, reduced mass
nu_CO = 1/(2*np.pi*c)*np.sqrt(k_CO/mu_CO)/100 # cm^-1, wavenumber
print('The harmonic vibrational frequency is %f cm^-1.'%(nu_CO))
print('BN Molecule:')
J = 1.6022e-19 # J, 1 eV = 1.6022e-19 J
L = 1e-10 # m, 1 angstrom = 1e-10 m
# k [=] Energy/Length^2
k_BN = 2*36.0384*J/L**2 # J/m**2
c = 2.99792e8 # m/s
m_B = 10.811*1.6605e-27 # kg
m_N = 14.0067*1.6605e-27 # kg
mu_BN = m_B*m_N/(m_B+m_N) # kg, reduced mass
nu_BN = 1/(2*np.pi*c)*np.sqrt(k_BN/mu_BN)/100 # cm^-1, wavenumber
print('The harmonic vibrational frequency is %f cm^-1.'%(nu_BN))
print('BeO Molecule:')
J = 1.6022e-19 # J, 1 eV = 1.6022e-19 J
L = 1e-10 # m, 1 angstrom = 1e-10 m
# k [=] Energy/Length^2
k_BeO = 2*26.920637*J/L**2 # J/m**2
c = 2.99792e8 # m/s
m_Be = 9.01218*1.6605e-27 # kg
m_O = 15.9994*1.6605e-27 # kg
mu_BeO = m_Be*m_O/(m_Be+m_O) # kg, reduced mass
nu_BeO = 1/(2*np.pi*c)*np.sqrt(k_BeO/mu_BeO)/100 # cm^-1, wavenumber
print('The harmonic vibrational frequency is %f cm^-1.'%(nu_BeO))
"""
Explanation: 8. Use the quadratic fit from Question 7 to determine the harmonic vibrational frequency of your molecule, in cm$^{-1}$. Recall that the force constant is the second derivative of the energy at the minimum, and that the frequency (in wavenumbers) is related to the force constant according to $$\tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}$$
End of explanation
"""
# Get experimental vibrational zero-point energy from NIST database: https://cccbdb.nist.gov/exp1x.asp
nu_CO_exp = 1084.9 # cm^-1
nu_BN_exp = 760.2 # cm^-1
nu_BeO_exp = 728.5 # cm^-1
print('CO Molecule:')
# Note: E_ZPC = E_min + ZPE_harmonic_oscillator
h = 6.62607e-34
NA = 6.02214e23
J = 1.6022e-19 # eV to J
E_min_CO = (-16.300903*J)*NA/1000 # converted from eV to kJ/mol from problem 8
# Calculations
E0_CO = (0.5*h*nu_CO*100*c)*NA/1000 # kJ/mol, ZPE harmonic oscillator
EB_CO = E_min_CO + E0_CO # kJ/mol, ZPC bond energy
# Experiments
E0_CO_exp = (0.5*h*nu_CO_exp*100*c)*NA/1000
EB_CO_exp = E_min_CO + E0_CO_exp
print('|E_ZPC| = %f kJ/mol < %f kJ/mol.'%(-EB_CO,-EB_CO_exp))
print('BN Molecule:')
# Note: E_ZPC = E_min + ZPE_harmonic_oscillator
h = 6.62607e-34
NA = 6.02214e23
J = 1.6022e-19 # eV to J
E_min_BN = (-4.633537*J)*NA/1000 # converted from eV to kJ/mol from problem 8
# Calculations
E0_BN = (0.5*h*nu_BN*100*c)*NA/1000 # kJ/mol, ZPE harmonic oscillator
EB_BN = E_min_BN + E0_BN # kJ/mol, ZPC bond energy
# Experiments
E0_BN_exp = (0.5*h*nu_BN_exp*100*c)*NA/1000
EB_BN_exp = E_min_BN + E0_BN_exp
print('|E_ZPC| = %f kJ/mol < %f kJ/mol.'%(-EB_BN,-EB_BN_exp))
print('BeO Molecule:')
# Note: E_ZPC = E_min + ZPE_harmonic_oscillator
h = 6.62607e-34
NA = 6.02214e23
J = 1.6022e-19 # eV to J
E_min_BeO = (-5.850784*J)*NA/1000 # converted from eV to kJ/mol from problem 8
# Calculations
E0_BeO = (0.5*h*nu_BeO*100*c)*NA/1000 # kJ/mol, ZPE harmonic oscillator
EB_BeO = E_min_BeO + E0_BeO # kJ/mol, ZPC bond energy
# Experiments
E0_BeO_exp = (0.5*h*nu_BeO_exp*100*c)*NA/1000
EB_BeO_exp = E_min_BeO + E0_BeO_exp
print('|E_ZPC| = %f kJ/mol < %f kJ/mol.'%(-EB_BeO,-EB_BeO_exp))
"""
Explanation: 9. Use your results to determine the zero-point-corrected bond energy of your molecule. How does this model compare with the experimental value?
End of explanation
"""
C2H6 = 1.531 # Angstrom
C2H4 = 1.331 # Angstrom
C2H2 = 1.205 # Angstrom
import matplotlib.pyplot as plt
plt.scatter([0,1,2],[C2H2,C2H4,C2H6])
plt.xlabel('Molecules')
plt.ylabel('Bond length (A)')
plt.xticks(np.arange(3), ('C2H2','C2H4','C2H6'))
plt.show()
"""
Explanation: Computational chemistry, part deux
Diatomics are a little mundane. These same methods can be used to compute the properties of much more complicated things. As example, the OQMD database http://oqmd.org/ contains results for many solids. We don't have time to get this complicated in class, but at least you can compute properties of some molecules.
10. Working with some of your classmates, compute the equilibrium structures of C$_2$H$_6$, C$_2$H$_4$, and C$_2$H$_2$. Compare their equilibrium C-C bond lengths. Do they vary in the way you expect?
Log into the Webmo server https://www.webmo.net/demoserver/cgi-bin/webmo/login.cgi using "guest" as your username and password.
Select New Job - Create New Job.
Use the available tools to sketch a molecule. Make sure the bond distances and angles are in a plausible range.
Use the right arrow at the bottom to proceed to the Computational Engines.
Select Orca
Select "Geometry optimization," “B3LYP” functional and the default def2-SVP basis set.
Select the right arrow to run the calculation.
From the job manager window choose the completed calculation to view the results.
End of explanation
"""
E_H2 = -1.16646206791 # Ha
E_C2H2 = -77.3256461775 # Ha, acetylene
E_C2H4 = -78.5874580928 # Ha, ethylene
E_C2H6 = -79.8304174812 # Ha, ethane
E_rxn1 = (E_C2H4 - E_C2H2 - E_H2)*2625.50 # kJ/mol, H2 + C2H2 -> C2H4
E_rxn2 = (E_C2H6 - E_C2H4 - E_H2)*2625.50 # kJ/mol, H2 + C2H4 -> C2H6
print("E_rnx1 = %f kJ/mol, E_rnx2 = %f kJ/mol"%(E_rxn1, E_rxn2))
"""
Explanation: 11. Compute the corresponding vibrational spectra. Could you distinguish these molecules by their spectra?
Log into the Webmo server https://www.webmo.net/demoserver/cgi-bin/webmo/login.cgi using "guest" as your username and password.
Select the job with the optimized geometry and open it.
Use the right arrow at the bottom to proceed to the Computational Engines.
Select Orca
Select "Vibrational frequency," “B3LYP” functional and the default def2-SVP basis set.
Select the right arrow to run the calculation.
From the job manager window choose the completed calculation to view the results.
C2H2
C2H4
C2H6
The vibrational spectra are clearly very different, so these molecules can be distinguished by IR.
12. Compute the structure and energy of H$_2$. Use it to compare the energies to hydrogenate acetylene to ethylene and ethylene to ethane. Which is easier to hydrogenate? Can you see why selective hydrogenation of acetylene to ethylene is difficult to do?
End of explanation
"""
|
wikistat/Apprentissage | GRC-carte_Visa/Apprent-Python-Visa.ipynb | gpl-3.0 | # Importation des librairies.
import numpy as np
import pandas as pd
import random as rd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
# Lecture d'un data frame
vispremv = pd.read_table('vispremv.dat', delimiter=' ')
vispremv.shape
vispremv.head()
# Variables quantitatives
vispremv.describe()
"""
Explanation: <center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" style="float:right; max-width: 250px; display: inline" alt="Wikistat"/></a>
</center>
Statistical Learning Scenarios
CRM: Propensity score for a banking product in <a href="https://www.python.org/"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png" style="max-width: 120px; display: inline" alt="Python"/></a> with <a href="http://scikit-learn.org/stable/#"><img src="http://scikit-learn.org/stable/_static/scikit-learn-logo-small.png" style="max-width: 100px; display: inline" alt="Scikit-learn"/></a>
Summary
The data consist of 825 bank customers described by 32 variables concerning their assets and their use of their accounts. After completing, with R or Python, the first objective of segmenting or profiling customer behavior types, the second objective is to estimate and then predict a propensity score for a banking product, here the Visa Premier card. The different learning methods and algorithms are compared for this purpose, from logistic regression to (extreme gradient) boosting, by way of trees, SVMs, and random forests. A cross-validation procedure is iterated over a selection of these methods. Model-aggregation methods lead to the best results.
Introduction
Objective
A companion notebook, which is best run beforehand, covers the first objective: exploring and then segmenting or profiling the behavior types of the bank's customers.
This second notebook builds a propensity score for the Visa Premier card. This second objective belongs to season 3 (Statistical Learning). Here it is a propensity score, but it could just as well be the churn score of a telephone operator, the default score of a borrower, or the bankruptcy score of a company; the modeling tools are the same and are very widely used throughout the service sector for decision support.
Remark: an older scenario proposes running this with SAS, still used in many large companies.
Data description
The variables
The list of variables comes from a database tracing the monthly banking history and the characteristics of all customers. A survey was carried out to lighten processing, together with a first variable selection. The variables contained in the initial file are described in the table below. They are observed on 1425 customers.
Table: list of the initial variables and their labels. Note that some names appear in upper case in the programs and in lower case after data transformation (logarithm, recoding) during the exploration phase. The names of the log-transformed quantitative variables end in L; the qualitative variables end in Q or q.
Identifier | Label
--|--
sexeq | Sex (qualitative)
ager | Age in years
famiq | Family status: Fmar Fcel Fdiv Fuli Fsep Fveu
relat | Length of the banking relationship in months
pcspq | Socio-professional category (numeric code)
opgnb | Number of counter transactions in the month
moyrv | Mean net credit movements over 3 months in kF
tavep | Total monetary savings holdings in francs
endet | Debt ratio
gaget | Total commitments in francs
gagec | Total short-term commitments in francs
gagem | Total medium-term commitments in francs
kvunb | Number of checking accounts
qsmoy | Mean of the average balances over 3 months
qcred | Mean credit movements in kF
dmvtp | Age of the last transaction (in days)
boppn | Number of transactions at M-1
facan | Amount billed during the year in francs
lgagt | Long-term commitments
vienb | Number of life-insurance contracts
viemt | Amount of life-insurance contracts in francs
uemnb | Number of monetary savings products
xlgnb | Number of home-savings products
xlgmt | Amount of home-savings products in francs
ylvnb | Number of passbook accounts
ylvmt | Amount held in passbook accounts in francs
rocnb | Number of card payments at M-1
nptag | Number of "point argent" cards
itavc | Total holdings across all accounts
havef | Total financial savings holdings in francs
jnbjd | Number of days overdrawn at M
**carvp | Possession of the VISA Premier card**
Answer the questions with the help of the execution results.
Data preparation
Reading the data
The data are available in this notebook's directory and are loaded below. They come from a first preprocessing (data munging) phase: detecting and correcting errors and inconsistencies, removing redundancies, handling missing data, and transforming some variables.
End of explanation
"""
vispremv.dtypes
# Transformation en indicatrices
vispremDum=pd.get_dummies(vispremv[["SEXEQ","FAMIQ","PCSPQ"]])
# Une seule est conservée pour les variables binaires
vispremDum.drop(["SEXEQ_Sfem","FAMIQ_Fseu"], axis = 1, inplace = True)
# Sélection des variables numériques
vispremNum = vispremv.select_dtypes(exclude=['object'])
# Concaténation des variables retenues
vispremR=pd.concat([vispremDum,vispremNum],axis=1)
vispremR.columns
"""
Explanation: Check below that most variables exist in two versions, one quantitative and one qualitative. The R version of this notebook compares two strategies: one based on the initial quantitative explanatory variables, the other on the qualitative versions obtained by binning. Given the results and the constraints of Python, or more precisely of scikit-learn, it is more appropriate to keep only the quantitative variables.
The qualitative variables (sex, socio-professional category, family) are turned into indicator variables, except for the target CARVP.
End of explanation
"""
vispremR.shape
# La variable à expliquer est recodée
y=vispremv["CARVP"].map(lambda x: 0 if x=="Cnon" else 1)
"""
Explanation: Q How many individuals and how many variables are finally involved?
End of explanation
"""
rd_seed=111 # Modifier cette valeur d'initialisation
npop=len(vispremv)
xApp,xTest,yApp,yTest=train_test_split(vispremR,y,test_size=200,random_state=rd_seed)
xApp.shape
"""
Explanation: Extracting the training and test samples
End of explanation
"""
from sklearn.linear_model import LogisticRegression
# Grille de valeurs du paramètre de pénalisaiton
param=[{"C":[0.5,1,5,10,12,15,30]}]
logitL = GridSearchCV(LogisticRegression(penalty="l1", solver="liblinear"), param, cv=5, n_jobs=-1)
logitLasso=logitL.fit(xApp, yApp)
# Sélection du paramètre optimal
logitLasso.best_params_["C"]
print("Meilleur score (apprentissage) = %f, Meilleur paramètre = %s" %
(1.-logitLasso.best_score_,logitLasso.best_params_))
"""
Explanation: Logistic regression
This long-standing method is still very widely used, partly out of habit but also because of its efficiency on very large data sets, notably for estimating very large models (many variables), for example at Criteo or CDiscount.
Estimation and optimization
Model selection is carried out by penalization: ridge, lasso, or a combination of the two (elastic net). Unlike the procedures available in R (stepwise, backward, forward), which optimize a criterion such as AIC, the algorithm provided in scikit-learn does not allow a simple handling of interactions. Moreover, the usual complements (Wald or likelihood-ratio tests) are not directly provided.
Lasso optimization
End of explanation
"""
# Prévision
yChap = logitLasso.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur sur l'échantillon test
print("Erreur de test régression Lasso = %f" % (1-logitLasso.score(xTest, yTest)))
"""
Explanation: Prediction error
End of explanation
"""
# Grilles de valeurs du paramètre de pénalisation
param=[{"C":[0.5,1,5,10,12,15,30]}]
logitR = GridSearchCV(LogisticRegression(penalty="l2"), param,cv=5,n_jobs=-1)
logitRidge=logitR.fit(xApp, yApp)
# Sélection du paramètre optimal
logitRidge.best_params_["C"]
print("Meilleur score = %f, Meilleur paramètre = %s" % (1.-logitRidge.best_score_,logitRidge.best_params_))
# Prévision
yChap = logitRidge.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur sur l'échantillon test
print("Erreur de test régression Ridge = %f" % (1-logitRidge.score(xTest, yTest)))
"""
Explanation: Ridge optimization
End of explanation
"""
LassoOpt = LogisticRegression(penalty="l1", C=12, solver="liblinear")
LassoOpt=LassoOpt.fit(xApp, yApp)
# Récupération des coefficients
vect_coef=np.matrix.transpose(LassoOpt.coef_)
vect_coef=vect_coef.ravel()
#Affichage des 25 plus importants
coef=pd.Series(abs(vect_coef),index=xApp.columns).sort_values(ascending=False)
print(coef)
plt.figure(figsize=(7,4))
coef.plot(kind='bar')
plt.title('Coefficients')
plt.tight_layout()
plt.show()
"""
Explanation: Q Note the prediction error; compare it with the cross-validation estimate.
Interpretation
The LassoOpt object produced by GridSearchCV does not retain the estimated model coefficients. The model must therefore be re-fitted with the optimal value of the penalty parameter if one wants to display these coefficients.
End of explanation
"""
from sklearn.metrics import roc_curve
listMethod=[["Lasso",logitLasso],["Ridge",logitRidge]]
for method in enumerate(listMethod):
probas_ = method[1][1].predict_proba(xTest)
fpr, tpr, thresholds = roc_curve(yTest, probas_[:,1])
plt.plot(fpr, tpr, lw=1,label="%s"%method[1][0])
plt.xlabel('Taux de faux positifs')
plt.ylabel('Taux de vrais positifs')
plt.legend(loc="best")
plt.show()
"""
Explanation: Q Which variables are important? How can they be interpreted?
Q Is the lasso penalty effective?
It would be interesting to compare with the ridge and elastic-net versions of the model optimization.
ROC curve
End of explanation
"""
from sklearn import discriminant_analysis
from sklearn.neighbors import KNeighborsClassifier
"""
Explanation: Discriminant analysis
Three methods are available: parametric linear or quadratic, and nonparametric (k nearest neighbors).
End of explanation
"""
lda = discriminant_analysis.LinearDiscriminantAnalysis()
disLin=lda.fit(xApp, yApp)
# Prévision de l'échantillon test
yChap = disLin.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur de prévision sur le test
print("Erreur de test lda = %f" % (1-disLin.score(xTest,yTest)))
"""
Explanation: Linear discriminant analysis
Fit the model (no variable-selection procedure is provided), then predict the test sample.
End of explanation
"""
qda = discriminant_analysis.QuadraticDiscriminantAnalysis()
disQua=qda.fit(xApp, yApp)
# Prévision de l'échantillon test
yChap = disQua.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur de prévision sur le test
print("Erreur de test qda = %f" % (1-disQua.score(xTest,yTest)))
"""
Explanation: Q What can be said about the quality of the fit? About the possibilities of interpretation?
Q What does the warning mean? Which variables cause it?
Quadratic discriminant analysis
End of explanation
"""
knn=KNeighborsClassifier(n_neighbors=10)
# Définition du modèle
disKnn=knn.fit(xApp, yApp)
# Prévision de l'échantillon test
yChap = disKnn.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur de prévision sur le test
print("Erreur de test knn = %f" % (1-disKnn.score(xTest,yTest)))
yChap
#Optimisation du paramètre de complexité k
#Grille de valeurs
param_grid=[{"n_neighbors":list(range(1,15))}]
disKnn=GridSearchCV(KNeighborsClassifier(),param_grid,cv=5,n_jobs=-1)
disKnnOpt=disKnn.fit(xApp, yApp) # GridSearchCV est lui même un estimateur
# paramètre optimal
disKnnOpt.best_params_["n_neighbors"]
print("Meilleur score = %f, Meilleur paramètre = %s" % (1.-disKnnOpt.best_score_,disKnnOpt.best_params_))
# Prévision de l'échantillon test
yChap = disKnnOpt.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Estimation de l'erreur de prévision sur l'échantillon test
print("Erreur de test knn_opt = %f" % (1-disKnnOpt.score(xTest,yTest)))
"""
Explanation: K nearest neighbors
End of explanation
"""
from sklearn.metrics import roc_curve
# Liste des méthodes
listMethod=[["lda",disLin],["qda",disQua],["knn",disKnnOpt]]
# Tracé des courbes
for method in enumerate(listMethod):
probas_ = method[1][1].predict_proba(xTest)
fpr, tpr, thresholds = roc_curve(yTest, probas_[:,1])
plt.plot(fpr, tpr, lw=1,label="%s"%method[1][0])
plt.xlabel('Taux de faux positifs')
plt.ylabel('Taux de vrais positifs')
plt.legend(loc="best")
plt.show()
"""
Explanation: ROC curves
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
# définition du modèle
tree= DecisionTreeClassifier()
treeC=tree.fit(xApp, yApp)
"""
Explanation: Binary decision trees
Binary decision trees compete with logistic regression and retain a prominent place in customer-relationship management (now data science) departments thanks to the ease of interpreting the resulting models. Optimizing the complexity of a tree can be delicate, as it is very sensitive to sample fluctuations.
End of explanation
"""
# Optimisation de la profondeur de l'arbre
param=[{"max_depth":list(range(2,10))}]
tree= GridSearchCV(DecisionTreeClassifier(),param,cv=10,n_jobs=-1)
treeOpt=tree.fit(xApp, yApp)
# paramètre optimal
print("Meilleur score = %f, Meilleur paramètre = %s" % (1. - treeOpt.best_score_,treeOpt.best_params_))
# Prévision de l'échantillon test
yChap = treeOpt.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur de prévision sur le test
print("Erreur de test tree qualitatif = %f" % (1-treeOpt.score(xTest,yTest)))
from sklearn.tree import export_graphviz
from io import StringIO  # sklearn.externals.six was removed in recent scikit-learn versions
import pydotplus
treeG=DecisionTreeClassifier(max_depth=treeOpt.best_params_['max_depth'])
treeG.fit(xApp,yApp)
dot_data = StringIO()
export_graphviz(treeG, out_file=dot_data)
graph=pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png("treeOpt.png")
from IPython.display import Image
Image(filename='treeOpt.png')
"""
Explanation: Q Which node-impurity criterion is used by default?
Q What is the issue with pruning the tree in scikit-learn compared with the possibilities of R's rpart library?
End of explanation
"""
# Liste des méthodes
listMethod=[["Logit",logitLasso],["lda",disLin],["Arbre",treeOpt]]
# Tracé des courbes
for method in enumerate(listMethod):
probas_ = method[1][1].predict_proba(xTest)
fpr, tpr, thresholds = roc_curve(yTest, probas_[:,1])
plt.plot(fpr, tpr, lw=1,label="%s"%method[1][0])
plt.xlabel('Taux de faux positifs')
plt.ylabel('Taux de vrais positifs')
plt.legend(loc="best")
plt.show()
"""
Explanation: Courbes ROC
Comparaison des méthodes précédentes.
La valeur de seuil par défaut (0.5) n'étant pas nécessairement celle "optimale", il est important de comparer les courbes ROC.
End of explanation
"""
from sklearn.ensemble import BaggingClassifier
bag= BaggingClassifier(n_estimators=100,oob_score=False)
bagC=bag.fit(xApp, yApp)
# Prévision de l'échantillon test
yChap = bagC.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur de prévision sur le test
print("Erreur de test avec le bagging = %f" % (1-bagC.score(xTest,yTest)))
"""
Explanation: Comment on the results.
Q What is the advantage of logistic regression over linear discriminant analysis?
Q What is the consequence of crossing ROC curves for the evaluation of the AUC?
The test sample remains modest in size (200); a more systematic study is needed, along with consideration of the other methods.
Model-aggregation algorithms
The goal is to compare the main algorithms from machine learning: bagging, random forest, boosting.
Bagging
Q Which estimator is aggregated by default?
Q What is the default number of estimators? Does it need to be optimized?
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
# Optimisation de max_features
param=[{"max_features":list(range(2,10,1))}]
rf= GridSearchCV(RandomForestClassifier(n_estimators=100),param,cv=5,n_jobs=-1)
rfOpt=rf.fit(xApp, yApp)
# paramètre optimal
print("Meilleur score = %f, Meilleur paramètre = %s" % (1. - rfOpt.best_score_,rfOpt.best_params_))
# Prévision de l'échantillon test
yChap = rfOpt.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur de prévision sur le test
print("Erreur de test random forest opt -quantitatif = %f" % (1-rfOpt.score(xTest,yTest)))
"""
Explanation: Q Run the cell above several times. What does this say about the stability of the error estimate, and hence its reliability?
Random forest
Q Which parameter must be optimized for this algorithm? What is its default value?
Q Is the number of trees in the forest a sensitive parameter?
End of explanation
"""
from sklearn.ensemble import GradientBoostingClassifier
# Optimisation de deux paramètres
paramGrid = [
{'n_estimators': list(range(100,601,50)), 'learning_rate': [0.1,0.2,0.3,0.4]}
]
gbmC= GridSearchCV(GradientBoostingClassifier(),paramGrid,cv=5,n_jobs=-1)
gbmOpt=gbmC.fit(xApp, yApp)
# paramètre optimal
print("Meilleur score = %f, Meilleur paramètre = %s" % (1. - gbmOpt.best_score_,gbmOpt.best_params_))
# Prévision de l'échantillon test
yChap = gbmOpt.predict(xTest)
# matrice de confusion
table=pd.crosstab(yChap,yTest)
print(table)
# Erreur de prévision sur le test
print("Erreur de test gbm opt = %f" % (1-gbmOpt.score(xTest,yTest)))
"""
Explanation: Gradient boosting
Q What is the historical boosting algorithm? Which one is used here?
Q Which parameters are important to control and optimize?
Q What is the default value of the one not optimized below?
End of explanation
"""
# Liste des méthodes
listMethod=[["Logit",logitLasso],["lda",disLin],["Arbre",treeOpt],["RF",rfOpt],["GBM",gbmOpt]]
# Tracé des courbes
for method in enumerate(listMethod):
probas_ = method[1][1].predict_proba(xTest)
fpr, tpr, thresholds = roc_curve(yTest, probas_[:,1])
plt.plot(fpr, tpr, lw=1,label="%s"%method[1][0])
plt.xlabel('Taux de faux positifs')
plt.ylabel('Taux de vrais positifs')
plt.legend(loc="best")
plt.show()
"""
Explanation: ROC curves
End of explanation
"""
from sklearn.utils import check_random_state
import time
check_random_state(13)
tps0 = time.perf_counter()  # time.clock() was removed in Python 3.8
# définition des estimateurs
logit = LogisticRegression(penalty="l1", solver="liblinear")
lda = discriminant_analysis.LinearDiscriminantAnalysis()
arbre = DecisionTreeClassifier()
rf = RandomForestClassifier(n_estimators=200)
gbm = GradientBoostingClassifier()
# Nombre d'itérations
B=3 # pour utiliser le programme, mettre plutôt B=30
# définition des grilles de paramètres
listMethGrid=[
[logit,{"C":[0.5,1,5,10,12,15,30]}],
[lda,{}],
[arbre,{"max_depth":[2,3,4,5,6,7,8,9,10]}],
[rf,{"max_features":[2,3,4,5,6]}],
[gbm,{"n_estimators": list(range(100,601,50)),"learning_rate": [0.1,0.2,0.3,0.4]}]
]
# Initialisation à 0 des erreurs pour chaque méthode (colonne) et chaque itération (ligne)
arrayErreur=np.empty((B,5))
for i in range(B): # itérations sur B échantillons test
# extraction apprentissage et test
xApp,xTest,yApp,yTest=train_test_split(vispremR,y,test_size=200)
# optimisation de chaque méthode et calcul de l'erreur sur le test
for j,(method, grid_list) in enumerate(listMethGrid):
methodGrid=GridSearchCV(method,grid_list,cv=5,n_jobs=-1).fit(xApp, yApp)
methodOpt = methodGrid.best_estimator_
methFit=methodOpt.fit(xApp, yApp)
arrayErreur[i,j]=1-methFit.score(xTest,yTest)
tps1=time.clock()
print("Temps execution en mn :",(tps1 - tps0)/60)
dataframeErreur=pd.DataFrame(arrayErreur,columns=["Logit","LDA","Arbre","RF","GBM"])
# Distribution des erreurs
dataframeErreur[["Logit","LDA","Arbre","RF","GBM"]].boxplot(return_type='dict')
plt.show()
"""
Explanation: Q Which method is the best interpretable one? Which method is best overall?
Q What about extreme gradient boosting? How many parameters does it have to optimize? How does its Python implementation compare to R? Is it available on Windows?
Exercise Add neural networks and SVMs to the comparison.
Monte Carlo cross-validation
The sample is small (about 200 observations), and both the estimated accuracy rates and the ROC curves are highly dependent on the test sample; one may question which model is really the most accurate and whether the observed differences between methods are significant. It is therefore important to iterate the process (Monte Carlo cross-validation) over several test samples. Run the function in the appendix, choosing the methods that seem to perform best.
End of explanation
"""
# Means
dataframeErreur.mean()
"""
Explanation: Q In the end, which method is best? Which interpretable method is best?
Exercise: try XGBoost.
End of explanation
"""
|
thehackerwithin/berkeley | code_examples/data_tidying_python_r/Data Tidying and Transformation in Python.ipynb | bsd-3-clause |
from __future__ import print_function # For the python2 people
import pandas as pd # This is typically how pandas is loaded
"""
Explanation: Data Tidying and Transformation in Python
by David DeTomaso, Diya Das, and Andrey Indukaev
The goal
Data tidying is a necessary first step for data analysis - it's the process of taking your messily formatted data (missing values, unwieldy coding/organization, etc.) and literally tidying it up so it can be easily used for downstream analyses. To quote Hadley Wickham, "Tidy datasets are easy to manipulate, model and visualise, and have a specific structure:
each variable is a column, each observation is a row, and each type of observational unit
is a table."
These data are actually pretty tidy, so we're going to be focusing on cleaning and transformation, but these manipulations will give you some idea of how to tidy untidy data.
The datasets
We are going to be using the data from the R package nycflights13. There are five datasets corresponding to flights departing NYC in 2013. We will load directly into R from the library, but the repository also includes CSV files we created for the purposes of the Python demo.
Python requirements
For this tutorial we'll be using the following packages in Python
- pandas (depends on numpy)
- seaborn (depends on matplotlib)
You can install these with either pip or conda
pandas
Pandas is an extremely useful package for data-manipulation in python. It allows for a few things:
- Mixed types in a data matrix
- Non-numeric row/column indexes
- Database-like join/merge/group-by operations
- Data import/export to a variety of formats (text, Excel, JSON)
The core pandas object is a 'dataframe' - modeled after DataFrames in R
End of explanation
"""
airlines = pd.read_table("airlines.txt")
airports = pd.read_table("airports.txt")
flights = pd.read_table("flights.txt")
planes = pd.read_table("planes.txt")
weather = pd.read_table("weather.txt")
"""
Explanation: Reading data from a file
Let's read data from a file
There are five tables we'll be using as part of the NYCFlights13 dataset
To view them, first extract the archive that comes with this repo
bash
unzip nycflights13.zip
Now, to read them in as dataframes, we'll use the read_table function from pandas
This is a general purpose function for reading tabular data in a text file format. If you follow the link, you can see that there are many configurable options. We'll just use the defaults (assumes tab-delimited)
End of explanation
"""
print(type(planes)) # Yup, it's a DataFrame
# What does it look like?
planes # Jupyter Notebooks do some nifty formatting here
# How big is it?
print(planes.shape) # Works like numpy
print(planes.columns) # What are the column labels?
print(planes.index) # What are the row labels?
# Let's grab a column
planes['manufacturer']
# Inspecting this column further
manufacturer = planes['manufacturer']
print(type(manufacturer)) # It's a Series
"""
Explanation: Inspecting a dataframe // What's in the flights dataset?
Let's run through an example using the flights dataset. This dataset includes...well what does it include? You could read the documentation, but let's take a look first.
Anatomy of a pandas DataFrame
There are a couple of concepts that are important to understand when working with dataframes
- DataFrame class
- Series
- Index / Columns
To understand these, lets look at the 'planes' dataframe
End of explanation
"""
# Indexing into Series
print("Indexing into Series: ", manufacturer[3])
# Indexing into DataFrame
print("Indexing into DataFrame: ", planes.loc[3, 'manufacturer'])
"""
Explanation: Series
one-dimensional and only have one set of labels (called index)
DataFrames
two-dimensional and have row-labels (called index) and column-labels (called columns)
DataFrames are made up of Series (each column is a series)
End of explanation
"""
third_row = planes.loc[3] # get the third row
third_row
print(type(third_row))
"""
Explanation: DataFrame Indexing
We already showed that you can grab a column using
dataframe[column_name]
To grab a row, use:
dataframe.loc[row_name]
And to grab a specific element use:
dataframe.loc[row_name, column_name]
End of explanation
"""
planes = planes.set_index('tailnum')
# OR
planes = pd.read_table('planes.txt', index_col=0) #Set the first column as the index
planes.loc['N10156']
"""
Explanation: Dataframe index
So far the row-index has been numeric (just 0 through ~3300). However, we might want to use labels here too.
To do this, we can select a column to be the dataframe's index
Only do this if the column contains unique data
End of explanation
"""
print(planes.iloc[3]) # Get the third row
print(planes.iloc[:, 3]) # Get the third column
"""
Explanation: But now how do I get the 3rd row?
Here's where iloc comes into play.
Works like loc but uses integers
End of explanation
"""
print('What are the first 5 rows?')
flights.head()
print('What are the last 5 rows?')
flights.tail()
print('Sample random rows')
flights.sample(3, axis=0) # Axis 0 represents the rows, axis 1, the columns
"""
Explanation: Exploring our dataset - let's look at the flights table
End of explanation
"""
print('What are the dimensions of the flights dataframe?\n')
print(flights.shape)
print('Are there any NAs in the flights dataframe?\n')
print(flights.isnull().any())
print('Selecting for flights where there is complete data, what are the dimensions?\n')
print("Original Matrix Shape:", flights.shape)
null_rows = flights.isnull().any(axis=1) # Rows where any value is null
flights_complete = flights.loc[~null_rows]
print("Complete-rows shape:", flights_complete.shape)
"""
Explanation: Identifying and removing NAs in a dataset
We noticed some NAs above (hopefully). How do you find them and remove observations for which there are NAs?
End of explanation
"""
print(type(null_rows))
null_rows
"""
Explanation: Aside: Why does this work with loc?
Earlier I showed .loc operating on row/column labels.
Well, it can also operate on boolean (true/false) lists (or numpy arrays, or pandas Series)
Above, what is null_rows?
End of explanation
"""
print('How might I obtain a summary of the original dataset?')
flights.describe() # Similar to R's 'summary'
# use include='all' to include the non-numeric columns too
"""
Explanation: The great thing about Pandas is that if you pass in a Series, the order of the elements in it doesn't matter anymore. It uses the index to align the Series to the row/column index of the dataframe.
This is very useful when creating a boolean index from one dataframe to be used to select rows in another!
Alternately, with removing NA values there is a dropna function that can be used.
Now...back to flights!
End of explanation
"""
# An example
flights['air_time'].mean() # Returns a single value
subset = flights[['air_time', 'dep_delay', 'arr_delay']]
subset.mean(axis=0) # Axis 0: collapse all rows, result has Index = to original Columns
"""
Explanation: Performing a function along an axis
Pandas allows easy application of descriptive function along an axis.
any, which we used earlier, is an example of this. For boolean data, any collapses a series of boolean values into True if any of the values are true (otherwise, False)
Can also use min, max, mean, var, std, count
End of explanation
"""
result = flights_complete.groupby('origin')['dep_delay'].mean()
result
# What is this object?
print(type(result))
"""
Explanation: If you want to apply an arbitrary function along an axis, look into the apply function
Performing column-wise operations while grouping by other columns // Departure delay by airport of origin
Sometimes you may want to perform some aggregate function on data by category, which is encoded in another column. Here we calculate the statistics for departure delay, grouping by origin of the flight - remember this is the greater NYC area, so there are only three origins!
End of explanation
"""
flights_complete.groupby('origin')['dep_delay'].describe()
"""
Explanation: Other descriptive functions work here, like 'std', 'count', 'min', 'max'
Also: describe
End of explanation
"""
print('Subsetting the dataset to have 2 dataframes')
flightsUA = flights.loc[flights.carrier == 'UA']
flightsAA = flights.loc[flights.carrier == 'AA']
print('Checking the number of rows in two dataframes')
print(flightsUA.shape[0] + flightsAA.shape[0])
print('Combining two dataframes, then checking the number of rows in the resulting data frame')
flightsUAandAA = pd.concat([flightsUA,flightsAA], axis=0) # axis=1 would stitch them together horizontally
print(flightsUAandAA.shape[0])
"""
Explanation: Merging tables 'vertically' // Subsetting and re-combining flights from different airlines
You will likely need to combine datasets at some point. For simple acts of stitching two dataframes together, the pandas concat method is used.
Let's create a data frame with information on flights by United Airlines and American Airlines only, by creating two data frames via subsetting data about each airline one by one and then merging.
The main requirement is that the columns must have the same names (may be in different order).
End of explanation
"""
print('Binding 3 data frames and checking the number of rows')
allthree = pd.concat([flightsUA,flightsAA,flightsUAandAA])
allthree.shape[0]
"""
Explanation: Nothing special, just be sure the dataframes have the columns with the same names and types.
End of explanation
"""
airports.head()
"""
Explanation: Merge two tables by a single column // What are the most common destination airports?
The flights dataset has destination airports coded as three-letter airport codes. I'm pretty good at decoding them, but you don't have to be.
End of explanation
"""
print('Merging in pandas')
flights_readdest = flights_complete.merge(airports, left_on='dest', right_on = 'faa', how='left')
flights_readdest.head()
"""
Explanation: The airports table gives us a key! Let's merge the flights data with the airports data, using dest in flights and faa in airports.
End of explanation
"""
len(set(airports.faa) - set(flights.dest))
"""
Explanation: Why did we use how='left'?
End of explanation
"""
flights_readdest.columns
flights_sm = flights_readdest[['origin', 'name', 'year', 'month', 'day', 'air_time']]
flights_sm.head()
# Renaming is not so simple in pandas
flights_sm = flights_sm.rename(columns = {'name': 'dest'})
flights_sm.head()
"""
Explanation: There are 1357 airports in the airports table that aren't in the flights table at all.
Here are the different arguments for how and what they'd do:
'left': use all rows for flights_complete, and only rows from airports that match
'right': use all rows for airports, and only rows from flights that match
'inner': use only rows for airports and flights that match on the dest/faa columns
'outer': use all rows from both airports and flights
Well this merged dataset is nice, but do we really need all of this information?
End of explanation
"""
airtime = flights_complete.merge(airports, left_on='dest', right_on='faa', how='left') \
.loc[:, ['origin', 'name', 'air_time']] \
.groupby(['origin', 'name'])['air_time'] \
.mean()
print(airtime.shape)
airtime.head()
"""
Explanation: Since each operation gives us back a dataframe, they are easily chained:
End of explanation
"""
airtime.groupby(level='origin').max()
# What if we want to know where the flight goes?
rows = airtime.groupby(level='origin').idxmax() # This returns the indices in airtime where the max was found
airtime[rows] # Index by it to get the max rows
"""
Explanation: Goal: What's the longest flight from each airport, on average?
Here, 'airtime' is a little abnormal because it's Index has two levels
- First level is the 'origin'
- Second level is the name of the destination
This is because we grouped by two variables.
Now we need to group by 'origin' and apply the 'max' function. Groupby can work for the levels of a multi-index too
End of explanation
"""
pvt_airtime = airtime.unstack() # Since airtime has a hierarchical index, we can use unstack
pvt_airtime
"""
Explanation: Pivot Table // Average flight time from origin to destination
Let's put origins in rows and destinations in columns, and have air_time as values.
End of explanation
"""
airtime_df = pd.DataFrame(airtime).reset_index()
airtime_df.head()
airtime_pv = airtime_df.pivot(index='origin',
columns='name',
values='air_time')
airtime_pv
"""
Explanation: However, often you want to pivot just a regular dataframe. I'll create one from airtime for an example:
End of explanation
"""
weather.head()
print(flights_complete.columns.intersection(weather.columns))  # What columns do they share?
flights_weather = flights_complete.merge(weather,
on=["year", "month","day","hour", "origin"])
print(flights_complete.shape)
print(flights_weather.shape)
"""
Explanation: Multi-column merge // What's the weather like for departing flights?
Flights...get delayed. What's the first step if you want to know if the departing airport's weather is at all responsible for the delay? Luckily, we have a weather dataset for that.
Let's take a look.
End of explanation
"""
# Let's grab flights+weather where the delay was greater than 200 minutes
flights_weather_posdelays = flights_weather.loc[flights_weather.dep_delay > 200]
flights_weather_posdelays.shape
# Anything unusual about these flights?
%matplotlib notebook
import matplotlib.pyplot as plt
import seaborn as sns
# note: matplotlib's old normed= argument was renamed to density=
plt.figure()
plt.hist(flights_weather.dropna().wind_gust, 30, range=(0, 50), density=True, label='normal', alpha=.7)
plt.hist(flights_weather_posdelays.dropna().wind_gust, 30, range=(0, 50), density=True, label='delayed', alpha=.7)
plt.legend(loc='best')
plt.title('Wind Gust')
plt.figure()
plt.hist(flights_weather.dropna().pressure, 30, density=True, label='normal', alpha=.7)
plt.hist(flights_weather_posdelays.dropna().pressure, 30, density=True, label='delayed', alpha=.7)
plt.legend(loc='best')
plt.title('Pressure')
plt.figure()
plt.hist(flights_weather.dropna().hour, 30, density=True, label='normal', alpha=.7)
plt.hist(flights_weather_posdelays.dropna().hour, 30, density=True, label='delayed', alpha=.7)
plt.legend(loc='best')
plt.title('Hour')
"""
Explanation: flights_weather has fewer rows. The default behavior of merge is 'inner', which means there are some flight year/month/day/hour/origin combinations for which we don't have a weather entry
End of explanation
"""
flights_weather.sort_values('dep_delay').head(10)
flights_weather.sort_values('dep_delay', ascending=False).head(10)
"""
Explanation: Arranging a dataframe // What's the weather like for the most and least delayed flights?
Let's sort the flights_weather dataframe on dep_delay and get data for the top 10 and bottom 10 delays.
End of explanation
"""
flights_complete.dest.str.lower().head() # For string columns, use .str to access string methods
flights_complete.dest.str.upper().head()
"""
Explanation: Some other tidying
Capitalization issues.
End of explanation
"""
flights_complete.head()
day_delay = pd.melt(flights_complete, id_vars=['hour', 'time_hour'], value_vars=['dep_delay', 'arr_delay'], var_name='type_of_delay')
day_delay
plt.figure()
sns.stripplot(x='hour', y='value', hue='type_of_delay', data=day_delay)
"""
Explanation: Wide to long formatted data
End of explanation
"""
day_delay_first = day_delay.drop_duplicates('time_hour', keep='first')
day_delay_first.head()
"""
Explanation: Well this is a bit hard to read. What about the first entry for each type of delay in each hour?
Removing duplicates
End of explanation
"""
flights.isnull().sum(axis=0)
flights_incomplete = flights.loc[flights.isnull().any(axis=1)]
flights_incomplete.shape
"""
Explanation: An incomplete investigation of NAs
Let's examine where there are NAs in the flights dataset.
End of explanation
"""
pd.crosstab(
index=flights_incomplete.dep_time.isnull(), # Series of bool values
columns=flights_incomplete.dep_delay.isnull() # series of bool values
)
"""
Explanation: Do flights with NA departure time also have an NA departure delay?
End of explanation
"""
|
kit-cel/wt | qc/quantization/Uniform_Quantization_Sine.ipynb | gpl-2.0 |
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import librosa
import librosa.display
import IPython.display as ipd
"""
Explanation: Illustration of Uniform Quantization
This code is provided as supplementary material of the lecture Quellencodierung.
This code illustrates
* Uniform scalar quantization with midrise characteristic
End of explanation
"""
sr = 22050 # sample rate
T = 1.0 # seconds
t = np.linspace(0, T, int(T*sr), endpoint=False) # time variable
x = np.sin(2*np.pi*2*t) # pure sine wave at 2 Hz
"""
Explanation: Generate artificial signal
$$
x[k] = \sin\left(2\pi\frac{2k}{f_s}\right),\qquad k = 0,\ldots,f_s-1
$$
End of explanation
"""
# Quantize to 4 bit ... 16 quantization levels
w = 4
# fix x_max based on the current signal, leave some tiny room
x_max = np.max(x) + 1e-10
Delta_x = x_max / (2**(w-1))
xh_max = (2**w-1)*Delta_x/2
# Quantize
xh_uniform_midrise = np.sign(x)*Delta_x*(np.floor(np.abs(x)/Delta_x)+0.5)
font = {'size' : 12}
plt.rc('font', **font)
plt.rc('text', usetex=True)
plt.figure(figsize=(6, 6))
plt.subplot(3,1,1)
plt.plot(range(len(t)),x, c=(0,0.59,0.51))
plt.autoscale(enable=True, axis='x', tight=True)
#plt.title('Original')
plt.xlabel('Sample index $k$', fontsize=14)
plt.ylabel('$x[k]$', fontsize=14)
plt.ylim((-1.1,+1.1))
plt.subplot(3,1,2)
plt.plot(range(len(t)),xh_uniform_midrise, c=(0,0.59,0.51))
plt.autoscale(enable=True, axis='x', tight=True)
#plt.title('Quantized')
plt.xlabel('Sample index $k$', fontsize=14)
plt.ylabel('$\hat{x}[k]$', fontsize=14)
plt.ylim((-1.1,+1.1))
plt.subplot(3,1,3)
plt.plot(range(len(t)),x-xh_uniform_midrise,c=(0,0.59,0.51))
plt.autoscale(enable=True, axis='x', tight=True)
#plt.title('Quantization error signal')
plt.xlabel('Sample index $k$', fontsize=14)
plt.ylabel('$e[k]$', fontsize=14)
plt.ylim((-1.1,+1.1))
plt.tight_layout()
plt.savefig('figure_DST_7.2c.pdf',bbox_inches='tight')
"""
Explanation: Uniform Quantization
End of explanation
"""
|
intel-analytics/BigDL | docs/docs/ClusterServingGuide/OtherFrameworkUsers/keras-to-cluster-serving-example.ipynb | apache-2.0 |
import tensorflow as tf
import os
import PIL
tf.__version__
# Obtain data from url:"https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip"
zip_file = tf.keras.utils.get_file(origin="https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip",
fname="cats_and_dogs_filtered.zip", extract=True)
# Find the directory of validation set
base_dir, _ = os.path.splitext(zip_file)
test_dir = os.path.join(base_dir, 'validation')
# Set images size to 160x160x3
image_size = 160
# Rescale all images by 1./255 and apply image augmentation
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
# Flow images using generator to the test_generator
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(image_size, image_size),
batch_size=1,
class_mode='binary')
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE=(160,160,3)
model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
"""
Explanation: In this example, we will use the tensorflow.keras package to create a Keras image classification application using the MobileNetV2 model, and then transfer the application to Cluster Serving step by step.
Original Keras application
We will first show an original Keras application, which download the data and preprocess it, then create the MobileNetV2 model to predict.
End of explanation
"""
prediction=model.predict(test_generator.next()[0])
print(prediction)
"""
Explanation: In Keras, the input can be an ndarray or a generator. We could simply call model.predict(test_generator), but to keep things simple, here we feed just the first record to the model.
End of explanation
"""
# Save trained model to ./transfer_learning_mobilenetv2
model.save('/tmp/transfer_learning_mobilenetv2')
! ls /tmp/transfer_learning_mobilenetv2
"""
Explanation: Great! Now the Keras application is complete.
Export TensorFlow SavedModel
Next, we transfer the application to Cluster Serving. The first step is to save the model in the SavedModel format.
End of explanation
"""
! pip install analytics-zoo-serving
# we go to a new directory and initialize the environment
! mkdir cluster-serving
os.chdir('cluster-serving')
! cluster-serving-init
! tail wget-log.2
# if you encounter a slow download like the one above, you can instead download directly with:
# ! wget https://repo1.maven.org/maven2/com/intel/analytics/zoo/analytics-zoo-bigdl_0.12.1-spark_2.4.3/0.9.0/analytics-zoo-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar
# if you downloaded with wget, or "ls" shows a jar named "analytics-zoo-xxx-serving.jar",
# rename it after the download with: mv *serving.jar zoo.jar
# After initialization finished, check the directory
! ls
"""
Explanation: Deploy Cluster Serving
After model prepared, we start to deploy it on Cluster Serving.
First install Cluster Serving
End of explanation
"""
## Analytics-zoo Cluster Serving
model:
# model path must be provided
path: /tmp/transfer_learning_mobilenetv2
! head config.yaml
"""
Explanation: We set the model path in config.yaml to the following (details of the configuration are in the Cluster Serving Configuration guide)
End of explanation
"""
! $FLINK_HOME/bin/start-cluster.sh
"""
Explanation: Start Cluster Serving
Cluster Serving requires Flink and Redis installed, and corresponded environment variables set, check Cluster Serving Installation Guide for detail.
Flink cluster should start before Cluster Serving starts, if Flink cluster is not started, call following to start a local Flink cluster.
End of explanation
"""
! cluster-serving-start
"""
Explanation: After configuration, start Cluster Serving with cluster-serving-start (details are in the Cluster Serving Programming Guide)
End of explanation
"""
from zoo.serving.client import InputQueue, OutputQueue
input_queue = InputQueue()
"""
Explanation: Prediction using Cluster Serving
Next, we run the Cluster Serving client code in Python.
End of explanation
"""
arr = test_generator.next()[0]
arr
# Use async api to put and get, you have pass a name arg and use the name to get
input_queue.enqueue('my-input', t=arr)
output_queue = OutputQueue()
prediction = output_queue.query('my-input')
# Use sync api to predict, this will block until the result is get or timeout
prediction = input_queue.predict(arr)
prediction
"""
Explanation: In Cluster Serving, only NdArray is supported as input. Thus, we first transform the generator output to an ndarray (if you do not know how to transform your input to an NdArray, see the data transform guide)
End of explanation
"""
# don't forget to delete the model you save for this tutorial
! rm -rf /tmp/transfer_learning_mobilenetv2
"""
Explanation: If everything works well, the predicted result should be exactly the same NdArray as the output of the original Keras model.
End of explanation
"""
|
nmih/ssbio | docs/notebooks/GEM-PRO - Genes & Sequences.ipynb | mit |
import sys
import logging
# Import the GEM-PRO class
from ssbio.pipeline.gempro import GEMPRO
# Printing multiple outputs per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
"""
Explanation: GEM-PRO - Genes & Sequences
This notebook gives an example of how to run the GEM-PRO pipeline with a dictionary of gene IDs and their protein sequences.
<div class="alert alert-info">
**Input:**
Dictionary of gene IDs and protein sequences
</div>
<div class="alert alert-info">
**Output:**
GEM-PRO model
</div>
Imports
End of explanation
"""
# Create logger
logger = logging.getLogger()
logger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE #
# Other logger stuff for Jupyter notebooks
handler = logging.StreamHandler(sys.stderr)
formatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt="%Y-%m-%d %H:%M")
handler.setFormatter(formatter)
logger.handlers = [handler]
"""
Explanation: Logging
Set the logging level in logger.setLevel(logging.<LEVEL_HERE>) to specify how verbose you want the pipeline to be. Debug is most verbose.
- CRITICAL: Only really important messages shown
- ERROR: Major errors
- WARNING: Warnings that don't affect running of the pipeline
- INFO (default): Info such as the number of structures mapped per gene
- DEBUG: Really detailed information that will print out a lot of stuff
<div class="alert alert-warning">
**Warning:**
`DEBUG` mode prints out a large amount of information, especially if you have a lot of genes. This may stall your notebook!
</div>
End of explanation
"""
# SET FOLDERS AND DATA HERE
import tempfile
ROOT_DIR = tempfile.gettempdir()
PROJECT = 'genes_and_sequences_GP'
GENES_AND_SEQUENCES = {'b0870': 'MIDLRSDTVTRPSRAMLEAMMAAPVGDDVYGDDPTVNALQDYAAELSGKEAAIFLPTGTQANLVALLSHCERGEEYIVGQAAHNYLFEAGGAAVLGSIQPQPIDAAADGTLPLDKVAMKIKPDDIHFARTKLLSLENTHNGKVLPREYLKEAWEFTRERNLALHVDGARIFNAVVAYGCELKEITQYCDSFTICLSKGLGTPVGSLLVGNRDYIKRAIRWRKMTGGGMRQSGILAAAGIYALKNNVARLQEDHDNAAWMAEQLREAGADVMRQDTNMLFVRVGEENAAALGEYMKARNVLINASPIVRLVTHLDVSREQLAEVAAHWRAFLAR',
'b3041': 'MNQTLLSSFGTPFERVENALAALREGRGVMVLDDEDRENEGDMIFPAETMTVEQMALTIRHGSGIVCLCITEDRRKQLDLPMMVENNTSAYGTGFTVTIEAAEGVTTGVSAADRITTVRAAIADGAKPSDLNRPGHVFPLRAQAGGVLTRGGHTEATIDLMTLAGFKPAGVLCELTNDDGTMARAPECIEFANKHNMALVTIEDLVAYRQAHERKAS'}
PDB_FILE_TYPE = 'mmtf'
# Create the GEM-PRO project
my_gempro = GEMPRO(gem_name=PROJECT, root_dir=ROOT_DIR, genes_and_sequences=GENES_AND_SEQUENCES, pdb_file_type=PDB_FILE_TYPE)
"""
Explanation: Initialization of the project
Set these three things:
ROOT_DIR
The directory where a folder named after your PROJECT will be created
PROJECT
Your project name
GENES_AND_SEQUENCES
Your dictionary of gene IDs and their protein sequences
A directory will be created in ROOT_DIR with your PROJECT name. The folders are organized like so:
```
ROOT_DIR
└── PROJECT
├── data # General storage for pipeline outputs
├── model # SBML and GEM-PRO models are stored here
├── genes # Per gene information
│ ├── <gene_id1> # Specific gene directory
│ │ └── protein
│ │ ├── sequences # Protein sequence files, alignments, etc.
│ │ └── structures # Protein structure files, calculations, etc.
│ └── <gene_id2>
│ └── protein
│ ├── sequences
│ └── structures
├── reactions # Per reaction information
│ └── <reaction_id1> # Specific reaction directory
│ └── complex
│ └── structures # Protein complex files
└── metabolites # Per metabolite information
└── <metabolite_id1> # Specific metabolite directory
└── chemical
└── structures # Metabolite 2D and 3D structure files
```
<div class="alert alert-info">**Note:** Methods for protein complexes and metabolites are still in development.</div>
End of explanation
"""
# Mapping using BLAST
my_gempro.blast_seqs_to_pdb(all_genes=True, seq_ident_cutoff=.9, evalue=0.00001)
my_gempro.df_pdb_blast.head(2)
"""
Explanation: Mapping sequence --> structure
Since the sequences have been provided, we just need to BLAST them to the PDB.
<p><div class="alert alert-info">**Note:** These methods do not download any 3D structure files.</div></p>
Methods
End of explanation
"""
# Download all mapped PDBs and gather the metadata
my_gempro.pdb_downloader_and_metadata()
my_gempro.df_pdb_metadata.head(2)
# Set representative structures
my_gempro.set_representative_structure()
my_gempro.df_representative_structures.head()
# Looking at the information saved within a gene
my_gempro.genes.get_by_id('b0870').protein.representative_structure
my_gempro.genes.get_by_id('b0870').protein.representative_structure.get_dict()
"""
Explanation: Downloading and ranking structures
Methods
<div class="alert alert-warning">
**Warning:**
Downloading all PDBs takes a while, since they are also parsed for metadata. You can skip this step and just set representative structures below if you want to minimize the number of PDBs downloaded.
</div>
End of explanation
"""
# Prep I-TASSER model folders
my_gempro.prep_itasser_modeling('~/software/I-TASSER4.4', '~/software/ITLIB/', runtype='local', all_genes=False)
"""
Explanation: Creating homology models
For those proteins with no representative structure, we can create homology models. ssbio contains built-in functions for easily running I-TASSER locally or on machines with SLURM (e.g. on NERSC) or Torque job scheduling.
Once the I-TASSER runs complete, you can load the models using get_itasser_models.
<p><div class="alert alert-info">**Info:** Homology modeling can take a long time - about 24-72 hours per protein (highly dependent on the sequence length, as well as if there are available templates).</div></p>
Methods
End of explanation
"""
import os.path as op
my_gempro.save_json(op.join(my_gempro.model_dir, '{}.json'.format(my_gempro.id)), compression=False)
"""
Explanation: Saving your GEM-PRO
<p><div class="alert alert-warning">**Warning:** Saving is still experimental. For a full GEM-PRO with sequences & structures, depending on the number of genes, saving can take >5 minutes.</div></p>
End of explanation
"""
|
RNAer/Calour | doc/source/notebooks/microbiome_diff_abundance.ipynb | bsd-3-clause |
import calour as ca
ca.set_log_level(11)
%matplotlib notebook
import numpy as np
np.random.seed(2018)
"""
Explanation: Microbiome differential abundance tutorial
This is a jupyter notebook example of how to identify bacteria different between two conditions
Setup
End of explanation
"""
cfs=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',
'data/chronic-fatigue-syndrome.sample.txt',
normalize=10000,min_reads=1000)
print(cfs)
"""
Explanation: Load the data
we use the chronic fatigue syndrome dataset from
Giloteaux, L., Goodrich, J.K., Walters, W.A., Levine, S.M., Ley, R.E. and Hanson, M.R., 2016.
Reduced diversity and altered composition of the gut microbiome in individuals with myalgic encephalomyelitis/chronic fatigue syndrome.
Microbiome, 4(1), p.30.
End of explanation
"""
cfs=cfs.filter_abundance(10)
"""
Explanation: Looking at the data
remove non-interesting bacteria
keep only bacteria with > 10 (normalized) reads total over all samples.
End of explanation
"""
cfs=cfs.cluster_features()
"""
Explanation: cluster the bacteria
so we'll see how it looks before the differential abundance
End of explanation
"""
cfs=cfs.sort_samples('Subject')
"""
Explanation: sort the samples according to sick/healthy
for the plot. Note this is not required for the differential abundance test (but doesn't hurt either)
End of explanation
"""
cfs.plot(sample_field='Subject',gui='jupyter')
"""
Explanation: plot the original data
End of explanation
"""
dd=cfs.diff_abundance('Subject','Control','Patient')
"""
Explanation: So we see there are lots of bacteria, hard to see which ones are significantly different between Control (healthy) and Patient (CFS).
Differential abundance (diff_abundance)
Default options
(rank mean test after rank transform, with ds-FDR FDR control of 0.1)
Note that the result of diff_abundance() is a new Experiment, containing only the statistically significant bacteria. The bacteria are sorted according to the effect size (where a positive effect size is when group1 > group2, and negative is group1 < group2).
End of explanation
"""
dd.plot(sample_field='Subject',gui='jupyter')
"""
Explanation: So we got 54 differentially abundant bacteria when comparing all samples that have 'Control' in the sample metadata field 'Subject' to the samples that have 'Patient'
Let's see what we got in a heatmap
End of explanation
"""
dd.plot(sample_field='Subject',gui='jupyter',bary_fields=['_calour_direction'])
print(dd.feature_metadata.columns)
"""
Explanation: The bacteria at the top are higher (with statistical significance) in 'Patient' compared to 'Control', with the topmost bacteria the most different.
The bottom bacteria are higher in 'Control' compared to 'Patient', with the bottom-most bacteria the most different.
Note that calour does not show where in the middle the transition from higher in Control to higher in Patient occurs. This is because we believe that if you don't see it by eye, it is not interesting.
However, we can add a color bar to indicate the group where the bacteria are higher.
We use the '_calour_direction' field which is added by the diff_abundance() function.
The diff_abundance() function also adds the p-value associated with each bacterium ('_calour_pval') and the effect size ('_calour_stat') as fields in the dd.feature_metadata Pandas dataframe
End of explanation
"""
dd=cfs.diff_abundance('Subject','Control','Patient')
dd2=cfs.diff_abundance('Subject','Control','Patient')
print('WITHOUT resetting the random seed:')
print('%d different bacteria between the two function calls' % len(set(dd.feature_metadata.index)^set(dd2.feature_metadata.index)))
dd=cfs.diff_abundance('Subject','Control','Patient', random_seed=2018)
dd2=cfs.diff_abundance('Subject','Control','Patient', random_seed=2018)
print('WITH resetting the random seed:')
print('%d different bacteria between the two function calls' % len(set(dd.feature_metadata.index)^set(dd2.feature_metadata.index)))
"""
Explanation: Random nature of diff_abundance
diff_abundance() calculates the effect size (and p-value) based on random permutations of the sample labels.
Therefore, running diff_abundance() twice may result in slightly different results.
In order to get exactly the same results, we can use the random_seed parameter.
End of explanation
"""
dd2=cfs.diff_abundance('Subject','Control','Patient', alpha=0.25)
print('alpha=0.1:\n%s\n\nalpha=0.25\n%s' % (dd, dd2))
"""
Explanation: Using a different FDR threshold
We can get a larger number of significant bacteria by increasing the FDR threshold (maximal fraction of bacteria which are deemed significant by mistake)
This is done using the alpha parameter. Here we allow up to a quarter of the results to be due to false rejections.
End of explanation
"""
dd2=cfs.diff_abundance('Subject','Control','Patient', fdr_method='bhfdr', random_seed=2018)
print('dsFDR:\n%s\n\nBH-FDR\n%s' % (dd, dd2))
"""
Explanation: Using different FDR control methods
Instead of ds-FDR, we can opt for Benjamini-Hochberg FDR. However, in most cases, it will be more conservative (detect fewer significant bacteria), due to the discreteness and sparsity of typical microbiome data.
All FDR methods are listed in the diff_abundance API doc.
End of explanation
"""
dd2=cfs.diff_abundance('Subject','Control','Patient', transform=None, random_seed=2018)
print('rank transform:\n%s\n\nno transform\n%s' % (dd, dd2))
"""
Explanation: Using different data normalization before the test
Instead of the default (rank transform), we can use a log2 transform (transform='log2'), or skip the transformation altogether (transform=None).
All transform options are listed in the diff_abundance API doc.
We recommend using the default rank transform in order to reduce the effect of outliers.
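To see why ranks tame outliers, here is a tiny illustration; it uses scipy's rankdata as a stand-in for Calour's internal transform, so it is only a sketch of the idea:

```python
import numpy as np
from scipy.stats import rankdata

counts = np.array([1., 2., 3., 1000.])  # one extreme outlier
ranks = rankdata(counts)
print(ranks)  # [1. 2. 3. 4.] - after ranking, the outlier no longer dominates
```

After the transform, the 1000-count feature contributes no more to the test statistic than any other high value.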
End of explanation
"""
|
linhbngo/cpsc-4770_6770 | 03-cloudlab-genilib.ipynb | gpl-3.0 | !unzip codes/cloudlab/emulab-0.9.zip -d codes/cloudlab
!cd codes/cloudlab/emulab-geni-lib-1baf79cf12cb/;\
source activate python2;\
python setup.py install --user
!ls /home/lngo/.local/lib/python2.7/site-packages/
!rm -Rf codes/cloudlab/emulab-geni-lib-1baf79cf12cb/
"""
Explanation: Important
This notebook is to be run inside Jupyter. If you see In [ ]: to the left of a cell, it means that this is an executable Jupyter cell.
To run a Jupyter cell, one of the followings can be done:
- Press the Run button in the tool bar
- Hit Shift-Enter
- Hit Ctrl-Enter
In an executable Jupyter cell, the ! denotes a Linux command (or a sequence of commands) that will be sent to execute in the CentOS VM. All Linux commands in shell will assume a starting directory that is the current directory of the notebook.
In an executable Jupyter cell, the %% at the first line of the cell denotes a cell magic (a single configuration option that directs how the cell is executed). %%writefile is a cell magic that instruct Jupyter to not execute the remainder of the cell, but to save them in to a file whose path is specified after the cell magic.
Step 1. Set up emulab geni-lib package for CloudLab
Open a new terminal and run the following commands:
$ sudo yum install -y unzip
$ conda create -n python2 python=2.7
$ source activate python2
$ conda install ipykernel
$ python -m ipykernel install --name python2 --user
$ conda install lxml
Restart your Jupyter Server
Reopen this notebook, go to Kernel, and change Kernel to Python 2
End of explanation
"""
%%writefile codes/cloudlab/xenvm.py
"""An example of constructing a profile with a single Xen VM from CloudLab.
Instructions:
Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
"""
import geni.portal as portal
import geni.rspec.pg as rspec
# Create a Request object to start building the RSpec.
request = portal.context.makeRequestRSpec()
# Create a XenVM
node = request.XenVM("node")
node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops:CENTOS7-64-STD"
node.routable_control_ip = "true"
node.addService(rspec.Execute(shell="/bin/sh",
command="sudo yum update"))
node.addService(rspec.Execute(shell="/bin/sh",
command="sudo yum install -y httpd"))
node.addService(rspec.Execute(shell="/bin/sh",
command="sudo systemctl restart httpd.service"))
# Print the RSpec to the enclosing page.
portal.context.printRequestRSpec()
"""
Explanation: Step 2: Reload geni-lib for the first time
On the top bar of this notebook, select Kernel and then Restart
End of explanation
"""
!source activate python2;\
python codes/cloudlab/xenvm.py
"""
Explanation: Step 3: Test emulab geni-lib installation
Executing the cell below should produce an XML element with the following content:
<rspec xmlns:client="http://www.protogeni.net/resources/rspec/ext/client/1" xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1" xmlns:jacks="http://www.protogeni.net/resources/rspec/ext/jacks/1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.geni.net/resources/rspec/3" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd" type="request">
<rspec_tour xmlns="http://www.protogeni.net/resources/rspec/ext/apt-tour/1">
<description type="markdown">An example of constructing a profile with a single Xen VM.</description>
<instructions type="markdown">Wait for the profile instance to start, and then log in to the VM via the
ssh port specified below. (Note that in this case, you will need to access
the VM through a high port on the physical host, since we have not requested
a public IP address for the VM itself.)
</instructions>
</rspec_tour>
<node client_id="node" exclusive="false">
<sliver_type name="emulab-xen"/>
</node>
</rspec>
End of explanation
"""
|
AlJohri/DAT-DC-12 | notebooks/exercise_nba.ipynb | mit | # read the data into a DataFrame
import pandas as pd
url = 'https://raw.githubusercontent.com/kjones8812/DAT4-students/master/kerry/Final/NBA_players_2015.csv'
nba = pd.read_csv(url, index_col=0)
nba.head()
# examine the columns
# examine the positions
"""
Explanation: KNN exercise with NBA player data
Introduction
NBA player statistics from 2014-2015 (partial season): data, data dictionary
Goal: Predict player position using assists, steals, blocks, turnovers, and personal fouls
Step 1: Read the data into Pandas
End of explanation
"""
# map positions to numbers
# create feature matrix (X) (use fields: 'ast', 'stl', 'blk', 'tov', 'pf')
# create response vector (y)
"""
Explanation: Step 2: Create X and y
Use the following features: assists, steals, blocks, turnovers, personal fouls
End of explanation
"""
# import class
# instantiate with K=5
# fit with data
"""
Explanation: Step 3: Train a KNN model (K=5)
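If you get stuck, the stubbed steps might look something like the following sketch. The feature matrix and labels below are made-up toy values standing in for the real NBA data (columns 'ast', 'stl', 'blk', 'tov', 'pf'; position codes are hypothetical):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# toy stand-ins for the real X and y
X = np.array([[1, 1, 0, 1, 2],
              [8, 2, 1, 3, 2],
              [0, 0, 3, 1, 4],
              [7, 1, 0, 2, 1],
              [0, 1, 4, 2, 3]])
y = np.array([0, 1, 2, 1, 2])  # e.g. 0 = center, 1 = guard, 2 = forward

knn = KNeighborsClassifier(n_neighbors=3)   # instantiate with K=3 for this toy set
knn.fit(X, y)                               # fit with data
pred = knn.predict([[6, 1, 0, 2, 2]])       # predicted position code
proba = knn.predict_proba([[6, 1, 0, 2, 2]])  # class probabilities
```

With the real data you would use K=5 as the step asks; the toy set here is too small for that.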
End of explanation
"""
# create a list to represent a player
# make a prediction
# calculate predicted probabilities
"""
Explanation: Step 4: Predict player position and calculate predicted probability of each position
Predict for a player with these statistics: 1 assist, 1 steal, 0 blocks, 1 turnover, 2 personal fouls
End of explanation
"""
# repeat for K=50
# calculate predicted probabilities
"""
Explanation: Step 5: Repeat steps 3 and 4 using K=50
End of explanation
"""
# allow plots to appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# increase default figure and font sizes for easier viewing
plt.rcParams['figure.figsize'] = (6, 4)
plt.rcParams['font.size'] = 14
"""
Explanation: Bonus: Explore the features to decide which ones are predictive
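One quick way to start is to compare feature means per position with a groupby; the mini-frame below is a hypothetical stand-in for the real nba DataFrame:

```python
import pandas as pd

# hypothetical mini-frame standing in for `nba`
toy = pd.DataFrame({'pos': ['G', 'F', 'C', 'G'],
                    'ast': [6, 2, 1, 7],
                    'blk': [0, 1, 2, 0]})
means = toy.groupby('pos')[['ast', 'blk']].mean()
print(means)
```

Features whose means separate cleanly across positions (e.g. assists for guards, blocks for centers) are likely to be the most predictive.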
End of explanation
"""
|
rubensfernando/mba-analytics-big-data | Python/2016-08-01/aula5-parte3-json.ipynb | mit | import simplejson as json
json_string = '{"pnome": "Dino", "unome":"Magri"}'
arq_json = json.loads(json_string)
print(arq_json['pnome'])
json_lista = ['foo', {'bar': ('baz', None, 1.0, 2)}]
print(json.dumps(json_lista))
json_dic = {"c": 0, "b": 0, "a": 0}
print(json.dumps(json_dic, sort_keys=True))
"""
Explanation: JSON
See slides 36 through 40.
Basics
Encoding basic Python objects
End of explanation
"""
print(json.dumps(json_dic, sort_keys=True, indent=4 * ' '))
"""
Explanation: Printing with indentation for better readability
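Writing JSON to a file and reading it back works the same way; a short sketch (the standard-library json module shares this API with simplejson):

```python
import json
import os
import tempfile

data = {"pnome": "Dino", "unome": "Magri"}
path = os.path.join(tempfile.mkdtemp(), "person.json")
with open(path, "w") as fh:
    json.dump(data, fh, indent=4)   # serialize to file
with open(path) as fh:
    loaded = json.load(fh)          # parse it back
print(loaded == data)               # True
```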
End of explanation
"""
|
aleph314/K2 | Data Mining/Recommender Systems/solution/Recommender-Engine_solution.ipynb | gpl-3.0 | # Importing the data
import pandas as pd
import numpy as np
header = ['user_id', 'item_id', 'rating', 'timestamp']
data_movie_raw = pd.read_csv('../data/ml-100k/u.data', sep='\t', names=header)
data_movie_raw.head()
"""
Explanation: Recommender Engine
Perhaps the most famous example of a recommender engine in the Data Science world was the Netflix competition started in 2006, in which teams from all around the world competed to improve on Netflix's recommendation algorithm. The final prize of $1,000,000 was awarded to a team which developed a solution with about a 10% increase in accuracy over Netflix's. In fact, this competition resulted in the development of some new techniques which are still in use. For more reading on this topic, see Simon Funk's blog post
In this exercise, you will build a collaborative-filter recommendation engine using both a cosine similarity approach and SVD (singular value decomposition). Before proceeding, download the MovieLens dataset.
Importing and Pre-processing the Data
First familiarize yourself with the data you downloaded, and then import the u.data file and take a look at the first few rows.
End of explanation
"""
from sklearn.model_selection import train_test_split
# First split into train and test sets
data_train_raw, data_test_raw = train_test_split(data_movie_raw, train_size = 0.8)
# Turning to pivot tables
data_train = data_train_raw.pivot_table(index = 'user_id', columns = 'item_id', values = 'rating').fillna(0)
data_test = data_test_raw.pivot_table(index = 'user_id', columns = 'item_id', values = 'rating').fillna(0)
# Print the firest few rows
data_train.head()
"""
Explanation: Before building any recommendation engines, we'll have to get the data into a useful form. Do this by first splitting the data into testing and training sets, and then by constructing two new dataframes whose columns are each unique movie and rows are each unique user, filling in 0 for missing values.
End of explanation
"""
# Libraries
import pandas as pd
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
class cos_engine:
def __init__(self, data_all):
"""Constructor for cos_engine class
Args:
data_all: Raw dataset containing all movies to build
a list of movies already seen by each user.
"""
# Create copy of data
self.data_all = data_all.copy()
# Now build a list of movies each of you has seen
self.seen = []
for user in data_all.user_id.unique():
cur_seen = {}
cur_seen["user"] = user
cur_seen["seen"] = self.data_all[data_all.user_id == user].item_id
self.seen.append(cur_seen)
def fit(self, data_train):
"""Performs cosine similarity on a sparse matrix data_train
Args:
data_train: A pandas data frame data to estimate cosine similarity
"""
# Create a copy of the dataframe
self.data_train = data_train.copy()
# Save the indices and column names
self.users = self.data_train.index
self.items = self.data_train.columns
# Compute mean vectors
self.user_means = self.data_train.replace(0, np.nan).mean(axis = 1)
self.item_means = self.data_train.T.replace(0, np.nan).mean(axis = 1)
# Get similarity matrices and compute sums for normalization
# For non adjusted cosine similarity, neglect subtracting the means.
self.data_train_adj = (self.data_train.replace(0, np.nan).T - self.user_means).fillna(0).T
self.user_cos = cosine_similarity(self.data_train_adj)
self.item_cos = cosine_similarity(self.data_train_adj.T)
self.user_cos_sum = np.abs(self.user_cos).sum(axis = 1)
self.item_cos_sum = np.abs(self.item_cos).sum(axis = 1)
self.user_cos_sum = self.user_cos_sum.reshape(self.user_cos_sum.shape[0], 1)
self.item_cos_sum = self.item_cos_sum.reshape(self.item_cos_sum.shape[0], 1)
def predict(self, method = 'user'):
"""Predicts using Cosine Similarity
Args:
method: A string indicating what method to use, user or item.
Default user.
Returns:
A pandas dataframe containing the prediction values
"""
# Store prediction locally and turn to dataframe
if method == 'user':
self.pred = self.user_means[:, np.newaxis] + ((self.user_cos @ self.data_train_adj) / self.user_cos_sum)
self.pred = pd.DataFrame(self.pred, index = data_train.index, columns = data_train.columns)
elif method == 'item':
self.pred = self.item_means[:, np.newaxis] + ((self.data_train @ self.item_cos) / self.item_cos_sum.T).T
self.pred = pd.DataFrame(self.pred, columns = data_train.index.values, index = data_train.columns)
return(self.pred)
def test(self, data_test, root = False):
"""Tests fit given test data in data_test
Args:
data_test: A pandas dataframe containing test data
root: A boolean indicating whether to return the RMSE.
Default False
Returns:
The Mean Squared Error of the fit if root = False, the Root Mean\
Squared Error otherwise.
"""
# Build a list of common indices (users) in the train and test set
row_idx = list(set(self.pred.index) & set(data_test.index))
# Prime the variables for loop
err = [] # To hold the Sum of Squared Errors
N = 0 # To count preditions for MSE calculation
for row in row_idx:
# Get the rows
test_row = data_test.loc[row, :]
pred_row = self.pred.loc[row, :]
# Get indices of nonzero elements in the test set
idx = test_row.index[test_row.nonzero()[0]]
# Get only common movies
temp_test = test_row[idx]
temp_pred = pred_row[idx]
# Compute error and count
temp_err = ((temp_pred - temp_test)**2).sum()
N = N + len(idx)
err.append(temp_err)
mse = np.sum(err) / N
# Switch for RMSE
if root:
err = np.sqrt(mse)
else:
err = mse
return(err)
def recommend(self, user, num_recs):
"""Tests fit given test data in data_test
Args:
data_test: A pandas dataframe containing test data
root: A boolean indicating whether to return the RMSE.
Default False
Returns:
The Mean Squared Error of the fit if root = False, the Root Mean
Squared Error otherwise.
"""
# Get list of already seen movies for this user
idx_seen = next(item for item in self.seen if item["user"] == user)["seen"]
# Remove already seen movies and recommend
rec = self.pred.loc[user, :].drop(idx_seen).nlargest(num_recs)
return(rec.index)
# Testing
cos_en = cos_engine(data_movie_raw)
cos_en.fit(data_train)
# Predict using user similarity
pred1 = cos_en.predict(method = 'user')
err = cos_en.test(data_test, root = True)
rec1 = cos_en.recommend(1, 5)
print("RMSE:", err)
print("Reccomendations for user 1:", rec1.values)
# And now with item
pred2 = cos_en.predict(method = 'item')
err = cos_en.test(data_test, root = True)
rec2 = cos_en.recommend(1, 5)
print("RMSE:", err)
print("Reccomendations for item 1:", rec2.values)
"""
Explanation: Now split the data into a training and test set, using a ratio 80/20 for train/test.
Cosine Similarity
Building a recommendation engine can be thought of as "filling in the holes" in the sparse matrix you made above. For example, take a look at the MovieLense data. You'll see that that matrix is mostly zeros. Our task here is to predict what a given user will rate a given movie depending on the users tastes. To determine a users taste, we can use cosine similarity which is given by $$s_u^{cos}(u_k,u_a)
= \frac{ u_k \cdot u_a }{ \left \| u_k \right \| \left \| u_a \right \| }
= \frac{ \sum x_{k,m}x_{a,m} }{ \sqrt{\sum x_{k,m}^2\sum x_{a,m}^2} }$$
for users $u_a$ and $u_k$ on ratings given by $x_{a,m}$ and $x_{b,m}$. This is just the cosine of the angle between the two vectors. Likewise, this can also be calculated for the similarity between two items, $i_a$ and $i_b$, given by $$s_u^{cos}(i_m,i_b)
= \frac{ i_m \cdot i_b }{ \left \| i_m \right \| \left \| i_b \right \| }
= \frac{ \sum x_{a,m} x_{a,b} }{ \sqrt{ \sum x_{a,m}^2 \sum x_{a,b}^2 } }$$
Then, the similarity between two users is given by $$\hat{x}{k,m} = \bar{x}{k} + \frac{\sum\limits_{u_a} s_u^{cos}(u_k,u_a) (x_{a,m})}{\sum\limits_{u_a}|s_u^{cos}(u_k,u_a)|}$$ and for items given by $$\hat{x}{k,m} = \frac{\sum\limits{i_b} s_u^{cos}(i_m,i_b) (x_{k,b}) }{\sum\limits_{i_b}|s_u^{cos}(i_m,i_b)|}$$
Use these ideas to construct a class cos_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus: Use adjusted cosine similiarity.
End of explanation
"""
# Libraries
import pandas as pd
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds
class svd_engine:
def __init__(self, data_all, k = 6):
"""Constructor for svd_engine class
Args:
k: The number of latent variables to fit
"""
self.k = k
# Create copy of data
self.data_all = data_all.copy()
# Now build a list of movies each you has seen
self.seen = []
for user in data_all.user_id.unique():
cur_seen = {}
cur_seen["user"] = user
cur_seen["seen"] = self.data_all[data_all.user_id == user].item_id
self.seen.append(cur_seen)
def fit(self, data_train):
"""Performs SVD on a sparse matrix data_train
Args:
data_train: A pandas data frame data to estimate SVD
Returns:
Matricies u, s, and vt of SVD
"""
# Save local copy of data
self.data_train = data_train.copy()
# Compute adjusted matrix
self.user_means = self.data_train.replace(0, np.nan).mean(axis = 1)
self.item_means = self.data_train.T.replace(0, np.nan).mean(axis = 1)
self.data_train_adj = (self.data_train.replace(0, np.nan).T - self.user_means).fillna(0).T
# Save the indices and column names
self.users = data_train.index
self.items = data_train.columns
# Train the model
self.u, self.s, self.vt = svds(self.data_train_adj, k = self.k)
return(self.u, np.diag(self.s), self.vt)
def predict(self):
"""Predicts using SVD
Returns:
A pandas dataframe containing the prediction values
"""
# Store prediction locally and turn to dataframe, adding the mean back
self.pred = pd.DataFrame(self.u @ np.diag(self.s) @ self.vt,
index = self.users,
columns = self.items)
self.pred = self.user_means[:, np.newaxis] + self.pred
return(self.pred)
def test(self, data_test, root = False):
"""Tests fit given test data in data_test
Args:
data_test: A pandas dataframe containing test data
root: A boolean indicating whether to return the RMSE.
Default False
Returns:
The Mean Squared Error of the fit if root = False, the Root Mean\
Squared Error otherwise.
"""
# Build a list of common indices (users) in the train and test set
row_idx = list(set(self.pred.index) & set(data_test.index))
# Prime the variables for loop
err = [] # To hold the Sum of Squared Errors
N = 0 # To count predictions for MSE calculation
for row in row_idx:
# Get the rows
test_row = data_test.loc[row, :]
pred_row = self.pred.loc[row, :]
# Get indices of nonzero elements in the test set
idx = test_row.index[test_row.nonzero()[0]]
# Get only common movies
temp_test = test_row[idx]
temp_pred = pred_row[idx]
# Compute error and count
temp_err = ((temp_pred - temp_test)**2).sum()
N = N + len(idx)
err.append(temp_err)
mse = np.sum(err) / N
# Switch for RMSE
if root:
err = np.sqrt(mse)
else:
err = mse
return(err)
def recommend(self, user, num_recs):
"""Tests fit given test data in data_test
Args:
data_test: A pandas dataframe containing test data
root: A boolean indicating whether to return the RMSE.
Default False
Returns:
The Mean Squared Error of the fit if root = False, the Root Mean\
Squared Error otherwise.
"""
# Get list of already seen movies for this user
idx_seen = next(item for item in self.seen if item["user"] == user)["seen"]
# Remove already seen movies and recommend
rec = self.pred.loc[user, :].drop(idx_seen).nlargest(num_recs)
return(rec.index)
# Testing
svd_en = svd_engine(data_movie_raw, k = 20)
svd_en.fit(data_train)
svd_en.predict()
err = svd_en.test(data_test, root = True)
rec = svd_en.recommend(1, 5)
print("RMSE:", err)
print("Reccomendations for user 1:", rec.values)
"""
Explanation: SVD
Above we used Cosine Similarity to fill the holes in our sparse matrix. Another, and much more popular, method for matrix completion is called the Singular Value Decomposition. SVD factors our data matrix into three smaller matrices, given by $$\textbf{M} = \textbf{U} \bf{\Sigma} \textbf{V}^*$$ where $\textbf{M}$ is our data matrix, $\textbf{U}$ is a unitary matrix containing the latent variables in the user space, $\bf{\Sigma}$ is a diagonal matrix containing the singular values of $\textbf{M}$, and $\textbf{V}$ is a unitary matrix containing the latent variables in the item space. For more information on the SVD see the Wikipedia article.
SciPy provides a function (scipy.sparse.linalg.svds, used in the svd_engine class) to estimate a truncated SVD of a sparse matrix. By making estimates of the matrices $\textbf{U}$, $\bf{\Sigma}$, and $\textbf{V}$, and then multiplying them together, we can reconstruct an estimate for the matrix $\textbf{M}$ with all the holes filled in.
Use these ideas to construct a class svd_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus: Tune any parameters.
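A minimal sketch of the reconstruction idea, using a toy ratings matrix and 2 latent factors (not the tuned engine itself):

```python
import numpy as np
from scipy.sparse.linalg import svds

# toy user-by-item ratings matrix with "holes" (zeros)
M = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [1., 0., 0., 4.],
              [0., 1., 5., 4.]])
u, s, vt = svds(M, k=2)        # keep only 2 latent factors
M_hat = u @ np.diag(s) @ vt    # low-rank reconstruction "fills the holes"
print(M_hat.shape)             # (5, 4)
```

Every entry of M_hat now holds a value, including positions that were zero in M; those are the predicted ratings.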
End of explanation
"""
# Parameter tuning
import matplotlib.pyplot as plt
err = []
for cur_k in range(1, 50):
svd_en = svd_engine(data_movie_raw, k = cur_k)
svd_en.fit(data_train)
svd_en.predict()
err.append(svd_en.test(data_test))
plt.plot(range(1, 50), err)
plt.title('RMSE versus k')
plt.xlabel('k')
plt.ylabel('RMSE')
plt.show()
err.index(min(err))
"""
Explanation: Overall RMSE of about 0.98
End of explanation
"""
# Build the engine
svd_en = svd_engine(data_movie_raw, k = 7)
svd_en.fit(data_train)
svd_en.predict()
# Now make recommendations
recs = []
for user in data_movie_raw.user_id.unique():
temp_rec = svd_en.recommend(user, 5)
recs.append(temp_rec)
recs[0]
"""
Explanation: 7 is the optimal value of k in this case. Note that no cross-validation was performed!
Now we'll build the best recommender and recommend 5 movies to each user.
End of explanation
"""
|
sdpython/pyquickhelper | _doc/notebooks/example_about_files.ipynb | mit | from pyquickhelper.filehelper import download, gzip_files, zip7_files, zip_files
download("https://docs.python.org/3.4/library/urllib.request.html")
gzip_files("request.html.gz", ["urllib.request.html"])
import os
os.listdir(".")
ipy = [ _ for _ in os.listdir(".") if ".ipynb" in _ ]
if os.path.exists("request.html.zip"):
os.remove("notebooks.zip")
zip_files("notebooks.zip", ipy)
"""
Explanation: Helpers about files
How to compress, encrypt, decrypt files with pyquickhelper.
compress
End of explanation
"""
from pyquickhelper.filehelper import explore_folder_iterfile_repo
files = list ( explore_folder_iterfile_repo(".") )
files
"""
Explanation: The following example get all files registered in a repository GIT or SVN.
End of explanation
"""
%load_ext pyquickhelper
%encrypt_file notebooks.zip notebooks.enc passwordpassword
%decrypt_file notebooks.enc notebooks2.zip passwordpassword
"""
Explanation: encrypt, decrypt
End of explanation
"""
|
NGSchool2016/ngschool2016-materials | jupyter/agyorkei/.ipynb_checkpoints/NGSchool_python-checkpoint.ipynb | gpl-3.0 | %pylab inline
"""
Explanation: Set the matplotlib magic to enable inline plots in the notebook
End of explanation
"""
import subprocess
import matplotlib.pyplot as plt
import random
import numpy as np
"""
Explanation: Calculate the Nonredundant Read Fraction (NRF)
SAM format example:
SRR585264.8766235 0 1 4 15 35M * 0 0 CTTAAACAATTATTCCCCCTGCAAACATTTTCAAT GGGGGGGGGGGGGGGGGGGGGGFGGGGGGGGGGGG XT:A:U NM:i:1 X0:i:1 X1:i:6 XM:i:1 XO:i:0 XG:i:0 MD:Z:8T26
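The three fields we need (flag, reference name, start position) can be pulled out of such a record by splitting on tabs; real SAM records are tab-separated, shown here with an abbreviated copy of the example line above:

```python
# the example record above (optional tags truncated for brevity)
sam = ("SRR585264.8766235\t0\t1\t4\t15\t35M\t*\t0\t0\t"
       "CTTAAACAATTATTCCCCCTGCAAACATTTTCAAT\t"
       "GGGGGGGGGGGGGGGGGGGGGGFGGGGGGGGGGGG\tXT:A:U")
flag, ref, start = sam.split('\t')[1:4]
print(flag, ref, start)  # 0 1 4
```

This is exactly the slice [1:4] used in the parsing loop below.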
Import the required modules
End of explanation
"""
plt.style.use('ggplot')
figsize(10,5)
"""
Explanation: Make figures prettier and bigger
End of explanation
"""
file = "/ngschool/chip_seq/bwa/input.sorted.bam"
"""
Explanation: Parse the SAM file and extract the unique start coordinates.
First, store the file name in a variable
End of explanation
"""
p = subprocess.Popen(["samtools", "view", "-q10", "-F260", file],
stdout=subprocess.PIPE)
coords = []
for line in p.stdout:
flag, ref, start = line.decode('utf-8').split()[1:4]
coords.append([flag, ref, start])
coords[:3]
"""
Explanation: Next we read the file using samtools. From each read we need to store the flag, chromosome name and start coordinate.
End of explanation
"""
len(coords)
"""
Explanation: What is the total number of reads that passed our filters?
End of explanation
"""
random.seed(1234)
sample = random.sample(coords, 1000000)
len(sample)
"""
Explanation: Randomly sample the coordinates to get 1M for NRF calculations
End of explanation
"""
uniqueStarts = {'watson': set(), 'crick': set()}
for coord in sample:
flag, ref, start = coord
if int(flag) & 16:
uniqueStarts['crick'].add((ref, start))
else:
uniqueStarts['watson'].add((ref, start))
"""
Explanation: How many of those coordinates are unique? (We will use the Python set object, which keeps only the unique items.)
End of explanation
"""
len(uniqueStarts['watson'])
"""
Explanation: How many on the Watson strand?
End of explanation
"""
len(uniqueStarts['crick'])
"""
Explanation: And on the Crick?
End of explanation
"""
NRF_input = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)
print(NRF_input)
"""
Explanation: Calculate the NRF
End of explanation
"""
def calculateNRF(filePath, pickSample=True, sampleSize=10000000, seed=1234):
p = subprocess.Popen(['samtools', 'view', '-q10', '-F260', filePath],
stdout=subprocess.PIPE)
coordType = np.dtype({'names': ['flag', 'ref', 'start'],
'formats': ['uint16', 'U10', 'uint32']})
coordArray = np.empty(10000000, dtype=coordType)
i = 0
for line in p.stdout:
if i >= len(coordArray):
coordArray = np.append(coordArray, np.empty(1000000, dtype=coordType), axis=0)
fg, rf, st = line.decode('utf-8').split()[1:4]
coordArray[i] = np.array((fg, rf, st), dtype=coordType)
i += 1
coordArray = coordArray[:i]
sample = coordArray
if pickSample and len(coordArray) > sampleSize:
np.random.seed(seed)
sample = np.random.choice(coordArray, sampleSize, replace=False)
uniqueStarts = {'watson': set(), 'crick': set()}
for read in sample:
flag, ref, start = read
if flag & 16:
uniqueStarts['crick'].add((ref, start))
else:
uniqueStarts['watson'].add((ref, start))
NRF = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)
return NRF
"""
Explanation: Let's create a function from what we did above and apply it to all of our files!
To use our function on the real sequencing datasets (not only on a small subset) we need to optimize our method a bit: we will use the Python module numpy.
End of explanation
"""
NRF_chip = calculateNRF("/ngschool/chip_seq/bwa/sox2_chip.sorted.bam", sampleSize=1000000)
print(NRF_chip)
"""
Explanation: Calculate the NRF for the chip-seq sample
End of explanation
"""
plt.bar([0,2],[NRF_input, NRF_chip], width=1)
plt.xlim([-0.5,3.5]), plt.xticks([0.5, 2.5], ['Input', 'ChIP'])
plt.xlabel('Sample')
plt.ylabel('NRF')
plt.ylim([0, 1.25]), plt.yticks(np.arange(0, 1.2, 0.2))
plt.plot((-0.5,3.5), (0.8,0.8), 'red', linestyle='dashed')
plt.show()
"""
Explanation: Plot the NRF!
End of explanation
"""
countList = []
with open('/ngschool/chip_seq/bedtools/input_coverage.bed', 'r') as covFile:
for line in covFile:
countList.append(int(line.strip('\n').split('\t')[3]))
countList[0:6]
countList[-15:]
"""
Explanation: Calculate the Signal Extraction Scaling
Load the results from the coverage calculations
End of explanation
"""
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
"""
Explanation: Let's see where our reads align to the genome. Plot the distribution of tags along the genome.
End of explanation
"""
countList.sort()
countList[0:6]
"""
Explanation: Now sort the list: order the windows based on the tag count
End of explanation
"""
countSum = sum(countList)
countSum
"""
Explanation: Sum all the aligned tags
End of explanation
"""
countFraction = []
for i, count in enumerate(countList):
if i == 0:
countFraction.append(count*1.0 / countSum)
else:
countFraction.append((count*1.0 / countSum) + countFraction[i-1])
"""
Explanation: Calculate the cumulative fraction of tags along the ordered windows.
End of explanation
"""
countFraction[-5:]
"""
Explanation: Look at the last five items of the list:
End of explanation
"""
winNumber = len(countFraction)
winNumber
"""
Explanation: Calculate the number of windows.
End of explanation
"""
winFraction = []
for i in range(winNumber):
winFraction.append(i*1.0 / winNumber)
"""
Explanation: Calculate what fraction of a whole is the position of each window.
End of explanation
"""
winFraction[-5:]
"""
Explanation: Look at the last five items of our new list:
End of explanation
"""
def calculateSES(filePath):
countList = []
with open(filePath, 'r') as covFile:
for line in covFile:
countList.append(int(line.strip('\n').split('\t')[3]))
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
countList.sort()
countSum = sum(countList)
countFraction = []
for i, count in enumerate(countList):
if i == 0:
countFraction.append(count*1.0 / countSum)
else:
countFraction.append((count*1.0 / countSum) + countFraction[i-1])
winNumber = len(countFraction)
winFraction = []
for i in range(winNumber):
winFraction.append(i*1.0 / winNumber)
return [winFraction, countFraction]
"""
Explanation: Now wrap the whole procedure in a reusable function.
End of explanation
"""
chipSes = calculateSES("/ngschool/chip_seq/bedtools/sox2_chip_coverage.bed")
"""
Explanation: Use our function to calculate the signal extraction scaling for the Sox2 ChIP sample:
End of explanation
"""
plt.plot(winFraction, countFraction, label='input')
plt.plot(chipSes[0], chipSes[1], label='Sox2 ChIP')
plt.ylim([0,1])
plt.xlabel('Ordered window fraction')
plt.ylabel('Genome coverage fraction')
plt.legend(loc='best')
plt.show()
"""
Explanation: Now we can plot the calculated fractions for both the input and ChIP sample:
End of explanation
"""
|
OpenWeavers/openanalysis | doc/OpenAnalysis/05 - Data Structures.ipynb | gpl-3.0 | from openanalysis.data_structures import DataStructureBase, DataStructureVisualization
import gi
gi.require_version('Gtk', '3.0')  # pin GTK+ 3 before importing
from gi.repository import Gtk as gtk  # for displaying GUI dialogs
"""
Explanation: Data Structures
Data structures are a concrete implementation of the specification provided by one or more particular abstract data types (ADT), which specify the operations that can be performed on a data structure and the computational complexity of those operations.
Different kinds of data structures are suited for different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers.
Usually, efficient data structures are key to designing efficient algorithms.
Standard import statement
End of explanation
"""
class BinarySearchTree(DataStructureBase): # Derived from DataStructureBase
class Node: # Class for creating a node
def __init__(self, data):
self.left = None
self.right = None
self.data = data
def __str__(self):
return str(self.data)
def __init__(self):
DataStructureBase.__init__(self, "Binary Search Tree", "t.png") # Initializing with name and path
self.root = None
self.count = 0
def get_root(self): # Returns root node of the tree
return self.root
def insert(self, item): # Inserts item into the tree
newNode = BinarySearchTree.Node(item)
insNode = self.root
parent = None
while insNode is not None:
parent = insNode
if insNode.data > newNode.data:
insNode = insNode.left
else:
insNode = insNode.right
if parent is None:
self.root = newNode
else:
if parent.data > newNode.data:
parent.left = newNode
else:
parent.right = newNode
self.count += 1
def find(self, item): # Finds if item is present in tree or not
node = self.root
while node is not None:
if item < node.data:
node = node.left
elif item > node.data:
node = node.right
else:
return True
return False
def min_value_node(self): # Returns the minimum value node
current = self.root
while current.left is not None:
current = current.left
return current
def delete(self, item): # Deletes item from tree if present
# else shows Value Error
if item not in self:
dialog = gtk.MessageDialog(None, 0, gtk.MessageType.ERROR,
gtk.ButtonsType.CANCEL, "Value not found ERROR")
dialog.format_secondary_text(
"Element not found in the %s" % self.name)
dialog.run()
dialog.destroy()
else:
self.count -= 1
if self.root.data == item and (self.root.left is None or self.root.right is None):
if self.root.left is None and self.root.right is None:
self.root = None
elif self.root.data == item and self.root.left is None:
self.root = self.root.right
elif self.root.data == item and self.root.right is None:
self.root = self.root.left
return self.root
if item < self.root.data:
temp = self.root
self.root = self.root.left
temp.left = self.delete(item)
self.root = temp
elif item > self.root.data:
temp = self.root
self.root = self.root.right
temp.right = self.delete(item)
self.root = temp
else:
if self.root.left is None:
return self.root.right
elif self.root.right is None:
return self.root.left
temp = self.root
self.root = self.root.right
min_node = self.min_value_node()
temp.data = min_node.data
temp.right = self.delete(min_node.data)
self.root = temp
return self.root
def get_graph(self, rt): # Populates self.graph with elements depending
# upon the parent-children relation
if rt is None:
return
self.graph[rt.data] = {}
if rt.left is not None:
self.graph[rt.data][rt.left.data] = {'child_status': 'left'}
self.get_graph(rt.left)
if rt.right is not None:
self.graph[rt.data][rt.right.data] = {'child_status': 'right'}
self.get_graph(rt.right)
"""
Explanation: DataStructureBase is the base class for implementing data structures
DataStructureVisualization is the class that visualizes data structures in GUI
DataStructureBase class
Any data structure, which is to be implemented, has to be derived from this class. Now we shall see data members and member functions of this class:
Data Members
name - Name of the DS
file_path - Path to store output of DS operations
Member Functions
__init__(self, name, file_path) - Initializes DS with a name and a file_path to store the output
insert(self, item) - Inserts item into the DS
delete(self, item) - Deletes item from the DS, <br/>            if item is not present in the DS, throws a ValueError
find(self, item) - Finds the item in the DS
<br/>          returns True if found, else returns False<br/>          similar to __contains__(self, item)
get_root(self) - Returns the root (for graph and tree DS)
get_graph(self, rt) - Gets the dict representation between the parent and children (for graph and tree DS)
draw(self, nth=None) - Draws the output to visualize the operations performed on the DS<br/>             nth is used to pass an item to visualize a find operation
DataStructureVisualization class
This class is used for visualizing data structures in a GUI (using GTK+ 3). Now we shall see data members and member functions of this class:
Data Members
ds - Any DS, which is an instance of DataStructureBase
Member Functions
__init__(self, ds) - Initializes ds with an instance of DS that is to be visualized
run(self) - Opens a GUI window to visualize the DS operations
An example: Binary Search Tree
Now we shall implement the class BinarySearchTree
End of explanation
"""
DataStructureVisualization(BinarySearchTree).run()
import io
import base64
from IPython.display import HTML
video = io.open('../res/bst.mp4', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" width="500" height="350" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
"""
Explanation: Now, this program can be executed as follows:
End of explanation
"""
|
daniel-severo/dask-ml | docs/source/examples/dask-glm.ipynb | bsd-3-clause | import os
import s3fs
import pandas as pd
import dask.array as da
import dask.dataframe as dd
from distributed import Client
from dask import persist, compute
from dask_glm.estimators import LogisticRegression
"""
Explanation: Dask GLM
dask-glm is a library for fitting generalized linear models on large datasets.
The heart of the project is the set of optimization routines that work on either NumPy or dask arrays.
See these two blogposts describing how dask-glm works internally.
This notebook shows an example of the higher-level scikit-learn style API built on top of these optimization routines.
End of explanation
"""
client = Client()
"""
Explanation: We'll set up a distributed.Client locally. In the real world you could connect to a cluster of dask-workers.
End of explanation
"""
if not os.path.exists('trip.csv'):
    s3 = s3fs.S3FileSystem(anon=True)
    s3.get("dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv", "trip.csv")
ddf = dd.read_csv("trip.csv")
ddf = ddf.repartition(npartitions=8)
"""
Explanation: For demonstration, we'll use the perennial NYC taxi cab dataset.
Since I'm running things on my laptop, we'll just grab the first month's worth of data.
End of explanation
"""
# these filter out less than 1% of the observations
ddf = ddf[(ddf.trip_distance < 20) &
(ddf.fare_amount < 150)]
ddf = ddf.repartition(npartitions=8)
"""
Explanation: I happen to know that some of the values in this dataset are suspect, so let's drop them.
Scikit-learn doesn't support filtering observations inside a pipeline (yet), so we'll do this before anything else.
End of explanation
"""
df_train, df_test = ddf.random_split([0.8, 0.2], random_state=2)
columns = ['VendorID', 'passenger_count', 'trip_distance', 'payment_type', 'fare_amount']
X_train, y_train = df_train[columns], df_train['tip_amount'] > 0
X_test, y_test = df_test[columns], df_test['tip_amount'] > 0
X_train, y_train, X_test, y_test = persist(
X_train, y_train, X_test, y_test
)
X_train.head()
y_train.head()
print(f"{len(X_train):,d} observations")
"""
Explanation: Now, we'll split our DataFrame into a train and test set, and select our feature matrix and target column (whether the passenger tipped).
End of explanation
"""
%%time
# this is a *dask-glm* LogisticRegression, not scikit-learn
lm = LogisticRegression(fit_intercept=False)
lm.fit(X_train.values, y_train.values)
"""
Explanation: With our training data in hand, we fit our logistic regression.
Nothing here should be surprising to those familiar with scikit-learn.
End of explanation
"""
%%time
lm.score(X_train.values, y_train.values).compute()
"""
Explanation: Again, following the lead of scikit-learn we can measure the performance of the estimator on the training dataset using the .score method.
For LogisticRegression this is the mean accuracy score (what percent of the predicted matched the actual).
End of explanation
"""
%%time
lm.score(X_test.values, y_test.values).compute()
"""
Explanation: and on the test dataset:
End of explanation
"""
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.pipeline import make_pipeline
"""
Explanation: Pipelines
The bulk of my time "doing data science" is data cleaning and pre-processing.
Actually fitting an estimator or making predictions is a relatively small proportion of the work.
You could manually do all your data-processing tasks as a sequence of function calls starting with the raw data.
Or, you could use scikit-learn's Pipeline to accomplish this and then some.
Pipelines offer a few advantages over the manual solution.
First, your entire modeling process from raw data to final output is in a self-contained object. No more wondering "did I remember to scale this version of my model?" It's there in the Pipeline for you to check.
Second, Pipelines combine well with scikit-learn's model selection utilties, specifically GridSearchCV and RandomizedSearchCV. You're able to search over hyperparameters of the pipeline stages, just like you would for an estimator.
Third, Pipelines help prevent leaking information from your test and validation sets to your training set.
A common mistake is to compute some pre-processing statistic on the entire dataset (before you've train-test split) rather than just the training set. For example, you might normalize a column by the average of all the observations.
These types of errors can lead you to overestimate the performance of your model on new observations.
Since dask-glm follows the scikit-learn API, we can reuse scikit-learn's Pipeline machinery, with a few caveats.
Many of the transformers built into scikit-learn will validate their inputs. As part of this,
array-like things are cast to numpy arrays. Since dask-arrays are array-like they are converted
and things "work", but this might not be ideal when your dataset doesn't fit in memory.
Second, some things are just fundamentally hard to do on large datasets.
For example, naively dummy-encoding a dataset requires a full scan of the data to determine the set of unique values per categorical column.
When your dataset fits in memory, this isn't a huge deal. But when it's scattered across a cluster, this could become
a bottleneck.
If you know the set of possible values ahead of time, you can do much better.
You can encode the categorical columns as pandas Categoricals, and then convert with get_dummies, without having to do an expensive full-scan, just to compute the set of unique values.
We'll do that on the VendorID and payment_type columns.
End of explanation
"""
class CategoricalEncoder(BaseEstimator, TransformerMixin):
"""Encode `categories` as pandas `Categorical`
Parameters
----------
categories : Dict[str, list]
Mapping from column name to list of possible values
"""
def __init__(self, categories):
self.categories = categories
def fit(self, X, y=None):
# "stateless" transformer. Don't have anything to learn here
return self
def transform(self, X, y=None):
X = X.copy()
for column, categories in self.categories.items():
X[column] = X[column].astype('category').cat.set_categories(categories)
return X
"""
Explanation: First let's write a little transformer to convert columns to Categoricals.
If you aren't familiar with scikit-learn transformers, the basic idea is that the transformer must implement two methods: .fit and .transform.
.fit is called during training.
It learns something about the data and records it on self.
Then .transform uses what's learned during .fit to transform the feature matrix somehow.
A Pipeline is simply a chain of transformers, each one is fit on some data, and passes the output of .transform onto the next step; the final output is an Estimator, like LogisticRegression.
End of explanation
"""
class StandardScaler(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, with_mean=True, with_std=True):
self.columns = columns
self.with_mean = with_mean
self.with_std = with_std
def fit(self, X, y=None):
if self.columns is None:
self.columns_ = X.columns
else:
self.columns_ = self.columns
if self.with_mean:
self.mean_ = X[self.columns_].mean(0)
if self.with_std:
self.scale_ = X[self.columns_].std(0)
return self
def transform(self, X, y=None):
X = X.copy()
if self.with_mean:
X[self.columns_] = X[self.columns_] - self.mean_
if self.with_std:
X[self.columns_] = X[self.columns_] / self.scale_
return X.values
"""
Explanation: We'll also want a daskified version of scikit-learn's StandardScaler, that won't eagerly
convert a dask.array to a numpy array (N.B. the scikit-learn version has more features and error handling, but this will work for now).
End of explanation
"""
from dummy_encoder import DummyEncoder
pipe = make_pipeline(
CategoricalEncoder({"VendorID": [1, 2],
"payment_type": [1, 2, 3, 4, 5]}),
DummyEncoder(),
StandardScaler(columns=['passenger_count', 'trip_distance', 'fare_amount']),
LogisticRegression(fit_intercept=False)
)
"""
Explanation: Finally, I've written a dummy encoder transformer that converts categoricals
to dummy-encoded interger columns. The full implementation is a bit long for a blog post, but you can see it here.
End of explanation
"""
%%time
pipe.fit(X_train, y_train.values)
"""
Explanation: So that's our pipeline.
We can go ahead and fit it just like before, passing in the raw data.
End of explanation
"""
pipe.score(X_train, y_train.values).compute()
pipe.score(X_test, y_test.values).compute()
"""
Explanation: And we can score it as well. The Pipeline ensures that all of the necessary transformations take place before calling the estimator's score method.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
import dask_searchcv as dcv
"""
Explanation: Grid Search
As explained earlier, Pipelines and grid search go hand-in-hand.
Let's run a quick example with dask-searchcv.
End of explanation
"""
param_grid = {
'standardscaler__with_std': [True, False],
'logisticregression__lamduh': [.001, .01, .1, 1],
}
pipe = make_pipeline(
CategoricalEncoder({"VendorID": [1, 2],
"payment_type": [1, 2, 3, 4, 5]}),
DummyEncoder(),
StandardScaler(columns=['passenger_count', 'trip_distance', 'fare_amount']),
LogisticRegression(fit_intercept=False)
)
gs = dcv.GridSearchCV(pipe, param_grid)
%%time
gs.fit(X_train, y_train.values)
"""
Explanation: We'll search over two hyperparameters
Whether or not to standardize the variance of each column in StandardScaler
The strength of the regularization in LogisticRegression
This involves fitting many models, one for each combination of parameters.
dask-searchcv is smart enough to know that early stages in the pipeline (like the categorical and dummy encoding) are shared among all the combinations, and so only fits them once.
End of explanation
"""
pd.DataFrame(gs.cv_results_)
"""
Explanation: Now we have access to the usual attributes like cv_results_ learned by the grid search object:
End of explanation
"""
gs.score(X_train, y_train.values).compute()
"""
Explanation: And we can do our usual checks on model fit for the training set:
End of explanation
"""
gs.score(X_test, y_test.values).compute()
"""
Explanation: And the test set:
End of explanation
"""
|
davidthomas5412/PanglossNotebooks | MassLuminosityProject/SummerResearch/ValidatingLikelihoodVarianceAndSingleLikelihoodWeightDistribution_20170627.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from matplotlib import rc
rc('text', usetex=True)
!head -n 5 likelihoodvariancetest.txt
multi = np.loadtxt('likelihoodvariancetest.txt')
multi1000 = np.loadtxt('likelihoodvariancetest1000samples.txt')
multi10000 = np.loadtxt('likelihoodvariancetest10000samples.txt')
plt.title('Log-Likelihood Distribution For Fixed Hypersample')
plt.ylabel('Density')
plt.xlabel('Log-Likelihood')
plt.hist(multi[:,4], bins=20, normed=True, alpha=0.5, label='100 samples')
plt.hist(multi1000[:,4], bins=20, normed=True, alpha=0.5, label='1000 samples')
plt.hist(multi10000[:,4], bins=20, normed=True, alpha=0.5, label='10000 samples')
plt.legend(loc=2);
"""
Explanation: Validating Likelihood Variance And Single Likelihood Weight Distribution
In this notebook we seek to characterize the noise that appears to accumulate in our learning process and manifests in poor quality posteriors. We narrow our attention to two cases. First, how the number of samples in each halo's likelihood integral impacts the variance of the total likelihood. Second, how the weight contributions of individual halos varies across hypersamples.
Variance Sensitivity to Number of Importance Samples
We take the hyperparameter from the center of the prior distribution (mean of gaussians - [10.709, 0.35899999999999999, 1.1000000000000001, 0.155]) and compute the total likelihood under a number of different random seeds. This allows us to quantify the intrinsic variance in our process. Remember that the return value is the log-likelihood so even a little variance can lead to astronomical differences in weights. Since we plan to use on the order of 10,000 samples we hope to achieve a log-likelihood variance < 2 (so standard deviation of multiplicative weight change is factor of 4 or so).
End of explanation
"""
print np.std(multi[:,4])
print np.std(multi1000[:,4])
print np.std(multi10000[:,4])
"""
Explanation: And here we have the standard deviations
End of explanation
"""
print 51.0639314938 / len(multi)
print np.mean(multi[:,4]) / 115919
"""
Explanation: If we maintain our key assumption from the Quantifying Scaling Accuracy notebook, that the single log-likelihood integrals are independent and identically distributed gaussians. Then with 100 samples, the mean and variance are -8.46 and 0.06 respectively. Importance sampling is precise.
End of explanation
"""
single = np.zeros((115919,4))
i = j = 0
with open('singleintegralweightvariancetest.txt', 'r') as f:
for line in f:
if 'likelihood' in line:
j += 1
i = 0
else:
single[i,j] = float(line)
i += 1
plt.hist(single[:,0], bins=50, alpha=0.4, label='l1')
plt.hist(single[:,1], bins=50, alpha=0.4, label='l2')
plt.hist(single[:,2], bins=50, alpha=0.4, label='l3')
plt.hist(single[:,3], bins=50, alpha=0.4, label='l4');
plt.title('Single Log-Likelihood Weight Histograms')
plt.ylabel('Count')
plt.xlabel('Log-Likelihood');
"""
Explanation: Single Likelihood Weights
In this experiment we take 4 hypersamples and collect the weights for the single likelihood corresponding to each halo (the sum of these single log-likelihoods is the total log-likelihood).
End of explanation
"""
np.corrcoef(single.transpose())
"""
Explanation: These look very similar. A natural next thing to check is the correlation between the weight sequences. We see extremely high correlation.
End of explanation
"""
|
arcyfelix/Courses | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/02-NumPy/Numpy Exercises - Solved.ipynb | apache-2.0 | import numpy as np
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
<center>Copyright Pierian Data 2017</center>
<center>For more information, visit us at www.pieriandata.com</center>
NumPy Exercises
Now that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.
IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output!
Import NumPy as np
End of explanation
"""
# CODE HERE
np.zeros(10)
"""
Explanation: Create an array of 10 zeros
End of explanation
"""
# CODE HERE
np.ones(10)
"""
Explanation: Create an array of 10 ones
End of explanation
"""
# CODE HERE
np.ones(10) * 5
"""
Explanation: Create an array of 10 fives
End of explanation
"""
# CODE HERE
np.arange(10, 51)
"""
Explanation: Create an array of the integers from 10 to 50
End of explanation
"""
# CODE HERE
np.arange(10, 51, 2)
"""
Explanation: Create an array of all the even integers from 10 to 50
End of explanation
"""
# CODE HERE
np.arange(9).reshape(3,3)
"""
Explanation: Create a 3x3 matrix with values ranging from 0 to 8
End of explanation
"""
# CODE HERE
np.eye(3)
"""
Explanation: Create a 3x3 identity matrix
End of explanation
"""
# CODE HERE
np.random.rand(1)
"""
Explanation: Use NumPy to generate a random number between 0 and 1
End of explanation
"""
# CODE HERE
np.random.randn(25)
"""
Explanation: Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
End of explanation
"""
np.arange(1, 101).reshape(10, 10) / 100
"""
Explanation: Create the following matrix:
End of explanation
"""
np.linspace(0, 1, 20)
"""
Explanation: Create an array of 20 linearly spaced points between 0 and 1:
End of explanation
"""
# HERE IS THE GIVEN MATRIX CALLED MAT
# USE IT FOR THE FOLLOWING TASKS
mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:, ]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3, -1]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[:3, 1].reshape(3, 1)
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[-1, :]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[-2:, :]
"""
Explanation: Numpy Indexing and Selection
Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs:
End of explanation
"""
# CODE HERE
np.sum(mat)
"""
Explanation: Now do the following
Get the sum of all the values in mat
End of explanation
"""
# CODE HERE
np.std(mat)
"""
Explanation: Get the standard deviation of the values in mat
End of explanation
"""
# CODE HERE
np.sum(mat, axis = 0)
"""
Explanation: Get the sum of all the columns in mat
End of explanation
"""
# My favourite number is 7
np.random.seed(7)
"""
Explanation: Bonus Question
We worked a lot with random data with numpy, but is there a way we can ensure that we always get the same random numbers? Click Here for a Hint
End of explanation
"""
|
samuelsinayoko/kaggle-housing-prices | xgboost/xgboost-feature-selection.ipynb | mit | from scipy.stats.mstats import mode
import pandas as pd
import numpy as np
import time
from sklearn.preprocessing import LabelEncoder
"""
Read Data
"""
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
target = train['SalePrice']
train = train.drop(['SalePrice'],axis=1)
trainlen = train.shape[0]
"""
Explanation: Mei-Cheng Shih, 2016
This kernel is inspired by the post of JMT5802. The aim of this kernel is to use XGBoost to replace RF, which was used as the core of the Boruta package. Since XGBoost generates better quality predictions than RF in this case, the output of this kernel is expected to be more representative. Moreover, the code also includes the data cleaning process I used to build my model.
First, import packages for data cleaning and read the data
End of explanation
"""
df1 = train.head()
df2 = test.head()
pd.concat([df1, df2], axis=0, ignore_index=True)
alldata = pd.concat([train, test], axis=0, join='outer', ignore_index=True)
alldata = alldata.drop(['Id','Utilities'], axis=1)
alldata.dtypes
"""
Explanation: Combined the train and test set for cleaning
End of explanation
"""
fMedlist=['LotFrontage']
fArealist=['MasVnrArea','TotalBsmtSF','BsmtFinSF1','BsmtFinSF2','BsmtUnfSF','BsmtFullBath', 'BsmtHalfBath','MasVnrArea','Fireplaces','GarageArea','GarageYrBlt','GarageCars']
for i in fArealist:
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=0
for i in fMedlist:
alldata.ix[pd.isnull(alldata.ix[:,i]),i] = np.nanmedian(alldata.ix[:,i])
"""
Explanation: Dealing with the NA values in the variables, some of them equal to 0 and some equal to median, based on the txt descriptions
End of explanation
"""
alldata.ix[:,(alldata.dtypes=='int64') & (alldata.columns != 'MSSubClass')]=alldata.ix[:,(alldata.dtypes=='int64') & (alldata.columns!='MSSubClass')].astype('float64')
alldata['MSSubClass']
alldata.head(20)
le = LabelEncoder()
nacount_category = np.array(alldata.columns[((alldata.dtypes=='int64') | (alldata.dtypes=='object')) & (pd.isnull(alldata).sum()>0)])
category = np.array(alldata.columns[((alldata.dtypes=='int64') | (alldata.dtypes=='object'))])
Bsmtset = set(['BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2'])
MasVnrset = set(['MasVnrType'])
Garageset = set(['GarageType','GarageYrBlt','GarageFinish','GarageQual','GarageCond'])
Fireplaceset = set(['FireplaceQu'])
Poolset = set(['PoolQC'])
NAset = set(['Fence','MiscFeature','Alley'])
# Put 0 and null values in the same category
for i in nacount_category:
if i in Bsmtset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['TotalBsmtSF']==0), i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]), i] = alldata.ix[:,i].value_counts().index[0]
elif i in MasVnrset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['MasVnrArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Garageset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['GarageArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Fireplaceset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['Fireplaces']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Poolset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['PoolArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in NAset:
alldata.ix[pd.isnull(alldata.ix[:,i]),i]='Empty'
else:
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
for i in category:
alldata.ix[:,i]=le.fit_transform(alldata.ix[:,i])
train = alldata.ix[0:trainlen-1, :]
test = alldata.ix[trainlen:alldata.shape[0],:]
alldata.head()
"""
Explanation: Transforming Data
Use integers to encode categorical data.
Convert all ints to floats for XGBoost
End of explanation
"""
import xgboost as xgb
from sklearn.cross_validation import ShuffleSplit
from sklearn.metrics import mean_squared_error
from sklearn.utils import shuffle
"""
Explanation: Import required packages for Feature Selection Process
End of explanation
"""
o=[30, 462, 523, 632, 968, 970, 1298, 1324]
train=train.drop(o,axis=0)
target=target.drop(o,axis=0)
train.index=range(train.shape[0])
target.index=range(train.shape[0])
"""
Explanation: Start the code by dropping some outliers. The outliers were detected with the statsmodels package in Python; details are skipped here.
Learn how to do this!
End of explanation
"""
est=xgb.XGBRegressor(colsample_bytree=0.4,
gamma=0.045,
learning_rate=0.07,
max_depth=20,
min_child_weight=1.5,
n_estimators=300,
reg_alpha=0.65,
reg_lambda=0.45,
subsample=0.95)
"""
Explanation: Set XGB model, the parameters were obtained from CV based on a Bayesian Optimization Process
End of explanation
"""
n=200
scores=pd.DataFrame(np.zeros([n, train.shape[1]]))
scores.columns=train.columns
ct=0
for train_idx, test_idx in ShuffleSplit(train.shape[0], n, .25):
ct+=1
X_train, X_test = train.ix[train_idx,:], train.ix[test_idx,:]
Y_train, Y_test = target.ix[train_idx], target.ix[test_idx]
r = est.fit(X_train, Y_train)
acc = mean_squared_error(Y_test, est.predict(X_test))
for i in range(train.shape[1]):
X_t = X_test.copy()
X_t.ix[:,i]=shuffle(np.array(X_t.ix[:, i]))
shuff_acc = mean_squared_error(Y_test, est.predict(X_t))
scores.ix[ct-1,i]=((acc-shuff_acc)/acc)
"""
Explanation: Start the test process. The basic idea is to randomly permute the elements in each column and see the impact of the permutation.
For the evaluation metric of feature importance, I used ((MSE of pertutaed data)-(MSE of original data))/(MSE of original data)
End of explanation
"""
fin_score=pd.DataFrame(np.zeros([train.shape[1], 4]))
fin_score.columns=['Mean','Median','Max','Min']
fin_score.index=train.columns
fin_score.ix[:,0]=scores.mean()
fin_score.ix[:,1]=scores.median()
fin_score.ix[:,2]=scores.max()
fin_score.ix[:,3]=scores.min()
"""
Explanation: Generate output, the mean, median, max and min of the scores fluctuation
End of explanation
"""
pd.set_option('display.max_rows', None)
fin_score.sort_values('Mean',axis=0)
"""
Explanation: See the importances of features. The higher the value, the less important the factor.
End of explanation
"""
est
test.shape[0]
result = pd.Series(est.predict(test))
result.index
submission = pd.DataFrame({
"Id": result.index + 1461,
"SalePrice": result.values
})
submission.to_csv('submission-xgboost.csv', index=False)
"""
Explanation: The result is a little bit different from what JMT5802 got, but in general they are similar. For example, OverallQual and GrLivArea are important in both cases, and PoolArea and PoolQC are unimportant in both cases. Also, based on the test conducted in the link below, it is reasonable to say the differences are not significant in either case.
Also, the main code was modified from the example in the link below, special thanks to the author of the blog
http://blog.datadive.net/selecting-good-features-part-iii-random-forests/
Updates:
After several tests, I removed the variables in the list below, and this action did improve my score a little bit.
['Exterior2nd', 'EnclosedPorch', 'RoofMatl', 'PoolQC', 'BsmtHalfBath',
'RoofStyle', 'PoolArea', 'MoSold', 'Alley', 'Fence', 'LandContour',
'MasVnrType', '3SsnPorch', 'LandSlope']
End of explanation
"""
|
benhoyle/udacity-tensorflow | 2_fullyconnected.ipynb | mit | # These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
"""
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
"""
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
"""
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), telling it which outputs to fetch from the graph; those fetched values are returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
"""
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Let's run this computation and iterate:
End of explanation
"""
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
"""
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
"""
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Let's run it:
End of explanation
"""
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
num_hidden = 1024  # the problem below asks for 1024 hidden nodes
weights_layer_1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden]))
biases_layer_1 = tf.Variable(tf.zeros([num_hidden]))
# Layer 2 weights have an input dimension = output of first layer
weights_layer_2 = tf.Variable(
tf.truncated_normal([num_hidden, num_labels]))
biases_layer_2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits_layer_1 = tf.matmul(tf_train_dataset, weights_layer_1) + biases_layer_1
relu_output = tf.nn.relu(logits_layer_1)
logits_layer_2 = tf.matmul(relu_output, weights_layer_2) + biases_layer_2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_layer_2))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits_layer_2)
logits_l_1_valid = tf.matmul(tf_valid_dataset, weights_layer_1) + biases_layer_1
relu_valid = tf.nn.relu(logits_l_1_valid)
logits_l_2_valid = tf.matmul(relu_valid, weights_layer_2) + biases_layer_2
valid_prediction = tf.nn.softmax(logits_l_2_valid)
logits_l_1_test = tf.matmul(tf_test_dataset, weights_layer_1) + biases_layer_1
relu_test = tf.nn.relu(logits_l_1_test)
logits_l_2_test = tf.matmul(relu_test, weights_layer_2) + biases_layer_2
test_prediction = tf.nn.softmax(logits_l_2_test)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy.eval(
predictions, batch_labels)
)
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.
Setting up the graph with rectified linear units and one hidden layer:
End of explanation
"""
|
chrismcginlay/crazy-koala | jupyter/03_processing_data.ipynb | gpl-3.0 | boys = int(input('How many boys are in the class: '))
girls = int(input('How many girls are in the class:'))
pupils = boys + girls
print('There are', pupils,'in the class altogether')
"""
Explanation: Processing Data
Working With Numbers
Let's get some integer (aka whole number) variables going and learn how to add, divide, subtract and multiply in Python.
Addition
Run the following code to see how Python adds together numbers
End of explanation
"""
bigger_number = 12
smaller_number = 10
difference = bigger_number - smaller_number
print('The difference between', bigger_number, 'and', smaller_number, 'is', difference)
"""
Explanation: Subtraction
Run the following code to see how Python subtracts numbers
End of explanation
"""
number1 = 5
number2 = 6
answer = number1 * number2
print(answer)
"""
Explanation: Multiplication
Python uses an asterisk * to show multiplication.
Run the following code to see how Python multiplies numbers
End of explanation
"""
big_number = 100
divisor_number = 25
quotient_answer = big_number / divisor_number
print(big_number, 'divided by', divisor_number, 'is', quotient_answer)
"""
Explanation: Division
Python uses a forward slash / to show division.
Run the following code to see how Python divides numbers
End of explanation
"""
big_number = 102
divisor_number = 25
remainder = big_number % divisor_number
print('If you divide', big_number,'by', divisor_number, 'you get', remainder, 'left over')
"""
Explanation: Extra Useful Bit: Modulo Division (Remainder/Left Overs)
Python uses a percentage sign % to calculate remainders
Run the following code to see how Python can compute remainders when dividing
End of explanation
"""
phrase1 = 'The quick brown fox jumped over'
phrase2 = 'the moon'
sentence = phrase1+phrase2
print(sentence)
"""
Explanation: Working with Strings - aka Text
Text characters are called strings in Python - this is because they are stored as a string of characters.
Here are two example strings to look at:
python
phrase1 = 'The quick brown fox jumped over'
phrase2 = 'the moon'
I've assigned them to two variables called phrase1 and phrase2.
Run the following code to see how we can add strings.
End of explanation
"""
noun1 = 'turnip'
noun2 = 'elephant'
noun3 = 'worm'
noun4 = 'holiday'
noun5 = 'Scalloway'
verb1 = 'went'
verb2 = 'ate'
verb3 = 'sat'
verb4 = 'jumped'
preposition1 = 'on'
preposition2 = 'to'
preposition3 = 'with'
def_article = 'the'
indef_article = 'a'
example1 = def_article+' '+noun1+' '+verb1+' '+preposition1+' '+noun4+' '+preposition2+' '+noun5
example2 = 'change this line'
print(example1)
print(example2)
"""
Explanation: Change line 3 of the above program to put in a space
Hint: +' '+
If you've done it right, your output should look like:
The quick brown fox jumped over the moon
Exercise
Choose some of the word variables below, add them to make an example sentence of your own
End of explanation
"""
noun1 = 'turnip'
noun2 = 'elephant'
noun3 = 'worm'
noun4 = 'holiday'
noun5 = 'Scalloway'
verb1 = 'went'
verb2 = 'ate'
verb3 = 'sat'
verb4 = 'jumped'
preposition1 = 'on'
preposition2 = 'to'
preposition3 = 'with'
def_article = 'the'
indef_article = 'a'
print(def_article,noun1,verb1,preposition1,noun4,preposition2,noun5)
print('replace this with your own choice of variables')
"""
Explanation: Getting Rid of all the ' ' Space Bits
It's pretty horrible having to do noun1+' '+verb1, putting in all those ' ' spaces.
Here's the same program again, but this time using print() with commas to do the work.
The disadvantage of this is the sentence isn't being stored in a variable.
In the last line below, use commas to make your own sentence
End of explanation
"""
noun1 = 'turnip'
noun2 = 'elephant'
noun3 = 'worm'
noun4 = 'holiday'
"""
Explanation: How Long is a Piece of String
As well as all that science, Python can easily tell us how long strings really are! Who knew?
python
print(len("How long is this string?"))
Exercise
Print the lengths of each of the strings in these variables using the len() function
End of explanation
"""
|
ebellm/ztf_summerschool_2015 | notebooks/Machine_Learning_Light_Curve_Classification.ipynb | bsd-3-clause | shelf_file = " " # complete the path to the appropriate shelf file here
shelf = shelve.open(shelf_file)
shelf.keys()
"""
Explanation: <span style='color:red'>An essential note in preparation for this exercise.</span> We will use scikit-learn to classify the PTF sources whose light curves we constructed on the first day of the summer school. Calculating the features for these light curves can be computationally intensive and may require your computer to run for several hours. It is essential that you complete this portion of the exercise prior to Friday afternoon. Or, in other words, this is homework for Thursday night.
Fortunately, there is an existing library that will calculate all the light curve features for you. It is not included in the anaconda python distribution, but you can easily install the library using pip, from the command line (i.e. outside the notebook):
pip install FATS
After a short download, you should have the FATS (Feature Analysis for Time Series) library loaded and ready to use.
Note within a note The FATS library is not compatible with Python 3; thus, if you are using Python 3, feel free to ignore this and we will give you an array with the necessary answers.
Hands-On Exercise 6: Building a Machine Learning Classifier to Identify RRL and QSO Candidates Via Light Curves
Version 0.1
We have spent a lot of time discussing RR Lyrae stars and QSOs. The importance of these sources will not be re-hashed here, instead we will jump right into the exercise.
Today, we will measure a large number of light curve features for each PTF source with a light curve. This summer school has been dedicated to the study of time-variable phenomena, and today, finally, everything will come together. We will use machine learning tools to classify variable sources.
By AA Miller (c) 2015 Jul 30
Problem 1) Calculate the features
The training set for today has already been made. The first step is to calculate the features for the PTF sources we are hoping to classify. We will do this using the FATS library in python. The basic steps are simple: the light curve, i.e. time, mag, uncertainty on the mag, is passed to FATS, and features are calculated and returned. Prior to calculating the features, FATS preprocesses the data by removing $5\sigma$ outliers and observations with anomalously large uncertainties. After this, features are calculated.
We begin by reading in the data from the first day.
End of explanation
"""
import FATS
reference_catalog = '../data/PTF_Refims_Files/PTF_d022683_f02_c06_u000114210_p12_sexcat.ctlg'
outfile = reference_catalog.split('/')[-1].replace('ctlg','shlv')
lc_mjd, lc_mag, lc_magerr = source_lightcurve("../data/"+outfile, # complete
[mag, time, error] = FATS.Preprocess_LC( # complete
plt.errorbar( # complete
plt.errorbar( # complete
"""
Explanation: Part A - Calculate features for an individual source
To demonstrate how the FATS library works, we will begin by calculating features for the source with $\alpha_{\rm J2000} = 312.23854988, \delta_{\rm J2000} = -0.89670553$. The data structure for FATS is a little different from how we have structured data in other portions of this class. In short, FATS is looking for a 2-d array that contains time, mag, and mag uncertainty. To get the required formatting, we can preprocess the dats as follows:
import FATS
[mag, time, error] = FATS.Preprocess_LC(lc_mag, lc_mjd, lc_magerr).Preprocess()
where the result from this call is a 2d array ready for feature calculations. lc_mag, lc_mjd, and lc_magerr are individual arrays for the source in question that we will pass to FATS.
Problem A1 Perform preprocessing on the source with $\alpha_{\rm J2000} = 312.23854988, \delta_{\rm J2000} = -0.89670553$ from the shelf file, then plot the light curve both before and after preprocessing using different colors to see which epochs are removed during preprocessing.
Hint - this won't actually affect your code, because FATS properly understands NumPy masks, but recall that each source in the shelf file has a different mask array, while none of the MJDs in that file have a mask.
End of explanation
"""
lc_mjd, lc_mag, lc_magerr = # complete
# complete
# complete
lc = # complete
feats = FATS.FeatureSpace( # complete
"""
Explanation: Problem A2 What do you notice about the points that are flagged and removed for this source?
Answer type your answer here
This particular source shows the (potential) danger of preprocessing. Is the "flare" from this source real, or is it the result of incorrectly calibrated observations? In practice, determining the answer to this question would typically involve close inspection of the actual PTF image where the flare is detected, and possibly additional observations as well. In this particular case, given that there are other observations showing the decay of the flare, and that $(g - r) = 1.7$, which is consistent with an M-dwarf, the flare is likely real. In sum, preprocessing probably would lead to an incorrect classification for this source. Nevertheless, for most applications preprocessing is necessary to produce reasonable classifications.
Part B - Calculate features and check that they are reasonable
Now we will focus on the source with $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, to calculate features using FATS. Once the data have been preprocessed, features can be calculated using the FeatureSpace module (note - FATS is designed to handle data in multiple passbands, and as such the input arrays passed to FATS must be specified):
lc = np.array([mag, time, error])
feats = FATS.FeatureSpace(Data=['magnitude', 'time', 'error']).calculateFeature(lc)
Following these commands, we now have an object feats that contains the features of our source. As there is only one filter with a light curve for this source, FATS will not be able to calculate the full library of features.
Problem B1 Preprocess the light curve for the source at $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, and use FATS to calculate the features for this source.
End of explanation
"""
# execute this cell
print('There are a total of {:d} features for single band LCs'.format(len(feats.result(method='array'))))
print('Here is a dictionary showing the features:')
feats.result(method='dict')
"""
Explanation: The features object, feats, can be retrieved in three different ways: dict returns a dictionary with the feature names and their corresponding values, array returns the feature values, and features returns an array with the names of the individual features.
Execute the cell below to determine how many features there are, and to examine the features calculated by FATS.
End of explanation
"""
plt.errorbar( # complete
"""
Explanation: For now, we will ignore the precise definition of these 59 (59!) features. But, let's focus on the first feature in the dictionary, Amplitude, to perform a quick check that the feature calculation is proceeding as we would expect.
Problem B2 Plot the light curve the source at $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, and check if the amplitude agrees with that calculated by FATS.
Note - the amplitude as calculated by FATS is actually the half amplitude, and it is calculated by taking half the difference between the median of the brightest 5% and the median of the faintest 5% of the observations. A quick eyeball test is sufficient.
End of explanation
"""
plt.errorbar( # complete
"""
Explanation: Now, let's check one other feature to see if these results make sense. The best-fit Lomb-Scargle period is stored as PeriodLS in the FATS feature dictionary. The feature period_fit reports the false alarm probability for a given light curve. This source has period_fit $\sim 10^{-6}$, so it's fairly safe to say this is a periodic variable, but this should be confirmed.
Problem B3 Plot the phase folded light curve of the source at $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, using the period determined by FATS.
End of explanation
"""
%%capture
# not too many hints this time
Xfeats = # complete or incorporate elsewhere
# for loop goes here
# 2 lines below show an example of how to create an astropy table and then save the feature calculation as a csv file
# -- if you use these lines, be sure that the variable names match your for loop
#feat_table = Table(Xfeats, names = tuple(feats.result(method='features')))
#feat_table.write('PTF_feats.csv', format='csv')
"""
Explanation: Does this light curve look familiar at all? Why, it's our favorite star!
Part C - Calculate features for all PTF sources with light curves
Finally, and this is the most important portion of this exercise, we need to calculate features for all of the PTF sources for which we have light curves. Essentially, a for loop needs to be created to cover every light curve in the shelf file, but there are a few things you must keep in mind:
It is essential that the features be stored in a sensible way that can be passed to scikit-learn. Recall that features are represented as 2d arrays where each row corresponds to one source and each column corresponds to a given feature.
Finally, if you can easily figure it out, it would be good to store your data in a file of some kind so the features can be easily read by your machine in the future.
Problem C1 Measure features for all the sources in the PTF field that we have been studying this week. Store the results in an array Xfeats that can be used by scikit-learn.
Note - FATS will produce warnings for every single light curve in the loop, which results in a lot of output text. Thus, we have employed %%capture at the start of this cell to supress that output.
Hint - you may find it helpful to include a progress bar since this loop will take $\sim$2 hr to run. See the Making_a_Lightcurve notebook for an example. This is not necessary, however.
End of explanation
"""
ts = Table.read("../data/TS_PTF_feats.csv")
ts
"""
Explanation: Problem 2) Build the machine learning model
In many ways, the most difficult steps are now complete. We will now build a machine learning model (one that is significantly more complicated than the model we built yesterday, but the mechanics are nearly identical), and then predict the classification of our sources based on their light curves.
The training set is stored in a csv file that you have already downloaded: ../data/TS_PTF_feats.csv. We will begin by reading in the training set to an astropy Table.
End of explanation
"""
# the trick here is to remember that both the features and the class labels are included in ts
y = # complete
X = # complete
# complete
# complete
from sklearn.ensemble import RandomForestClassifier
RFmod = RandomForestClassifier(n_estimators = 100)
RFmod.fit( # complete
feat_order = np.array(ts.colnames)[np.argsort(RFmod.feature_importances_)[::-1]]
print('The 3 most important features are: {:s}, {:s}, {:s}'.format( # complete
"""
Explanation: As is immediately clear - this dataset is more complicated than the one we had yesterday. We are now calculating 59 different features to characterize the light curves. 59 is a large number, and it would prove cumbersome to actually plot histograms for each of them. Also, if some of the features are uninformative, which often happens in problems like this, plotting everything can actually be a waste of time.
Part A - Construct the Random Forest Model
We will begin by constructing a random forest model from the training set, and then we will infer which features are the most important based on the rankings provided by random forest. [Note - this was the challenge problem from the end of yesterday's session. Refer to that if you do not know how to calculate the feature importances.]
Problem A1 Construct a random forest using the training set, and determine the three most important features as measured by the random forest.
Hint - it may be helpful to figure out the indices corresponding to the most important features. This can be done using np.argsort(), which returns the indices that would sort its argument. The sorting goes from smallest number to highest. We want the most important features, so the indices corresponding to that can be obtained by using [::-1], which flips the order of a NumPy array. Thus, you can obtain indices sorting the features from most important to least important with the following command:
np.argsort(RFmod.feature_importances_)[::-1]
This may or may not, depending on your approach, help you identify the 3 most important features.
End of explanation
"""
Nqso =
Nrrl =
Nstar =
for feat in feat_order[0:3]:
plt.figure()
plt.hist( # complete
# complete
# complete
plt.xscale('log')
plt.legend(fancybox = True)
"""
Explanation: As before, we are going to ignore the meaning of the three most important features for now (it is highly recommended that you check out Nun et al. 2015 to learn the definition of the features, but we don't have time for that at the moment).
To confirm that these three features actually help to separate the different classes, we will examine the histogram distributions for the three classes and these three features.
Problem A2 Plot the histogram distribution of each class for the three most important features, as determined by random forest.
Hint - this is very similar to the histogram plots that were created yesterday, consult your answer there for helpful hints about weighting the entries so that the sum of all entries = 1. It also helps to plot the x-axis on a log scale.
End of explanation
"""
print('There are {:d} QSOs, {:d} RRL, and {:d} stars in the training set'.format(# complete
"""
Explanation: Part B - Evaluate the accuracy of the model
Like yesterday, we are going to evaluate how well the model performs via cross validation. While we have looked at the feature distribution for this data set, we have not closely examined the classes thus far. We will begin with that prior to estimating the accuracy of the model via cross validation.
Recall that a machine learning model is only as good as the model training set. Since you were handed the training set without details as to how it was constructed, you cannot easily evaluate whether or not it is biased (that being said, the training set is almost certainly biased). We can attempt to determine if this training set is representative of the field, however. In particular, yesterday we constructed a model to determine whether sources were stars, RRL variables, or QSOs based on their SDSS colors. This model provides a rough, but certainly not perfect, estimate of the ratio of these three classes to each other.
Problem B1 Calculate the number of RRL, QSOs, and stars in the training set and compare these numbers to the ratios for these classes as determined by the predictions from the SDSS colors-based classifier that was constructed yesterday.
End of explanation
"""
from sklearn import cross_validation # recall that this will only work with sklearn v0.16+
RFmod = RandomForestClassifier( # complete
cv_accuracy = # complete
print("The cross-validation accuracy is {:.1f}%%".format(100*np.mean(cv_accuracy)))
"""
Explanation: While the ratios between the classes are far from identical to the distribution we might expect based on yesterday's classification, it is fair to say that the distribution is reasonable. In particular, there are more QSOs than RRL, and there are significantly more stars than either of the other two classes. Nevertheless, the training set is not large. Note - large in this sense is very relative, if you are searching for something that is extremely rare, such as the EM counterpart to a gravitational wave event, a training set of 2 might be large, but there are thousands of known QSOs and RRL, and a $\sim$billion known stars. Thus, this training set is small.
Problem B2 Do you think the training set is representative of the diversity in each of the three classes?
Answer type your response to this question here
At this point, it should be clear that the training set for this exercise is far from perfect. But guess what? That means this training set has something in common with every other astronomical study that utilizes machine learning. At the risk of beating the horse well beyond the grave, it is important to highlight once again that building a training set is extremely difficult work. There are almost always going to be (known and unknown) biases present in any sample used to train a model, and as a result predicted accuracies from cross validation are typically going to be over-optimistic. Nevertheless, cross validation remains one of the best ways to quantitatively examine the quality of a model.
Problem B3 Determine the overall accuracy of the time-domain machine learning model using 5-fold cross validation.
End of explanation
"""
from sklearn.metrics import confusion_matrix
y_cv_preds = # complete
cm = # complete
plt.imshow( # complete
plt.colorbar()
plt.ylabel( # complete
plt.xlabel( # complete
plt.tight_layout()
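One way the blanks in the cell above might be completed, sketched here with synthetic data and with plotting omitted: `cross_val_predict` yields cross-validated class predictions, and dividing each row of the confusion matrix by its sum normalizes it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Synthetic stand-ins for the real features and labels
rng = np.random.RandomState(0)
X = rng.randn(300, 5)
y = rng.randint(0, 3, size=300)

RFmod = RandomForestClassifier(n_estimators=50, random_state=0)
y_cv_preds = cross_val_predict(RFmod, X, y, cv=5)
cm = confusion_matrix(y, y_cv_preds)
# Normalize each row so entries are the fraction of each true class
norm_cm = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
print(norm_cm)
```

The most common error then shows up as the largest off-diagonal entry of `norm_cm`; passing it to `plt.imshow`, as in the cell above, visualizes it.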
"""
Explanation: As noted earlier - this accuracy is quite high, and it is likely over-optimistic. While the overall accuracy provides a nice summary statistic, it is always useful to know where the classifier is making mistakes. As noted yesterday, this is most easily accomplished with a confusion matrix.
Problem B4 Plot the normalized confusion matrix for the new time-domain classifier. Identify the most common error made by the classifier.
Hint - you will (likely) need to run cross-validation again so you can produce cross-validated class predictions for each source in the training set.
End of explanation
"""
RFmod = # complete
# complete
PTF_classes = # complete
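A hedged completion of the cell above, with synthetic stand-ins for the training set and the PTF feature matrix (the class coding in the comments is hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(1)
X_train = rng.randn(200, 5)            # stand-in for the training-set features
y_train = rng.randint(0, 3, size=200)  # stand-in labels (e.g. 0=QSO, 1=RRL, 2=star)
X_ptf = rng.randn(50, 5)               # stand-in for the PTF feature matrix

# Retrain on the full training set, then predict classes for the PTF sources
RFmod = RandomForestClassifier(n_estimators=50, random_state=1)
RFmod.fit(X_train, y_train)
PTF_classes = RFmod.predict(X_ptf)
```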
"""
Explanation: Problem 3) Use the model to identify RRL and QSO candidates in the PTF field
Now virtually all of the hard work is done; we simply need to make some final predictions on our dataset.
Part A - Apply the machine learning model
Problem A1 Using the features that you calculated in Problem 1, predict the class of all the PTF sources.
End of explanation
"""
print('There are {:d} candidate QSOs, {:d} candidate RRL, and {:d} candidate stars.'.format(Nqso_cand, Nrrl_cand, Nstar_cand))
"""
Explanation: Problem A2 Determine the number of candidate sources belonging to each class.
End of explanation
"""
|
AI-Innovation/cs231n_ass1 | knn.ipynb | mit | # Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%reload_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
"""
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
"""
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
"""
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in an Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
"""
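The double-loop computation described above can be sketched as follows (a self-contained illustration on random data, not the official assignment solution):

```python
import numpy as np

def compute_distances_two_loops(X_train, X_test):
    """Naive pairwise Euclidean distances: fill the (num_test, num_train)
    matrix one element at a time."""
    num_test, num_train = X_test.shape[0], X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            dists[i, j] = np.sqrt(np.sum((X_test[i] - X_train[j]) ** 2))
    return dists

rng = np.random.RandomState(0)
dists = compute_distances_two_loops(rng.randn(6, 4), rng.randn(3, 4))
print(dists.shape)  # (3, 6)
```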
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
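A hedged sketch of what a `predict_labels` implementation can look like: for each test row, take the labels of the k smallest distances and majority-vote (ties broken toward the smaller label, as `np.bincount(...).argmax()` does):

```python
import numpy as np

def predict_labels(dists, y_train, k=1):
    """Sketch of k-NN label prediction from a precomputed distance matrix
    (hypothetical helper mirroring the assignment's method)."""
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test, dtype=y_train.dtype)
    for i in range(num_test):
        closest_y = y_train[np.argsort(dists[i])[:k]]  # labels of k nearest train points
        y_pred[i] = np.bincount(closest_y).argmax()    # majority vote
    return y_pred

dists = np.array([[0.1, 2.0, 3.0], [5.0, 0.2, 0.3]])
y_train = np.array([0, 1, 1])
print(predict_labels(dists, y_train, k=1))  # [0 1]
```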
"""
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
End of explanation
"""
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:
End of explanation
"""
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of the difference of two
# matrices is the square root of the sum of squared differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
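For reference, a self-contained sketch of the fully vectorized trick (expanding ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 so no Python loops are needed), checked against a naive double loop:

```python
import numpy as np

def compute_distances_no_loops(X_train, X_test):
    # ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, computed for all pairs at once
    te = np.sum(X_test ** 2, axis=1)[:, np.newaxis]   # (num_test, 1)
    tr = np.sum(X_train ** 2, axis=1)[np.newaxis, :]  # (1, num_train)
    sq = te - 2.0 * X_test.dot(X_train.T) + tr
    return np.sqrt(np.maximum(sq, 0))                 # clamp tiny negative round-off

rng = np.random.RandomState(0)
Xtr, Xte = rng.randn(6, 4), rng.randn(3, 4)
fast = compute_distances_no_loops(Xtr, Xte)
naive = np.array([[np.sqrt(np.sum((t - s) ** 2)) for s in Xtr] for t in Xte])
print(np.linalg.norm(fast - naive, ord='fro'))  # ~0
```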
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
"""
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
"""
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
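One way the two TODO blocks above might be filled in, sketched as a self-contained toy example with a brute-force k-NN helper standing in for the assignment's classifier:

```python
import numpy as np

def knn_predict(X_tr, y_tr, X_val, k):
    # brute-force k-NN prediction (stand-in for the assignment's classifier)
    d = np.sqrt(((X_val[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1))
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_tr[row]).argmax() for row in nearest])

rng = np.random.RandomState(0)
X_train = rng.randn(100, 8)            # toy stand-in for the CIFAR-10 subset
y_train = rng.randint(0, 3, size=100)

num_folds = 5
k_choices = [1, 3, 5]

# First TODO: split the training data into folds
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)

# Second TODO: cross-validate each value of k
k_to_accuracies = {}
for k in k_choices:
    k_to_accuracies[k] = []
    for i in range(num_folds):
        X_val, y_val = X_train_folds[i], y_train_folds[i]
        X_tr = np.concatenate(X_train_folds[:i] + X_train_folds[i + 1:])
        y_tr = np.concatenate(y_train_folds[:i] + y_train_folds[i + 1:])
        preds = knn_predict(X_tr, y_tr, X_val, k)
        k_to_accuracies[k].append(np.mean(preds == y_val))
```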
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp | immersion/kubeflow_pipelines/cicd/labs/lab-03_vertex.ipynb | apache-2.0 | PROJECT_ID = !(gcloud config get-value project)
PROJECT_ID = PROJECT_ID[0]
REGION = 'us-central1'
ARTIFACT_STORE = f'gs://{PROJECT_ID}-vertex'
"""
Explanation: CI/CD for a Kubeflow pipeline on Vertex AI
Learning Objectives:
1. Learn how to create a custom Cloud Build builder to pilot Vertex AI Pipelines
1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP
1. Learn how to set up a Cloud Build GitHub trigger that starts a new run of the Kubeflow Pipeline
In this lab you will walk through authoring of a Cloud Build CI/CD workflow that automatically builds, deploys, and runs a Kubeflow pipeline on Vertex AI. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.
Configuring environment settings
End of explanation
"""
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
"""
Explanation: Let us make sure that the artifact store exists:
End of explanation
"""
%%writefile kfp-cli/Dockerfile
# TODO
"""
Explanation: Creating the KFP CLI builder for Vertex AI
Exercise
In the cell below, write a Dockerfile that
* Uses gcr.io/deeplearning-platform-release/base-cpu as base image
* Installs the python packages kfp with version 1.6.6 and google-cloud-aiplatform with version 1.3.0
* Starts /bin/bash as entrypoint
End of explanation
"""
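One Dockerfile satisfying the three requirements above might look like the following sketch (version pins taken from the exercise text):

```dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install kfp==1.6.6 google-cloud-aiplatform==1.3.0
ENTRYPOINT ["/bin/bash"]
```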
KFP_CLI_IMAGE_NAME = 'kfp-cli-vertex'
KFP_CLI_IMAGE_URI = f'gcr.io/{PROJECT_ID}/{KFP_CLI_IMAGE_NAME}:latest'
KFP_CLI_IMAGE_URI
"""
Explanation: Build the image and push it to your project's Container Registry.
End of explanation
"""
!gcloud builds # COMPLETE THE COMMAND
"""
Explanation: Exercise
In the cell below, use gcloud builds to build the kfp-cli-vertex Docker image and push it to the project gcr.io registry.
End of explanation
"""
%%writefile cloudbuild_vertex.yaml
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this
# file except in compliance with the License. You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
steps:
# Build the trainer image
- name: # TODO
args: # TODO
dir: # TODO
# Compile the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'
args:
- '-c'
- |
dsl-compile-v2 # TODO
env:
- 'PIPELINE_ROOT=gs://$PROJECT_ID-vertex/pipeline'
- 'PROJECT_ID=$PROJECT_ID'
- 'REGION=$_REGION'
- 'SERVING_CONTAINER_IMAGE_URI=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest'
- 'TRAINING_CONTAINER_IMAGE_URI=gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest'
- 'TRAINING_FILE_PATH=gs://$PROJECT_ID-vertex/data/training/dataset.csv'
- 'VALIDATION_FILE_PATH=gs://$PROJECT_ID-vertex/data/validation/dataset.csv'
dir: pipeline_vertex
# Run the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'
args:
- '-c'
- |
python kfp-cli_vertex/run_pipeline.py # TODO
# Push the images to Container Registry
# TODO: List the images to be pushed to the project Docker registry
images: # TODO
# This is required since the pipeline run overflows the default timeout
timeout: 10800s
"""
Explanation: Understanding the Cloud Build workflow.
Exercise
In the cell below, you'll complete the cloudbuild_vertex.yaml file describing the CI/CD workflow and prescribing how environment-specific settings are abstracted using Cloud Build variables.
The CI/CD workflow automates the steps you walked through manually during lab-02_vertex:
1. Builds the trainer image
1. Compiles the pipeline
1. Uploads and runs the pipeline to the Vertex AI Pipeline environment
1. Pushes the trainer to your project's Container Registry
The Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates KFP CLI.
End of explanation
"""
SUBSTITUTIONS= f'_REGION={REGION}'
SUBSTITUTIONS
!gcloud builds submit . --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS}
"""
Explanation: Manually triggering CI/CD runs
You can manually trigger Cloud Build runs using the gcloud builds submit command.
End of explanation
"""
|
mjabri/holoviews | doc/Tutorials/Pandas_Conversion.ipynb | bsd-3-clause | import numpy as np
import pandas as pd
import holoviews as hv
from IPython.display import HTML
%reload_ext holoviews.ipython
%output holomap='widgets'
"""
Explanation: Pandas is one of the most popular Python libraries providing high-performance, easy-to-use data structures and data analysis tools. Additionally it provides IO interfaces to store and load your data in a variety of formats including csv files, json, pickles and even databases. In other words it makes loading data, munging data and even complex data analysis tasks a breeze.
Combining the high-performance data analysis tools and IO capabilities that Pandas provides with interactivity and ease of generating complex visualization in HoloViews makes the two libraries a perfect match.
In this tutorial we will explore how you can easily convert between Pandas dataframes and HoloViews components. The tutorial assumes you are already familiar with some of the core concepts of both libraries, so if you need a refresher on HoloViews have a look at the Introduction and Exploring Data.
Basic conversions
End of explanation
"""
df = pd.DataFrame({'a':[1,2,3,4], 'b':[4,5,6,7], 'c':[8, 9, 10, 11]})
HTML(df.to_html())
"""
Explanation: The first thing to understand when working with pandas dataframes in HoloViews is how data is indexed. Pandas dataframes are structured as tables with any number of columns and indexes. HoloViews on the other hand deals with Dimensions. HoloViews container objects such as the HoloMap, NdLayout, GridSpace and NdOverlay have kdims, which provide metadata about the data along that dimension and how they can be sliced. Element objects on the other hand have both key dimensions (kdims) and value dimensions (vdims). The difference between kdims and vdims in HoloViews is that the former may be sliced and indexed while the latter merely provide a description about the values along that Dimension.
Let's start by constructing a Pandas dataframe of a few columns and displaying it in its HTML format (throughout this notebook we will visualize the DFrames using the IPython HTML display function to allow this notebook to be tested; you can of course visualize dataframes directly).
End of explanation
"""
example = hv.DFrame(df)
"""
Explanation: Now that we have a basic dataframe we can wrap it in the HoloViews DFrame wrapper element.
End of explanation
"""
list(example.data.columns)
"""
Explanation: The HoloViews DFrame wrapper element can either be displayed directly using some of the specialized plot types that Pandas supplies or be used as conversion interface to HoloViews objects. This Tutorial focuses only on the conversion interface, for the specialized Pandas and Seaborn plot types have a look at the Pandas and Seaborn tutorial.
The data on the DFrame Element is accessible via the .data attribute like on all other Elements.
End of explanation
"""
example_table = example.table(['a', 'b'], 'c')
example_table
"""
Explanation: Having wrapped the dataframe in the DFrame wrapper we can now begin interacting with it. The simplest thing we can do is to convert it to a HoloViews Table object. The conversion interface has a simple signature, after selecting the Element type you want to convert to, in this case a Table, you pass the desired kdims and vdims to the corresponding conversion method, either as list of column name strings or as a single string.
End of explanation
"""
HTML(df.reset_index().to_html())
"""
Explanation: As you can see, we now have a Table, which has a and b as its kdims and c as its value_dimension. The index of the original dataframe was dropped however. So if your data has some complex indices set ensure to convert them to simple columns using the .reset_index method on the pandas dataframe:
End of explanation
"""
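To make the .reset_index point concrete, here is a small stand-alone pandas example (the MultiIndex and column names are hypothetical):

```python
import pandas as pd

# Hypothetical dataframe with a MultiIndex, standing in for "complex indices"
df = pd.DataFrame({'value': [1, 2, 3, 4]},
                  index=pd.MultiIndex.from_product([['x', 'y'], [0, 1]],
                                                   names=['letter', 'number']))
flat = df.reset_index()  # index levels become ordinary columns
print(list(flat.columns))  # ['letter', 'number', 'value']
```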
example_table[:, 4:8:2] + example_table[2:5:2, :]
"""
Explanation: Now we can employ the HoloViews slicing semantics to select the desired subset of the data and use the usual compositing + operator to lay the data out side by side:
End of explanation
"""
example.scatter('a', 'b')
"""
Explanation: Dropping and reducing columns
This was the simple case, we converted all the dataframe columns to a Table object. This time let's only select a subset of the Dimensions.
End of explanation
"""
%%opts Curve [xticks=3 yticks=3]
example.curve('a', 'b') + example_table
"""
Explanation: As you can see HoloViews simply ignored the remaining Dimension. By default the conversion functions ignore any numeric unselected Dimensions. All non-numeric dimensions are converted to dimensions on the returned HoloMap however. Both of these behaviors can be overridden by supplying explicit map dimensions and/or a reduce_fn.
You can perform this conversion with any type and lay your results out side-by-side making it easy to look at the same dataset in any number of ways.
End of explanation
"""
HTML(example_table.dframe().to_html())
"""
Explanation: Finally, we can convert all homogeneous HoloViews types (i.e. anything except Layout and Overlay) back to a pandas dataframe using the dframe method.
End of explanation
"""
macro_df = pd.read_csv('http://ioam.github.com/holoviews/Tutorials/macro.csv', '\t')
"""
Explanation: Working with higher-dimensional data
The last section only scratched the surface; where HoloViews really comes into its own is for very high-dimensional datasets. Let's load a dataset of some macro-economic indicators for OECD countries from 1964-1990 from the holoviews website.
End of explanation
"""
HTML(macro_df[0:10].to_html())
"""
Explanation: Now we can display the first ten rows:
End of explanation
"""
dimensions = {'unem': hv.Dimension('Unemployment', unit='%'),
'capmob': 'Capital Mobility',
'gdp': hv.Dimension('GDP Growth', unit='%')}
macro = hv.DFrame(macro_df, dimensions=dimensions)
"""
Explanation: As you can see some of the columns are poorly named and carry no information about the units of each quantity. The DFrame element allows defining either an explicit list of kdims, which must match the number of columns, or a dimensions dictionary, where the keys should match the columns and the values must be either a string or a HoloViews Dimension object.
End of explanation
"""
from holoviews.interface.pandas import DFrame as PDFrame
sorted([k for k in PDFrame.__dict__ if not k.startswith('_') and k != 'name'])
"""
Explanation: Let's list the conversion methods supported by the standard DFrame element; if you have the Seaborn extension, the DFrame object that is imported by default will support additional conversions:
End of explanation
"""
%output dpi=100
options = hv.Store.options()
opts = hv.Options('plot', aspect=2, fig_size=250, show_grid=True, legend_position='right')
options.NdOverlay = opts
options.Overlay = opts
"""
Explanation: All these methods have a common signature: first the kdims, then the vdims, the HoloMap dimensions, and a reduce_fn. We'll see what that means in practice for some of the complex Element types in a minute.
Conversion to complex HoloViews components
We'll begin by setting a few default plot options, which will apply to all the objects. You can do this by setting the appropriate options directly on Store.options with the desired {type}.{group}.{label} path or using the %opts line magic; see the Options Tutorial for more details.
Here we define some default options on Store.options directly using the %output magic only to set the dpi of the following figures.
End of explanation
"""
%%opts Curve (color=Palette('Set3'))
gdp_curves = macro.curve('year', 'GDP Growth')
gdp_curves.overlay('country')
"""
Explanation: Overlaying
Above we looked at converting a DFrame to simple Element types; however, HoloViews also provides powerful container objects to explore high-dimensional data: currently these are HoloMap, NdOverlay, NdLayout and GridSpace. HoloMaps provide the basic conversion type from which you can conveniently convert to the other container types using the .overlay, .layout and .grid methods. This way we can easily create an overlay of GDP Growth curves by year for each country. Here 'year' is a key dimension and GDP Growth a value dimension. As we discussed before, all non-numeric Dimensions become HoloMap kdims; in this case 'country' is the only non-numeric Dimension, which we then overlay by calling the .overlay method.
End of explanation
"""
%%opts NdOverlay [show_legend=False] Curve (color=Palette('Blues'))
hv.NdOverlay({i: gdp_curves.collapse('country', np.percentile, q=i) for i in range(0,101)})
"""
Explanation: Collapsing
Now that we've extracted the gdp_curves we can apply some operations to them. The collapse method applies some function across the data along the supplied dimensions. This lets us quickly compute the mean GDP Growth by year for example, but it also allows us to map a function with parameters to the data and visualize the resulting samples. A simple example is computing a curve for each percentile and embedding it in an NdOverlay.
Additionally we can apply a Palette to visualize the range of percentiles.
End of explanation
"""
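The collapse-by-percentile idea above boils down to applying np.percentile across the country axis; a toy stand-in:

```python
import numpy as np

# Three "countries" x three "years", standing in for the per-country GDP curves
curves = np.array([[1.0, 2.0, 3.0],
                   [2.0, 4.0, 6.0],
                   [3.0, 6.0, 9.0]])
# The 50th percentile collapses the country axis into a single median curve
median_curve = np.percentile(curves, 50, axis=0)
print(median_curve)  # [2. 4. 6.]
```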
%opts Bars [bgcolor='w' aspect=3 figure_size=450 show_frame=False]
%%opts Bars [category_index=2 stack_index=0 group_index=1 legend_position='top' legend_cols=7 color_by=['stack']] (color=Palette('Dark2'))
macro.bars(['country', 'year'], 'trade')
"""
Explanation: Multiple key dimensions
Many HoloViews Element types support multiple kdims, including HeatMaps, Points, Scatter, Scatter3D, and Bars. Bars in particular allows you to lay out your data in groups, categories and stacks. By supplying the index of that dimension as a plotting option you can choose to lay out your data as groups of bars, categories in each group and stacks. Here we choose to lay out the trade surplus of each country with groups for each year, no categories, and stacked by country. Finally we choose to color the Bars for each item in the stack.
End of explanation
"""
%%opts Bars [padding=0.02 color_by=['group']] (alpha=0.6, color=Palette('Set1', reverse=True)[0.:.2])
countries = {'Belgium', 'Netherlands', 'Sweden', 'Norway'}
macro.bars(['country', 'year'], 'Unemployment').select(year=(1978, 1985), country=countries)
"""
Explanation: Using the .select method we can pull out the data for just a few countries and specific years. We can also make more advanced use of Palettes.
Palettes can be customized by selecting only a subrange of the underlying cmap to draw the colors from. The Palette draws samples from the colormap using the supplied sample_fn, which by default just draws linear samples but may be overridden with any function that draws samples in the supplied ranges. By slicing the Set1 colormap we draw colors only from the upper half of the palette and then reverse it.
End of explanation
"""
%opts HeatMap [show_values=False xticks=40 xrotation=90 invert_yaxis=True]
%opts Layout [figure_size=150]
hv.Layout([macro.heatmap(['year', 'country'], value)
for value in macro.data.columns[2:]]).cols(2)
"""
Explanation: Combining heterogeneous data
Many HoloViews Elements support multiple key and value dimensions. A HeatMap may be indexed by two kdims, so we can visualize each of the economic indicators by year and country in a Layout. Layouts are useful for heterogeneous data you want to lay out next to each other. Because all HoloViews objects support the + operator, we can use np.sum to compose them into a Layout.
Before we display the Layout let's apply some styling: we'll suppress the value labels applied to a HeatMap by default and replace them with a colorbar. Additionally we up the number of xticks that are drawn and rotate them by 90 degrees to avoid overlapping. Flipping the y-axis ensures that the countries appear in alphabetical order. Finally we reduce some of the margins of the Layout and increase the size.
End of explanation
"""
%%opts Scatter [scaling_factor=1.4] (color=Palette('Set3') edgecolors='k')
gdp_unem_scatter = macro.scatter('year', ['GDP Growth', 'Unemployment'])
gdp_unem_scatter.overlay('country')
"""
Explanation: Another way of combining heterogeneous data dimensions is to map them to a multi-dimensional plot type. Scatter Elements for example support multiple vdims, which may be mapped onto the color and size of the drawn points in addition to the y-axis position.
As for the Curves above we supply 'year' as the sole key_dimension and rely on the DFrame to automatically convert the country to a map dimension, which we'll overlay. However this time we select both GDP Growth and Unemployment, to be plotted as points. To get a sensible chart, we adjust the scaling_factor for the points to get a reasonable distribution in sizes and apply a categorical Palette so we can distinguish each country.
End of explanation
"""
%%opts Scatter [size_index=1 scaling_factor=1.3] (color=Palette('Dark2'))
macro.scatter('GDP Growth', 'Unemployment').overlay('country')
"""
Explanation: Since the DFrame treats all columns in the dataframe as kdims we can map any dimension against any other, allowing us to explore relationships between economic indicators, for example the relationship between GDP Growth and Unemployment, again colored by country.
End of explanation
"""
%%opts Curve (color='k') Scatter [color_index=2 size_index=2 scaling_factor=1.4] (cmap='Blues' edgecolors='k')
macro_overlay = gdp_curves * gdp_unem_scatter
annotations = hv.Arrow(1973, 8, 'Oil Crisis', 'v') * hv.Arrow(1975, 6, 'Stagflation', 'v') *\
hv.Arrow(1979, 8, 'Energy Crisis', 'v') * hv.Arrow(1981.9, 5, 'Early Eighties\n Recession', 'v')
macro_overlay * annotations
"""
Explanation: Combining heterogeneous Elements
Since all HoloViews Elements are composable we can generate complex figures just by applying the * operator. We'll simply reuse the GDP curves we generated earlier, combine them with the scatter points, which indicate the unemployment rate by size and annotate the data with some descriptions of what happened economically in these years.
End of explanation
"""
%opts Overlay [aspect=1]
%%opts NdLayout [figure_size=100] Scatter [color_index=2] (cmap='Reds')
countries = {'United States', 'Canada', 'United Kingdom'}
(gdp_curves * gdp_unem_scatter).select(country=countries).layout('country')
"""
Explanation: Since we didn't map the country to some other container type, we get a widget allowing us to view the plot separately for each country, reducing the forest of curves we encountered before to manageable chunks.
While looking at the plots individually like this allows us to study trends for each country, we may want to lay out a subset of the countries side by side. We can easily achieve this by selecting the countries we want to view and then applying the .layout method. We'll also want to restore the aspect so the plots compose nicely.
End of explanation
"""
%%opts Layout [fig_size=100] Scatter [color_index=2] (cmap='Reds')
(macro_overlay.relabel('GDP Growth', depth=1) +\
macro.curve('year', 'Unemployment', group='Unemployment',) +\
macro.curve('year', 'trade', ['country'], group='Trade') +\
macro.points(['GDP Growth', 'Unemployment'], [])).cols(2)
"""
Explanation: Finally let's combine some plots for each country into a Layout, giving us a quick overview of each economic indicator for each country:
End of explanation
"""
|
rogerallen/kaggle | ncfish/roger.ipynb | apache-2.0 | #Verify we are in the lesson1 directory
%pwd
%matplotlib inline
import os, sys
sys.path.insert(1, os.path.join(sys.path[0], '../utils'))
from utils import *
from vgg16 import Vgg16
from PIL import Image
from keras.preprocessing import image
from sklearn.metrics import confusion_matrix
"""
Explanation: Based on fast.ai dogs_cats_redux notebook in order to make my own entry into the Kaggle competition.
https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring
My dir structure is similar, but not exactly the same:
utils
ncfish
data
train
test
End of explanation
"""
current_dir = os.getcwd()
LESSON_HOME_DIR = current_dir
DATA_HOME_DIR = current_dir+'/data'
categories = sorted([os.path.basename(x) for x in glob(DATA_HOME_DIR+'/train/*')])
"""
Explanation: Note: had to comment out vgg16bn in utils.py (whatever that is)
End of explanation
"""
from shutil import copyfile
#Create directories
%cd $DATA_HOME_DIR
# did this once
%mkdir valid
%mkdir results
%mkdir -p sample/train
%mkdir -p sample/test
%mkdir -p sample/valid
%mkdir -p sample/results
%mkdir -p test/unknown
# Create subdirectories
for c in categories:
%mkdir -p valid/{c}
%mkdir -p sample/train/{c}
%mkdir -p sample/valid/{c}
%cd $DATA_HOME_DIR/train
# how many images we talking about?
for c in categories:
g = glob(c+"/*.jpg")
print(c, len(g))
"""
Explanation: Create validation set and sample
ONLY DO THIS ONCE.
End of explanation
"""
validation_ratio = 0.1
for c in categories:
g = glob(c+"/*.jpg")
shuf = np.random.permutation(g)
num_valid = int(validation_ratio*len(g))
for i in range(num_valid):
#print shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i]
os.rename(shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i])
# Now, how many images we talking about?
for c in categories:
g = glob(c+"/*.jpg")
print(c, len(g), end=' ')
g = glob("../valid/"+c+"/*.jpg")
print(len(g))
# now create the sample train subset of 10 per category
for c in categories:
g = glob(c+"/*.jpg")
shuf = np.random.permutation(g)
for i in range(10):
#print shuf[i], DATA_HOME_DIR+'/sample/train/' + shuf[i]
copyfile(shuf[i], DATA_HOME_DIR+'/sample/train/' + shuf[i])
%cd $DATA_HOME_DIR/valid
# now create the sample valid subset of 2 per category
for c in categories:
g = glob(c+"/*.jpg")
shuf = np.random.permutation(g)
for i in range(2):
#print shuf[i], DATA_HOME_DIR+'/sample/valid/' + shuf[i]
copyfile(shuf[i], DATA_HOME_DIR+'/sample/valid/' + shuf[i])
!ls {DATA_HOME_DIR}/train/*/* |wc -l
!ls {DATA_HOME_DIR}/valid/*/* |wc -l
!ls {DATA_HOME_DIR}/sample/train/*/* |wc -l
!ls {DATA_HOME_DIR}/sample/valid/*/* |wc -l
"""
Explanation: This was original output:
ALB 1719
BET 200
DOL 117
LAG 67
NoF 465
OTHER 299
SHARK 176
YFT 734
End of explanation
"""
# Create single 'unknown' class for test set
%cd $DATA_HOME_DIR/test_stg1
%mv *.jpg ../test/unknown/
!ls {DATA_HOME_DIR}/test
"""
Explanation: Training & 10% for Validation numbers
ALB 1548 171
BET 180 20
DOL 106 11
LAG 61 6
NoF 419 46
OTHER 270 29
SHARK 159 17
YFT 661 73
Rearrange image files into their respective directories
ONLY DO THIS ONCE.
End of explanation
"""
%cd $DATA_HOME_DIR
#Set path to sample/ path if desired
path = DATA_HOME_DIR + '/'
#path = DATA_HOME_DIR + '/sample/'
test_path = DATA_HOME_DIR + '/test/' #We use all the test data
results_path=DATA_HOME_DIR + '/results/'
train_path=path + '/train/'
valid_path=path + '/valid/'
vgg = Vgg16()
#Set constants. You can experiment with no_of_epochs to improve the model
batch_size=64
no_of_epochs=1
#Finetune the model
batches = vgg.get_batches(train_path, batch_size=batch_size)
val_batches = vgg.get_batches(valid_path, batch_size=batch_size*2)
vgg.finetune(batches)
#Not sure if we set this for all fits
vgg.model.optimizer.lr = 0.01
#Notice we are passing in the validation dataset to the fit() method
#For each epoch we test our model against the validation set
latest_weights_filename = None
#latest_weights_filename='ft24.h5'
#vgg.model.load_weights(results_path+latest_weights_filename)
"""
Explanation: Finetuning and Training
OKAY, ITERATE HERE
End of explanation
"""
# if you have run some epochs already...
epoch_offset=1 # trying again from ft1
for epoch in range(no_of_epochs):
print("Running epoch: %d" % (epoch + epoch_offset))
vgg.fit(batches, val_batches, nb_epoch=1)
latest_weights_filename = 'ft%d.h5' % (epoch + epoch_offset)
vgg.model.save_weights(results_path+latest_weights_filename)
print("Completed %s fit operations" % no_of_epochs)
# only if you have to
latest_weights_filename='ft1.h5'
vgg.model.load_weights(results_path+latest_weights_filename)
"""
Explanation: If you are training, stay here. If you are loading weights and creating a submission, skip down from here.
End of explanation
"""
val_batches, probs = vgg.test(valid_path, batch_size = batch_size)
filenames = val_batches.filenames
expected_labels = val_batches.classes # 0 - 7
#Generate predicted labels and per-image confidence values
#(the old binary cats/dogs approach is kept for reference):
#our_predictions = probs[:,0]
#our_labels = np.round(1-our_predictions)
our_labels = np.argmax(probs, axis=1)
our_predictions = np.max(probs, axis=1)  # confidence of the predicted class
cm = confusion_matrix(expected_labels, our_labels)
plot_confusion_matrix(cm, val_batches.class_indices)
#Helper function to plot images by index in the validation set
#Plots is a helper function in utils.py
def plots_idx(idx, titles=None):
plots([image.load_img(valid_path + filenames[i]) for i in idx], titles=titles)
#Number of images to view for each visualization task
n_view = 4
#1. A few correct labels at random
correct = np.where(our_labels==expected_labels)[0]
print("Found %d correct labels" % len(correct))
idx = permutation(correct)[:n_view]
plots_idx(idx, our_predictions[idx])
#2. A few incorrect labels at random
incorrect = np.where(our_labels!=expected_labels)[0]
print("Found %d incorrect labels" % len(incorrect))
idx = permutation(incorrect)[:n_view]
plots_idx(idx, our_predictions[idx])
val_batches.class_indices
#3a. The images we most confident were X, and are actually X
X='YFT'
Xi=val_batches.class_indices[X]
correct_cats = np.where((our_labels==Xi) & (our_labels==expected_labels))[0]
print("Found %d confident correct %s labels" % (len(correct_cats), X))
most_correct_cats = np.argsort(our_predictions[correct_cats])[::-1][:n_view]
plots_idx(correct_cats[most_correct_cats], our_predictions[correct_cats][most_correct_cats])
#4a. The images we were most confident were cats, but are actually dogs
incorrect_cats = np.where((our_labels==0) & (our_labels!=expected_labels))[0]
print("Found %d incorrect cats" % len(incorrect_cats))
if len(incorrect_cats):
most_incorrect_cats = np.argsort(our_predictions[incorrect_cats])[::-1][:n_view]
plots_idx(incorrect_cats[most_incorrect_cats], our_predictions[incorrect_cats][most_incorrect_cats])
#4b. The images we were most confident were dogs, but are actually cats
incorrect_dogs = np.where((our_labels==1) & (our_labels!=expected_labels))[0]
print("Found %d incorrect dogs" % len(incorrect_dogs))
if len(incorrect_dogs):
most_incorrect_dogs = np.argsort(our_predictions[incorrect_dogs])[:n_view]
plots_idx(incorrect_dogs[most_incorrect_dogs], our_predictions[incorrect_dogs][most_incorrect_dogs])
#5. The most uncertain labels (ie those with probability closest to 0.5).
most_uncertain = np.argsort(np.abs(our_predictions-0.5))
plots_idx(most_uncertain[:n_view], our_predictions[most_uncertain])
"""
Explanation: Validate Predictions
End of explanation
"""
batches, preds = vgg.test(test_path, batch_size = batch_size*2)
# Error allocating 3347316736 bytes of device memory (out of memory).
# got this error when batch-size = 128
# I see this pop up to 6GB memory with batch_size = 64 & this takes some time...
#For every image, vgg.test() generates one probability per class,
#with columns ordered by the sorted class directories
#(ALB, BET, DOL, LAG, NoF, OTHER, SHARK, YFT)
print preds[:5]
filenames = batches.filenames
print filenames[:5]
#You can verify the column ordering by viewing some images
Image.open(test_path + filenames[1])
#Save our test results arrays so we can use them again later
save_array(results_path + 'test_preds.dat', preds)
save_array(results_path + 'filenames.dat', filenames)
"""
Explanation: Generate Predictions
End of explanation
"""
#Load our test predictions from file
preds = load_array(results_path + 'test_preds.dat')
filenames = load_array(results_path + 'filenames.dat')
#Grab the dog prediction column
isdog = preds[:,1]
print("Raw Predictions: " + str(isdog[:5]))
print("Mid Predictions: " + str(isdog[(isdog < .6) & (isdog > .4)]))
print("Edge Predictions: " + str(isdog[(isdog == 1) | (isdog == 0)]))
#play it safe, round down our edge predictions
#isdog = isdog.clip(min=0.05, max=0.95)
#isdog = isdog.clip(min=0.02, max=0.98)
isdog = isdog.clip(min=0.01, max=0.99)
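Why clip? Kaggle's log loss metric assigns an unbounded penalty to a confidently wrong prediction, so capping probabilities away from 0 and 1 limits the worst-case penalty. A small self-contained sketch (not part of the original notebook) shows the effect:

```python
import numpy as np

def log_loss(y_true, y_pred):
    """Binary log loss: heavily penalizes confident wrong predictions."""
    y_pred = np.asarray(y_pred, dtype=float)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 1.0, 0.0, 0.0])
raw = np.array([0.99, 1e-15, 1e-15, 0.2])   # second prediction is confidently wrong
clipped = raw.clip(min=0.01, max=0.99)

loss_raw = log_loss(y_true, raw)
loss_clipped = log_loss(y_true, clipped)
print(loss_raw, loss_clipped)  # the single bad prediction dominates the unclipped loss
```

The single near-zero prediction for a true positive blows up the unclipped loss; after clipping, the penalty for that mistake is bounded by -log(0.01).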
#Extract imageIds from the filenames in our test/unknown directory
filenames = batches.filenames
ids = np.array([int(f[8:f.find('.')]) for f in filenames])
subm = np.stack([ids,isdog], axis=1)
subm[:5]
%cd $DATA_HOME_DIR
submission_file_name = 'submission4.csv'
np.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='')
from IPython.display import FileLink
%cd $LESSON_HOME_DIR
FileLink('data/'+submission_file_name)
"""
Explanation: Submit Predictions to Kaggle!
End of explanation
"""
|
CUBoulder-ASTR2600/lectures | lecture_12_differentiation.ipynb | isc | %matplotlib inline
import numpy as np
import matplotlib.pyplot as pl
"""
Explanation: Numerical Differentiation
End of explanation
"""
from IPython.display import Image
Image(url='http://wordlesstech.com/wp-content/uploads/2011/11/New-Map-of-the-Moon-2.jpg')
"""
Explanation: Applications:
Derivative difficult to compute analytically
Rate of change in a dataset
You have position data but you want to know velocity
Finding extrema
Important for fitting models to data (ASTR 3800)
Maximum likelihood methods
Topology: finding peaks and valleys (place where slope is zero)
Topology Example: South Pole Aitken Basin (lunar farside)
Interesting:
Oldest impact basin in the solar system
Important for studies of solar system formation
Permananently shadowed craters
High concentration of hydrogen (e.g., LCROSS mission)!
Good place for an observatory (e.g., the Lunar Radio Array concept)!
End of explanation
"""
def forwardDifference(f, x, h):
"""
A first order differentiation technique.
Parameters
----------
f : function to be differentiated
x : point of interest
h : step-size to use in approximation
"""
return (f(x + h) - f(x)) / h # From our notes
def centralDifference(f, x, h):
"""
A second order differentiation technique.
Also known as the `symmetric difference quotient`.
Parameters
----------
f : function to be differentiated
x : point of interest
h : step-size to use in approximation
"""
return (f(x + h) - f(x - h)) / (2.0 * h) # From our notes
np.linspace(1,10,100).shape
def derivative(formula, func, xLower, xUpper, n):
"""
Differentiate func(x) at all points from xLower
to xUpper with n *equally spaced* points.
The differentiation formula is given by
formula(func, x, h).
"""
h = (xUpper - xLower) / float(n) # Calculate the derivative step size
xArray = np.linspace(xLower, xUpper, n) # Create an array of x values
derivArray = np.zeros(n) # Create an empty array for the derivative values
for index in range(1, n - 1): # range(start, stop[, step])
derivArray[index] = formula(func, xArray[index], h) # Calculate the derivative for the current
# x value using the formula passed in
return (xArray[1:-1], derivArray[1:-1]) # This returns TWO things:
# x values and the derivative values
"""
Explanation: Question
Imagine you're planning a mission to the South Pole Aitken Basin and want to explore some permanently shadowed craters. What factors might you consider in planning out your rover's landing site and route?
Most rovers can tolerate grades up to about 20%, For reference, the grade on I-70 near Eisenhower Tunnel is about 6%.
Differentiation Review
Numerical Derivatives on a Grid (Text Appendix B.2)
End of explanation
"""
def derivative2(formula, func, xLower, xUpper, n):
"""
Differentiate func(x) at all points from xLower
to xUpper with n *equally spaced* points.
The differentiation formula is given by
formula(func, x, h).
"""
h = (xUpper - xLower) / float(n) # Calculate the derivative step size
xArray = np.linspace(xLower, xUpper, n) # Create an array of x values
derivArray = np.zeros(n) # Create an empty array for the derivative values
for index in range(0, n): # range(start, stop[, step])
derivArray[index] = formula(func, xArray[index], h) # Calculate the derivative for the current
# x value using the formula passed in
return (xArray, derivArray) # This returns TWO things:
# x values and the derivative values
"""
Explanation: Notice that we don't calculate the derivative at the end points because there are no points beyond them to difference with.
Q. So, what would happen without the [1:-1] in the return statement?
End of explanation
"""
tau = 2*np.pi
x = np.linspace(0, tau, 100)
# Plot sin and cos
pl.plot(x, np.sin(x), color='k');
pl.plot(x, np.cos(x), color='b');
# Compute derivative using central difference formula
xder, yder = derivative2(centralDifference, np.sin, 0, tau, 10)
# Plot numerical derivative as scatter plot
pl.scatter(xder, yder, color='g', s=100, marker='+', lw=2);
# s controls marker size (experiment with it)
# lw = "linewidth" in pixels
"""
Explanation: Example: Differentiate $\sin(x)$
We know the answer:
$$\frac{d}{dx} \left[\sin(x)\right] = \cos(x)$$
End of explanation
"""
# Plot sin and cos
pl.plot(x, np.sin(x), color='k')
pl.plot(x, np.cos(x), color='b')
# Compute derivative using central difference formula
xder, yder = derivative2(centralDifference, np.sin, 0, tau, 100)
# Plot numerical derivative as scatter plot
pl.scatter(xder, yder, color='g', s=100, marker='*', lw=2)
"""
Explanation: Notice that the points miss the curve.
Q. How can we improve the accuracy of our numerical derivative?
End of explanation
"""
numCraters = 5 # number of craters
widthMax = 1.0 # maximal width of Gaussian crater
heightMin = -1.0 # maximal depth of craters / valleys
heightMax = 2.0 # maximal height of hills / mountains
# 1-D Gaussian
def gaussian(x, A, mu, sigma):
return A * np.exp(-(x - mu)**2 / 2.0 / sigma**2)
# 1-D Gaussian (same thing using lambda)
#gaussian = lambda x, A, mu, sigma: A * np.exp(-(x - mu)**2 / 2. / sigma**2)
# Create an array of linearly spaced x values
xArray = np.linspace(0, 10, 500) # km
# Create an array of initially flat landscape (aka filled with 0's)
yArray = np.zeros_like(xArray)
# Add craters / mountains to landscape
for _ in range(numCraters): # '_' is the so called dummy variable
# Amplitude between heightMin and heightMax
A = np.random.rand() * (heightMax - heightMin) + heightMin
# Center location of the crater
center = np.random.rand() * xArray.max()
# Width of the crater
sigma = np.random.rand() * widthMax
# Add crater to landscape!
yArray += gaussian(xArray, A=A, mu=center, sigma=sigma)
pl.plot(xArray, yArray, color='k')
pl.xlabel('position [km]')
pl.ylabel('altitude [km]')
"""
Explanation: Example: Traversing A 1-D landscape
Gaussian Equation:
$$f(x)=A \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
End of explanation
"""
dydx = np.diff(yArray) / np.diff(xArray)
"""
Explanation: Q. Where should our spacecraft land? What areas seem accessible?
Q. How do we find the lowest point? Highest? How could we determine how many "mountains" and "craters" there are?
End of explanation
"""
arr = np.array([1,4,10, 12,5, 7])
np.diff(arr)
"""
Explanation: Q. What do you think "diff" does?
End of explanation
"""
pl.plot(xArray[0:-1], dydx, color='r', label='slope')
pl.plot(xArray, yArray, color='k', label='data')
pl.xlabel('position [km]')
pl.ylabel('slope')
pl.plot([xArray.min(), xArray.max()], [0,0], color='k', ls=':')
#pl.ylim(-4, 4)
pl.legend(loc='best')
"""
Explanation: Q. What type of differentiation scheme does this formula represent? How is this different than our "derivative" function from earlier?
End of explanation
"""
slopeTolerance = 0.5
"""
Explanation: Q. How many hills and craters are there?
Q. Why did we use x[0:-1] in the above plot instead of x?
End of explanation
"""
myArray = np.array([0, 1, 2, 3, 4]) # Create an array
tfArray = np.logical_and(myArray < 2, myArray != 0) # Use boolean logic on array
print (myArray) # Print original array
print (tfArray) # Print the True/False array (from boolean logic)
print (myArray[tfArray]) # Print the original array using True/False array to limit values
reachable = np.logical_and(dydx < slopeTolerance, dydx > -slopeTolerance)
unreachable = np.logical_not(reachable)
pl.plot(xArray, yArray, color='k')
pl.scatter(xArray[:-1][unreachable], yArray[:-1][unreachable], color='r', label='bad')
pl.scatter(xArray[:-1][reachable], yArray[:-1][reachable], color='g', label='good')
pl.legend(loc='best')
pl.xlabel('position [km]')
pl.ylabel('altitude [km]')
"""
Explanation: Q. Using the slope, how could we determine which places we could reach and which we couldn't?
End of explanation
"""
|
misken/hillmaker-examples | notebooks/basic_usage_shortstay_unit_multicats.ipynb | apache-2.0 | import pandas as pd
import hillmaker as hm
"""
Explanation: Using hillmaker (v0.2.0)
In this notebook we'll focus on basic use of hillmaker for analyzing occupancy in a typical hospital setting. The data is fictitious data from a hospital short stay unit (SSU). Patients flow through an SSU for a variety of procedures, tests or therapies. Let's assume patients can be classified into one of five categories of patient types: ART (arterialgram), CAT (post cardiac-cath), MYE (myelogram), IVT (IV therapy), and OTH (other). In addition, patients are given a severity score of 1 or 2, which is related to the amount of time required in the SSU and the level of resources required. From one of our hospital information systems we were able to get raw data about the entry and exit times of each patient along with their patient type and severity values. For simplicity, the data is in a csv file. We are interested in occupancy statistics (e.g. mean, standard deviation, percentiles) by time of day and by day of week. While overall occupancy statistics are important, we are also interested in occupancy statistics for different patient types and severity levels. Since we are also interested in required staffing for this unit, we'll also use hillmaker to analyze workload levels.
This example assumes you are already familiar with statistical occupancy analysis using the old version of Hillmaker or some similar such tool. It also assumes some knowledge of using Python for analytical work.
The following blog posts are helpful if you are not familiar with occupancy analysis:
New version of hillmaker (finally) released - and it's Python
Using hillmaker from R with reticulate to analyze time of day patterns in bike share data
Computing occupancy statistics with Python - Part 1 of 3
Computing occupancy statistics with Python - Part 2 of 3
Current status of code
The new hillmaker is implemented as a Python module which can be used by importing hillmaker and then calling the main hillmaker function, make_hills() (or any component function included in the module). This new version of hillmaker is in what I'd call an alpha state. The output does match the Access version for the ShortStay database that I included in the original Hillmaker. Use at your own risk.
It is licensed under an Apache 2.0 license. It is a widely used permissive free software license. See https://en.wikipedia.org/wiki/Apache_License for additional information.
Getting Started
In order to use hillmaker, the major steps are:
make sure you have Python and necessary packages installed,
download and install hillmaker,
load hillmaker and start using it from either a Jupyter notebook, Python terminal or Python script.
I'll go through each of these in more detail. As a big part of the audience for this post is former users of the MS Access version of Hillmaker using the Windows OS, many of whom have little experience with tools like Python, I'll try to make the transition as easy as possible.
Dependencies
Whereas the old Hillmaker required MS Access, the new one requires an installation of
Python 3 (3.7+) along
with several Python modules that are widely used for analytics and data science work.
Most importantly, hillmaker 0.2.0 requires pandas 1.0.0 or later.
Getting Python and many analytical packages via Anaconda
An very easy way to get Python 3 pre-configured with tons of analytical Python packages is to use the Anaconda distro for Python. From their Downloads page:
Anaconda is a completely free Python distribution (including for commercial use and redistribution).
It includes more than 300 of the most popular Python packages for science, math, engineering, and
data analysis. See the packages included with Anaconda and the Anaconda changelog.
There are several really nice reasons to use the Anaconda Python distro for data science work:
it comes preconfigured with hundreds of the most popular data science Python packages installed and they just work
large community of Anaconda data science users and vibrant user community on places like StackOverflow
it has a companion package manager called Conda which makes it easy to install new packages as well as to create and manage virtual environments
If you use Anaconda, you already have all of the necessary libraries for using hillmaker other than hillmaker itself.
Getting Hillmaker
Since 2016, hillmaker has been freely available from the Python Package Index known as PyPi as well as Anaconda Cloud. They are similar to CRAN for R. Source code is also available from my GitHub site https://github.com/misken/hillmaker and it is an open-source project. There is already a companion project on GitHub called hillmaker-examples which contains, well, examples of hillmaker use cases.
Installing Hillmaker
You can use either pip or conda to install hillmaker. I suggest learning about Python virtual environments and either using pyenv, virtualenv or conda (preferred) to create a Python virtual environment and then install hillmaker into it. This way you avoid mixing developmental third-party packages like hillmaker with your base Anaconda Python environment.
Step 1 - Open a terminal and install using Conda or Pip
To install using conda:
sh
conda install -c https://conda.anaconda.org/hselab hillmaker
OR
To install using pip:
sh
pip install hillmaker
Step 2 - Confirm that hillmaker was installed
Use the conda list command to see all the installed packages in your Anaconda3 root.
sh
conda list
You should see hillmaker in the listing.
Step 3 - Confirm that hillmaker can be loaded
Now fire up a Python session (just type python at a Linux/Mac shell or a Windows Anaconda command prompt) and try:
import hillmaker as hm
If the install went well, you shouldn't get any errors when you import hillmaker. To see the main help docstring, do the following at your Python prompt:
help(hm.make_hills)
Using hillmaker
The rest of this Jupyter notebook will illustrate a few ways to use the hillmaker package to analyze occupancy in our SSU.
Module imports
To run Hillmaker we only need to import a few modules. Since the main Hillmaker function uses Pandas DataFrames for both data input and output, we need to import pandas in addition to hillmaker.
End of explanation
"""
file_stopdata = '../data/ShortStay2.csv'
stops_df = pd.read_csv(file_stopdata, parse_dates=['InRoomTS','OutRoomTS'])
stops_df.info()
"""
Explanation: Read main data file containing patient visits to short stay unit
Here's the first few lines from our csv file containing the patient stop data:
PatID,InRoomTS,OutRoomTS,PatType,Severity,PatTypeSeverity
1,01/01/96 07:44 AM,01/01/96 08:50 AM,IVT,1,IVT_1
2,01/01/96 08:28 AM,01/01/96 09:20 AM,IVT,1,IVT_1
3,01/01/96 11:44 AM,01/01/96 01:30 PM,MYE,1,MYE_1
4,01/01/96 11:51 AM,01/01/96 12:55 PM,CAT,1,CAT_1
5,01/01/96 12:10 PM,01/01/96 01:00 PM,IVT,2,IVT_2
Read the short stay data from a csv file into a DataFrame and tell Pandas which fields to treat as dates.
End of explanation
"""
stops_df.head(7)
stops_df.tail(5)
"""
Explanation: Check out the top and bottom of stops_df.
End of explanation
"""
stops_df.groupby('PatType')['PatID'].count()
stops_df.groupby('Severity')['PatID'].count()
"""
Explanation: Enhancement to handle multiple categorical fields
Notice that the PatType field are strings while Severity is integer data. In the previous version of hillmaker (v0.1.1), you could only specify a single category field and it needed to be of type string. So, to compute occupancy statistics by Severity required some data wrangling (convert int to string) and to analyze occupancy by PatType and Severity required further wrangling to concatenate the two fields into a single field that we could feed to hillmaker. Note in the output above that I've included an example of such a concatenation just for illustration purposes.
In this latest version, you can specify zero or more categorical fields which can either be string or integer data types. There is no need to create a concatenated version such as the PatTypeSeverity field above. We'll see that you also have finer control over category field subtotaling.
Let's do some counts of patients by the two categorical fields.
End of explanation
"""
help(hm.make_hills)
"""
Explanation: No obvious problems. We'll assume the data was all read in correctly.
Creating occupancy summaries
The primary function in Hillmaker is called make_hills and plays the same role as the Hillmaker function in the original Access VBA version of Hillmaker. Let's get a little help on this function.
End of explanation
"""
# Required inputs
scenario = 'example1'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
start = '1/1/1996'
end = '3/30/1996 23:45'
# Optional inputs
cat_fld_name = ['PatType', 'Severity']
verbose = 1
output = './output'
"""
Explanation: Most of the parameters are similar to those in the original VBA version, though a few new ones have been added. Since the VBA version used an Access database as the container for its output, new parameters were added to control output to csv files and/or pandas DataFrames instead.
Example 1: 60 minute bins, PatientType and Severity, export to csv
Specify values for all the required inputs:
End of explanation
"""
hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end,
catfield=cat_fld_name,
export_path = output, verbose=verbose)
"""
Explanation: Now we'll call the main make_hills function. We won't capture the return values but will simply take the default behavior of having the summaries exported to csv files. You'll see that the filenames will contain the scenario value.
End of explanation
"""
!ls ./output/example1*.csv
"""
Explanation: Let's list the contents of the output folder containing the csv files created by hillmaker. For Windows users, the following is the Linux ls command. The leading exclamation point tells Jupyter that this is an operating system command. To list the files in Windows, the equivalent would be:
!dir output\example1*.csv
End of explanation
"""
pd.set_option('precision', 2)
pd.read_csv("./output/example1_occupancy_PatType_Severity_dow_binofday.csv").iloc[100:110]
"""
Explanation: There are three groups of statistical summary files related to arrivals, departures and occupancy. In addition, the intermediate "bydatetime" files are also included. The filenames indicate whether or not the statistics are by category we well as if they are by day of week and time of day.
Occupancy, arrival and departure summaries
Let's look at the occupancy summaries (the structure is identical for arrivals and departures.) Here's a peek into the middle of example1_occupancy_PatType_Severity_dow_binofday.csv.
End of explanation
"""
pd.read_csv("./output/example1_occupancy_dow_binofday.csv").iloc[20:40]
"""
Explanation: Statistics by day and time but aggregated over all the categories are also available.
End of explanation
"""
pd.read_csv("./output/example1_occupancy_PatType_Severity.csv").head(20)
"""
Explanation: For those files without "dow_binofday" in their name, the statistics are by category only.
End of explanation
"""
pd.read_csv("./output/example1_occupancy.csv")
"""
Explanation: There's even a summary that aggregates over categories and time. Obviously, it contains a single row.
End of explanation
"""
pd.read_csv("./output/example1_bydatetime_datetime.csv").iloc[100:125]
pd.read_csv("./output/example1_bydatetime_PatType_Severity_datetime.csv").iloc[100:125]
"""
Explanation: Intermediate bydatetime files
The intermediate tables used to compute the summaries we just looked at, are also available both by category and overall. Each row is a single time bin (e.g. date and hour of day). Note that the occupancy values are not necessarily integer since hillmaker's default behavior is to use fractional occupancy contributions for the bins in which the patient arrives and departs (e.g. if the patient arrived half-way through the time bin, they contribute 0.5 to total occupancy during that time bin). This behavior can be changed by specifying edge_bins=2 when calling make_hills.
End of explanation
"""
# Required inputs
scenario = 'example2'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
start = '1/1/1996'
end = '3/30/1996 23:45'
# Optional inputs
cat_fld_name = ['PatType', 'Severity']
totals= 2
percentiles=[0.5, 0.95]
verbose = 0 # Silent mode
output = './output'
export_bydatetime_csv = True
export_summaries_csv = True
"""
Explanation: If you've used the previous version of Hillmaker, you'll recognize these files. The default behavior has changed to compute fewer percentiles but any percentiles you want can be computed by specifying them in the percentiles argument to make_hills.
Example 2: Compute totals for individual category fields, select percentiles, output to DataFrames
We'll repeat the example above but use totals=2 so that we get totals computed for each of the category fields in addition to overall totals. I'm also specifying a custom list of percentiles to compute. Instead of exporting CSV files, we'll capture the results as a dictionary of DataFrames.
End of explanation
"""
example2_dfs = hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, cat_fld_name,
totals=totals, export_path=output, verbose=verbose,
export_bydatetime_csv=export_bydatetime_csv,
export_summaries_csv=export_summaries_csv)
"""
Explanation: Now we'll call make_hills and tuck the results (a dictionary of DataFrames) into a local variable. Then we can explore them a bit with Pandas.
End of explanation
"""
example2_dfs.keys()
"""
Explanation: The example2_dfs return value is several nested dictionaries eventually leading to pandas DataFrames as values. Let's explore the key structure. It's pretty simple.
End of explanation
"""
example2_dfs['summaries'].keys()
example2_dfs['summaries']['nonstationary'].keys()
example2_dfs['summaries']['nonstationary']['Severity_dow_binofday'].keys()
example2_dfs['summaries']['nonstationary']['Severity_dow_binofday']['occupancy']
"""
Explanation: Let's explore the 'summaries' key first. As you might guess, this will eventually lead to the statistical summary DataFrames.
End of explanation
"""
example2_dfs['bydatetime'].keys()
example2_dfs['bydatetime']['PatType_Severity_datetime']
"""
Explanation: The stationary summaries are similar except that there are no day of week and time bin of day related files.
Now let's look at the 'bydatetime' key at the top level. Yep, gonna lead to bydatetime DataFrames.
End of explanation
"""
severity_to_workload = {'1':0.25, '2':0.5}
stops_df['workload'] = stops_df['Severity'].map(lambda x: severity_to_workload[str(x)])
stops_df.head(10)
"""
Explanation: Example 3 - Workload hills instead of occupancy
Assume that we are doing a staffing analysis and want to look at the distribution of workload by time of day and day of week. In order to translate patients to workload, we'll use simple staff to patient ratios based on severity. For example, let's assume that for Severity=1 we want to have a 1:4 staff to patient ratio and for Severity=2 we need a 1:2 ratio. Let's create a new field called workload using these ratios.
End of explanation
"""
# Required inputs
scenario = 'example3'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
start = '1/1/1996'
end = '3/30/1996 23:45'
# Optional inputs
occ_weight_field = 'workload'
verbose = 0
output = './output'
example3_dfs = hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end,
occ_weight_field=occ_weight_field,
export_path = output, verbose=verbose)
example2_dfs['summaries']['stationary']['Severity']['occupancy']
example3_dfs['summaries']['stationary']['']['occupancy']
"""
Explanation: Now we can create workload hills. I'm just going to compute overall workload by not specifiying a category field. Notice the use of the occ_weight_field argument.
End of explanation
"""
import numpy as np
mean_occ = np.asarray(example2_dfs['summaries']['stationary']['Severity']['occupancy'].loc[:,'mean'])
mean_occ
ratios = [severity_to_workload[str(i+1)] for i in range(2)]
ratios
overall_mean_workload = np.dot(mean_occ, ratios)
overall_mean_workload
"""
Explanation: We can check the overall mean workload in example3 by doing a weighted average of the mean occupancies by Severity from example2 with the workload ratios as weights.
End of explanation
"""
|