| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Filter for results that we want
We want to remove words that aren't associated with a type (null WordType) | wordlist_filtered = wordlist[wordlist["WordType"].notnull()] | build_wordlist.ipynb | sdaros/placeword | unlicense |
We want to remove words that contain non-word characters (whitespace, hyphens, etc.) | # we choose [a-z] here and not [A-Za-z] because we do _not_
# want to match words starting with uppercase characters.
# ^to matches verbs in the infinitive from `dictcc`
word_chars = r'^[a-z]+$|^to\s'
is_word_chars = wordlist_filtered["Word"].str.contains(word_chars, na=False)
wordlist_filtered = wordlist_filtered[is_word_chars]
wordlist_filtered.describe()
wordlist_filtered["WordType"].value_counts() | build_wordlist.ipynb | sdaros/placeword | unlicense |
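As a standalone illustration of the `word_chars` filter above (toy strings, not the real wordlist — pandas' `str.contains` uses the same `re.search` semantics):

```python
import re

# Same pattern as above: all-lowercase single words, or infinitives
# beginning with "to " (as in the dictcc wordlist).
word_chars = re.compile(r'^[a-z]+$|^to\s')

candidates = ["apple", "to run", "Apple", "ice-cream", "two words"]
matches = [w for w in candidates if word_chars.search(w)]
print(matches)  # ['apple', 'to run']
```

Note that "two words" is rejected: it starts with "tw", not the literal "to " required by `^to\s`.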
We want results that are less than 'x' letters long (x+2 for verbs, to allow for the 'to ' prefix of infinitives in the dictcc wordlist) | lt_x_letters = (wordlist_filtered["Word"].str.len() < 9) |\
((wordlist_filtered["Word"].str.contains(r'^to\s\w+\s')) &\
(wordlist_filtered["Word"].str.len() < 11)\
)
wordlist_filtered = wordlist_filtered[lt_x_letters]
wordlist_filtered.describe() | build_wordlist.ipynb | sdaros/placeword | unlicense |
We want to remove all duplicates | wordlist_filtered = wordlist_filtered.drop_duplicates("Word")
wordlist_filtered.describe()
wordlist_filtered["WordType"].value_counts() | build_wordlist.ipynb | sdaros/placeword | unlicense |
Load our wordlists into nltk | # The TaggedCorpusReader likes to use the forward slash character '/'
# as the separator between the word and part-of-speech tag (WordType).
wordlist_filtered.to_csv("dictcc_moby.csv", index=False, sep="/", header=False)
from nltk.corpus.reader import TaggedCorpusReader
from nltk.tokenize import WhitespaceTokenizer
nltk_wordlist = TaggedCorpusReader("./", "dictcc_moby.csv") | build_wordlist.ipynb | sdaros/placeword | unlicense |
NLTK
Use NLTK to help us merge our wordlists | # Our custom wordlist
import nltk
custom_cfd = nltk.ConditionalFreqDist((tag, word) for (word, tag) in nltk_wordlist.tagged_words() if len(word) < 9 and word.isalpha())
# Brown Corpus
import nltk
brown_cfd = nltk.ConditionalFreqDist((tag, word) for (word, tag) in nltk.corpus.brown.tagged_words() if word.isalpha() and len(word) < 9)
# Merge Nouns from all wordlists
nouns = set(brown_cfd["NN"]) | set(brown_cfd["NP"]) | set(custom_cfd["NOUN"])
# Lowercase all words to remove duplicates
nouns = set([noun.lower() for noun in nouns])
print("Total nouns count: " + str(len(nouns)))
# Merge Verbs from all wordlists
verbs = set(brown_cfd["VB"]) | set(brown_cfd["VBD"]) | set(custom_cfd["VERB"])
# Lowercase all words to remove duplicates
verbs = set([verb.lower() for verb in verbs])
print("Total verbs count: " + str(len(verbs)))
# Merge Adjectives from all wordlists
adjectives = set(brown_cfd["JJ"]) | set(custom_cfd["ADJ"])
# Lowercase all words to remove duplicates
adjectives = set([adjective.lower() for adjective in adjectives])
print("Total adjectives count: " + str(len(adjectives))) | build_wordlist.ipynb | sdaros/placeword | unlicense |
Make Some Placewords Magic Happen | def populate_degrees(nouns):
    degrees = {}
    nouns_copy = nouns.copy()
    for latitude in range(60):
        for longitude in range(190):
            degrees[(latitude,longitude)] = nouns_copy.pop()
    return degrees
def populate_minutes(verbs):
    minutes = {}
    verbs_copy = verbs.copy()
    for latitude in range(60):
        for longitude in range(60):
            minutes[(latitude,longitude)] = verbs_copy.pop()
    return minutes
def populate_seconds(adjectives):
    seconds = {}
    adjectives_copy = adjectives.copy()
    for latitude in range(60):
        for longitude in range(60):
            seconds[(latitude,longitude)] = adjectives_copy.pop()
    return seconds
def populate_fractions(nouns):
    fractions = {}
    nouns_copy = nouns.copy()
    for latitude in range(10):
        for longitude in range(10):
            fractions[(latitude,longitude)] = nouns_copy.pop()
    return fractions
def placewords(degrees,minutes,seconds,fractions):
    result = []
    result.append(populate_degrees(nouns).get(degrees))
    result.append(populate_minutes(verbs).get(minutes))
    result.append(populate_seconds(adjectives).get(seconds))
    result.append(populate_fractions(nouns).get(fractions))
    return "-".join(result)
# Located at 50°40'47.9" N 10°55'55.2" E
ilmenau_home = placewords((50,10),(40,55),(47,55),(9,2))
print("Feel free to stalk me at " + ilmenau_home) | build_wordlist.ipynb | sdaros/placeword | unlicense |
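One caveat with the populate functions above: `set.pop()` returns elements in an arbitrary, implementation-dependent order, so the word assigned to a given coordinate can change between runs. A sketch of a reproducible alternative (a hypothetical `populate_grid` helper, not from the notebook, which simply sorts the words first):

```python
def populate_grid(words, lat_range, lon_range):
    """Deterministically map (lat, lon) cells to words by sorting first."""
    ordered = sorted(words)  # fixed order instead of arbitrary set.pop()
    grid = {}
    i = 0
    for latitude in range(lat_range):
        for longitude in range(lon_range):
            grid[(latitude, longitude)] = ordered[i]
            i += 1
    return grid

demo = populate_grid({"pear", "apple", "plum", "kiwi"}, 2, 2)
print(demo)  # {(0, 0): 'apple', (0, 1): 'kiwi', (1, 0): 'pear', (1, 1): 'plum'}
```

With this variant the same wordlist always yields the same coordinate-to-word mapping, which matters if encoded placewords are ever decoded later.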
Below we will examine the different aspects/objects that define a plot in Plotly. These are:
Data
Layout
Figure
We will first follow with a few examples to showcase Plotly. | py.offline.iplot({
"data": [Scatter(x=[1, 2, 3, 4], y=[4, 3, 2, 1])],
"layout": Layout(title="hello world")
})
# do a table
df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/school_earnings.csv")
table = ff.create_table(df)
py.offline.iplot(table, filename='jupyter/table1') | Python/Plotly/PlotlyTutorial.ipynb | jaabberwocky/jaabberwocky.github.io | mit |
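Both of the figures above are, under the hood, nested JSON-like Python dictionaries. A minimal standalone sketch of what that means (plain dicts, no Plotly calls; the keys follow Plotly's figure schema):

```python
# A Plotly figure is essentially a nested dict: change a key, change the plot.
figure = {
    "data": [{"type": "scatter", "x": [1, 2, 3, 4], "y": [4, 3, 2, 1]}],
    "layout": {"title": "hello world"},
}

# Tweaking a few keywords is all it takes to restyle the plot:
figure["layout"]["title"] = "goodbye world"
figure["data"][0]["mode"] = "markers"

print(figure["layout"]["title"])  # goodbye world
```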
Under every graph is a JSON object, which is a dictionary like data structure. Simply changing values of some keywords and we get different plots. | # follows trace - data - layout - figure semantic
trace1 = Scatter(
x = [1,2,3],
y = [4,5,6],
marker = {'color':'red', 'symbol':104, 'size':"10"},
mode = "markers+lines",
text = ['one','two','three'],
name = '1st Trace'
)
data = Data([trace1])
layout = Layout(
title="First Plot",
xaxis={'title':'x1'},
yaxis ={'title':'x2'}
)
figure=Figure(data=data, layout=layout)
py.offline.iplot(figure)
df = pd.read_csv('https://raw.githubusercontent.com/yankev/test/master/life-expectancy-per-GDP-2007.csv')
americas = df[(df.continent=='Americas')]
europe = df[(df.continent=='Europe')]
trace_comp0 = Scatter(
x = americas.gdp_percap,
y=americas.life_exp,
mode='markers',
marker=dict(size = 12,
line = dict(width=1),
color="navy"),
name = "Americas",
text=americas.country,
)
trace_comp1 = Scatter(
x = europe.gdp_percap,
y=europe.life_exp,
mode='markers',
marker=dict(size = 12,
line = dict(width=1),
color="orange"),
name = "Europe",
text=europe.country,
)
data = [trace_comp0, trace_comp1]
layout = Layout(
title="YOUR MUM", # sorry
hovermode="closest",
xaxis=dict(
title='GDP per capita (2000 dollars)',
ticklen=5,
zeroline=False,
gridwidth=2,
),
yaxis=dict(
title="Life expectancy (years)"
)
)
fig = Figure(data=data, layout=layout)
py.offline.iplot(fig) | Python/Plotly/PlotlyTutorial.ipynb | jaabberwocky/jaabberwocky.github.io | mit |
Data
We see that data is actually a list object in Python. Data will actually contain all the traces that you wish to plot. Now the question may be, what is a trace? A trace is just the name we give a collection of data and the specifications of which we want that data plotted. Notice that a trace will also be an object itself, and these will be named according to how you want the data displayed on the plotting surface. | # generate data
x = np.linspace(0,np.pi*8,100)
y = np.sin(x)
z = np.cos(x)
layout = Layout(
title="My First Plotly Graph",
xaxis = dict(
title="x"),
yaxis = dict(title="sin(x)")
)
trace1 = Scatter(
x = x,
y = y,
mode = "lines",
marker = dict(
size=8,
color="navy"
),
name="Sin(x)"
)
trace2 = Scatter(
x = x,
y = z,
mode = "markers+lines",
marker = dict(
size=8,
color="red"
),
name="Cos(x)",
opacity=0.5
)
# load data and fig with nec
data = Data([trace1,trace2])
fig = Figure(data=data, layout=layout)
#plot
py.offline.iplot(fig)
# look at hover text
x = np.arange(1,3.2,0.2)
y = 6*np.sin(x)
layout = Layout(
title="My Second Plotly Graph",
xaxis = dict(
title="x"),
yaxis = dict(title="6 * sin(x)")
)
trace1 = Scatter(
x=[1,2,3],
y=[4,5,6],
marker={'color': 'red', 'symbol': 104, 'size': "10"},
mode="markers+lines",
text=["one","two","three"],
name="first trace")
trace2 = Scatter(x=x,
y=y,
marker={'color': 'blue', 'symbol': 'star', 'size': 10},
mode='markers',
name='2nd trace')
data = Data([trace1,trace2])
fig = Figure(data=data,layout=layout)
py.offline.iplot(fig) | Python/Plotly/PlotlyTutorial.ipynb | jaabberwocky/jaabberwocky.github.io | mit |
Layout
The Layout object will define the look of the plot, and plot features which are unrelated to the data. So we will be able to change things like the title, axis titles, spacing, font and even draw shapes on top of your plot! | layout | Python/Plotly/PlotlyTutorial.ipynb | jaabberwocky/jaabberwocky.github.io | mit |
Annotations
We added a plot title as well as titles for all the axes. For fun we could add some text annotation as well in order to indicate the maximum point that's been plotted on the current plotting surface. | # highest point
layout.update(dict(
annotations=[Annotation(
text="Highest Point",
x=3,
y=6)]
)
)
py.offline.iplot(Figure(data=data, layout=layout), filename='pyguide_4')
#lowest point
layout.update(dict(
annotations = [Annotation(
text = "lowest point",
x=1,
y=4)]))
py.offline.iplot(Figure(data=data,layout=layout)) | Python/Plotly/PlotlyTutorial.ipynb | jaabberwocky/jaabberwocky.github.io | mit |
Shapes
Let's add a rectangular block to highlight the section where trace 1 is above trace 2. | layout.update(dict(
annotations=[Annotation(
text="Highest Point",
x=3,
y=6)],
shapes = [
# highlight the region 1 <= x <= 2, where trace 1 lies above trace 2
{
'type': 'rect',
# x-reference is assigned to the x-values
'xref': 'x',
# y-reference is assigned to the y-values
'yref': 'y',
'x0': '1',
'y0': 0,
'x1': '2',
'y1': 7,
'fillcolor': '#d3d3d3',
'opacity': 0.2,
'line': {
'width': 0,
}
}]
)
)
py.offline.iplot(Figure(data=data, layout=layout), filename='pyguide_4')
# plot scatter with color
x = np.random.randint(0,100,100)
y = [xi + np.random.randint(-100,100) for xi in x]
z = np.random.randint(0,3,100)
layout = Layout(
title = "Color Scatter Plot",
xaxis = dict(title="x"),
yaxis = dict(title="y")
)
trace1 = Scatter(
x = x,
y = y,
mode="markers",
marker=dict(
size = 12,
color = z,
        colorscale = "Viridis",
showscale=True
)
)
data = Data([trace1])
fig = Figure(data=data,layout=layout)
py.offline.iplot(fig)
# a better implementation would be to use different traces for different colors
df = pd.DataFrame({
'x':x,
'y':y,
'z':z
})
df.z.value_counts()
layout = Layout(
title = "Color Scatter Plot (Improved)",
xaxis = dict(title="x"),
yaxis = dict(title="y")
)
trace1 = Scatter(
x = df.query('z==0')['x'],
y = df.query('z==0')['y'],
mode="markers",
marker=dict(
size = 12,
color = "orange",
),
name = "Z = 0"
)
trace2 = Scatter(
x = df.query('z==1')['x'],
y = df.query('z==1')['y'],
mode="markers",
marker=dict(
size = 12,
color = "red",
),
name="Z = 1"
)
trace3 = Scatter(
x = df.query('z==2')['x'],
y = df.query('z==2')['y'],
mode="markers",
marker=dict(
size = 12,
color = "blue",
),
name="Z = 2"
)
data = Data([trace1, trace2,trace3])
fig = Figure(data=data,layout=layout)
py.offline.iplot(fig)
# 3d scatter plot
x = np.random.randint(0,100,100)
y = np.random.randint(0,100,100)
z = np.random.randint(0,10,100)
layout = Layout(
title="3d Scatter Plot",
xaxis = dict(
title = "X"
)
)
trace0 = Scatter3d(
x=x,
y=y,
z=z,
mode="markers",
marker = dict(
size=6,
color=z,
colorscale="Plasma",
opacity=0.6
)
)
data = Data([trace0])
fig = Figure(data=data, layout=layout)
py.offline.iplot(fig)
# 3d bubble charts using pokemon data
# URL: https://www.kaggle.com/rounakbanik/pokemon/data
dataset = pd.read_csv("pokemon.csv")
dataset.dtypes
layout = Layout(
title="Pokemon!",
autosize = False,
width= 1000,
height= 1000,
scene = dict(
zaxis=dict(title="Attack"),
yaxis=dict(title="Defense"),
xaxis=dict(title="Type 1 Class.")
)
)
trace0 = Scatter3d(
z = dataset.attack,
y = dataset.defense,
x = dataset.type1,
text = dataset.name,
mode = "markers",
marker = dict(
size = dataset.weight_kg/10,
opacity = 0.5,
color = dataset.hp,
colorscale = 'Viridis',
showscale=True,
colorbar=dict(title="HP")
)
)
data = Data([trace0])
fig = Figure(data=data, layout=layout)
py.offline.iplot(fig) | Python/Plotly/PlotlyTutorial.ipynb | jaabberwocky/jaabberwocky.github.io | mit |
Load some data | from datasets import get_pbc
d = get_pbc(prints=True, norm_in=True, norm_out=False)
durcol = d.columns[0]
eventcol = d.columns[1]
if np.any(d[durcol] < 0):
    raise ValueError("Negative times encountered")
# Sort the data before training - handled by ensemble
#d.sort(d.columns[0], inplace=True)
# Example: d.iloc[:, :2] for times, events
d | AnnGroups.ipynb | spacecowboy/article-annriskgroups-source | gpl-3.0 |
Create an ANN model
With all correct parameters, ensemble settings and such. | import ann
from classensemble import ClassEnsemble
mingroup = int(0.25 * d.shape[0])
def get_net(func=ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN):
    hidden_count = 10
    outcount = 2
    l = (d.shape[1] - 2) + hidden_count + outcount + 1
    net = ann.geneticnetwork((d.shape[1] - 2), hidden_count, outcount)
    net.fitness_function = func
    net.mingroup = mingroup
    # Be explicit here even though I changed the defaults
    net.connection_mutation_chance = 0.0
    net.activation_mutation_chance = 0
    # Some other values
    net.crossover_method = net.CROSSOVER_UNIFORM
    net.selection_method = net.SELECTION_TOURNAMENT
    net.population_size = 100
    net.generations = 1000
    net.weight_mutation_chance = 0.15
    net.dropout_hidden_probability = 0.5
    net.dropout_input_probability = 0.8
    ann.utils.connect_feedforward(net, [5, 5], hidden_act=net.TANH, out_act=net.SOFTMAX)
    #c = net.connections.reshape((l, l))
    #c[-outcount:, :((d.shape[1] - 2) + hidden_count)] = 1
    #net.connections = c.ravel()
    return net
net = get_net()
l = (d.shape[1] - 2) + net.hidden_count + 2 + 1
print(net.connections.reshape((l, l)))
hnets = []
lnets = []
netcount = 2
for i in range(netcount):
    if i % 2:
        n = get_net(ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN)
        hnets.append(n)
    else:
        n = get_net(ann.geneticnetwork.FITNESS_SURV_KAPLAN_MAX)
        lnets.append(n)
e = ClassEnsemble(hnets, lnets) | AnnGroups.ipynb | spacecowboy/article-annriskgroups-source | gpl-3.0 |
Train the ANNs
And print groupings on training data. | e.fit(d, durcol, eventcol)
# grouplabels = e.predict_classes
grouplabels, mems = e.label_data(d)
for l, m in mems.items():
    print("Group", l, "has", len(m), "members") | AnnGroups.ipynb | spacecowboy/article-annriskgroups-source | gpl-3.0 |
Plot grouping | from lifelines.plotting import add_at_risk_counts
from lifelines.estimation import KaplanMeierFitter
from lifelines.estimation import median_survival_times
plt.figure()
fitters = []
for g in ['high', 'mid', 'low']:
    kmf = KaplanMeierFitter()
    fitters.append(kmf)
    members = grouplabels == g
    kmf.fit(d.loc[members, durcol],
            d.loc[members, eventcol],
            label='{}'.format(g))
    kmf.plot(ax=plt.gca())#, color=plt.colors[mi])
    print("End survival rate for", g, ":", kmf.survival_function_.iloc[-1, 0])
    if kmf.survival_function_.iloc[-1, 0] <= 0.5:
        print("Median survival for", g, ":",
              median_survival_times(kmf.survival_function_))
plt.legend(loc='best', framealpha=0.1)
plt.ylim((0, 1))
add_at_risk_counts(*fitters) | AnnGroups.ipynb | spacecowboy/article-annriskgroups-source | gpl-3.0 |
Load data
Here we consider lin2 data for gor01 on the first recording day (6-7-2006), since this session had the most units (91) of all the gor01 sessions, and lin2 has position data, whereas lin1 only has partial position data. | datadirs = ['/home/etienne/Dropbox/neoReader/Data',
'C:/etienne/Dropbox/neoReader/Data',
'/Users/etienne/Dropbox/neoReader/Data']
fileroot = next( (dir for dir in datadirs if os.path.isdir(dir)), None)
animal = 'gor01'; month,day = (6,7); session = '16-40-19' # 91 units
spikes = load_data(fileroot=fileroot, datatype='spikes',animal=animal, session=session, month=month, day=day, fs=32552, verbose=False)
eeg = load_data(fileroot=fileroot, datatype='eeg', animal=animal, session=session, month=month, day=day,channels=[0,1,2], fs=1252, starttime=0, verbose=False)
posdf = load_data(fileroot=fileroot, datatype='pos',animal=animal, session=session, month=month, day=day, verbose=False)
speed = klab.get_smooth_speed(posdf,fs=60,th=8,cutoff=0.5,showfig=False,verbose=False) | ModelSelection.ipynb | kemerelab/NeuroHMM | mit |
Find most appropriate number of states using cross validation
Here we split the data into training, validation, and test sets. We monitor the average log probability per sequence (normalized by length) for each of these sets, and we use the validation set to choose the number of model states $m$.
Note to self: I should re-write my data splitting routines to allow me to extract as many subsets as I want, so that I can do k-fold cross validation. | ## bin ALL spikes
ds = 0.125 # bin spikes into 125 ms bins (theta-cycle inspired)
binned_spikes_all = klab.bin_spikes(spikes.data, ds=ds, fs=spikes.samprate, verbose=True)
## identify boundaries for running (active) epochs and then bin those observations into separate sequences:
runbdries = klab.get_boundaries_from_bins(eeg.samprate,bins=speed.active_bins,bins_fs=60)
binned_spikes_bvr = klab.bin_spikes(spikes.data, fs=spikes.samprate, boundaries=runbdries, boundaries_fs=eeg.samprate, ds=ds)
## stack data for hmmlearn:
seq_stk_bvr = sq.data_stack(binned_spikes_bvr, verbose=True)
seq_stk_all = sq.data_stack(binned_spikes_all, verbose=True)
## split data into train, test, and validation sets:
tr_b,vl_b,ts_b = sq.data_split(seq_stk_bvr, tr=60, vl=20, ts=20, randomseed = 0, verbose=False)
Smax = 40
S = np.arange(start=5,step=1,stop=Smax+1)
tr_ll = []
vl_ll = []
ts_ll = []
for num_states in S:
    clear_output(wait=True)
    print('Training and evaluating {}-state hmm'.format(num_states))
    sys.stdout.flush()
    myhmm = sq.hmm_train(tr_b, num_states=num_states, n_iter=30, verbose=False)
    tr_ll.append( (np.array(list(sq.hmm_eval(myhmm, tr_b)))/tr_b.sequence_lengths ).mean())
    vl_ll.append( (np.array(list(sq.hmm_eval(myhmm, vl_b)))/vl_b.sequence_lengths ).mean())
    ts_ll.append( (np.array(list(sq.hmm_eval(myhmm, ts_b)))/ts_b.sequence_lengths ).mean())
clear_output(wait=True)
print('Done!')
sys.stdout.flush()
num_states = 35
fig = plt.figure(1, figsize=(12, 4))
ax = fig.add_subplot(111)
ax.annotate('plateau at approx ' + str(num_states), xy=(num_states, -38.5), xycoords='data',
xytext=(-140, -30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="angle3,angleA=0,angleB=-90"),
)
ax.plot(S, tr_ll, lw=1.5, label='train')
ax.plot(S, vl_ll, lw=1.5, label='validation')
ax.plot(S, ts_ll, lw=1.5, label='test')
ax.legend(loc=2)
ax.set_xlabel('number of states')
ax.set_ylabel('normalized (to single time bin) log likelihood')
ax.axhspan(-38.5, -37.5, facecolor='0.75', alpha=0.25)
ax.set_xlim([5, S[-1]]) | ModelSelection.ipynb | kemerelab/NeuroHMM | mit |
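The 60/20/20 split above uses a single validation fold. The k-fold scheme mentioned in the note-to-self earlier could be sketched as follows — a hypothetical pure-Python index-splitting helper, not part of the `sq` utilities used in this notebook:

```python
import random

def kfold_indices(n_sequences, k, seed=0):
    """Yield (train, validation) index lists for k-fold cross validation."""
    rng = random.Random(seed)
    idx = list(range(n_sequences))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k near-equal, disjoint folds
    for i in range(k):
        val = folds[i]
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, val

# Example: 10 sequences, 5 folds -> each sequence is validated exactly once.
all_val = sorted(v for _, val in kfold_indices(10, 5) for v in val)
print(all_val)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Each fold's indices would then select sequences from `seq_stk_bvr` for `sq.hmm_train` and `sq.hmm_eval`, and the k validation scores would be averaged per candidate number of states.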
Remarks: We see that the training error is decreasing (equivalently, the training log probability is increasing) over the entire range of states considered. Indeed, we have computed this for a much larger number of states, and the training error keeps on decreasing, whereas both the validation and test errors reach a plateau at around 30 or 35 states.
As expected, the training set has the largest log probability (best agreement with model), but we might expect the test and validation sets to be about the same. For different subsets of our data this is indeed the case, but the more important thing in model selection is that the validation and test sets should have the same shape or behavior, so that we can choose an appropriate model parameter.
However, if we wanted to predict what our log probability for any given sequence would be, then we probably need a little bit more data, for which the test and validation errors should agree more.
Finally, we have also repeated the above analysis when we restricted ourselves to only using place cells in the model, and although the log probabilities were uniformly increased to around $-7$ or $-8$, the overall shape and characteristic behavior were left unchanged, so that model selection could be done either way.
Place field visualization
Previously we have only considered varying the number of model states for model selection, but of course choosing an appropriate timescale is perhaps just as important. We know, for example, that if our timescale is too short (or fast), then most of the bins will be empty, making it difficult for the model to learn appropriate representations and transitions. On the other hand, if our timescale is too coarse (or long or slow) then we will certainly miss SWR events, and we may even miss some behavioral events as well.
Since theta is around 8 Hz for rodents, it might make sense to consider a timescale of 125 ms or even 62.5 ms for behaviorally relevant events, so that we can hope to capture half or full theta cycles in the observations.
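For concreteness, the bin widths used throughout follow directly from the theta frequency:

```python
theta_hz = 8.0                 # approximate rodent theta frequency
full_cycle = 1.0 / theta_hz    # 0.125 s -> the 125 ms bins used earlier
half_cycle = full_cycle / 2    # 0.0625 s -> the 62.5 ms bins used below
print(full_cycle, half_cycle)  # 0.125 0.0625
```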
One might also reasonably ask: "even though the log probability has been optimized, how do we know that the learned model makes any sense? That is, that the model is plausible and useful?" One way to try to answer this question is to again consider the place fields that we learn from the data. Place field visualization is considered in more detail in StateClustering.ipynb, but here we simply want to see if we get plausible, behaviorally relevant state representations out when choosing different numbers of states, and different timescales, for example.
Place fields for varying velocity thresholds
We train our models on RUN data, so we might want to know how sensitive our model is to a specific velocity threshold. Using a smaller threshold will include more quiescent data, and using a larger threshold will exclude more data from being used to learn in the model. | from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
    vth = ii+1
    state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, verbose=False)
    ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
    #ax.set_xlabel('position bin')
    ax.set_ylabel('state')
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title('learned place fields; RUN > ' + str(vth), y=1.02)
    ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.axis('tight')
| ModelSelection.ipynb | kemerelab/NeuroHMM | mit |
Remarks: As can be expected, with low velocity thresholds, we see an overrepresentation of the reward locations, and only a relatively small number of states that are dedicated to encoding the position along the track.
Recall that the track was shortened halfway through the recording session. Here, the reward locations for the longer track (first half of the experiment) and shorter track (second half of the experiment) are shown by the ends of the dashed lines.
We notice that at some point, the movement velocity (for fixed state evolution) appears to be constant, and that at e.g. 8 units/sec we see a clear bifurcation in the place fields, so that states encode both positions before and after the track was shortened.
Place fields for varying number of states
Next, we take a look at how the place fields are affected by changing the number of states in the model. | from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
    num_states = 5 + ii*5
    state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
    ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
    #ax.set_xlabel('position bin')
    ax.set_ylabel('state')
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states), y=1.02)
    ax.axis('tight')
saveFigure('posterfigs/numstates.pdf') | ModelSelection.ipynb | kemerelab/NeuroHMM | mit |
Remarks: First, we see that independent of the number of states, the model captures the place field like nature of the underlying states very well. Furthermore, the bifurcation of some states to represent both the first and second halves of the experiment becomes clear with as few as 15 states, but interestingly this bifurcation fades as we add more states to the model, since there is enough flexibility to encode those shifting positions by their own states.
Warning: However, in the case where we have many states so that the states are no longer bimodal, the strict linear ordering that we impose (ordering by peak firing location) can easily mask the underlying structural change in the environment.
Place fields for varying timescales
Next we investigate how the place fields are affected by changing the timescale of our observations. First, we consider timescales in the range of 31.25 ms to 375 ms, in increments of 31.25 ms. | from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
    ds = (ii+1)*0.03125
    state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
    ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
    #ax.set_xlabel('position bin')
    ax.set_ylabel('state')
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
    ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.axis('tight')
Remarks: We notice that we clearly see the bimodal place fields when the timescales are sufficiently small, with a particularly clear example at 62.5 ms, for example. Larger timescales tend to focus on the longer track piece, with a single trajectory being skewed away towards the shorter track piece.
Next we consider timescales in increments of 62.5 ms. | from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
    ds = (ii+1)*0.0625
    state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
    ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
    #ax.set_xlabel('position bin')
    ax.set_ylabel('state')
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
    ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.axis('tight')
Remarks: Again, we see that with larger timescales, the spatial resolution becomes coarser, because we don't have sufficiently many observations, and the modes of the place fields tend to lie close to those associated with the longer track.
Splitting the experimment in half
Just as a confirmation of what we've seen so far, we next consider the place fields obtained when we split the experiment into its first and second halves, correponding to when the track was longer, and shorter, respectively. | from placefieldviz import hmmplacefieldposviz
num_states = 25
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
saveFigure('posterfigs/expsplit.pdf') | ModelSelection.ipynb | kemerelab/NeuroHMM | mit |
Remarks: We clearly see the bimodal place fields when we use all of the data, and we see the unimodal place fields emerge as we focus on either the first, or the second half of the experiment.
Notice that the reward locations are more concentrated, but that the velocity (with fixed state progression) is roughly constant.
However, if we increase the number of states: | from placefieldviz import hmmplacefieldposviz
num_states = 45
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight') | ModelSelection.ipynb | kemerelab/NeuroHMM | mit |
then we start to see the emergence of the S-shaped place field progressions again, indicating that the reward locations are overexpressed by several different states.
This observation is even more pronounced if we increase the number of states further: | from placefieldviz import hmmplacefieldposviz
num_states = 100
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax2.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
8, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax2.add_patch(
patches.Rectangle(
(41, 0), # (x,y)
11, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
ax3.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
14, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.add_patch(
patches.Rectangle(
(35, 0), # (x,y)
15, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
) | ModelSelection.ipynb | kemerelab/NeuroHMM | mit |
With enough expressiveness in the number of states, we see the S-shaped curve reappear, suggesting an overexpression of the reward locations, consistent with what we see with place cells in animals. | import matplotlib.patches as patches
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[stateorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[stateorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 13], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 7], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.plot([35, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([41, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax2.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
8, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax2.add_patch(
patches.Rectangle(
(41, 0), # (x,y)
11, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.matshow(state_pos_2[stateorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 13], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 7], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.plot([35, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([41, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
ax3.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
14, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.add_patch(
patches.Rectangle(
(35, 0), # (x,y)
15, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
fig.suptitle('State ordering not by peak location, but by the state transition probability matrix', y=1.08, fontsize=14)
saveFigure('posterfigs/zigzag.pdf')
state_pos_b[state_pos_b < np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 0
state_pos_b[state_pos_b == np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 1
state_pos_1[state_pos_1 < np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 0
state_pos_1[state_pos_1 == np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 1
state_pos_2[state_pos_2 < np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 0
state_pos_2[state_pos_2 == np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 1
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight') | ModelSelection.ipynb | kemerelab/NeuroHMM | mit |
Dataset
We will use the Wisconsin breast cancer dataset for the following questions | import pandas as pd
wdbc_source = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data'
#wdbc_source = '../datasets/wdbc/wdbc.data'
df = pd.read_csv(wdbc_source, header=None)
from sklearn.preprocessing import LabelEncoder
X = df.loc[:, 2:].values
y = df.loc[:, 1].values
le = LabelEncoder()
y = le.fit_transform(y)
le.transform(['M', 'B'])
# `Version` and `sklearn_version` are assumed to be defined in an earlier cell, e.g.:
# from distutils.version import LooseVersion as Version
# from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import train_test_split
else:
    from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.20, random_state=1)
import matplotlib.pyplot as plt
%matplotlib inline | 5 Training and Ensemble.ipynb | irsisyphus/machine-learning | apache-2.0 |
K-fold validation (20 points)
Someone wrote the code below to conduct cross validation.
Do you see anything wrong with it?
And if so, correct the code and provide an explanation. | import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Perceptron
from sklearn.pipeline import Pipeline
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import StratifiedKFold
else:
from sklearn.model_selection import StratifiedKFold
scl = StandardScaler()
pca = PCA(n_components=2)
# original: clf = Perceptron(random_state=1)
# data preprocessing
X_train_std = scl.fit_transform(X_train)
X_test_std = scl.transform(X_test)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
# compute the data indices for each fold
if Version(sklearn_version) < '0.18':
kfold = StratifiedKFold(y=y_train,
n_folds=10,
random_state=1)
else:
kfold = StratifiedKFold(n_splits=10,
random_state=1).split(X_train, y_train)
num_epochs = 2
scores = [[] for i in range(num_epochs)]
enumerate_kfold = list(enumerate(kfold))
# new:
clfs = [Perceptron(random_state=1) for i in range(len(enumerate_kfold))]
for epoch in range(num_epochs):
for k, (train, test) in enumerate_kfold:
# original:
# clf.partial_fit(X_train_std[train], y_train[train], classes=np.unique(y_train))
# score = clf.score(X_train_std[test], y_train[test])
# scores.append(score)
# new:
clfs[k].partial_fit(X_train_pca[train], y_train[train], classes=np.unique(y_train))
score = clfs[k].score(X_train_pca[test], y_train[test])
scores[epoch].append(score)
print('Epoch: %s, Fold: %s, Class dist.: %s, Acc: %.3f' % (epoch,
k,
np.bincount(y_train[train]),
score))
print('')
# new:
for epoch in range(num_epochs):
print('Epoch: %s, CV accuracy: %.3f +/- %.3f' % (epoch, np.mean(scores[epoch]), np.std(scores[epoch]))) | 5 Training and Ensemble.ipynb | irsisyphus/machine-learning | apache-2.0 |
Answer
Problems with the original code:
A single classifier is partially fit across all folds, which makes the folds dependent on one another and results in cumulative learning.<br>
Because of problem 1, the CV accuracy is incorrect, and no score can be reported for each epoch. <br>
There are two major changes:
We create a list of Perceptrons, one per fold. Within each fold, we run partial fit for a given number of epochs. <br>
We also create a list of accuracy scores, one per epoch. Within each epoch, we append the scores of all folds. <br>
There is one minor change:
To enable PCA, we replace X_train_std with X_train_pca.<br>
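As a side note, the stratified splitting that `StratifiedKFold` performs in the code above can be sketched in plain Python (the labels below are made up, and `stratified_folds` is a hypothetical helper, not the scikit-learn implementation): each class's indices are dealt round-robin, so every fold keeps roughly the overall class ratio.

```python
from collections import defaultdict

def stratified_folds(labels, n_splits):
    # group sample indices by class label
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    # deal each class round-robin into the folds
    folds = [[] for _ in range(n_splits)]
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % n_splits].append(i)
    return folds

labels = [0] * 12 + [1] * 6            # overall positive ratio is 1/3
folds = stratified_folds(labels, 3)
ratios = [sum(labels[i] for i in f) / len(f) for f in folds]
print(ratios)  # each fold keeps the 1/3 positive ratio
```

This preservation of class ratios is why stratified folds give less biased accuracy estimates on imbalanced data.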
Precision-recall curve (40 points)
We have plotted ROC (receiver operator characteristics) curve for the breast cancer dataset.
Plot the precision-recall curve for the same data set using the same experimental setup.
What similarities and differences you can find between ROC and precision-recall curves?
You can find more information about precision-recall curve online such as: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_curve.html
Answer | from sklearn.metrics import roc_curve, precision_recall_curve, auc
from numpy import interp  # `scipy.interp` was removed in newer SciPy; np.interp is the drop-in replacement
from sklearn.linear_model import LogisticRegression
pipe_lr = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(penalty='l2',
random_state=0,
C=100.0))])
# intentionally use only 2 features to make the task harder and the curves more interesting
X_train2 = X_train[:, [4, 14]]
X_test2 = X_test[:, [4, 14]]
if Version(sklearn_version) < '0.18':
cv = StratifiedKFold(y_train, n_folds=3, random_state=1)
else:
cv = list(StratifiedKFold(n_splits=3, random_state=1).split(X_train, y_train))
fig = plt.figure(figsize=(7, 5))
# **************************************
# ROC - 2 Features
# **************************************
print ("ROC Curve - 2 Features")
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train, test) in enumerate(cv):
probas = pipe_lr.fit(X_train2[train],
y_train[train]).predict_proba(X_train2[test])
fpr, tpr, thresholds = roc_curve(y_train[test],
probas[:, 1],
pos_label=1)
mean_tpr += interp(mean_fpr, fpr, tpr)
#mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr,
tpr,
lw=1,
label='ROC fold %d (area = %0.2f)'
% (i+1, roc_auc))
mean_tpr /= len(cv)
mean_tpr[0] = 0.0
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',
label='mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0, 1],
[0, 1],
linestyle='--',
color=(0.6, 0.6, 0.6),
label='random guessing')
plt.plot([0, 0, 1],
[0, 1, 1],
lw=2,
linestyle=':',
color='black',
label='perfect performance')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.title('Receiver Operator Characteristic')
plt.legend(loc='lower left', bbox_to_anchor=(1, 0))
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
# **************************************
# ROC - All Features
# **************************************
print ("ROC Curve - All Features")
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train, test) in enumerate(cv):
probas = pipe_lr.fit(X_train[train],
y_train[train]).predict_proba(X_train[test])
fpr, tpr, thresholds = roc_curve(y_train[test],
probas[:, 1],
pos_label=1)
mean_tpr += interp(mean_fpr, fpr, tpr)
#mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr,
tpr,
lw=1,
label='ROC fold %d (area = %0.2f)'
% (i+1, roc_auc))
mean_tpr /= len(cv)
mean_tpr[0] = 0.0
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',
label='mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0, 1],
[0, 1],
linestyle='--',
color=(0.6, 0.6, 0.6),
label='random guessing')
plt.plot([0, 0, 1],
[0, 1, 1],
lw=2,
linestyle=':',
color='black',
label='perfect performance')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.title('Receiver Operator Characteristic')
plt.legend(loc='lower left', bbox_to_anchor=(1, 0))
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
# **************************************
# Precision-Recall - 2 Features
# **************************************
print ("Precision Recall Curve - 2 Features")
mean_pre = 0.0
mean_rec = np.linspace(0, 1, 100)
all_pre = []
for i, (train, test) in enumerate(cv):
probas = pipe_lr.fit(X_train2[train],
y_train[train]).predict_proba(X_train2[test])
## note that the return order is precision, recall
pre, rec, thresholds = precision_recall_curve(y_train[test],
probas[:, 1],
pos_label=1)
## flip the recall and precison array
mean_pre += interp(mean_rec, np.flipud(rec), np.flipud(pre))
pr_auc = auc(rec, pre)
plt.plot(rec,
pre,
lw=1,
label='PR fold %d (area = %0.2f)'
% (i+1, pr_auc))
mean_pre /= len(cv)
mean_auc = auc(mean_rec, mean_pre)
plt.plot(mean_rec, mean_pre, 'k--',
label='mean PR (area = %0.2f)' % mean_auc, lw=2)
# random classifier: a line of Positive/(Positive+Negative)
# here for simplicity, we set it to be Positive/(Positive+Negative) of fold 3
plt.plot(mean_rec,
[pre[0] for i in range(len(mean_rec))],
linestyle='--', label='random guessing of fold 3', color=(0.6, 0.6, 0.6))
# perfect performance: y = 1 for x in [0, 1]; when x = 1; down to the random guessing line
plt.plot([0, 1, 1],
[1, 1, pre[0]],
lw=2,
linestyle=':',
color='black',
label='perfect performance')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('recall')
plt.ylabel('precision')
plt.title('Precision-Recall')
plt.legend(loc='lower left', bbox_to_anchor=(1, 0))
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
# **************************************
# Precision-Recall - All Features
# **************************************
print ("Precision Recall Curve - All Features")
mean_pre = 0.0
mean_rec = np.linspace(0, 1, 100)
all_pre = []
for i, (train, test) in enumerate(cv):
probas = pipe_lr.fit(X_train[train],
y_train[train]).predict_proba(X_train[test])
## note that the return order is precision, recall
pre, rec, thresholds = precision_recall_curve(y_train[test],
probas[:, 1],
pos_label=1)
## flip the recall and precison array
mean_pre += interp(mean_rec, np.flipud(rec), np.flipud(pre))
pr_auc = auc(rec, pre)
plt.plot(rec,
pre,
lw=1,
label='PR fold %d (area = %0.2f)'
% (i+1, pr_auc))
mean_pre /= len(cv)
mean_auc = auc(mean_rec, mean_pre)
plt.plot(mean_rec, mean_pre, 'k--',
label='mean PR (area = %0.2f)' % mean_auc, lw=2)
# random classifier: a line of Positive/(Positive+Negative)
# here for simplicity, we set it to be Positive/(Positive+Negative) of fold 3
plt.plot(mean_rec,
[pre[0] for i in range(len(mean_rec))],
linestyle='--', label='random guessing of fold 3', color=(0.6, 0.6, 0.6))
# perfect performance: y = 1 for x in [0, 1]; when x = 1; down to the random guessing line
plt.plot([0, 1, 1],
[1, 1, pre[0]],
lw=2,
linestyle=':',
color='black',
label='perfect performance')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('recall')
plt.ylabel('precision')
plt.title('Precision-Recall')
plt.legend(loc='lower left', bbox_to_anchor=(1, 0))
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show() | 5 Training and Ensemble.ipynb | irsisyphus/machine-learning | apache-2.0 |
Explanation
Definition
REC(recall) = TPR(true positive rate) = $\frac{TP}{TP+FN}$<br>
PRE(precision) = $\frac{TP}{TP+FP}$<br>
FPR(false positive rate) = $\frac{FP}{FP+TN}$<br>
TP: Actual Class $A$, test result $A$<br>
FP: Actual Class $B$, test result $A$<br>
TN: Actual Class $B$, test result $B$<br>
FN: Actual Class $A$, test result $B$<br>
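As a quick sanity check on these definitions, the three quantities can be computed directly from confusion-matrix counts (the counts below are made up for illustration):

```python
def rates(tp, fp, tn, fn):
    rec = tp / (tp + fn)   # recall / true positive rate
    pre = tp / (tp + fp)   # precision
    fpr = fp / (fp + tn)   # false positive rate
    return rec, pre, fpr

rec, pre, fpr = rates(tp=40, fp=10, tn=45, fn=5)
print(round(rec, 3), round(pre, 3), round(fpr, 3))  # 0.889 0.8 0.182
```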
Similarities
When the number of features increases, in the ROC curve the true positive rate is close to 1 regardless of the false positive rate (when it is not 0); in the Precision-Recall curve, precision is close to 1 regardless of recall (when it is not 1).<br>
The training results are better than random guessing in this example.<br>
Differences
For monotonicity
ROC curve: As the false positive rate increases, the true positive rate generally increases, and the effect is larger when the number of features is small.<br>
This is because when we tolerate classifying some class $B$ samples as class $A$, we also increase the probability of correctly classifying class $A$ samples as class $A$, which increases the true positive rate.<br>
Actually, the curve is almost concave. The image below illustrates the reason.
<img src="https://github.com/irsisyphus/Pictures/raw/master/ML-Exercises/roc.png"><br>
The picture is loaded from the web, so an internet connection is needed to view it. Reference: https://upload.wikimedia.org/wikipedia/commons/5/5c/ROCfig.PNG<br><br>
Precision-Recall curve: As recall increases, the precision generally decreases, especially when the number of features is small.<br>
This is because when we try to increase recall (reduce false negatives), we avoid classifying samples with actual class $A$ as class $B$; that is, we classify samples similar to class $A$ as class $A$. However, this also increases the probability of classifying samples with actual class $B$ (but tested to be similar to class $A$) as class $A$, which is FP.<br>
Notice that when FP increases, precision does not necessarily decrease, since TP also increases. Precision drops only when the ratio $\frac{TP}{TP+FP}$ drops, which is often the case when the number of features is small. The image below illustrates the reason.
<img width=50% src="https://github.com/irsisyphus/Pictures/raw/master/ML-Exercises/pre-rec.png"><br>
The picture is loaded from the web, so an internet connection is needed to view it. Reference: http://numerical.recipes/CS395T/lectures2008/17-ROCPrecisionRecall.pdf
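The ROC monotonicity claim above can also be checked directly by sweeping a decision threshold over a toy set of classifier scores (hypothetical values): lowering the threshold can only add positive predictions, so both TPR and FPR are nondecreasing along the ROC curve.

```python
y_true = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.65, 0.6, 0.5, 0.4, 0.35, 0.2, 0.1]

P = sum(y_true)
N = len(y_true) - P
points = []
for t in sorted(set(scores), reverse=True):   # threshold from high to low
    tp = sum(1 for s, y in zip(scores, y_true) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, y_true) if s >= t and y == 0)
    points.append((fp / N, tp / P))

fprs, tprs = zip(*points)
print(all(b >= a for a, b in zip(tprs, tprs[1:])))  # True: TPR never decreases
print(all(b >= a for a, b in zip(fprs, fprs[1:])))  # True: FPR never decreases
```

Note that this guarantees monotonicity of the ROC curve but not concavity; precision along the PR curve has no such guarantee at all.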
For random guessing
ROC curve: true positive rate = false positive rate. This is because the proportion of TP in (TP+FN) equals the proportion of FP in (FP+TN): for random guessing, the declared result has no dependence on the actual class label.
Precision-Recall curve: random guessing is a horizontal line at y = P/(P+N) for a given split of P and N, which varies from case to case. (In this example, we choose P/(P+N) of fold 3 for simplicity.)
For perfect performance
ROC curve: The perfect performance is (0, 0) -> (0, 1) -> (1, 1). We aim to have result in the "top-left" corner to obtain low false positive rate and high true positive rate.
Precision-Recall curve: The perfect performance is (0, 1) -> (1, 1) -> (1, P/(P+N)). We aim to have result in the "top-right" corner to obtain high recall and high precision.
Ensemble learning
We have used the following code to compute and plot the ensemble error from individual classifiers for binary classification: | from scipy.special import comb  # scipy.misc.comb is deprecated/removed in newer SciPy
import math
import numpy as np
def ensemble_error(num_classifier, base_error):
k_start = math.ceil(num_classifier/2)
probs = [comb(num_classifier, k)*(base_error**k)*((1-base_error)**(num_classifier-k)) for k in range(k_start, num_classifier+1)]
return sum(probs)
import matplotlib.pyplot as plt
%matplotlib inline
def plot_base_error(ensemble_error_func, num_classifier, error_delta):
error_range = np.arange(0.0, 1+error_delta, error_delta)
ensemble_errors = [ensemble_error_func(num_classifier=num_classifier, base_error=error) for error in error_range]
plt.plot(error_range, ensemble_errors,
label = 'ensemble error',
linewidth=2)
plt.plot(error_range, error_range,
label = 'base error',
linestyle = '--',
linewidth=2)
plt.xlabel('base error')
plt.ylabel('base/ensemble error')
plt.legend(loc='best')
plt.grid()
plt.show()
num_classifier = 11
error_delta = 0.01
base_error = 0.25
print(ensemble_error(num_classifier=num_classifier, base_error=base_error))
plot_base_error(ensemble_error, num_classifier=num_classifier, error_delta=error_delta) | 5 Training and Ensemble.ipynb | irsisyphus/machine-learning | apache-2.0 |
Number of classifiers (40 points)
The function plot_base_error() above plots the ensemble error as a function of the base error given a fixed number of classifiers.
Write another function to plot ensembe error versus different number of classifiers with a given base error.
Does the ensemble error always go down with more classifiers?
Why or why not?
Can you improve the method ensemble_error() to produce a more reasonable plot?
Answer
The code for plotting is below: | def plot_num_classifier(ensemble_error_func, max_num_classifier, base_error):
num_classifiers = range(1, max_num_classifier+1)
ensemble_errors = [ensemble_error_func(num_classifier = num_classifier, base_error=base_error) for num_classifier in num_classifiers]
plt.plot(num_classifiers, ensemble_errors,
label = 'ensemble error',
linewidth = 2)
plt.plot(range(max_num_classifier), [base_error]*max_num_classifier,
label = 'base error',
linestyle = '--',
linewidth=2)
plt.xlabel('num classifiers')
plt.ylabel('ensemble error')
plt.xlim([1, max_num_classifier])
plt.ylim([0, 1])
plt.title('base error %.2f' % base_error)
plt.legend(loc='best')
plt.grid()
plt.show()
max_num_classifiers = 20
base_error = 0.25
plot_num_classifier(ensemble_error,
max_num_classifier=max_num_classifiers,
base_error=base_error) | 5 Training and Ensemble.ipynb | irsisyphus/machine-learning | apache-2.0 |
Explanation
Observations:
The ensemble error DOES NOT ALWAYS go down with more classifiers.<br>
Overall, the ensemble error declines as the number of classifiers increases. However, when the number of classifiers $N = 2k$, the error is much higher than for $N = 2k-1$ and $N = 2k+1$.
Reason:
This is because when a tie occurs, i.e. $K$ sub-classifiers are wrong and $K$ sub-classifiers are correct, it is counted as a "wrong prediction" by the original function.
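The size of this tie penalty can be checked numerically with a small sketch of the same majority-vote model (`math.comb` is the standard-library binomial coefficient, Python 3.8+):

```python
import math

def majority_error(n, p):
    # the ensemble errs when at least ceil(n/2) of n classifiers err;
    # for even n this charges the whole k = n/2 tie fully as an error
    k_start = math.ceil(n / 2)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_start, n + 1))

for n in (9, 10, 11):
    print(n, round(majority_error(n, 0.25), 4))
# 9 0.0489
# 10 0.0781  <- worse than both odd neighbours: the tie term
#              C(10, 5) p^5 (1-p)^5 is counted entirely as error
# 11 0.0343
```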
Describe a better algorithm for computing the ensemble error. | def better_ensemble_error(num_classifier, base_error):
k_start = math.ceil(num_classifier/2)
probs = [comb(num_classifier, k)*(base_error**k)*((1-base_error)**(num_classifier-k)) for k in range(k_start, num_classifier+1)]
if num_classifier % 2 == 0:
probs.append(-0.5*comb(num_classifier, k_start)*(base_error**k_start)*((1-base_error)**(num_classifier-k_start)))
return sum(probs)
plot_num_classifier(better_ensemble_error,
max_num_classifier=max_num_classifiers,
base_error=base_error) | 5 Training and Ensemble.ipynb | irsisyphus/machine-learning | apache-2.0 |
The frame represented by video 98, frame 1 is shown here:
Feature selection for training the model
The objective of feature selection when training a model is to choose the most relevant variables while keeping the model as simple as possible, thus reducing training time. We can use the raw features already provided or derive our own and add columns to the pandas dataframe asl.df for selection. As an example, in the next cell a feature named 'grnd-ry' is added. This feature is the difference between the right-hand y value and the nose y value, which serves as the "ground" right y value. | asl.df['grnd-ry'] = asl.df['right-y'] - asl.df['nose-y']
asl.df.head() # the new feature 'grnd-ry' is now in the frames dictionary | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Try it! | from asl_utils import test_features_tryit
# TODO add df columns for 'grnd-rx', 'grnd-ly', 'grnd-lx' representing differences between hand and nose locations
asl.df['grnd-rx'] = asl.df['right-x'] - asl.df['nose-x']
asl.df['grnd-ly'] = asl.df['left-y'] - asl.df['nose-y']
asl.df['grnd-lx'] = asl.df['left-x'] - asl.df['nose-x']
# test the code
test_features_tryit(asl)
# collect the features into a list
features_ground = ['grnd-rx','grnd-ry','grnd-lx','grnd-ly']
#show a single set of features for a given (video, frame) tuple
[asl.df.loc[98, 1][v] for v in features_ground]  # .loc replaces the deprecated .ix for the (video, frame) MultiIndex | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0
Build the training set
Now that we have a feature list defined, we can pass that list to the build_training method to collect the features for all the words in the training set. Each word in the training set has multiple examples from various videos. Below we can see the unique words that have been loaded into the training set: | training = asl.build_training(features_ground)
print("Training words: {}".format(training.words)) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
The training data in training is an object of class WordsData defined in the asl_data module. in addition to the words list, data can be accessed with the get_all_sequences, get_all_Xlengths, get_word_sequences, and get_word_Xlengths methods. We need the get_word_Xlengths method to train multiple sequences with the hmmlearn library. In the following example, notice that there are two lists; the first is a concatenation of all the sequences(the X portion) and the second is a list of the sequence lengths(the Lengths portion). | training.get_word_Xlengths('CHOCOLATE') | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
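The reason for the two-part return value is that hmmlearn trains on all sequences stacked into a single array plus a list of per-sequence lengths. A minimal sketch of that layout (with made-up 2-D feature rows; the `GaussianHMM` call in the comment is illustrative):

```python
# two hypothetical observation sequences; each row is one frame of features
seq1 = [[9, 113], [10, 114], [11, 116]]   # 3 frames
seq2 = [[8, 110], [9, 112]]               # 2 frames

X = seq1 + seq2                 # concatenated into one 5-row "matrix"
lengths = [len(seq1), len(seq2)]

print(len(X), lengths)          # 5 [3, 2]
# hmmlearn then recovers the sequence boundaries from `lengths`, e.g.:
#   GaussianHMM(n_components=3).fit(np.array(X), lengths)
```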
More feature sets
So far we have a simple feature set that is enough to get started modeling. However, we might get better results if we manipulate the raw values a bit more, so we will go ahead and set up some other options now for experimentation later. For example, we could normalize each speaker's range of motion with grouped statistics using Pandas stats functions and pandas groupby. Below is an example for finding the means of all speaker subgroups. | df_means = asl.df.groupby('speaker').mean()
df_means | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
To select a mean that matches by speaker, use the pandas map method: | asl.df['left-x-mean']= asl.df['speaker'].map(df_means['left-x'])
asl.df.head() | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Try it! | from asl_utils import test_std_tryit
# TODO Create a dataframe named `df_std` with standard deviations grouped by speaker
df_std = asl.df.groupby('speaker').std()
# test the code
test_std_tryit(df_std) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
<a id='part1_submission'></a>
Features Implementation Submission
Implement four feature sets and answer the question that follows.
- normalized Cartesian coordinates
- use mean and standard deviation statistics and the standard score equation to account for speakers with different heights and arm length
polar coordinates
calculate polar coordinates with Cartesian to polar equations
use the np.arctan2 function and swap the x and y axes to move the $0$ to $2\pi$ discontinuity to 12 o'clock instead of 3 o'clock; in other words, the normal break in radians value from $0$ to $2\pi$ occurs directly to the left of the speaker's nose, which may be in the signing area and interfere with results. By swapping the x and y axes, that discontinuity moves to directly above the speaker's head, an area not generally used in signing.
delta difference
as described in Thad's lecture, use the difference in values between one frame and the next frames as features
pandas diff method and fillna method will be helpful for this one
custom features
These are your own design; combine techniques used above or come up with something else entirely. We look forward to seeing what you come up with!
Some ideas to get you started:
normalize using a feature scaling equation
normalize the polar coordinates
adding additional deltas | # TODO add features for normalized by speaker values of left, right, x, y
# Name these 'norm-rx', 'norm-ry', 'norm-lx', and 'norm-ly'
# using Z-score scaling (X-Xmean)/Xstd
columns = ['right-x','right-y','left-x','left-y']
features_norm = ['norm-rx','norm-ry', 'norm-lx','norm-ly']
for i,f in enumerate(features_norm):
means = asl.df['speaker'].map(df_means[columns[i]])
standards = asl.df['speaker'].map(df_std[columns[i]])
asl.df[f]=(asl.df[columns[i]] - means) / standards
# TODO add features for polar coordinate values where the nose is the origin
# Name these 'polar-rr', 'polar-rtheta', 'polar-lr', and 'polar-ltheta'
# Note that 'polar-rr' and 'polar-rtheta' refer to the radius and angle
features_polar = ['polar-rr', 'polar-rtheta', 'polar-lr', 'polar-ltheta']
columns = [['grnd-rx','grnd-ry'],['grnd-lx','grnd-ly']]
def radius(x, y):
return np.sqrt(x ** 2 + y ** 2)
def theta(x, y):
return np.arctan2(x, y)
for i, f in enumerate(features_polar):
if i % 2 == 0:
asl.df[f] = radius(asl.df[features_ground[i]],
asl.df[features_ground[i + 1]])
else:
asl.df[f] = theta(asl.df[features_ground[i - 1]],
asl.df[features_ground[i]])
# TODO add features for left, right, x, y differences by one time step, i.e. the "delta" values discussed in the lecture
# Name these 'delta-rx', 'delta-ry', 'delta-lx', and 'delta-ly'
features_delta = ['delta-rx', 'delta-ry', 'delta-lx', 'delta-ly']
columns = ['right-x','right-y','left-x','left-y']
for i,f in enumerate(features_delta):
asl.df[f] = asl.df[columns[i]].diff().fillna(0.0)
# TODO add features of your own design, which may be a combination of the above or something else
# Name these whatever you would like
# TODO define a list named 'features_custom' for building the training set
custom_features = ['norm-delta-rx', 'norm-delta-ry', 'norm-delta-lx', 'norm-delta-ly']
columns = ['polar-rr', 'polar-rtheta', 'polar-lr', 'polar-ltheta']
df_new_means = asl.df.groupby('speaker').mean()
df_new_std = asl.df.groupby('speaker').std()
for i,f in enumerate(custom_features):
means = asl.df['speaker'].map(df_new_means[columns[i]])
standards = asl.df['speaker'].map(df_new_std[columns[i]])
asl.df[f]=((asl.df[columns[i]] - means) / standards)+ asl.df[features_delta[i]] | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Question 1: What custom features did you choose for the features_custom set and why?
Answer 1: The custom features chosen are the normalized polar coordinates added to the delta values. Polar coordinates make the nose the origin of the frame, so all frame points are expressed relative to it. Normalization is used because Gaussian HMMs fit standardized, normally distributed data well. The delta values capture the change in position over time, which helps the model estimate transition probabilities and the spread of each Gaussian.
<a id='part1_test'></a>
Features Unit Testing
Run the following unit tests as a sanity check on the defined "ground", "norm", "polar", and "delta"
feature sets. The test simply looks for some valid values but is not exhaustive. However, the project should not be submitted if these tests don't pass. | import unittest
# import numpy as np
class TestFeatures(unittest.TestCase):
def test_features_ground(self):
sample = (asl.df.ix[98, 1][features_ground]).tolist()
self.assertEqual(sample, [9, 113, -12, 119])
def test_features_norm(self):
sample = (asl.df.ix[98, 1][features_norm]).tolist()
np.testing.assert_almost_equal(sample, [ 1.153, 1.663, -0.891, 0.742], 3)
def test_features_polar(self):
sample = (asl.df.ix[98,1][features_polar]).tolist()
np.testing.assert_almost_equal(sample, [113.3578, 0.0794, 119.603, -0.1005], 3)
def test_features_delta(self):
sample = (asl.df.ix[98, 0][features_delta]).tolist()
self.assertEqual(sample, [0, 0, 0, 0])
sample = (asl.df.ix[98, 18][features_delta]).tolist()
self.assertTrue(sample in [[-16, -5, -2, 4], [-14, -9, 0, 0]], "Sample value found was {}".format(sample))
suite = unittest.TestLoader().loadTestsFromModule(TestFeatures())
unittest.TextTestRunner().run(suite) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
<a id='part2_tutorial'></a>
PART 2: Model Selection
Model Selection Tutorial
The objective of Model Selection is to tune the number of states for each word HMM prior to testing on unseen data. In this section you will explore three methods:
- Log likelihood using cross-validation folds (CV)
- Bayesian Information Criterion (BIC)
- Discriminative Information Criterion (DIC)
Train a single word
Now that we have built a training set with sequence data, we can "train" models for each word. As a simple starting example, we train a single word using Gaussian hidden Markov models (HMM). By using the fit method during training, the Baum-Welch Expectation-Maximization (EM) algorithm is invoked iteratively to find the best estimate for the model for the number of hidden states specified from a group of sample sequences. For this example, we assume the correct number of hidden states is 3, but that is just a guess. How do we know what the "best" number of states for training is? We will need to find some model selection technique to choose the best parameter. | import warnings
from hmmlearn.hmm import GaussianHMM
def train_a_word(word, num_hidden_states, features):
warnings.filterwarnings("ignore", category=DeprecationWarning)
training = asl.build_training(features)
X, lengths = training.get_word_Xlengths(word)
model = GaussianHMM(n_components=num_hidden_states, n_iter=1000).fit(X, lengths)
logL = model.score(X, lengths)
return model, logL
demoword = 'BOOK'
model, logL = train_a_word(demoword, 3, features_ground)
print("Number of states trained in model for {} is {}".format(demoword, model.n_components))
print("logL = {}".format(logL)) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
The HMM model has been trained and information can be pulled from the model, including means and variances for each feature and hidden state. The log likelihood for any individual sample or group of samples can also be calculated with the score method. | def show_model_stats(word, model):
print("Number of states trained in model for {} is {}".format(word, model.n_components))
variance=np.array([np.diag(model.covars_[i]) for i in range(model.n_components)])
for i in range(model.n_components): # for each hidden state
print("hidden state #{}".format(i))
print("mean = ", model.means_[i])
print("variance = ", variance[i])
print()
show_model_stats(demoword, model) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Try it!
Experiment by changing the feature set, word, and/or num_hidden_states values in the next cell to see changes in values. | my_testword = 'CHOCOLATE'
model, logL = train_a_word(my_testword, 3, features_ground) # Experiment here with different parameters
show_model_stats(my_testword, model)
print("logL = {}".format(logL)) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Visualize the hidden states
We can plot the means and variances for each state and feature. Try varying the number of states trained for the HMM model and examine the variances. Are there some models that are "better" than others? How can you tell? We would like to hear what you think in the classroom online. | %matplotlib inline
import math
from matplotlib import (cm, pyplot as plt, mlab)
def visualize(word, model):
""" visualize the input model for a particular word """
variance=np.array([np.diag(model.covars_[i]) for i in range(model.n_components)])
figures = []
for parm_idx in range(len(model.means_[0])):
xmin = int(min(model.means_[:,parm_idx]) - max(variance[:,parm_idx]))
xmax = int(max(model.means_[:,parm_idx]) + max(variance[:,parm_idx]))
fig, axs = plt.subplots(model.n_components, sharex=True, sharey=False)
colours = cm.rainbow(np.linspace(0, 1, model.n_components))
for i, (ax, colour) in enumerate(zip(axs, colours)):
x = np.linspace(xmin, xmax, 100)
mu = model.means_[i,parm_idx]
sigma = math.sqrt(np.diag(model.covars_[i])[parm_idx])
ax.plot(x, mlab.normpdf(x, mu, sigma), c=colour)
ax.set_title("{} feature {} hidden state #{}".format(word, parm_idx, i))
ax.grid(True)
figures.append(plt)
for p in figures:
p.show()
visualize(my_testword, model) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
ModelSelector class
Review the ModelSelector class from the codebase found in the my_model_selectors.py module. It is designed to be a strategy pattern for choosing different model selectors. For the project submission in this section, subclass ModelSelector to implement the following model selectors. In other words, you will write your own classes/functions in the my_model_selectors.py module and run them from this notebook:
SelectorCV: Log likelihood with CV
SelectorBIC: BIC
SelectorDIC: DIC
You will train each word in the training set with a range of values for the number of hidden states, and then score these alternatives with the model selector, choosing the "best" according to each strategy. The simple case of training with a constant value for n_components can be called using the provided SelectorConstant subclass as follows: | from my_model_selectors import SelectorConstant
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
word = 'VEGETABLE' # Experiment here with different words
model = SelectorConstant(training.get_all_sequences(), training.get_all_Xlengths(), word, n_constant=3).select()
print("Number of states trained in model for {} is {}".format(word, model.n_components)) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Cross-validation folds
If we simply score the model with the log likelihood calculated from the feature sequences it has been trained on, we should expect that more complex models will have higher likelihoods. However, that doesn't tell us which would have a better likelihood score on unseen data. The model will likely be overfit as complexity is added. To estimate which topology model is better using only the training data, we can compare scores using cross-validation. One technique for cross-validation is to break the training set into "folds" and rotate which fold is left out of training. The "left out" fold is then scored. This gives us a proxy method for finding the best model to use on "unseen data". In the following example, a set of word sequences is broken into three folds using the scikit-learn KFold class. When you implement SelectorCV, you will use this technique. | from sklearn.model_selection import KFold
training = asl.build_training(features_ground) # Experiment here with different feature sets
word = 'VEGETABLE' # Experiment here with different words
word_sequences = training.get_word_sequences(word)
split_method = KFold()
for cv_train_idx, cv_test_idx in split_method.split(word_sequences):
print("Train fold indices:{} Test fold indices:{}".format(cv_train_idx, cv_test_idx)) # view indices of the folds | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Tip: In order to run hmmlearn training using the X,lengths tuples on the new folds, subsets must be combined based on the indices given for the folds. A helper utility has been provided in the asl_utils module named combine_sequences for this purpose.
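As a rough sketch of what that combining step does (the real `combine_sequences` in `asl_utils` may differ in signature and details):

```python
import numpy as np

def combine_sequences_sketch(fold_indices, sequences):
    """Concatenate the sequences selected by fold_indices into the flat
    (X, lengths) form that hmmlearn's fit and score methods expect."""
    selected = [np.asarray(sequences[i]) for i in fold_indices]
    lengths = [len(seq) for seq in selected]
    X = np.concatenate(selected)
    return X, lengths

# toy data: two "sequences" of 2-feature frames
seqs = [[[1.0, 2.0], [3.0, 4.0]],
        [[5.0, 6.0]]]
X, lengths = combine_sequences_sketch([0, 1], seqs)
print(X.shape, lengths)  # (3, 2) [2, 1]
```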
Scoring models with other criteria
Scoring model topologies with BIC balances fit and complexity within the training set for each word. In the BIC equation, a penalty term penalizes complexity to avoid overfitting, so that it is not necessary to also use cross-validation in the selection process. There are a number of references on the internet for this criterion. These slides include a formula you may find helpful for your implementation.
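One common form of the score (a sketch; check the linked slides for the exact convention expected here) is BIC = -2 log L + p log N, where L is the model likelihood, p the number of free parameters, and N the number of data points; lower is better. For a GaussianHMM with diagonal covariances, a frequently used parameter count is n² + 2·n·d − 1 for n states and d features, though conventions vary.

```python
import math

def bic_score(log_likelihood, n_params, n_points):
    """BIC = -2 * logL + p * log(N); lower values indicate a better
    trade-off between goodness of fit and model complexity."""
    return -2.0 * log_likelihood + n_params * math.log(n_points)

# a more complex model must gain enough likelihood to justify its extra parameters
print(bic_score(-1200.0, 20, 500))  # ~2524.3
print(bic_score(-1190.0, 40, 500))  # ~2628.6
```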
The advantages of scoring model topologies with DIC over BIC are presented by Alain Biem in this reference (also found here). DIC scores the discriminant ability of a training set for one word against competing words. Instead of a penalty term for complexity, it provides a penalty if model likelihoods for non-matching words are too similar to model likelihoods for the correct word in the word set.
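A sketch of the idea behind the discriminative criterion (following the Biem reference; higher is better): DIC = log L(word) − average log L(all competing words), so a model is rewarded for fitting its own word much better than the others.

```python
def dic_score(log_l_word, log_l_other_words):
    """DIC = logL(this word) - mean(logL(competing words)); higher is better."""
    return log_l_word - sum(log_l_other_words) / len(log_l_other_words)

# a discriminative model scores its own word far above the competing words
print(dic_score(-100.0, [-300.0, -280.0, -320.0]))  # 200.0
```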
<a id='part2_submission'></a>
Model Selection Implementation Submission
Implement SelectorCV, SelectorBIC, and SelectorDIC classes in the my_model_selectors.py module. Run the selectors on the following five words. Then answer the questions about your results.
Tip: The hmmlearn library may not be able to train or score all models. Implement try/except constructs as necessary to eliminate non-viable models from consideration. | words_to_train = ['FISH', 'BOOK', 'VEGETABLE', 'FUTURE', 'JOHN']
import timeit
# TODO: Implement SelectorCV in my_model_selector.py
from my_model_selectors import SelectorCV
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorCV(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word))
# TODO: Implement SelectorBIC in module my_model_selectors.py
from my_model_selectors import SelectorBIC
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorBIC(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word))
# TODO: Implement SelectorDIC in module my_model_selectors.py
from my_model_selectors import SelectorDIC
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorDIC(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word)) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Question 2: Compare and contrast the possible advantages and disadvantages of the various model selectors implemented.
Answer 2:
SelectorBIC (lowest Bayesian Information Criterion (BIC) score)
Advantage: it penalizes the complexity of the model, where complexity refers to the number of parameters in the model.
Disadvantage: the approximation behind BIC is only valid for a sample size n much larger than the number of parameters k in the model, and BIC cannot handle complex collections of models, as in the variable-selection (or feature-selection) problem in high dimensions.
SelectorDIC (Discriminative Information Criterion)
Advantage: DIC is easily calculated from the samples generated by a Markov chain Monte Carlo simulation, whereas BIC requires calculating the maximized likelihood.
Disadvantage: the DIC equation is derived under the assumption that the specified parametric family of probability distributions that generates future observations encompasses the true model, and this assumption does not always hold. Also, the observed data are used both to construct the posterior distribution and to evaluate the estimated models, so DIC tends to select over-fitted models.
SelectorCV (average log likelihood of cross-validation folds)
Advantage: high accuracy given a large amount of training data and the knowledge that the unseen data does not deviate much from the seen data. An advantage of this method over repeated random sub-sampling is that all observations are used for both training and validation, and each observation is used for validation exactly once.
Disadvantage: when the training data set is small, the model will overfit. If the training data set is small and the unseen data deviates significantly from the training data, accuracy will be low. Calculating the folds also introduces increased time and space complexity.
<a id='part2_test'></a>
Model Selector Unit Testing
Run the following unit tests as a sanity check on the implemented model selectors. The test simply looks for valid interfaces but is not exhaustive. However, the project should not be submitted if these tests don't pass. | from asl_test_model_selectors import TestSelectors
suite = unittest.TestLoader().loadTestsFromModule(TestSelectors())
unittest.TextTestRunner().run(suite) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
<a id='part3_tutorial'></a>
PART 3: Recognizer
The objective of this section is to "put it all together". Using the four feature sets created and the three model selectors, you will experiment with the models and present your results. Instead of training only five specific words as in the previous section, train the entire set with a feature set and model selector strategy.
Recognizer Tutorial
Train the full training set
The following example trains the entire set with the example features_ground feature set and the SelectorConstant model selector. Use this pattern for your experimentation and final submission cells. | # autoreload for automatically reloading changes made in my_model_selectors and my_recognizer
%load_ext autoreload
%autoreload 2
from my_model_selectors import SelectorConstant
def train_all_words(features, model_selector):
training = asl.build_training(features) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
model_dict = {}
for word in training.words:
model = model_selector(sequences, Xlengths, word,
n_constant=3).select()
model_dict[word]=model
return model_dict
models = train_all_words(features_ground, SelectorConstant)
print("Number of word models returned = {}".format(len(models))) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Load the test set
The build_test method in ASLdb is similar to the build_training method already presented, but there are a few differences:
- the object is type SinglesData
- the internal dictionary keys are the index of the test word rather than the word itself
- the getter methods are get_all_sequences, get_all_Xlengths, get_item_sequences and get_item_Xlengths | test_set = asl.build_test(features_ground)
print("Number of test set items: {}".format(test_set.num_items))
print("Number of test set sentences: {}".format(len(test_set.sentences_index))) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
<a id='part3_submission'></a>
Recognizer Implementation Submission
For the final project submission, students must implement a recognizer following guidance in the my_recognizer.py module. Experiment with the four feature sets and the three model selection methods (that's 12 possible combinations). You can add and remove cells for experimentation or run the recognizers locally in some other way during your experiments, but retain the results for your discussion. For submission, you will provide code cells of only three interesting combinations for your discussion (see questions below). At least one of these should produce a word error rate of less than 60%, i.e. WER < 0.60.
Tip: The hmmlearn library may not be able to train or score all models. Implement try/except constructs as necessary to eliminate non-viable models from consideration. | # TODO implement the recognize method in my_recognizer
from my_recognizer import recognize
from asl_utils import show_errors
# TODO Choose a feature set and model selector
features = custom_features # change as needed
model_selector = SelectorCV # change as needed
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set)
# TODO Choose a feature set and model selector
features = custom_features # change as needed
model_selector = SelectorBIC # change as needed
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set)
# TODO Choose a feature set and model selector
features = custom_features # change as needed
model_selector = SelectorDIC # change as needed
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Question 3: Summarize the error results from three combinations of features and model selectors. What was the "best" combination and why? What additional information might we use to improve our WER? For more insight on improving WER, take a look at the introduction to Part 4.
Answer 3: Using custom_features with different model selectors:
SelectorCV WER = 0.651685393258427
SelectorBIC WER = 0.5898876404494382
SelectorDIC WER = 0.6179775280898876
The custom features are expected to provide an appropriate dataset, with all points expressed relative to the nose, and the normalization makes the data a better fit for Gaussian models. Across the model selectors, BIC gives the "best" combination. This is because BIC strictly penalizes the number of components and therefore tends to select a simpler, better-generalizing model.
There are multiple techniques we can use to improve WER, such as assigning higher probability to real and frequently observed sentences, using the Shannon visualization method, or using n-gram models with generalization to handle zeros. In addition, we can use less context to avoid overfitting and interpolate probabilities to compensate.
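For reference, a sketch of how WER is computed — (S + D + I) / N for substitutions, deletions, and insertions against N reference words (the project's show_errors utility may count errors slightly differently):

```python
def word_error_rate(substitutions, deletions, insertions, n_ref_words):
    """WER = (S + D + I) / N: word-level edit distance between the
    recognized and reference sequences, normalized by reference length."""
    return (substitutions + deletions + insertions) / n_ref_words

# e.g. 105 total errors over 178 test words gives the SelectorBIC result above
print(word_error_rate(90, 10, 5, 178))  # ~0.5899
```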
<a id='part3_test'></a>
Recognizer Unit Tests
Run the following unit tests as a sanity check on the defined recognizer. The test simply looks for some valid values but is not exhaustive. However, the project should not be submitted if these tests don't pass. | from asl_test_recognizer import TestRecognize
suite = unittest.TestLoader().loadTestsFromModule(TestRecognize())
unittest.TextTestRunner().run(suite) | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
<a id='part4_info'></a>
PART 4: (OPTIONAL) Improve the WER with Language Models
We've squeezed just about as much as we can out of the model and still only get about 50% of the words right! Surely we can do better than that. Probability to the rescue again in the form of statistical language models (SLM). The basic idea is that each word has some probability of occurrence within the set, and some probability that it is adjacent to specific other words. We can use that additional information to make better choices.
Additional reading and resources
Introduction to N-grams (Stanford Jurafsky slides)
Speech Recognition Techniques for a Sign Language Recognition System, Philippe Dreuw et al see the improved results of applying LM on this data!
SLM data for this ASL dataset
Optional challenge
The recognizer you implemented in Part 3 is equivalent to a "0-gram" SLM. Improve the WER with the SLM data provided with the data set in the link above using "1-gram", "2-gram", and/or "3-gram" statistics. The probabilities data you've already calculated will be useful and can be turned into a pandas DataFrame if desired (see next cell).
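As a hedged sketch (not part of the project code — the weight `alpha` and the `lm_logprob` table are assumptions), a unigram rescoring step could combine each test item's HMM scores with language-model log probabilities like this:

```python
import math

def rescore(hmm_log_likelihoods, lm_logprob, alpha=10.0):
    """Pick the word maximizing logL_hmm(word) + alpha * logP_lm(word).

    hmm_log_likelihoods: dict word -> HMM log likelihood for one test item
    lm_logprob:          dict word -> unigram log probability from the SLM
    alpha:               language-model weight, tuned on held-out data
    """
    return max(hmm_log_likelihoods,
               key=lambda w: hmm_log_likelihoods[w] + alpha * lm_logprob.get(w, -1e6))

# toy example: the language-model prior breaks a near-tie between two candidates
hmm = {"JOHN": -50.0, "JOHAN": -49.0}
lm = {"JOHN": math.log(0.02), "JOHAN": math.log(0.0001)}
print(rescore(hmm, lm))  # JOHN
```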
Good luck! Share your results with the class! | # create a DataFrame of log likelihoods for the test word items
df_probs = pd.DataFrame(data=probabilities)
df_probs.head() | Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb | joelowj/Udacity-Projects | apache-2.0 |
Structures like these are encoded in "PDB" files
How can we parse a complicated file like this one? | import pandas as pd
pd.read_table("data/1stn.pdb") | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
We can do better by manually parsing the file.
Our test file
Predict what this will print | f = open("test-file.txt")
print(f.readlines())
f.close() | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Predict what this will print | f = open("test-file.txt")
for line in f.readlines():
print(line)
f.close() | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Predict what this will print | f = open("test-file.txt")
for line in f.readlines():
print(line,end="")
f.close() | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Basic file reading operations:
Open a file for reading: f = open(SOME_FILE_NAME)
Read lines of file sequentially: f.readlines()
Read one line from the file: f.readline()
Read the whole file into a string: f.read()
Close the file: f.close()
Now what do we do with each line?
Predict what the following program will do | f = open("test-file.txt")
for line in f.readlines():
print(line.split())
f.close() | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Predict what the following program will do | f = open("test-file.txt")
for line in f.readlines():
print(line.split("1"))
f.close() | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Splitting strings
SOME_STRING.split(CHAR_TO_SPLIT_ON) allows you to split strings into a list.
If CHAR_TO_SPLIT_ON is not defined, it will split on all whitespace (" ","\t","\n","\r")
"\t" is TAB, "\n" is NEWLINE, "\r" is CARRIAGE_RETURN.
Predict what the following will do | f = open("test-file.txt")
lines = f.readlines()
f.close()
line_of_interest = lines[-1]
value = line_of_interest.split()[0]
print(value) | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Predict what will happen: | print(value*5) | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
value is a string of "1.5". You can't do math on it yet.
The solution is to cast it into a float | value_as_float = float(value)
print(value_as_float*5) | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Cast calls:
float, int, str, list, tuple | list("1.5") | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Write a program that grabs the "1" from the first line in the file and multiplies it by 75. | f = open("test-file.txt")
lines = f.readlines()
f.close()
value = lines[0].split(" ")[1]
value_as_int = int(value)
print(value_as_int*75) | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
What about writing to files?
Basic file writing operations:
Open a file for writing: f = open(SOME_FILE_NAME,'w') will wipe out file immediately!
Open a file to append: f = open(SOME_FILE_NAME,'a')
Write a string to a file: f.write(SOME_STRING)
Write a list of strings: f.writelines([STRING1,STRING2,...])
Close the file: f.close() | def file_printer(file_name):
f = open(file_name)
for line in f.readlines():
print(line,end="")
f.close() | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Predict what this code will do | a_list = ["a","b","c"]
f = open("another-file.txt","w")
for a in a_list:
f.write(a)
f.close()
file_printer("another-file.txt") | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Predict what this code will do | a_list = ["a","b","c"]
f = open("another-file.txt","w")
for a in a_list:
f.write(a)
f.write("\n")
f.close()
file_printer("another-file.txt") | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Predict what this code will do | a_list = ["a","b","ccat"]
f = open("another-file.txt","w")
for a in a_list:
f.write("A test {{}} {}\n".format(a))
f.close()
file_printer("another-file.txt") | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
format lets you make pretty strings | print("The value is: {:}".format(10.35151))
print("The value is: {:.2f}".format(10.35151))
print("The value is: {:20.2f}".format(10.35151))
print("The value is: {:}".format(10))
print("The value is: {:20d}".format(10)) | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
String formatting
Pretty decimal printing: "{:LENGTH_OF_STRING.NUM_DECIMALSf}".format(FLOAT)
Pretty integer printing: "{:LENGTH_OF_STRINGd}".format(INT)
Pretty string printing: "{:LENGTH_OF_STRINGs}".format(STRING)
Create a loop that prints 0 to 9 to a file. Each number should be on its own line, written to 3 decimal places. | f = open("junk","w")
for i in range(10):
f.write("{:.3f}\n".format(i))
f.close()
file_printer("junk") | chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb | harmsm/pythonic-science | unlicense |
Create an array of 10 zeros | np.zeros(10) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Create an array of 10 ones | np.ones(10) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Create an array of 10 fives | np.ones(10) * 5 | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Create an array of the integers from 10 to 50 | np.arange(10,51) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Create an array of all the even integers from 10 to 50 | np.arange(10,51,2) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Create a 3x3 matrix with values ranging from 0 to 8 | np.arange(9).reshape(3,3) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Create a 3x3 identity matrix | np.eye(3) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Use NumPy to generate a random number between 0 and 1 | np.random.rand(1) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution | np.random.randn(25) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Numpy Indexing and Selection
Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs: | mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:,1:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3,4]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[:3,1:2]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[4,:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3:5,:] | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Now do the following
Get the sum of all the values in mat | mat.sum() | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Get the standard deviation of the values in mat | mat.std() | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
Get the sum of all the columns in mat | mat.sum(axis=0) | Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb | iannesbitt/ml_bootcamp | mit |
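A quick way to remember the `axis` argument in the exercises above: `axis=0` collapses down the rows (one result per column), while `axis=1` collapses across the columns (one result per row). A small sketch:

```python
import numpy as np

mat = np.arange(1, 26).reshape(5, 5)

col_sums = mat.sum(axis=0)  # collapse rows -> one sum per column
row_sums = mat.sum(axis=1)  # collapse columns -> one sum per row

print(col_sums)  # [55 60 65 70 75]
print(row_sums)  # [ 15  40  65  90 115]
```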
TensorBoard Scalars: Logging training metrics in Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tensorboard/scalars_and_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/scalars_and_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/scalars_and_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
Machine learning invariably involves understanding key metrics such as loss and how they change as training progresses. These metrics can help you understand if you're overfitting, for example, or if you're unnecessarily training for too long. You may want to compare these metrics across different training runs to help debug and improve your model.
TensorBoard's Scalars Dashboard allows you to visualize these metrics using a simple API with very little effort. This tutorial presents very basic examples to help you learn how to use these APIs with TensorBoard when developing your Keras model. You will learn how to use the Keras TensorBoard callback and TensorFlow Summary APIs to visualize default and custom scalars.
Setup | # Load the TensorBoard notebook extension.
%load_ext tensorboard
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
import numpy as np
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
"This notebook requires TensorFlow 2.0 or above." | site/en-snapshot/tensorboard/scalars_and_keras.ipynb | tensorflow/docs-l10n | apache-2.0 |
Set up data for a simple regression
You're now going to use Keras to calculate a regression, i.e., find the best line of fit for a paired data set. (While using neural networks and gradient descent is overkill for this kind of problem, it does make for a very easy-to-understand example.)
You're going to use TensorBoard to observe how training and test loss change across epochs. Hopefully, you'll see training and test loss decrease over time and then remain steady.
First, generate 1000 data points roughly along the line y = 0.5x + 2. Split these data points into training and test sets. Your hope is that the neural net learns this relationship. | data_size = 1000
# 80% of the data is for training.
train_pct = 0.8
train_size = int(data_size * train_pct)
# Create some input data between -1 and 1 and randomize it.
x = np.linspace(-1, 1, data_size)
np.random.shuffle(x)
# Generate the output data.
# y = 0.5x + 2 + noise
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))
# Split into test and train pairs.
x_train, y_train = x[:train_size], y[:train_size]
x_test, y_test = x[train_size:], y[train_size:] | site/en-snapshot/tensorboard/scalars_and_keras.ipynb | tensorflow/docs-l10n | apache-2.0 |
Training the model and logging loss
You're now ready to define, train and evaluate your model.
To log the loss scalar as you train, you'll do the following:
Create the Keras TensorBoard callback
Specify a log directory
Pass the TensorBoard callback to Keras' Model.fit().
TensorBoard reads log data from the log directory hierarchy. In this notebook, the root log directory is logs/scalars, suffixed by a timestamped subdirectory. The timestamped subdirectory enables you to easily identify and select training runs as you use TensorBoard and iterate on your model. | logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(learning_rate=0.2),
)
print("Training ... With default parameters, this takes less than 10 seconds.")
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback],
)
print("Average test loss: ", np.average(training_history.history['loss'])) | site/en-snapshot/tensorboard/scalars_and_keras.ipynb | tensorflow/docs-l10n | apache-2.0 |
Examining loss using TensorBoard
Now, start TensorBoard, specifying the root log directory you used above.
Wait a few seconds for TensorBoard's UI to spin up. | %tensorboard --logdir logs/scalars | site/en-snapshot/tensorboard/scalars_and_keras.ipynb | tensorflow/docs-l10n | apache-2.0 |
<!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_loss.png?raw=1"/> -->
You may see TensorBoard display the message "No dashboards are active for the current data set". That's because initial logging data hasn't been saved yet. As training progresses, the Keras model will start logging data. TensorBoard will periodically refresh and show you your scalar metrics. If you're impatient, you can tap the Refresh arrow at the top right.
As you watch the training progress, note how both training and validation loss rapidly decrease, and then remain stable. In fact, you could have stopped training after 25 epochs, because the training didn't improve much after that point.
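Keras can automate that judgment with its `EarlyStopping` callback. As a sketch of what such a callback tracks, here is the core patience logic in plain Python; the `patience` and `min_delta` values are illustrative assumptions, not tuned settings:

```python
class EarlyStopper:
    """Minimal patience-based early stopping, in the spirit of
    keras.callbacks.EarlyStopping (illustrative sketch, not the real API)."""
    def __init__(self, patience=3, min_delta=1e-4):
        self.patience = patience    # epochs to wait without improvement
        self.min_delta = min_delta  # smallest change counted as improvement
        self.best = float('inf')
        self.wait = 0

    def should_stop(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
        return self.wait >= self.patience

stopper = EarlyStopper(patience=3)
for epoch, val_loss in enumerate([1.0, 0.5, 0.4, 0.4, 0.4, 0.4]):
    if stopper.should_stop(val_loss):
        print("stopping at epoch", epoch)  # stopping at epoch 5
        break
```

In a real run you would pass `keras.callbacks.EarlyStopping(...)` alongside the TensorBoard callback in `Model.fit()`.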
Hover over the graph to see specific data points. You can also try zooming in with your mouse, or selecting part of them to view more detail.
Notice the "Runs" selector on the left. A "run" represents a set of logs from a round of training, in this case the result of Model.fit(). Developers typically have many, many runs, as they experiment and develop their model over time.
Use the Runs selector to choose specific runs, or choose from only training or validation. Comparing runs will help you evaluate which version of your code is solving your problem better.
Ok, TensorBoard's loss graph demonstrates that the loss consistently decreased for both training and validation and then stabilized. That means that the model's metrics are likely very good! Now see how the model actually behaves in real life.
Given the input data (60, 25, 2), the line y = 0.5x + 2 should yield (32, 14.5, 3). Does the model agree? | print(model.predict([60, 25, 2]))
# True values to compare predictions against:
# [[32.0]
# [14.5]
# [ 3.0]] | site/en-snapshot/tensorboard/scalars_and_keras.ipynb | tensorflow/docs-l10n | apache-2.0 |
Not bad!
Logging custom scalars
What if you want to log custom values, such as a dynamic learning rate? To do that, you need to use the TensorFlow Summary API.
Retrain the regression model and log a custom learning rate. Here's how:
Create a file writer, using tf.summary.create_file_writer().
Define a custom learning rate function. This will be passed to the Keras LearningRateScheduler callback.
Inside the learning rate function, use tf.summary.scalar() to log the custom learning rate.
Pass the LearningRateScheduler callback to Model.fit().
In general, to log a custom scalar, you need to use tf.summary.scalar() with a file writer. The file writer is responsible for writing data for this run to the specified directory and is implicitly used when you use the tf.summary.scalar(). | logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()
def lr_schedule(epoch):
"""
Returns a custom learning rate that decreases as epochs progress.
"""
learning_rate = 0.2
if epoch > 10:
learning_rate = 0.02
if epoch > 20:
learning_rate = 0.01
if epoch > 50:
learning_rate = 0.005
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
return learning_rate
lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(),
)
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback, lr_callback],
) | site/en-snapshot/tensorboard/scalars_and_keras.ipynb | tensorflow/docs-l10n | apache-2.0 |
Let's look at TensorBoard again. | %tensorboard --logdir logs/scalars | site/en-snapshot/tensorboard/scalars_and_keras.ipynb | tensorflow/docs-l10n | apache-2.0 |
<!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_custom_lr.png?raw=1"/> -->
Using the "Runs" selector on the left, notice that you have a <timestamp>/metrics run. Selecting this run displays a "learning rate" graph that allows you to verify the progression of the learning rate during this run.
You can also compare this run's training and validation loss curves against your earlier runs.
You might also notice that the learning rate schedule returned discrete values, depending on epoch, but the learning rate plot may appear smooth. TensorBoard has a smoothing parameter that you may need to turn down to zero to see the unsmoothed values.
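The idea behind that slider is roughly an exponential moving average over the logged values (TensorBoard's exact formula may differ); a minimal sketch of the effect:

```python
def ema_smooth(values, weight=0.6):
    """Exponential-moving-average smoothing of a scalar series.
    weight=0 returns the raw values; values near 1 smooth heavily."""
    smoothed, last = [], values[0]
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed

print([round(x, 4) for x in ema_smooth([1.0, 2.0, 3.0])])  # [1.0, 1.4, 2.04]
```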
How does this model do? | print(model.predict([60, 25, 2]))
# True values to compare predictions against:
# [[32.0]
# [14.5]
# [ 3.0]] | site/en-snapshot/tensorboard/scalars_and_keras.ipynb | tensorflow/docs-l10n | apache-2.0 |
Perplexity on Each Dataset | %matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
            u'<td>{}</td>'.format('</td><td>'.join(str(_) for _ in row)) for row in data)  # str() so this runs under Python 3
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
    plt.xticks(index + bar_width, [d[0] for d in data])  # center labels under the middle bar of each group
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
| model_comparisons/noingX_bow_compared.ipynb | kingb12/languagemodelRNN | mit |
Loss vs. Epoch | %matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show() | model_comparisons/noingX_bow_compared.ipynb | kingb12/languagemodelRNN | mit |
Perplexity vs. Epoch | %matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show() | model_comparisons/noingX_bow_compared.ipynb | kingb12/languagemodelRNN | mit |
Generations | def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1]) | model_comparisons/noingX_bow_compared.ipynb | kingb12/languagemodelRNN | mit |
BLEU Analysis | def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports]) | model_comparisons/noingX_bow_compared.ipynb | kingb12/languagemodelRNN | mit |
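If the overall score in these tables follows the standard BLEU convention, it is the geometric mean of the four n-gram precision components scaled by a brevity penalty; that is an assumption about this report format, but the combination itself is easy to sketch:

```python
import math

def combine_bleu_components(precisions, brevity_penalty=1.0):
    """Standard BLEU combination: geometric mean of n-gram precisions,
    scaled by a brevity penalty."""
    log_mean = sum(math.log(p) for p in precisions) / len(precisions)
    return brevity_penalty * math.exp(log_mean)

print(round(combine_bleu_components([0.5, 0.5, 0.5, 0.5]), 6))  # 0.5
```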
N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations (or of ground truths) and scores each pair as if one sentence were a translation of the other. We expect very low scores on the ground truth; high scores can expose hyper-common generations. | # Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports]) | model_comparisons/noingX_bow_compared.ipynb | kingb12/languagemodelRNN | mit |
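The report fields above come precomputed, but the procedure they describe can be sketched without NLP dependencies. `simple_bleu` below is a smoothed stand-in for a real sentence-level BLEU (an assumption about what the report used), and `n_pairs_bleu` averages it over randomly sampled pairs:

```python
import math
import random
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with add-one smoothing on each n-gram precision."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        overlap = sum((ngram_counts(ref, n) & ngram_counts(hyp, n)).values())
        total = max(sum(ngram_counts(hyp, n).values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity * math.exp(log_prec)

def n_pairs_bleu(sentences, n_pairs=1000, seed=0):
    """Average BLEU over randomly sampled sentence pairs: high values
    suggest many near-duplicate (hyper-common) generations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        ref, hyp = rng.sample(sentences, 2)
        total += simple_bleu(ref, hyp)
    return total / n_pairs
```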
Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as n-pairs BLEU: we expect low scores on the ground truth, while hyper-common generations raise the scores. | def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports) | model_comparisons/noingX_bow_compared.ipynb | kingb12/languagemodelRNN | mit |
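For reference, a Smith-Waterman score can be computed with a small dynamic program over token (or character) sequences; the match/mismatch/gap weights below are illustrative assumptions, not the ones used to produce the table above:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between two sequences (Smith-Waterman)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP table, floored at 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("abc", "xbcx"))  # 4  (local alignment of "bc")
```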